Game Model Format
BlitzMax Forums/BlitzMax Programming/Game Model Format
Here is a preliminary spec: http://www.leadwerks.com/post/GMFSpec.pdf The idea is that you can load this straight into your engine, without any of the fiddling around that other formats require. I haven't got all the animation stuff down yet, but I am thinking of just having a set speed with one animation key per "tick", so you are always just performing an interpolation between two regularly spaced keys (instead of animation keys with arbitrary placements). There's support for pre-calculated binormal/tangent arrays, and the format can be extended by adding new "chunks" without breaking old loaders. It's very similar to the .b3d file format, but the key differences are:

- Support for binormal/tangent arrays (required for bumpmapping)
- Support for an arbitrary number of texcoord arrays
- Vertex arrays are stored in the same sequence as they would be fed to the GPU, so you can read a whole array in one fast pass
- Animation keys are interpolated and stored in regular "ticks" instead of at arbitrary times
- I intend to add support for pre-compiled Newton tree collision data, but this will be an extension of the spec, not part of the official format.

I intend to have a few exporters made for the format, and they will be free to use for any purpose.
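As a sketch of the tick scheme described above (all names here are illustrative, not from the spec): with one key per tick, sampling at an arbitrary time is always a single linear interpolation between two adjacent, regularly spaced keys — no key search is ever needed.

```python
def sample_keys(keys, ticks_per_second, t):
    """Sample a per-tick key track at time t (seconds).

    keys: one (x, y, z) key per tick, regularly spaced -- so finding the
    two keys to blend is just integer truncation, never a search.
    """
    frame = t * ticks_per_second
    i = int(frame)
    if i >= len(keys) - 1:          # clamp at the end of the animation
        return keys[-1]
    alpha = frame - i               # blend factor between key i and i+1
    a, b = keys[i], keys[i + 1]
    return tuple(a[k] + (b[k] - a[k]) * alpha for k in range(3))
```

With arbitrarily placed keys, a loader would first have to search for the surrounding key pair; regular spacing removes that step entirely.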
Isn't it easier to just adopt a widely used format instead? It seems like going backwards to create a new format and then write exporters for different tools, instead of using a standardish format like FBX or Collada. No, they (FBX and Collada) aren't implemented in your engine in 5 minutes, but with 99% certainty it will take longer/more money to add exporters for your format to XSI, Maya, Max, etc. And with 100% certainty your format won't be interesting until every major platform supports it. If you still want your own format, at least consider:

- Materials
- Textures
- Shaders
- Lights
- Cameras
- Paths/Splines
- Custom properties
- How to universally extend the format without breaking compatibility
- etc.

Stuff like that is needed unless all you want is a mesh with a bit of animation.
Especially the extensibility point: tools breaking compatibility with each new version was one of the main reasons I didn't upgrade again, as I already had to rewrite my plugins for CS4, which took quite some time due to the major changes. On the spec above: a nice basis, but it seems a bit cluttered and "overloaded" (the heavy usage of VRTS), and not very flexible due to that heavy overloading of the same chunk type.
Isn't it easier to just adopt a widely used format instead?

Yes. Unfortunately, there is not a single good, widely supported format out there. Name one that loads quickly and is widely supported. Even if you like Collada, it is not meant to be used as a final game format, due to the extreme amount of pre-processing required and the fact that it is text-based.

It seems like going backwards, by creating a new format, and then do exporters for different tools, instead of using a standardish format like FBX or Collada.

Discreet won't release an official spec for the FBX format, so it is not trustworthy. Collada changes frequently, and is the worst 3D file format I have ever encountered. It isn't an industry standard, and never will be. Collada only exists because Sony wants to pump money into it to make it appear that Microsoft has competition.

No, they (FBX and Collada) aren't implemented in your engine in 5 minutes, but with 99% certainty it will take longer/more money to add exporters for your format in XSI, Maya, Max etc. And with 100% certainty your format won't be interesting until every major platform supports it.

That is why I am starting with an Unwrap3D exporter. Unwrap3D is very good at importing animated model formats, so there is a pathway to get data from almost any modeling package into my format.

The purpose of moving each vertex array into its own chunk is so that the arrays can be extended, like the addition of binormal/tangent arrays, and so that an arbitrary number of texcoord arrays is supported. Unlike .b3d, the arrays are stored sequentially, so you can read each array in one pass. I considered allowing byte, short, and double value arrays, but they are often not supported by different drivers, and OpenGL 3 is doing away with double-float arrays, so I think it is safe to assume they will always be floats, or in the case of colors, 4-byte RGBA (RGB causes some ATI drivers to fall back to software rendering).

The chunk format allows extensibility without breaking old loaders. I am not going to define a format for splines and other things I will never use. I have come to realize that much of the industry is intentionally obfuscated for the purpose of job security. Sony does not want a fast-loading binary file format with wide support, because it would allow teams with fewer than 80 people to compete. They do not want things to be straightforward and simple; they want them unnecessarily complex and convoluted, because man-hours are irrelevant to them. You make money by selling solutions. You can't sell a solution if you don't first create and maintain a problem.
Here is the source for the MilkShape .b3d exporter: http://chumbalum.swissquake.ch/files/msBlitz3DExporter_src.zip It should be easy to modify it to export this format.
Name one that loads quickly and is widely supported.

FBX loads very quickly, and comes with a full SDK. That makes it easier for you, because if the file format gets an addition, so does the SDK, and you just have to recompile if you want to take advantage of the change. I don't particularly like Collada; in fact, I agree that it is a bloated format, but that is beside the point. It is supported in Max, Maya, XSI, Blender, MilkShape, etc., which means you don't have to do anything but write a decent loader for it. Of course, you might need to convert your files to an internal format, but that's as simple as writing a chunk of memory to a file, if you just want your own engine to load it. It still saves you from writing an exporter for software X, Y, and Z. So you end up needing to do 2 or 3 things, instead of writing/buying and supporting exporters for a number of different modelling/animation packages. I don't see how writing an exporter for Unwrap3D (although it's a great piece of software) will help you, or at least not the users of your engine.
FBX loads very quickly, and comes with a full SDK.

I did not know that. However, I am afraid that 1.52 MB of C++ code is going to do me very little good. I don't think an SDK is a good substitute for a clear and concise file spec.
That's right. But as fredborg pointed out: if you intend to get broad application support, take the most widely supported format, no matter how "bloated" it might appear (more features means potentially more for you to use at a later date), and then convert that to your own internal format, either at load time or via a pipeline tool. You can still write exporters for the major 3D tools straight to your own format after that, but the most important thing is that you get a useful workflow for everyone first, and the simple one after the useful one works. Otherwise your format ends up like B3D: a nice idea, but nearly no app that works with all its features. And the chance that your own implementation spreads as widely as B3D might be even smaller... B3D at least has a userbase of a few thousand and a community that is willing to contribute solutions. Doing all that work yourself will eat the next 4+ months just on those exporters. And a UU3D plugin is nice, but then write one for UU3D 2 first and 3 Pro after, as most users have V2 and will remain on it until V3 offers a feature that makes up for the price. Normal maps are nice, but that's about it for "new features"; paying $40 for an upgrade that offers little beyond the application already bought won't help it spread that fast, I fear. There are many times more users on MS3D and Blender, so if you want a useful plugin, do one for one of those two first.
It is impossible to load FBX and Collada without C++, because they both use large SDKs. No matter what, I need a file spec that covers everything I might need, whether it gets converted from some intermediate format or exported from a modeling program. I guess the lesson in all of this is that interchange formats and final game formats are quite different, and while interchange formats are pretty well covered, there hasn't really been any standardization of game-ready formats. I do not think game engines are so different that each requires data in its own format; they all store data internally in pretty much the same way.
That's most likely the reason Microsoft shipped their own "main format" with their API, while OpenGL tries to make everything so open that it becomes destructive again. And I do not fully agree with the last point about the data being the same: depending on how the engine works, this definitely isn't true. A simple example: a fixed-function-pipeline-based engine vs. a pure shader-style engine. And that's only the basics. Depending on what you support animation-wise (GPU skinning, instanced animation, morph targets), the internal representation and the required storage of data can differ considerably. Supported features differ drastically as well, especially material capabilities (which is why the .x format is chunk-based, so devs can add their own chunks). If you don't care about size, do an XML-based format; that will make it simpler for others to create exporters. You can still offer a "compressor" that stores the XML content in binary form (or even binary AES-encrypted) for distribution of the media.
I'm kind of with you, Leadwerks. I am also trying to work on a file system at the moment, but haven't got very far. I am thinking I will make my own file format plus my own compression/encryption. I think you're on the right track.
fixed function pipeline based engine vs pure shader style engine

No one uses the fixed-function pipeline.

GPU skinning, instanciated animation, morph targets

That has nothing to do with how the data is stored. Everyone uses separate vertex buffers.
Just a quick note: your spec should state that you use little-endian values. A single note at the top is more than enough. My $0.02.
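For example, in Python's struct module the "<" prefix pins the byte order, so a writer stays little-endian even on a big-endian host. The header layout here — a 4-character id followed by two int32s — follows the chunk scheme discussed in this thread; the function name is mine:

```python
import struct

def pack_chunk_header(name: str, subnodes: int, size: int) -> bytes:
    """Pack a chunk header: 4 ASCII id chars, then two little-endian int32s."""
    assert len(name) == 4              # chunk ids are exactly four characters
    return name.encode("ascii") + struct.pack("<ii", subnodes, size)
```

Without the "<" prefix, struct would use the host's native byte order and padding, and files written on different machines would not match.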
Well, I put a post up in the general forum for a paid job, so we'll see if any Blitzers come through.
Updated the spec, and I have found someone to write a MilkShape exporter (just to get things started). I have done away with all submesh/surface conventions. I found in the past with Blitz3D that this caused a lot of problems with physical properties per material. For example, if a surface has a glass material, you don't want it to occlude line-of-sight tests, but what if another surface on the same mesh is a wall? The same problem occurs in my engine's physics system. So there is one material per limb/mesh/entity, and I have not regretted that decision yet.
And where does that interfere with the submesh/surface system? Internally you will use batching for rendering anyway, something a model format can never represent... Also, nothing actually forces physical materials to be tied 1:1 to visual materials; they can be, but don't have to be. Having that possibility is something I would actually see as a benefit, because if I don't want per-surface materials, I can still assign the same material to all surfaces of the model, and the engine can behave intelligently in that case.
Let's say you have an "ice" material on the floor and brick on the walls. If it is all one object, it will have one collision body, and the physics system won't know to treat the two sub-objects differently upon collision. I have been dealing with this kind of thing for a while, and I believe this is the right solution. If anyone disagrees, they don't need to use my format.
No one uses the fixed function pipeline.

The Source engine, to name but one.
HL2 is several years old, and it only does that as a fallback. Regardless, this has nothing to do with the way the mesh data is stored.
Made a revision of the chunk structure. I quickly realized I wanted to read the data into a generic hierarchy structure, without worrying about what any of the data actually meant. Then I can go through the structure and turn it into real model data. This is something the .b3d format does not allow. Here's my GMF node reader. It returns a TGMFNode, and then you can read each node's data to see what is actually contained:

Type TGMFNode

    Field name:String
    Field size:Int
    Field data:TStream
    Field subnodes:TGMFNode[]

    Function Read:TGMFNode(stream:TStream,parent:TGMFNode=Null)
        Local node:TGMFNode
        Local subnode_count:Int 'was declared as "subnodes" but used as subnode_count
        Local n
        Local bank:TBank
        If stream.Eof() Return
        node=New TGMFNode
        node.name=stream.ReadString(4)
        subnode_count=stream.ReadInt()
        Local subnodes_:TGMFNode[subnode_count]
        node.subnodes=subnodes_
        node.size=stream.ReadInt()
        bank=CreateBank(node.size)
        stream.ReadBytes bank.Buf(),node.size
        node.data=CreateBankStream(bank)
        For n=0 To subnode_count-1
            node.subnodes[n]=Read(stream,node)
        Next
        Return node
    EndFunction

    Method Write(stream:TStream)
        stream.WriteString name
        stream.WriteInt subnodes.length
        stream.WriteInt data.Size()
        CopyStream data,stream,data.Size()
    EndMethod

EndType

Function LoadGMF:TGMFNode(url:Object)
    Local stream:TStream
    Local root:TGMFNode
    stream=ReadStream(url)
    If Not stream Return
    root=TGMFNode.Read(stream)
    stream.Close()
    Return root
EndFunction
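That generic reader translates almost line for line into other languages. Here is a Python sketch of it (names assumed): because each chunk carries its own size and subnode count, a loader can keep or skip chunks it does not recognize, which is what makes the format extensible without breaking old loaders.

```python
import struct

class GMFNode:
    """One chunk: 4-char name, raw data bytes, child chunks."""
    def __init__(self, name, data, subnodes):
        self.name, self.data, self.subnodes = name, data, subnodes

def read_node(stream):
    header = stream.read(12)
    if len(header) < 12:
        return None                                 # end of stream
    name = header[:4].decode("ascii")
    count, size = struct.unpack("<ii", header[4:])  # little-endian int32s
    data = stream.read(size)                        # no interpretation yet
    subnodes = [read_node(stream) for _ in range(count)]
    return GMFNode(name, data, subnodes)
```

A second pass over the resulting tree can then interpret only the chunk names it knows and leave the rest alone.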
Perhaps you are misunderstanding what was meant. The HL2 engine has no pure shader rendering core, and neither does even the newest Quake 4 iteration. They have a fixed-function core plus shaders for visuals. The only shooter that is pure shader and really uses it is most likely Crysis (not sure about the DX9 version, though), and naturally any DX10 game, since DX10 has no FFP anymore; other examples are World in Conflict and Supreme Commander. But most games actually do FFP plus shaders for visuals, not shaders for basic world transformation etc. as well. That is too power-consuming, and on its own already enough to kill 300-series cards (Intel is out directly; its bandwidth already has problems keeping the stream up for pixel-shader-based multi-stage materials), if you also want to support the lower-end SM2 shaders people have seen as "normal" for a few years now.
You have no idea what you are talking about. The GPU converts FFP instructions into shader code anyway. Using shaders is less intensive.
Here is a Blitz3D-made converter that will save a .gmf file:

file$="test.3ds"

Global binormalarray
Global tangentarray

AppTitle "GMF Converter"
Graphics3D 400,300,0,2

m=LoadAnimMesh(file)
While CountChildren(m)=1
    If EntityClass(m)="Mesh"
        If CountSurfaces(m)>0 Exit
    EndIf
    m=GetChild(m,1)
Wend
If Not m
    RuntimeError "Failed to load model "+Chr(34)+file+Chr(34)+"."
    End
EndIf

f=WriteFile(StripExt(file)+".gmf")
BeginChunk f,"GMFM",1,4
WriteInt f,1
SaveEntity(m,f)

WaitKey()
End

Function StripExt$(path$)
    p=FindLast(path,".")
    If p path=Left(path,p-1)
    Return path
End Function

Function FindLast(s$,token$)
    For n=Len(s) To 1 Step -1
        If Mid(s,n,1)=token Return n
    Next
End Function

Function SaveEntity(entity,f)
    subnodes=0
    size=Len(EntityName(entity))+1+16*4
    Select EntityClass(entity)
        Case "Mesh"
            If CountSurfaces(entity)>0 subnodes=subnodes+1
    End Select
    subnodes=subnodes+CountChildren(entity)
    BeginChunk f,"NODE",subnodes,size
    WriteString f,EntityName(entity)
    WriteByte f,0
    For x=0 To 3
        For y=0 To 3
            WriteFloat f,GetMatElement(entity,x,y)
        Next
    Next
    Select EntityClass(entity)
        Case "Mesh"
            If CountSurfaces(entity)>0 WriteMesh entity,f
    End Select
    For c=1 To CountChildren(entity)
        SaveEntity GetChild(entity,c),f
    Next
End Function

Function WriteMesh(entity,f)
    If CountSurfaces(entity)=1
        surf=GetSurface(entity,1)
        WriteSurface surf,f
    Else
        BeginChunk f,"NODE",CountSurfaces(entity),16*4+1
        WriteByte f,0 ;name
        For x=0 To 3
            For y=0 To 3
                WriteFloat f,GetMatElement(entity,x,y)
            Next
        Next
        For s=1 To CountSurfaces(entity)
            surf=GetSurface(entity,s)
            BeginChunk f,"NODE",1,16*4+1
            WriteByte f,0 ;name
            For x=0 To 3
                For y=0 To 3
                    WriteFloat f,GetMatElement(entity,x,y)
                Next
            Next
            WriteSurface surf,f
        Next
    EndIf
End Function

Function BeginChunk(f,id$,subnodes,size)
    Print id
    WriteString f,id
    WriteInt f,subnodes
    WriteInt f,size
End Function

Function WriteSurface(surf,f)
    matname$=""
    brus=GetSurfaceBrush(surf)
    If brus
        tex=GetBrushTexture(brus,0)
        If tex matname=TextureName(tex)
    EndIf
    BeginChunk f,"MESH",6,Len(matname)+1
    WriteString f,matname
    WriteByte f,0

    CalculateTBN(surf)

    ;Position
    BeginChunk f,"VRTS",0,CountVertices(surf)*12+16
    WriteInt f,CountVertices(surf) ;num vertices
    WriteInt f,1 ;position
    WriteInt f,4 ;float
    WriteInt f,3 ;elements
    For v=0 To CountVertices(surf)-1
        WriteFloat f,VertexX(surf,v)
        WriteFloat f,VertexY(surf,v)
        WriteFloat f,VertexZ(surf,v)
    Next

    ;Normal
    BeginChunk f,"VRTS",0,CountVertices(surf)*12+16
    WriteInt f,CountVertices(surf) ;num vertices
    WriteInt f,2 ;normal
    WriteInt f,4 ;float
    WriteInt f,3 ;elements
    For v=0 To CountVertices(surf)-1
        WriteFloat f,VertexNX(surf,v)
        WriteFloat f,VertexNY(surf,v)
        WriteFloat f,VertexNZ(surf,v)
    Next

    ;TexCoords
    BeginChunk f,"VRTS",0,CountVertices(surf)*8+16
    WriteInt f,CountVertices(surf) ;num vertices
    WriteInt f,3 ;texcoords
    WriteInt f,4 ;float
    WriteInt f,2 ;elements
    For v=0 To CountVertices(surf)-1
        WriteFloat f,VertexU(surf,v)
        WriteFloat f,VertexV(surf,v)
    Next

    ;Binormal
    BeginChunk f,"VRTS",0,CountVertices(surf)*12+16
    WriteInt f,CountVertices(surf) ;num vertices
    WriteInt f,4 ;binormal
    WriteInt f,4 ;float
    WriteInt f,3 ;elements
    For v=0 To CountVertices(surf)-1
        WriteFloat f,PeekFloat(binormalarray,v*12+0)
        WriteFloat f,PeekFloat(binormalarray,v*12+4)
        WriteFloat f,PeekFloat(binormalarray,v*12+8)
    Next

    ;Tangent
    BeginChunk f,"VRTS",0,CountVertices(surf)*12+16
    WriteInt f,CountVertices(surf) ;num vertices
    WriteInt f,5 ;tangent (the original wrote 4 here, the same id as the binormal array; 5 assumed per the array-id sequence)
    WriteInt f,4 ;float
    WriteInt f,3 ;elements
    For v=0 To CountVertices(surf)-1
        WriteFloat f,PeekFloat(tangentarray,v*12+0)
        WriteFloat f,PeekFloat(tangentarray,v*12+4)
        WriteFloat f,PeekFloat(tangentarray,v*12+8)
    Next

    ;Indices
    BeginChunk f,"TRIS",0,CountTriangles(surf)*6+8
    WriteInt f,CountTriangles(surf)*3 ;num indices
    WriteInt f,2 ;word
    For t=0 To CountTriangles(surf)-1
        WriteShort f,TriangleVertex(surf,t,0)
        WriteShort f,TriangleVertex(surf,t,1)
        WriteShort f,TriangleVertex(surf,t,2)
    Next
End Function

Function WriteString(f,s$)
    For n=1 To Len(s)
        WriteByte f,Asc(Mid(s,n))
    Next
End Function

Function CalculateTBN(surf)
    vertexupdated=CreateBank(CountVertices(surf))
    binormalarray=CreateBank(CountVertices(surf)*12)
    tangentarray=CreateBank(CountVertices(surf)*12)
    For t=0 To CountTriangles(surf)-1
        a=TriangleVertex(surf,t,0)
        b=TriangleVertex(surf,t,1)
        c=TriangleVertex(surf,t,2)
        If PeekByte(vertexupdated,a)=0 Or PeekByte(vertexupdated,b)=0 Or PeekByte(vertexupdated,c)=0
            v1x#=VertexX(surf,a)
            v1y#=VertexY(surf,a)
            v1z#=VertexZ(surf,a)
            v1u#=VertexU(surf,a)
            v1v#=VertexV(surf,a)
            v2x#=VertexX(surf,b)
            v2y#=VertexY(surf,b)
            v2z#=VertexZ(surf,b)
            v2u#=VertexU(surf,b)
            v2v#=VertexV(surf,b)
            v3x#=VertexX(surf,c)
            v3y#=VertexY(surf,c)
            v3z#=VertexZ(surf,c)
            v3u#=VertexU(surf,c)
            v3v#=VertexV(surf,c)
            x1#=v2x-v1x
            x2#=v3x-v1x
            y1#=v2y-v1y
            y2#=v3y-v1y
            z1#=v2z-v1z
            z2#=v3z-v1z
            s1#=v2u-v1u
            s2#=v3u-v1u
            t1#=v2v-v1v
            t2#=v3v-v1v
            det#=s1*t2-s2*t1
            If det<>0.0 ;skip triangles with degenerate UVs (the original divided by det before testing)
                r#=1.0/det
                sx#=(t2*x1-t1*x2)*Sgn(r)
                sy#=(t2*y1-t1*y2)*Sgn(r)
                sz#=(t2*z1-t1*z2)*Sgn(r)
                tx#=(s1*x2-s2*x1)*Sgn(r)
                ty#=(s1*y2-s2*y1)*Sgn(r)
                tz#=(s1*z2-s2*z1)*Sgn(r)
                ;proceed only if neither vector is near zero (the original condition was inverted)
                If (Abs(sx)<0.0001 And Abs(sy)<0.0001 And Abs(sz)<0.0001)=False And (Abs(tx)<0.0001 And Abs(ty)<0.0001 And Abs(tz)<0.0001)=False
                    m#=Sqr(sx*sx+sy*sy+sz*sz)
                    sx=sx/m
                    sy=sy/m
                    sz=sz/m
                    If PeekByte(vertexupdated,a)=0
                        PokeByte vertexupdated,a,1
                        PokeFloat binormalarray,a*12+0,sx
                        PokeFloat binormalarray,a*12+4,sy
                        PokeFloat binormalarray,a*12+8,sz
                        PokeFloat tangentarray,a*12+0,tx
                        PokeFloat tangentarray,a*12+4,ty
                        PokeFloat tangentarray,a*12+8,tz
                    EndIf
                    If PeekByte(vertexupdated,b)=0
                        PokeByte vertexupdated,b,1
                        PokeFloat binormalarray,b*12+0,sx
                        PokeFloat binormalarray,b*12+4,sy
                        PokeFloat binormalarray,b*12+8,sz
                        PokeFloat tangentarray,b*12+0,tx
                        PokeFloat tangentarray,b*12+4,ty
                        PokeFloat tangentarray,b*12+8,tz
                    EndIf
                    If PeekByte(vertexupdated,c)=0
                        PokeByte vertexupdated,c,1
                        PokeFloat binormalarray,c*12+0,sx
                        PokeFloat binormalarray,c*12+4,sy
                        PokeFloat binormalarray,c*12+8,sz
                        PokeFloat tangentarray,c*12+0,tx
                        PokeFloat tangentarray,c*12+4,ty
                        PokeFloat tangentarray,c*12+8,tz
                    EndIf
                EndIf
            EndIf
        EndIf
    Next
End Function

And this will load the file into a node hierarchy and display it:

Type TGMFNode

    Field name:String
    Field size:Int
    Field data:TStream
    Field subnodes:TGMFNode[]

    Function Read:TGMFNode(stream:TStream,parent:TGMFNode=Null)
        Local node:TGMFNode
        Local subnode_count:Int 'was declared as "subnodes" but used as subnode_count
        Local n
        Local bank:TBank
        If stream.Eof()
            LOAD_GMF_ERROR=1
            Return
        EndIf
        node=New TGMFNode
        node.name=stream.ReadString(4)
        subnode_count=stream.ReadInt()
        If subnode_count
            Local subnodes_:TGMFNode[subnode_count]
            node.subnodes=subnodes_
        EndIf
        node.size=stream.ReadInt()
        bank=CreateBank(node.size)
        stream.ReadBytes bank.Buf(),node.size
        node.data=CreateBankStream(bank)
        For n=0 To subnode_count-1
            node.subnodes[n]=Read(stream,node)
        Next
        Return node
    EndFunction

    Method Write(stream:TStream)
        stream.WriteString name
        stream.WriteInt subnodes.length
        stream.WriteInt data.Size()
        CopyStream data,stream,data.Size()
    EndMethod

    Method Debug(tab$="")
        Local n
        Print tab+name
        Print tab+size
        tab:+" "
        If subnodes
            For n=0 To subnodes.length-1
                subnodes[n].Debug(tab)
            Next
        EndIf
    EndMethod

EndType

Private
Global LOAD_GMF_ERROR
Public

Function LoadGMF:TGMFNode(url:Object)
    Local stream:TStream
    Local root:TGMFNode
    LOAD_GMF_ERROR=0
    stream=ReadStream(url)
    If Not stream Return
    root=TGMFNode.Read(stream)
    stream.Close()
    If LOAD_GMF_ERROR root=Null
    Return root
EndFunction

LoadGMF("test.gmf").Debug()
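The per-triangle tangent-space math in CalculateTBN is the standard construction from position and UV edge deltas (Lengyel's method). Stripped down to one triangle, it looks like this in Python; the function and variable names are mine, not the exporter's:

```python
import math

def triangle_tangent_bitangent(p, uv):
    """Return unit (tangent, bitangent) for one triangle, or None if the
    triangle's UVs are degenerate. p[i] = (x, y, z), uv[i] = (u, v)."""
    x1 = [p[1][k] - p[0][k] for k in range(3)]       # edge 1 in 3D
    x2 = [p[2][k] - p[0][k] for k in range(3)]       # edge 2 in 3D
    s1, t1 = uv[1][0] - uv[0][0], uv[1][1] - uv[0][1]
    s2, t2 = uv[2][0] - uv[0][0], uv[2][1] - uv[0][1]
    det = s1 * t2 - s2 * t1
    if det == 0.0:
        return None                                  # UVs collapse to a line
    r = 1.0 / det
    tangent = [(t2 * x1[k] - t1 * x2[k]) * r for k in range(3)]
    bitangent = [(s1 * x2[k] - s2 * x1[k]) * r for k in range(3)]
    for v in (tangent, bitangent):                   # normalise both vectors
        m = math.sqrt(sum(c * c for c in v))
        if m:
            for k in range(3):
                v[k] /= m
    return tangent, bitangent
```

A whole-mesh pass would accumulate (or, as in the exporter above, simply assign) these per-triangle vectors to the triangle's vertices.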
Leadwerks: No, it exactly does not. Only DX10 hardware does, as it no longer has any FFP at all. And it is actually less performant, as proven by dozens of benchmarks of games that are less shader-heavy, where the 7800 badly outperforms an overclocked 8800 GTX.
I have got animation data working and have updated the file spec. Bone indices and weights are just another vertex array now. The format is almost final. At this point, all I have is a .b3d-to-.gmf converter. However, I am talking to the authors of Unwrap3D and MilkShape, and they are willing to write exporters. If you're an artist, you probably won't care about this, but if you are a programmer, this makes life much easier.
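Storing bone indices and weights as just another vertex array implies a fixed number of influences per vertex. This hypothetical helper sketches the usual preprocessing an exporter would do (the names and the limit of 4 influences are my assumptions, not the spec's): keep the strongest influences and renormalise so the surviving weights still sum to 1.

```python
def pack_influences(indices, weights, max_influences=4):
    """Reduce a vertex's bone influences to a fixed-size index/weight pair
    suitable for a flat vertex array. Keeps the heaviest weights."""
    pairs = sorted(zip(weights, indices), reverse=True)[:max_influences]
    total = sum(w for w, _ in pairs) or 1.0          # renormalise survivors
    pad = max_influences - len(pairs)                # zero-fill unused slots
    idx = [i for _, i in pairs] + [0] * pad
    wgt = [w / total for w, _ in pairs] + [0.0] * pad
    return idx, wgt
```

The fixed-size layout is what lets a loader hand the whole array to the GPU in one pass, just like the position and normal arrays.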
Can you ask them to write importers also?
Dreamora, you misunderstood what Josh meant. The fact that earlier DX versions still had a fixed-function pipeline isn't important. The thing is that, for quite a long time now, fixed-function-pipeline functionality has under the hood been converted into low-level graphics card instructions. These instructions more or less directly map to shader code. This means that shader code has almost a one-to-one correspondence with graphics card instructions, while for the fixed-function pipeline the "distance" between the API and what is actually executed is far bigger.
That's exactly what I understood. The difference is that not all cards work like that: mainly it is those lacking FFP support at all, i.e. the cards targeted at an API where the FFP no longer exists, which is DX10. DX9.0c cards, i.e. cards up to the GF 79xx and the X19xx, outperform their successors in DX9 games that are not heavily shader-based (DX10-style graphics programming in DX9, using fully shader-based cores even for basic model transformation etc., is very demanding and leaves 400-series cards and below in the dust), that is, games targeted at DX9, not at DX10 without using the DX10 API. There are several very good examples of games where even a 7800 GT can outperform an overclocked 8800 GTX without much of a problem on WinXP. So assuming that "all cards do it like that" only leads to very poor performance on far too many systems, and nobody pays for an engine that performs badly on the common user's machine, unless you have an engine someone would pay 100k upwards for, because then they can invest money to make it work on other machines. An indie-targeted engine with that attitude is wasted time for its programmer, as the potential users might (and most likely will) be put off by the fact that it needs a semi-monster to run at all. Josh has shown in the past that he has the potential to do great stuff, no question. But he also shows from time to time that he is more the tech guy with great interest in interesting new techniques, which in cases like this is, at least to me, more than a minor problem. The stuff that is new and interesting now will need at least 12 months to mature on the hardware end far enough to work on medium- to low-end graphics cards. I am not interested in solutions that require at least a medium- to high-end graphics card to see the "basic stuff". That's an attitude that not even Valve, Epic, or id would survive (Crysis shows that it has a serious impact on sales, no matter how good the intentions and ideas were, or how good the underlying technology is).
What does a file format have to do with FFP and shader execution anyway? No matter what, you are just uploading vertex arrays to the GPU.

Can you ask them to write importers also?

I'll see about it.
It is a very clear file format. In my opinion the hierarchy can be:

GMFM
    MATR
        TEXT (texture layer 0)
        TEXT (texture layer 1)
        TEXT (texture layer 2)
    MESH
        SURF
            VRTS
            VRTS
            VRTS
            TRIS
            KEYS

A global palette of materials with attributes like diffuse color, specular color, shader, and textures. Then you have meshes with surfaces and a matrix/quaternion. A surface can refer to a material (like Blitz3D's brush system). cu olli
Well, it was actually you who started that with your post saying "nobody uses FFP", which is only true for DX10. Everything else, with a few exceptions (you can most likely name them off the top of your head, as there are only a dozen or so with a pure shader-driven core), is actually FFP plus shaders for visuals, as that is much better suited for fallback on weaker systems and scales better across different hardware. A shader-only pipeline only works if your card has plenty of shader units; otherwise you are going to suffer badly if even basic multitexturing already needs several pixel shader units. But yes, this is not about the model format itself, but about how it is stored so it can be usefully handled within an engine without having to totally reformat the data there (because in that case you could just use any format if you have to convert through an intermediate anyway; no point in using a new format that is broadly unsupported).
I chose to do away with the surface convention primarily because of my physics system. Because there is one physics body per entity, you cannot have physical material attributes like friction unless there is only one material applied to the entity. I also found in the course of development that the multi-surface convention led to a lot of headaches. For materials, I simply included a string referencing an external material file. I thought about including a materials structure, but every system is going to be so different, and most engines store material files externally. Maybe a basic material structure could be included, and the file could either reference an external material or a material within the file itself.
I would actually prefer the "reference" way, with a material format designed for it. Mainly because it allows different apps to work on the data and update parts of it without altering the rest (so basically using the same mesh data several times with totally distinct material definitions by simply replacing them, potentially even at load time via level definitions).
What do you mean? External material files left up to the user, or some kind of defined structure?
A defined structure that is part of the game model format as well, but laid out to be "external". Like model references for "detail brushes" in a game level format, for example, where you only store references, not the actual data.
What information would this material format contain? See, the problem I have is that even the material format I use now and what I am planning for LE2 are totally different; but if there were a more or less standard material definition, it would be okay for simple stuff, and maybe it could reference an external file for more complex things. Haha... I posted this on the OpenGL forum and the Khronos group went right to work attacking it. It's funny, because if he wasn't part of the Collada project, he wouldn't bother trying to convince everyone it was a bad idea; he just wouldn't care: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=233532#Post233532
What the material format would contain:

Visual material: information on Texture0 to Texture7, including UV set, blend combination (src-dest), position, rotation, and scale; additional textures (normal, opacity, specularity) and values for them (specularity factor etc.); a reference to a physical material; and a reference to a sound material, if you intend to have "per material" sound reactions like footstep or impact sounds.

Physical material: whatever information you would consider useful here; I have too little experience to give useful pointers.

Sound material: sounds and sound-specific information like range, loop, volume, falloff, and other things depending on the intended feature.
I think the really important data is the matrix hierarchy, mesh data, animations, and weighting. Everything else is just fluff that may vary depending on the implementation. So I see this not only as a good file format, but as a base that can be extended to support whatever custom data the programmer needs. I'll make the exporter sources open and free so people can modify and add to them as they see fit. As it is now, you can have things like multiple texcoord sets, and you can expand the format without breaking existing loaders. You could in fact implement a multi-surface design by adding multiple TRIS blocks. The thing I have had problems with was getting animated skinned mesh data from modeling packages into my engine, so I think if that core functionality is provided, this can help a lot of people. Then they can modify the exporters however they see fit to add their own functionality.
Old topic now, but FBX is pretty much the dominant standard in file formats these days.
FBX is an interchange format. You would not load game models from the FBX format each time the game ran.