Atlas textures - Can Blitz3d do it?


Imperium(Posted 2013) [#1]
A texture atlas is a large image containing a collection of smaller sub-images, each of which is a texture for some part of a 3D object. The sub-textures are selected by modifying the texture coordinates of the object's UV map so that they point at the region of the atlas containing that texture.

That being said, the point of this is to cut down on the number of draw calls spent on texture changes. The goal would be to free up some of the CPU.
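In Blitz3D terms the idea boils down to something like the sketch below - purely illustrative, and it assumes a 2x2 atlas at a placeholder path ("media/atlas.png"):

; Minimal sketch: map a quad's UVs onto one cell of a 2x2 texture atlas.
; "media/atlas.png" is a placeholder path - substitute your own image.
Graphics3D 800,600,32,2

cam = CreateCamera()
PositionEntity cam,0,0,-3

atlas = LoadTexture("media/atlas.png")

mesh = CreateMesh()
surf = CreateSurface(mesh)
EntityTexture mesh,atlas
EntityFX mesh,1 ; full-bright, so no light source is needed

; pick the top-left cell of the 2x2 atlas: U 0.0-0.5, V 0.0-0.5
u0# = 0.0
v0# = 0.0
u1# = 0.5
v1# = 0.5

; AddVertex takes optional U,V arguments, so the sub-region is set right here
i0 = AddVertex(surf,-1, 1,0,u0#,v0#) ; top-left
i1 = AddVertex(surf, 1, 1,0,u1#,v0#) ; top-right
i2 = AddVertex(surf,-1,-1,0,u0#,v1#) ; bottom-left
i3 = AddVertex(surf, 1,-1,0,u1#,v1#) ; bottom-right
AddTriangle surf,i0,i1,i2
AddTriangle surf,i1,i3,i2

While Not KeyHit(1)
	RenderWorld
	Flip
Wend
End

The geometry never changes; only the UV range given to the vertices decides which part of the atlas appears on the quad.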


Yasha(Posted 2013) [#2]
to cut down on the number of draw calls spent on texture changes. The goal would be to free up some of the CPU.


Can you elaborate?

Blitz3D can do atlasing well enough, because it's not really a formal technique so much as a generalisation of UV map support; but it's not immediately obvious what impact this would have either way on performance (unless you also combine the objects themselves into larger ones).


Imperium(Posted 2013) [#3]
I came across an interesting read in this book: Game Art: Creation, Direction, and Careers.

http://www.amazon.com/Game-Art-Creation-Direction-Development/dp/1584503955

On pages 192 & 193 it discusses atlas textures and reusing UVs on objects. I will scan and upload the pages so I don't have to type it all out. But it basically says this will help optimize the performance of draw calls, which takes some of the burden off the CPU.


Kryzon(Posted 2013) [#4]
Blitz3D can do it, and you can even set your UVs up in your favorite modelling application.
So each mesh's UVs will "capture" the relevant section of the atlas that you want to texture it with.

The thing is, texture atlasing by itself isn't that much of an optimization. I've read that the overall improvement is minimal. The per-object overhead of rendering is still the major problem.

What texture atlasing does best is open the door for single surface systems. Then you're not only doing texture atlasing and reducing the number of texture switches, but you're also rendering most of your meshes with a single call.
This constitutes batching, a common optimization for hardware-accelerated real-time graphics.
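Here's a rough sketch of what that looks like - purely illustrative, assuming a 4x4 atlas at a placeholder path and simple quads standing in for real game meshes:

; Sketch of a single-surface batch: every quad lives in one surface and
; reads a different cell of the same atlas, so the whole lot is drawn together.
; "media/atlas.png" and the 4x4 cell layout are assumptions.
Const CELLS = 4

Graphics3D 800,600,32,2

cam = CreateCamera()
PositionEntity cam,0,0,-8

atlas = LoadTexture("media/atlas.png")
batch = CreateMesh()
surf = CreateSurface(batch)
EntityTexture batch,atlas
EntityFX batch,1 ; full-bright, so no light source is needed

; 16 quads, one per atlas cell, all in a single surface
For cy = 0 To CELLS-1
	For cx = 0 To CELLS-1
		AddAtlasQuad(surf, (cx-1.5)*1.2, (cy-1.5)*1.2, cx, cy)
	Next
Next

While Not KeyHit(1)
	RenderWorld
	Flip
Wend
End

; Adds one unit quad at (x,y) whose UVs cover cell (cellX,cellY) of the atlas.
Function AddAtlasQuad(surf, x#, y#, cellX, cellY)
	cell# = 1.0 / CELLS
	u0# = cellX * cell#
	v0# = cellY * cell#
	u1# = u0# + cell#
	v1# = v0# + cell#
	i0 = AddVertex(surf, x#-0.5, y#+0.5, 0, u0#, v0#)
	i1 = AddVertex(surf, x#+0.5, y#+0.5, 0, u1#, v0#)
	i2 = AddVertex(surf, x#-0.5, y#-0.5, 0, u0#, v1#)
	i3 = AddVertex(surf, x#+0.5, y#-0.5, 0, u1#, v1#)
	AddTriangle surf, i0, i1, i2
	AddTriangle surf, i1, i3, i2
End Function

Nothing stops each quad from being a whole prop's worth of triangles instead; the point is just that one surface plus one atlas means one batch.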


Yasha(Posted 2013) [#5]
I have the feeling that you're talking about an engine tweak that requires access to the kind of low-level control Blitz3D doesn't expose. Making two separate objects in Blitz3D use the same texture isn't going to change the number of draw calls at all, because Blitz3D will use the same set of calls as part of the draw process for each object. The only way to change that from within B3D code is to do as Kryzon suggests and batch whole game objects into single surfaces within a mesh.


Things to consider:

-- texture changes like this are practically free on modern systems; reducing the number of calls to this particular part of the API is going to have no observable effect unless you have hundreds of thousands of objects

-- combining objects into a single surface speeds up the drawing, but if you need to move the sub-objects around, you now need to do it with VertexCoords calls (see the sketch just after this list), which will eat up the performance gain tenfold

-- do you have any reason to believe this is slowing down your program? Remember Knuth's golden rule: "premature optimisation is the root of all evil". Unless you have measured and know that this is your speed bottleneck, "optimisations" like this are a complete waste of your time, obfuscating your code and your art assets for no reason (and quite possibly slowing it down, because optimisation is both hard and unintuitive)
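For the second point, this is roughly what moving one sub-object turns into once it's been merged - a sketch that assumes you kept a note of the quad's first vertex index when you built the shared surface:

; Moving a "batched" quad means rewriting its vertices by hand.
; firstVert is assumed to be the index of the quad's first vertex,
; recorded when the quad was added to the shared surface.
Function MoveBatchedQuad(surf, firstVert, x#, y#)
	VertexCoords surf, firstVert+0, x#-0.5, y#+0.5, 0
	VertexCoords surf, firstVert+1, x#+0.5, y#+0.5, 0
	VertexCoords surf, firstVert+2, x#-0.5, y#-0.5, 0
	VertexCoords surf, firstVert+3, x#+0.5, y#-0.5, 0
End Function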


Kryzon(Posted 2013) [#6]
Making two separate objects in Blitz3D use the same texture isn't going to change the number of draw calls at all, because Blitz3D will use the same set of calls as part of the draw process for each object.

I'd forgotten about that. I'm not sure how the engine deals with the same brush being applied to several entities with the same EntityOrder.

Even if they all use the same brush, Blitz3D might just re-bind all textures every time it renders a mesh, making the effort fruitless. It's a simple check to see if the current brush is different than the previous one, so I'm hoping Mark put it there.
In case he didn't, the only improvement with this would be less memory use from having a single texture instead of several individual ones - which is probably negligible.


Yasha(Posted 2013) [#7]
I'm not sure how the engine deals with the same brush being applied to several entities with the same EntityOrder.


For the record, brushes are copy-on-assign and every entity and surface has precisely one internal brush that takes on properties when Paint-ed (this is why you need to free brushes that are extracted: the extracted brush is a standalone copy that is not, and never will be, attached to an entity).

Textures are a brush property and have their own management system; textures are reference counted (one ref for every surface and entity, plus one for the code handle), and also cached (so loading the same texture twice will not cost any extra memory). As many objects as you like can share a texture without it being duplicated.
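A small sketch of both points (the texture path is a placeholder; nothing here is specific to atlasing):

; One texture shared by several entities, plus the extracted-brush rule.
Graphics3D 800,600,32,2

atlas = LoadTexture("media/atlas.png")

cubeA = CreateCube()
cubeB = CreateCube()

; the same texture handle can be applied to any number of entities -
; it is reference counted, not duplicated
EntityTexture cubeA,atlas
EntityTexture cubeB,atlas

; an extracted brush is a standalone copy, so it is yours to free
brush = GetEntityBrush(cubeA)
; ...inspect or reuse the brush here...
FreeBrush brush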


It's a simple check to see if the current brush is different than the previous one, so I'm hoping Mark put it there.


I think it does do this.


Imperium(Posted 2013) [#8]
No, this post was meant to stimulate a discussion. My code has no speed issues. I was unfamiliar with the term atlas textures until I came across it while browsing one of my books.


_PJ_(Posted 2013) [#9]
I would imagine this is more a topic for the 3D forum, but anyway -
I haven't really read into the technical aspects of this "atlas"ing, nor do I expect I fully understand all that's required, but it did provoke the following thoughts:

1) Isn't this a means of, say, LOD alterations in relation to the textures rather than the polygons of a surface?

2) Isn't this what DX actually does (behind the scenes) with regards to the MIP levels of DDS?

3) If the above is correct (or at least close enough to the mark), I can imagine situations whereby scale factors are used to calculate the level of detail required, as well as the UV position.
This differs from the MIP-mapping of DDS etc. since it involves actually changing the texture for a specific UV region based on "how close the camera is".

For example:

Visible proportion of surface -> UV range:

100%  -> 0.0 to 1.0
10%   -> 0.0 to 0.10
1%    -> 0.0 to 0.01
0.1%  -> 0.0 to 0.001

So at a zoom level where only, say, 1% of the mesh is visible, a texture that represents that 1% scale (though the raw texture is still at "regular size", i.e. 512 pixels etc.) is then drawn to fill the region of

top-left:     (U, V)
top-right:    (U+(U*0.01), V)
bottom-left:  (U, V+(V*0.01))
bottom-right: (U+(U*0.01), V+(V*0.01))



Presumably, though, this would be better if the entire mesh were swapped for a minimal few polygons that actually represent the size/shape of the visible region, so that the UVs for this new surface could be assigned in full?

I suppose, ultimately, the real case for any optimisation potential or impact will be how the image data for this atlas is loaded and processed - do you take up a wealth of VRAM to store the textures, or risk slowdown in the "loading-in" of a new atlas/detail level etc.?
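If anyone wants to poke at the "how close the camera is" part, here's a crude sketch - manual texture swapping by distance, not anything DX does for you; the texture paths and the distance threshold are made up:

; Swap between a high-detail and a low-detail texture by camera distance.
Graphics3D 800,600,32,2

cam = CreateCamera()
PositionEntity cam,0,0,-10
light = CreateLight()

texNear = LoadTexture("media/wall_high.png") ; detailed close-up texture
texFar  = LoadTexture("media/wall_low.png")  ; simpler distant texture

wall = CreateCube()
EntityTexture wall,texFar

While Not KeyHit(1)
	; cursor up/down to dolly the camera back and forth
	If KeyDown(200) Then MoveEntity cam,0,0,0.1
	If KeyDown(208) Then MoveEntity cam,0,0,-0.1

	; the manual analogue of choosing a detail level for a region
	If EntityDistance(cam,wall) < 4.0
		EntityTexture wall,texNear
	Else
		EntityTexture wall,texFar
	EndIf

	RenderWorld
	Flip
Wend
End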


Kryzon(Posted 2013) [#10]
I think I understand what you're proposing. Kinda like a "reverse" mipmapping.

Mipmapping gives you a simpler texture the farther you are from the 1:1 pixel-to-texel ratio (screen pixel vs. texture pixel). If you have fewer screen pixels than texels to fill them, the GPU uses a simpler texture so the sampler isn't skipping across a huge texture whose extra texels would be wasted, and the visual result is nearly the same.
If you're closer than that (that is, if you're close to a triangle and have more screen pixels than texels), the GPU simply doesn't use mipmapping; but in your case, you'd want it to switch to an even higher-detail texture.

I don't know why people didn't do this, but I think it just was not practical: having your artists make very detailed textures that wouldn't be seen much, since triangles are more likely to be far from the camera than very close to it.
You could maybe use this if you're doing software rasterizing, or using procedural textures (textures generated in real time by the CPU or GPU).
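For what it's worth, whether a texture gets mipmaps at all in Blitz3D comes down to a LoadTexture flag (8, going from memory - treat that as an assumption to check against the docs). A throwaway comparison, with a placeholder texture path:

; Two cubes with the same image: one mipmapped, one not. At a distance the
; non-mipmapped one shows the shimmering/aliasing described above.
Graphics3D 800,600,32,2

cam = CreateCamera()
light = CreateLight()

texMip   = LoadTexture("media/checker.png", 1+8) ; colour + mipmapped
texNoMip = LoadTexture("media/checker.png", 1)   ; colour only, no mipmaps

cubeA = CreateCube()
PositionEntity cubeA,-2,0,25
EntityTexture cubeA,texMip

cubeB = CreateCube()
PositionEntity cubeB,2,0,25
EntityTexture cubeB,texNoMip

While Not KeyHit(1)
	TurnEntity cubeA,0,1,0
	TurnEntity cubeB,0,1,0
	RenderWorld
	Flip
Wend
End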


_PJ_(Posted 2013) [#11]
I found your description much clearer and it really made sense, Kryzon.


I don't know why people didn't do this, but I think it just was not practical: having your artists make very detailed textures that wouldn't be seen much, since triangles are more likely to be far from the camera than very close to it

Sounds very likely to me.

I just had a bizarre thought - sorry if this is getting a little off-topic - but since it's possible to quickly modify vertices' UV values (well, the actual modifying is quick; identifying which of the verts to modify is likely the slow bit),
I wonder how it would work out if, instead of increasing the texels between UVs to meet the visible pixels, the texture was reduced - relying on texture repetition to invoke an almost "fractal"-like appearance.
Admittedly, this would not be good for games unless maybe dealing with terrain or normally "repetitive" surfaces etc...
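A tiny sketch of that UV-scaling idea, reusing the quad layout from earlier in the thread (it assumes the texture wasn't loaded with the clamp-U/clamp-V flags, so coordinates past 1.0 simply repeat):

; Stretch a quad's UVs past 1.0 so the texture tiles across it.
; surf and the four vertex indices are assumed to come from a quad
; built the same way as the earlier atlas example.
Function TileQuadUVs(surf, i0, i1, i2, i3, repeats#)
	VertexTexCoords surf, i0, 0,        0
	VertexTexCoords surf, i1, repeats#, 0
	VertexTexCoords surf, i2, 0,        repeats#
	VertexTexCoords surf, i3, repeats#, repeats#
End Function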


Imperium(Posted 2013) [#12]
Can Blitz3d do voxels? I've always loved how terrain looked when those are used.