About Texture Limits


KuRiX(Posted 2005) [#1]
Hello friends. In my racing game I have set up more than 200 textures (256x256 most of them), giving me around 100MB. I need to do this because I am texturing buildings with realistic photos of my country.

Of course the render time is badly affected. But what other problems could I have? In theory, with enough video memory there should be no problems?

Any solution to this kind of design?


Neochrome(Posted 2005) [#2]
If you load your texture using flag=256 + (your other options), the textures will load directly into video memory and it would then be up to the GPU to do the hard work... <-- this is my understanding. The only real problem you would then face is loops and complex maths per frame, which would slow down the CPU and the data transfer from CPU to GPU.

(please correct me if I'm wrong though, I like to learn)
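
For illustration, a minimal sketch of what Neochrome describes (the file name here is made up): adding 256 to the LoadTexture flags asks Blitz3D to store the texture in video memory.

Graphics3D 800,600,32,2
tex = LoadTexture("building01.jpg", 1+8+256) ; 1=colour, 8=mipmapped, 256=store in VRAM
cube = CreateCube()
EntityTexture cube, tex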


Mustang(Posted 2005) [#3]
There is no "100% sure" guidebook on how to do efficient 3D gfx, but trying to group your textures and models (surfaces) sensibly is usually the best option, along with keeping your triangle batches large enough (2000 polys and up if possible) to avoid stalling. AGP will take care of the texture swapping (main mem <-> VRAM), so you don't need to cram them all in at the same time - but that would be the best / fastest case. Buffers and geometry take a surprisingly large amount of VRAM too, so textures are not the only thing that needs fast memory.


KuRiX(Posted 2005) [#4]
What are triangle batches?


Mustang(Posted 2005) [#5]
A triangle batch is a batch of triangles sent to the GPU for rendering (same surface / polygon properties). Using surfaces with a small number of polygons will result in small batch sizes and the GPU will be idling - because setup and such take a certain amount of time regardless of the actual number of polygons being sent... The sweet spot these days seems to be around 2000 triangles, meaning that a surface with 1 polygon will take just as long to render as one with 2000 triangles. Larger batches are better because modern hardware is built to churn out gazillions of polygons very fast - if they share the same properties... one small change and it has to stop and get the machine going again. So I would try to avoid a low polygon-per-surface ratio if possible.
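
A rough way to see how your own scene batches up (a sketch; "city.b3d" is a placeholder file name): each surface is one batch, so counting surfaces and triangles per surface shows how big your batches actually are.

Graphics3D 800,600,32,2
mesh = LoadMesh("city.b3d")
tris = 0
For i = 1 To CountSurfaces(mesh)
    tris = tris + CountTriangles(GetSurface(mesh, i))
Next
Print "Surfaces (batches): " + CountSurfaces(mesh)
Print "Average triangles per batch: " + (tris / CountSurfaces(mesh))
WaitKey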


KuRiX(Posted 2005) [#6]
What if I have the same surface with two different textures on different triangles? Then it counts as 2 surfaces, no?

P.S.: thanks for the info.

Another question: I am using the B3D Pipeline for Max to export. Must I check the "Use VidMem" checkbox, or are the textures loaded into VRAM anyway?


Mustang(Posted 2005) [#7]
Ummm... as I know and understand it, a surface is a collection of triangle texture properties - texture(s), UVs, diffuse, specular, alpha and every other property that defines how the triangle looks and how it should be rendered.

If you have a polygon with a diffuse (color) texture + a lightmap on top of it, it's still one surface. If you have another polygon that differs from this setup, it's another surface. To be in the same surface, polygon properties have to be identical.
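
A sketch of that setup (file names assumed): a diffuse texture plus a lightmap applied as two texture layers on the same mesh still counts as one surface.

Graphics3D 800,600,32,2
mesh = LoadMesh("building.b3d")
diff = LoadTexture("brick.jpg")
lmap = LoadTexture("building_lm.jpg")
TextureBlend lmap, 2              ; multiply blend, typical for lightmaps
EntityTexture mesh, diff, 0, 0    ; texture layer 0
EntityTexture mesh, lmap, 0, 1    ; texture layer 1
Print CountSurfaces(mesh)         ; unchanged - extra layers don't add surfaces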

If you have two separate OBJECTS that have identical materials, those are separate surfaces to my knowledge. So every object is at least one surface, even if they all have identical texture properties... that's why low-polygon objects are "wasting" render time.

Forcing stuff to VRAM can be good sometimes, but I'd be very careful about when to do that, because you are restricting DirectX and how it can manage your stuff over the AGP... and yes, all textures and geometry have to be moved to VRAM so that the GPU can access them.

The significance of the flag is that it marks the texture as non-movable, meaning that DX cannot dump it from the VRAM if it needs more space for rendering the frame. Having lots of (or big) textures marked this way reduces the amount of VRAM DX can use dynamically and might lead to a drop in FPS in some cases if you go overboard. All this again according to my "best knowledge and educated guesses"... only Mark can give 100% correct technical answers about Blitz3D.
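
If you do lock textures into VRAM with flag 256, one simple sanity check (a sketch; the file name is made up) is to watch AvailVidMem() before and after loading, to see how much memory you are taking away from what DX can manage dynamically.

Graphics3D 800,600,32,2
Print "Free video memory before: " + AvailVidMem()
tex = LoadTexture("building01.jpg", 1+256)   ; 256 = keep in VRAM
Print "Free video memory after : " + AvailVidMem()
WaitKey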


sswift(Posted 2005) [#8]
Try to combine textures. If you have one building, try to combine all the textures for that building into one texture. Textures on modern 3d cards can easily be 1024x1024 or greater.

Also, try to optimize stuff. If you never see the roof, don't texture it. And if you do see the roof and you can use the same roof texture for many buildings, try to do that.

Also, combine buildings which are the same and near each other into a single mesh. Then instead of ten surfaces, you'll have one or two.
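
One way to do that merging in code (a sketch; the building file is a placeholder) is PositionMesh plus AddMesh, which bakes each copy's offset into its vertices and appends the geometry to a single combined mesh.

Graphics3D 800,600,32,2
combined = CreateMesh()
For i = 1 To 10
    b = LoadMesh("building01.b3d")
    PositionMesh b, i*20, 0, 0    ; move the vertices themselves
    AddMesh b, combined           ; append the geometry to the combined mesh
    FreeEntity b
Next
Print "Surfaces in combined mesh: " + CountSurfaces(combined)
WaitKey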


Rhyolite(Posted 2005) [#9]
Hey Mustang, thanks for that 'batch triangles' info.

I assume the poly target of 2000 will vary from gfx card to gfx card? Just roughly, what kinda hardware would 2000 apply to, and do you have any guidelines for older/newer gfx cards? Don't go looking, but if you know, it would be really useful :)

Cheers,
Rhy :)


Mustang(Posted 2005) [#10]

"I assume the poly target of 2000 will vary from gfx card to gfx card?"

Yup.

"Just roughly, what kinda hardware would 2000 apply to?"

Modern - meaning FX/R9 and up, maybe even GF3/4 (not including MX).

But even if a 100 polygon model and a 1900 polygon model would render ~just as fast, it's not wise to add polygons if there isn't a true need and benefit for them - the total amount of polygons (well, vertices) in the VRAM is a factor too. But for game characters, for example, it would be good to use ~2K polys instead of 1K because you're probably not losing speed there at all (if it's a one-surface model etc).

Construct models sensibly, but IMO there's no need to try to squeeze out every last "un-optimized" polygon... unless you're developing for really ancient hw, like a GF1 or something.

Our latest 3DMark05 has a "batch rendering" test so you can check for yourself whether your card idles and where the limit is:


Batch Size test

The Batch Size test is an all-new type of test in the 3DMark series. The test basically renders a very simple scene very much unoptimized, revealing a weakness in most graphics drivers available today. Graphics IHVs have for years educated game developers to render as large batches as possible. However, it would be beneficial if the rendering of smaller batches were optimized too. This test has been requested ever since the development of 3DMark2001, but for this 3DMark version more than just one BDP member asked for it.

There are six runs of this test, where 128 meshes of 128x128 quads are drawn with 8, 32, 128, 512, 2048 and 32768 triangles per batch. The last two batch sizes should be considered optimized ones for most drivers today, but the smaller the batch sizes get, the slower the rendering will be.

Color change state changes are done between the rendered batches to make sure DX doesn't collapse the whole rendering into a single batch or very few batches. Early versions of this test without the state changes caused this, and gave quite obscure results. The test is therefore also somewhat dependent on how fast the driver does rendering state changes.
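
For reference, a very crude Blitz3D analogue of the same idea (not the 3DMark test itself; the scene values are arbitrary) is to time RenderWorld over many tiny batches and then repeat the timing with the geometry merged into one mesh.

Graphics3D 800,600,32,2
cam = CreateCamera()
For i = 1 To 2048                 ; 2048 one-cube entities = 2048 small batches
    c = CreateCube()
    PositionEntity c, Rnd(-40,40), Rnd(-40,40), Rnd(20,80)
Next
t = MilliSecs()
For f = 1 To 100 : RenderWorld : Next
Print "2048 tiny batches: " + (MilliSecs()-t) + " ms for 100 renders"
Print "Triangles per render: " + TrisRendered()
; merging the cubes into one mesh with AddMesh and timing again shows
; how much of that time was per-batch overhead
WaitKey
End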