vram management


ima747(Posted 2009) [#1]
The project I'm working on needs to load a fairly large number of the user's pictures and keep them all on screen at once. I can manage the size of the textures loaded and reduce the quality of images on load as needed, but I don't know of a way to determine how much space is left for textures, so I have to guess at a quality level beforehand.

So, is there any way to determine how much texture space is available? That way I can make an accurate quality guess at the start, and stop loading pictures if I hit the ceiling, so the driver doesn't have to swap textures around, which sends performance down to < 1 fps (obviously a bad thing).


slenkar(Posted 2009) [#2]
The only way that I know of is to keep the pictures as bitmaps and use their size. A bit primitive, I know.


SLotman(Posted 2009) [#3]
There's no way to find out how much VRAM you have. The only solution is to set a minimum value as a requirement for your program, and work with that number.

But you're not actually limited to VRAM. Once it gets filled, DirectX at least (I don't know about OpenGL) will keep textures in system RAM and transfer them to VRAM as necessary. It's way slower, but still better than failing because no VRAM is available.

As for picture size in VRAM: if you're using texture compression, there's no way to tell that I know of. If you're just loading BMPs, PNGs, etc., you should multiply width x height x bytes per pixel to get the memory size.

For example: a 256 x 256 texture at 32bpp = 256 x 256 x 4 = 256 KB in VRAM.

Also remember that VRAM is used by the front buffer and back buffer, so you'll have to subtract the screen resolution (at the same bytes per pixel) from the available memory.
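A quick BlitzMax sketch of that arithmetic (the 1024x768, 32-bit, double-buffered screen is just an example):

SuperStrict

' Rough VRAM cost of one uncompressed texture, plus a double-buffered screen.
' Everything is assumed to be 32 bits (4 bytes) per pixel.
Function TextureBytes:Int(width:Int, height:Int, bytesPerPixel:Int = 4)
	Return width * height * bytesPerPixel
End Function

Local texCost:Int = TextureBytes(256, 256)          ' 256 x 256 x 4 = 262144 bytes (256 KB)
Local screenCost:Int = 1024 * 768 * 4 * 2           ' front + back buffer at 1024x768, 32bpp
Print "Texture: " + (texCost / 1024) + " KB"
Print "Front+back buffers: " + (screenCost / 1024) + " KB"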


Mr. Write Errors Man(Posted 2009) [#4]
Things like mipmapping will multiply the texture memory footprint by about 1.33, since the chain of half-size mip levels adds roughly 1/4 + 1/16 + 1/64 + ... (about a third) on top of the base image.
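As a tiny sketch (assuming a full mip chain and 32bpp; the helper is mine, not a built-in):

Function TextureBytesWithMips:Int(width:Int, height:Int)
	' Base image at 4 bytes per pixel, plus roughly one third extra for the mip chain.
	Return width * height * 4 * 4 / 3
End Function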


ima747(Posted 2009) [#5]
Thanks for the great replies!

I am aiming for a minimum spec, but the issue is more the images I'm working with: since they can be user-supplied, they can vary wildly, and that causes problems.

I'm aware the system doesn't crash if I load too many; however, they are all on screen at once, so if I hit the point where it starts swapping with RAM, it has to swap a large number of times for every frame...

I was trying to track my memory usage by adding the pictures together, but in retrospect my math was all wrong and I have no idea what I was thinking. Currently I'm loading them as pixmaps so I can grab their dimensions before converting them.

A query related to the memory size: TextureHeight()/TextureWidth() don't return the dimensions of the image used for the texture but a number scaled up from them. I'm thinking I should be using these numbers, since they represent the dimensions of the actual texture rather than of the picture inside it... maybe I'm wrong and the pictures are still stored at their original size.


xlsior(Posted 2009) [#6]
"If you're just loading BMPs, PNGs, etc., you should multiply width x height x bytes per pixel to get the memory size."

...Except you don't.

- BlitzMax rounds the height and width up to the next power-of-two dimension, since many video cards can't handle non-power-of-two sizes
- Even if the source format is different, all images will be converted to 32 bit internally
- Files like BMP and especially PNG are compressed on disk, but they will be stored uncompressed in video memory -- so the file size on disk has nothing to do with the memory requirements

In the end:

height (rounded up to a power of 2) * width (rounded up to a power of 2) * 4 (always stored as 32 bit)
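A quick BlitzMax sketch of that formula (NextPow2 is my own helper, not a built-in):

SuperStrict

' Round a dimension up to the next power of two, the way the texture upload does.
Function NextPow2:Int(n:Int)
	Local p:Int = 1
	While p < n
		p :* 2
	Wend
	Return p
End Function

' Estimated bytes a width x height image occupies once uploaded (always 32 bit).
Function EstimatedTextureBytes:Int(width:Int, height:Int)
	Return NextPow2(width) * NextPow2(height) * 4
End Function

Print "320x200 image: " + EstimatedTextureBytes(320, 200) + " bytes"   ' stored as 512x256x4 = 524288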


ima747(Posted 2009) [#7]
That was my understanding, xlsior: you can grab that power-of-two rounded height and width using TextureHeight() and TextureWidth(). I'm loading pixmaps, scaling them down to reduce the texture size, then making a texture from the pixmap, so this should work well, assuming I don't forget about the front and back buffers as well...
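Roughly the flow I'm using, as a sketch. The file name and the 512 cap are placeholders, and MakeTextureFromPixmap stands in for whatever pixmap-to-texture conversion you use in miniB3D (it is not a built-in function):

SuperStrict

' Load a user image as a pixmap, halve it until it fits the quality cap, then
' track its estimated VRAM cost before turning it into a texture.
Local pix:TPixmap = LoadPixmap("userphoto.png")              ' placeholder file name
If Not pix Then RuntimeError "couldn't load image"
While PixmapWidth(pix) > 512 Or PixmapHeight(pix) > 512      ' 512 is an arbitrary cap
	pix = ResizePixmap(pix, PixmapWidth(pix) / 2, PixmapHeight(pix) / 2)
Wend
Local vramCost:Int = PixmapWidth(pix) * PixmapHeight(pix) * 4   ' still needs power-of-2 rounding
' Local tex:TTexture = MakeTextureFromPixmap(pix)               ' placeholder, not a built-in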


ima747(Posted 2009) [#8]
Sorry to dredge up an aging post, but I'm still working on this, so I thought it better than starting a new one.

Tracking the VRAM usage by calculating a rough size for my textures is working as well as could be hoped for.
textureVramSize = TextureHeight(tex) * TextureWidth(tex) * 4
works quite well; you just have to remember to track every single texture that could be on screen at once, plus the front and back buffers, and a little overhead for OpenGL to play with, etc.
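A stripped-down version of that bookkeeping (the 230 MB budget is the figure mentioned below, and GraphicsWidth()/GraphicsHeight() assume a graphics context is already up):

SuperStrict
Import sidesign.minib3d   ' or however your miniB3D module is named

' Running total of estimated VRAM use: front and back buffers up front, plus
' every texture as it's loaded. 230 MB is an example budget for a 256 MB card.
Global vramBudget:Int = 230 * 1024 * 1024
Global vramUsed:Int = GraphicsWidth() * GraphicsHeight() * 4 * 2   ' front + back buffer

Function CanFitTexture:Int(tex:TTexture)
	Return vramUsed + TextureWidth(tex) * TextureHeight(tex) * 4 <= vramBudget
End Function

Function TrackTexture(tex:TTexture)
	vramUsed :+ TextureWidth(tex) * TextureHeight(tex) * 4
End Function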

So tracking it is working well enough, as long as you give the card some wiggle room. For example, I'm trying not to use more than around 230 MB of calculable space (loaded textures plus front and back buffers) on a 256 MB card, and it works well. However, this brings me back to the second half of my initial question: how does one determine the size of the card? I've read in a number of places, including this thread, that you can't; however, I think it's more accurate to say you shouldn't rely on card size, not that it's impossible to find. It has to exist in the computer somewhere. There are numerous threads on other forums for other languages (e.g. here, regarding the same thing), and all have mixed responses (and aren't BlitzMax-centered), so I've come home to ask again.

You can make the following assumptions.
1) I know every single texture that I would LIKE to have on screen at once, and I know its size in memory.
2) I intend to trim this list of targets down to a threshold safely below whatever the actual physical size is, to allow for OS/driver/background-process overhead.
3) I will be targeting graphics cards with a reasonable minimum to begin with, probably 128 MB, though support for 64 MB would be lovely. This is primarily to allow larger-than-average cards (512 MB and up) to get more out of my app without the user having to make a manual change to a config file.
4) The textures will be scaled to fit a power-of-two texture space of no less than 256x256 and no more than 1024x1024. As an extension of this, I haven't found any card that stores its textures in a minimum power-of-two space larger than 256x256 (as that would chew HUGE amounts of VRAM on a structural level), so odd memory handling for speed done by the cards themselves is a moot point.
5) It needs to be graphics-card independent and preferably OS independent, though it's not that big of a deal if a viable method can be found for Mac and PC separately. But it can't be (as I've found suggested on other forums) graphics-card dependent, such as looking up the registry value for ATI cards on Windows (one solution I found) or digging out the NVIDIA driver function to query the onboard memory size (another solution).

Basically the point of finding out the VRAM size is so I don't have to ask the user (most users wouldn't know anyway) something the computer should know about itself. I can hard-code 128 MB as a minimum and that's fine, but what about the guy with a 2 GB card? No bonus for all that extra hardware.


xlsior(Posted 2009) [#9]
I know that the ATI drivers store the amount of onboard VRAM in the registry, which you can retrieve... I have some code in the code archives that does this for the older ATI cards; I haven't tested it yet with newer-generation cards.
It is ATI-specific though, and won't tell you anything for other manufacturers... presumably there should be a universal method of digging this up from the registry, since applications like DXDiag can tell you...


ima747(Posted 2009) [#10]
Exactly, it's got to be somewhere... since OpenGL moves things back and forth, it has to know.

The thread I linked before
http://lists.apple.com/archives/mac-opengl/2001/May/msg00022.html
It looks like there are OpenGL functions to get the maximum memory size available to OpenGL, along with the memory currently used... but I don't know enough about OpenGL to get anything working along those lines, especially since there are no constants involving memory defined in opengl.mod...
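For what it's worth, the only GL-level route I've seen is via vendor-specific extensions (GL_NVX_gpu_memory_info on NVIDIA, GL_ATI_meminfo on ATI). Neither is defined in opengl.mod, so the enum value below is declared by hand from the extension spec. A rough, driver-dependent sketch, assuming pub.opengl's glGetIntegerv takes an Int Ptr:

SuperStrict
Import Pub.OpenGL

' NVIDIA-only VRAM query via the GL_NVX_gpu_memory_info extension. Requires a
' current GL context (e.g. after Graphics3D). ATI has a similar GL_ATI_meminfo
' extension with different enums. Returns 0 if the extension isn't exposed.
Const GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX:Int = $9047   ' dedicated VRAM, in KB

Function DedicatedVramKB:Int()
	Local extensions:String = String.FromCString(glGetString(GL_EXTENSIONS))
	If extensions.Find("GL_NVX_gpu_memory_info") < 0 Then Return 0
	Local kb:Int = 0
	glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, Varptr kb)
	Return kb
End Function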


ima747(Posted 2009) [#11]
Something Mac-based that could be of use:
http://developer.apple.com/mac/library/qa/qa2004/qa1168.html
I haven't worked with C on the Mac in a long time, so I have no idea if I can get this up and running, but it could be a start... anyone more practiced?


ima747(Posted 2009) [#12]
Additional googling turned up this BlitzMax thread, and specifically your post (#8) is of interest to me, xlsior. More specifically, your mention of TotalVidMem() from B3D. Never having used B3D proper, I don't know (and assume there is no way to find out) what that function actually does. I'm willing to accept that it's probably flawed, but at least it's something... I thought I heard mention at some point that Simon was given access to the B3D source to help with MiniB3D...

http://www.blitzbasic.com/Community/posts.php?topic=60505#674955

So, scrapping OpenGL as a source for this information and looking OS-specific, it sounds like a DirectX query on Windows and an IOKit/Core Graphics function on the Mac may return the magic number, or at least a good guess.

It does bear repeating that I'm not looking for a flawless, byte-perfect implementation of a texture memory handling system here; all I want is a guess at how many textures of a known size and bit depth I can use at one time before the memory swapping starts to destroy performance.


ima747(Posted 2009) [#13]
After some more fiddling and reading, here's what I've got for a dirty VRAM size calculator:

THardwareInfo.MaxTexSize * THardwareInfo.MaxTexSize * 4

I've only tested it on a couple of cards so far, but it seems promising. Here are the assumptions I'm running under for this to be a viable option.

1) The maximum texture size a system can handle is <= the system's physical VRAM. (Unconfirmed whether the OpenGL standard guarantees this, but I don't think any current system would allow a texture larger than VRAM, as it would have to do a HUGE memory swap mid-render every frame...)
2) The system will never actually use a single texture that matches the limit. There are plenty of documented bugs with trying to use one maximum-size texture, but I intend to use many smaller-than-max textures; I'm only interested in how many I can use at once.
3) The older the system, the less likely this is to be even a semi-accurate number. Since I am targeting 64 MB cards at the VERY lowest end, and may drop them as a minimum in favor of 128 MB, I think one would be much harder pressed to find a 128 MB or greater graphics card that reports a maximum texture size that is only a small fraction of its VRAM size (e.g. it only allows up to 512x512 textures, but could hold 32 of them at once) compared to older cards. This is down to the chipset of the cards themselves, but again, I want to get as close to the VRAM size as possible without going over.
4) There will be an override in the program's settings so power users can correct it if the system detects a much lower value than what they are sure their card has.

It's far, far, far from perfect, but it's better than assuming a minimum-requirement value and giving nothing to those who exceed it (unless they're power users), which seems to be the only other viable option.
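For the record, roughly what this looks like in code. A sketch only: the override argument and the 128 MB floor are example values, and it assumes the usual sidesign.minib3d import and that THardwareInfo has been populated (i.e. Graphics3D has been called):

SuperStrict
Import sidesign.minib3d   ' or however your miniB3D module is named

' Dirty VRAM estimate: assume the card's maximum texture dimension squared, at
' 4 bytes per pixel, roughly matches its physical VRAM. A user override (from a
' settings file, say) wins if present; 128 MB is an example floor.
Function EstimatedVramBytes:Long(overrideMB:Int = 0)
	If overrideMB > 0 Then Return Long(overrideMB) * 1024 * 1024
	Local estimate:Long = Long(THardwareInfo.MaxTexSize) * THardwareInfo.MaxTexSize * 4
	If estimate < 128 * 1024 * 1024 Then estimate = 128 * 1024 * 1024
	Return estimate
End Function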


ima747(Posted 2009) [#14]
For further discussion check http://blitzmax.com/Community/posts.php?topic=87696 as I'm considering this a 3D graphics system query, not just a MiniB3D one.

I can pull the VRAM size on a Mac; I'd love some help querying DirectX, or wherever one might look on Linux, if anyone cares to follow.


Flemmonk(Posted 2009) [#15]
I had a quick glance through here and couldn't see it mentioned, but OpenGL has a feature to determine which textures are resident in video memory. Naturally it will try to keep frequently used textures resident, but you can also set priorities manually. This might be what you're after.


ima747(Posted 2009) [#16]
While not quite what I need, loading a bunch of textures and then testing to see how many are resident would give a rough guess at the memory capacity... how do you go about determining whether they're in memory?
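Poking at the GL reference, glAreTexturesResident looks like the call Flemmonk means. A rough, untested sketch, assuming pub.opengl exposes it with a pointer-style binding; texId stands in for the texture's underlying GL name (digging that out of miniB3D's TTexture is left as an exercise):

SuperStrict
Import Pub.OpenGL

' Ask the driver whether a GL texture object is currently resident in video
' memory. glAreTexturesResident is legacy OpenGL; texId is the GL texture name,
' and the Byte residency flag assumes a pointer-style binding.
Function IsTextureResident:Int(texId:Int)
	Local resident:Byte = 0
	glAreTexturesResident(1, Varptr texId, Varptr resident)
	Return resident
End Function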