Using 16-bit graphics

BlitzMax Forums/BlitzMax Beginners Area/Using 16-bit graphics

ImaginaryHuman(Posted 2004) [#1]
Hi folks. I was wondering about how to go about using 16-bit graphics with BlitzMax, or at least with OpenGL.

So far all I've done is open 32-bit images and render them on a Graphics screen or window. Obviously if I change the color depth of the screen or the desktop to 16-bit, everything renders automatically in 16-bit. In fact, same goes for 256-colors even though that's not entirely supported with BlitzMax.

Now, I take it this wastes time in 16-bit mode, because the RGBA8888 image data has to be filtered down (and dithered, if that's switched on) to fewer bits, either when reading from or writing to the backbuffer, or when Flipping the double buffer.

What I'd like to know is: if I deliberately detect that 16-bit color is needed and open a 16-bit display, can I just load in (or somehow produce) 16-bit image data? And if I do, will rendering it be faster than rendering 32-bit images to a 16-bit display?

Or, more to the point: what's the best way to make 16-bit graphics rendering as efficient as possible, either with BlitzMax's own commands or with OpenGL directly?


ImaginaryHuman(Posted 2004) [#2]
Anyone?


Hotcakes(Posted 2004) [#3]
As I understand it, Max keeps images internally as 32-bit, so loading a 16-bit image will only degrade picture quality. Maybe. Don't know about the direct OpenGL approach...


teamonkey(Posted 2004) [#4]
When you bind a texture to OpenGL (which LoadImage/DrawImage do for you) it's automatically converted to the same colour depth as the OpenGL context you created (using Graphics or bglCreateContext).

"Binding" a texture means copying it to a format that OpenGL can use. Your original TImage/TPixmap might be 32-bit, but the bound copy that is actually used by OpenGL might be 24-bit or 16-bit or 15-bit or 8-bit.

In other words, it's not any slower using a 32-bit .PNG than it is using a 16-bit image. The only problem is that 32-bit images take up more disk space.
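To make the bind-time conversion concrete, here's a sketch in C of the kind of repacking a driver does when a 32-bit RGBA8888 texel ends up in a 16-bit RGBA5551 texture. The actual format and any dithering are up to the driver; this just shows where the precision goes:

```c
#include <stdint.h>

/* Illustrative sketch only: pack one RGBA8888 pixel into RGBA5551,
 * roughly what the OpenGL driver does internally when binding a
 * 32-bit source image into a 16-bit texture format. */
static uint16_t rgba8888_to_rgba5551(uint32_t rgba)
{
    uint8_t r = (rgba >> 24) & 0xFF;
    uint8_t g = (rgba >> 16) & 0xFF;
    uint8_t b = (rgba >> 8)  & 0xFF;
    uint8_t a =  rgba        & 0xFF;

    return (uint16_t)(((r >> 3) << 11) |  /* keep top 5 bits of red   */
                      ((g >> 3) << 6)  |  /* keep top 5 bits of green */
                      ((b >> 3) << 1)  |  /* keep top 5 bits of blue  */
                      (a >> 7));          /* keep 1 bit of alpha      */
}
```

The point is that this happens once, at bind time, not per frame — which is why a 32-bit source image isn't slower to draw than a 16-bit one.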


ImaginaryHuman(Posted 2004) [#5]
OK, sounds good, thanks. But what about rendering straight from a pixmap to the backbuffer, where the pixmap is 32-bit and the backbuffer is 16-bit? It seems BlitzMax would have to convert the format on every draw, which is obviously slow? Would it be better to store the pixmap at 16-bit?

i.e. keep separate 32-bit and 16-bit versions of each image?


teamonkey(Posted 2004) [#6]
DrawPixmap works differently. It draws the individual pixels to the OpenGL buffer in the correct format. I don't think there should be any speed difference if you store it in a different format.

If you can use TImage instead of TPixmap, do it, as it will be much faster. If you need your image to change a lot, I think you'd be better off writing directly to texture memory instead of to a pixmap.


ImaginaryHuman(Posted 2004) [#7]
If the pixmap image is in 16-bit format, with 2 bytes per pixel, there are obviously far fewer memory reads needed to push the image through the CPU. That has to be faster. What I want to know is whether loading a 16-bit image into a 16-bit pixmap is faster than 32-bit. I'm guessing it is.
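The bandwidth argument is easy to check with a bit of arithmetic (the resolution below is purely illustrative):

```c
#include <stddef.h>

/* Back-of-the-envelope memory-traffic comparison for pushing a pixmap
 * through the CPU: a 16-bit pixmap (2 bytes/pixel) moves exactly half
 * the bytes of a 32-bit one (4 bytes/pixel). */
static size_t pixmap_bytes(size_t width, size_t height, size_t bytes_per_pixel)
{
    return width * height * bytes_per_pixel;
}
```

For a 640x480 pixmap that's 1,228,800 bytes at 32-bit versus 614,400 at 16-bit per full read or write — so for raw pixmap pushing, half the depth really does mean half the traffic.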