Images and Pixmaps


Russell(Posted 2008) [#1]
When should one be used over the other, and why (I know pixmaps are quite a bit slower)?

I'm not exactly sure why we even have both: What's the main difference between them?

Thanks!
Russell


Gabriel(Posted 2008) [#2]
Pixmaps live in system memory and can be manipulated directly, but they can't be drawn without boshing them across the AGP pipe to the video card first. Hence the slowdown you referred to.

Images are in video memory, and cannot be manipulated directly. Drawing them has no additional cost because they're already in video memory where the card needs them to draw them.

It's horses for courses. If you want to manipulate images, pixmaps are the only option. If you don't, it would be pointless to carry an overhead you don't need.
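
To make that concrete, here's a rough sketch (untested, and the size and colours are just placeholders) of the usual split - build pixels on a pixmap with the CPU, convert it to an image once, then draw the image cheaply every frame:

Graphics 640, 480

' CPU side: a pixmap can be written pixel by pixel
Local pix:TPixmap = CreatePixmap(256, 256, PF_RGBA8888)
For Local y:Int = 0 Until 256
	For Local x:Int = 0 Until 256
		WritePixel(pix, x, y, $FF000000 | (x Shl 16) | (y Shl 8))	' ARGB gradient
	Next
Next

' GPU side: wrap the pixmap in an image so it ends up in video memory
Local img:TImage = LoadImage(pix)

While Not KeyHit(KEY_ESCAPE) And Not AppTerminate()
	Cls
	DrawImage(img, 100, 100)	' cheap: no system-to-video transfer per frame
	Flip
Wend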


Perturbatio(Posted 2008) [#3]
A TImage uses a pixmap internally.
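
You can see that with LockImage, which hands you the image's backing pixmap - a quick sketch (untested, standard BRL.Max2D calls):

Graphics 640, 480

Local img:TImage = CreateImage(64, 64)

' LockImage returns the TImage's internal pixmap
Local pix:TPixmap = LockImage(img)
pix.ClearPixels($FF000000)
For Local i:Int = 0 Until 64
	WritePixel(pix, i, i, $FFFF0000)	' red diagonal, ARGB
Next
UnlockImage(img)	' changes get uploaded when the image is next drawn

DrawImage(img, 50, 50)
Flip
WaitKey()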


Russell(Posted 2008) [#4]
Ah, I see. It's too bad that we can't get the address of the buffer in the video card and address it directly...or can we? Would this make an appreciable difference in speed compared to doing the manipulations in system memory and blitting to the video card (as things are now, if I understand correctly)?

Thanks
Russell


Gabriel(Posted 2008) [#5]
"Ah, I see. It's too bad that we can't get the address of the buffer in the video card and address it directly...or can we?"

As far as I'm aware, this physically isn't possible. By which I mean that it absolutely cannot be done, not just that BlitzMax cannot do it. To the best of my knowledge anyway.


Russell(Posted 2008) [#6]
Hmm... I thought video cards were mapped to an actual memory range of the system... But now that I think about it, with GPUs etc. being their own processors, it makes sense for them to have their own local addressing system.

Having grown up on non-Windows systems, such as the Commodore 64 (which has directly addressable video memory), I guess I was confusing the two methods.

So how do you write new code that does super fast things with video, yet still manages to be general enough to work on 'all' video cards? Do OpenGL and DX somehow just access the GPU functions of the video card in a low-level way, or what? Just curious...

Thanks for the info,
Russell

[Edit] An example of very fast video manipulation is MAME, which can manipulate entire screens of pixels at 30fps quite easily. Is this sort of speed unavailable outside of C/C++/Asm?


ImaginaryHuman(Posted 2008) [#7]
I'm kind of a bit miffed with OpenGL not providing a buffer address. I mean, it's supposed to be hardware agnostic and have no hardware-specific stuff in the API, but you would think that once you DID set up a hardware-independent backbuffer through the driver, it might at least provide you with some kind of buffer pointer. After all, you'd think OpenGL knows what it is - unless perhaps the OS doesn't allow it to operate at a low enough level to find out? But then surely there are some OS calls that might tell you.

As to the pixmap thing... the OpenGL documentation refers to main-memory images as `pixmaps`; I don't think it's DirectX terminology so much (totally guessing here). Basically, you now have two areas of memory - main memory and video memory - and because pretty much everything on the GPU has to have direct access to the memory it uses for textures and such, that memory has to be video RAM only. The GPU usually cannot read or write main memory. So you have to have a way of saying that data is within video RAM, i.e. a texture, and an alternative way to talk about memory which is outside the GPU's reach - i.e. main memory, and thus pixmaps (or bitmaps).

This really all stems from having GPUs which need dedicated video RAM to work from, and probably in some cases you could just as easily have been given direct access to video RAM, but I guess it's the way they wanted to design the system.

A pixmap is in main memory and can be accessed by the CPU in your program code, so you can manipulate pixels etc. But pixmaps cannot be displayed by the video hardware. You have to transfer the pixmap image data into a texture within video RAM, i.e. an `Image`, before texture mapping can take place to show you the image (or use DrawPixmap).

Reminds me of the Amiga days, where you only had `chipram`, which could be displayed by the video hardware and also accessed by the CPU. Aren't Intel integrated cards like that? But anyway.

You actually can change textures/images in video RAM directly, but you need to use OpenGL extensions or more recent DX calls.
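
For example, glTexSubImage2D (core GL since 1.1) lets you overwrite a region of an existing texture in place rather than re-creating it. A rough sketch, assuming the GL Max2D driver and RGBA pixmaps - nothing is drawn, it just shows the initial upload and the in-place update:

SetGraphicsDriver GLMax2DDriver()
Graphics 640, 480

Local name:Int
glGenTextures(1, Varptr name)
glBindTexture(GL_TEXTURE_2D, name)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)

' initial full upload into video RAM
Local pix:TPixmap = CreatePixmap(128, 128, PF_RGBA8888)
pix.ClearPixels($FF0000FF)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, pix.pixels)

' later: replace just a 32x32 patch without touching the rest of the texture
Local patch:TPixmap = CreatePixmap(32, 32, PF_RGBA8888)
patch.ClearPixels($FFFFFFFF)
glTexSubImage2D(GL_TEXTURE_2D, 0, 48, 48, 32, 32, GL_RGBA, GL_UNSIGNED_BYTE, patch.pixels)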


Russell(Posted 2008) [#8]
Oh well, guess we'll just have to rely on what OGL and DX expose to us...

[Edit] Now I'm beginning to see the advantage to having 500MB video cards (other than allowing huge/numerous textures and 3D data): No swapping from/to system memory needed!

Russell


ImaginaryHuman(Posted 2008) [#9]
It does help if you want big textures. Not so important for 2D though.


remz(Posted 2008) [#10]
Along with everything that's been said, pixmaps are quite fast when you need direct access to their memory buffer. There is a way to draw a pixmap with OpenGL a bit faster than using DrawPixmap, which does a Y-flip internally, allocating memory, copying lots of bytes, etc.

To sum it up: TImage vs. TPixmap pretty much comes down to what you intend to do.


ImaginaryHuman(Posted 2008) [#11]
Yah, make your pixmaps the correct way up and then do glDrawPixels(). It's pretty silly converting the entire pixmap and flipping it on the fly.


remz(Posted 2008) [#12]
You can say that.
Here's what I use if anyone's interested.
It assumes GL_RGBA, but that can easily be made more flexible. Also of interest: with glPixelZoom(sizex, -sizey) you can have OpenGL draw your pixmap scaled - a million times faster than calling ResizePixmap.

Function FastGLDrawPixmap(p:TPixmap, x:Int, y:Int)
	SetBlend(SOLIDBLEND)
	glDisable(GL_TEXTURE_2D)
	glPixelZoom(1, -1)	' pixmaps are stored top-down, so flip vertically while drawing
	glRasterPos2i(0, 0)	' set a valid raster position at the origin...
	glBitmap(0, 0, 0, 0, x, -y, Null)	' ...then move it to (x, y) without it being clipped
	glPixelStorei(GL_UNPACK_ROW_LENGTH, p.pitch Shr 2)	' pitch is in bytes, row length in pixels
	glDrawPixels(p.width, p.height, GL_RGBA, GL_UNSIGNED_BYTE, p.pixels)
	glPixelStorei(GL_UNPACK_ROW_LENGTH, 0)	' restore the default unpack state
	SetBlend(ALPHABLEND)
End Function
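
Usage would be something like this (a sketch - the GL driver has to be active, "test.png" is just a placeholder file, and the pixmap needs converting to PF_RGBA8888 to match the GL_RGBA assumption):

SetGraphicsDriver GLMax2DDriver()
Graphics 640, 480

Local pix:TPixmap = LoadPixmap("test.png")
pix = pix.Convert(PF_RGBA8888)

Cls
FastGLDrawPixmap(pix, 50, 50)
Flip
WaitKey()

For the scaled version, swap the glPixelZoom(1, -1) line inside the function for glPixelZoom(sx, -sy).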