Are these image sizes safe?

BlitzMax Forums/BlitzMax Programming/Are these image sizes safe?

GfK(Posted 2014) [#1]
I need to have an info panel at the bottom of the screen. It will be 1024x64 pixels. Is that safe on the majority of hardware nowadays, or am I better off having it split into 64x64 images and tiling it?


Derron(Posted 2014) [#2]
It just depends on the max-size of textures a graphics card can handle.

If you are not sure about it, create your own image class that handles splitting too-large images automatically - I think there was a BigImage type posted some years ago.


bye
Ron


GfK(Posted 2014) [#3]
I've already done that and used it in past games. I just wondered if it was a pointless exercise in 2014.


ImaginaryHuman(Posted 2014) [#4]
Most hardware nowadays supports at least a 2048 texture size - even mobile devices, since around the iPhone 4 or earlier.


Calibrator(Posted 2014) [#5]
Interesting question - I was wondering the same recently.

I often read that a texture size of 1024 would be a "good minimum" to reach a bigger audience, but what would I really lose if I worked with a 2048 texture size?
In other words: What minimum PC/Mac graphics hardware would I need for a 2048 texture size?

Background:

I'm dabbling with a tile-based game and will use 64x64 pixel tiles (not smaller, except for "missiles" and other effects). Using a 1024x1024 texture would only allow for 256 tiles, though.

A texture size of 2048x2048 would naturally increase that to 1024 tiles - which would be much better: more animations, a general library of landscape and object tiles and, last but not least, simpler programming.
I'm speaking strictly of "pure" BMax/Max2D commands like DrawImage() etc. here, and I'd like to avoid unnecessary slowdowns caused by excessive texture swapping behind the scenes.
Obviously, a larger texture size increases RAM usage as well, but that seems to me to be the smaller problem.


xlsior(Posted 2014) [#6]
Correct me if I'm wrong, but I thought that Blitzmax by default wasn't smart enough to retain the texture in between multiple drawing commands?

I seem to remember that there were some 3rd-party tweaks necessary to actually get any speed benefit from cramming all of them into a single image.


Floyd(Posted 2014) [#7]
Many years ago, some cards/drivers couldn't deal with a width and height that were very different.
A 1024x1024 texture might work, but not 1024x64.

I have no idea if this is still true.


BLaBZ(Posted 2014) [#8]
You can query the user's machine to determine the max texture size -

You may need to change the drawing to OpenGL -

Local s:Int
glGetIntegerv( GL_MAX_TEXTURE_SIZE, Varptr s )


Also, I'm pretty sure BlitzMax resizes images for OpenGL

i.e. an image sized 1024x64 will actually be stored in graphics memory as 1024x1024


Kryzon(Posted 2014) [#9]
General texturing optimizations for both the APIs offered by Max2D can be seen here:

OpenGL:
http://www.mesa3d.org/brianp/sig97/perfopt.htm#Texturing

Direct3D 9:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb147263(v=vs.85).aspx#Texture_Size

What you can do is compare the desired image size with the maximum size supported by the hardware. If it's bigger than the maximum, then split it into something smaller. This is done at the pixmap level - no TImages being used yet. If the splitting is necessary, you can easily create two pixmaps by splitting a bigger one, and then create images from those pixmaps.
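As a rough illustration of that splitting (just a sketch - "panel.png" and the maxTextureSize value are hypothetical, the latter standing in for the result of one of the queries further below), you could cut a wide pixmap into column strips with PixmapWindow() and create an image from each:

Graphics( 800, 600 )

Local maxTextureSize:Int = 2048 'Hypothetical limit; query the real value as shown below.

Local source:TPixmap = LoadPixmap( "panel.png" ) 'E.g. a 4096x64 panel.
Local strips:TList = CreateList()

Local x:Int = 0
While x < PixmapWidth( source )
	Local w:Int = Min( maxTextureSize, PixmapWidth( source ) - x )
	
	'PixmapWindow() gives a view into the source pixmap, and LoadImage() accepts a TPixmap.
	Local part:TPixmap = PixmapWindow( source, x, 0, w, PixmapHeight( source ) )
	strips.AddLast( LoadImage( part ) )
	
	x :+ w
Wend

'Draw the strips side by side so they appear as a single panel.
Local drawX:Int = 0
For Local img:TImage = EachIn strips
	DrawImage img, drawX, 0
	drawX :+ ImageWidth( img )
Next

Flip
WaitKey()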
One can use the following API calls to find out the largest texture dimensions supported by the hardware.

OpenGL:
Note that the OpenGL method below is more reliable than querying the GL_MAX_TEXTURE_SIZE constant, which the OpenGL documentation regards as an "estimated value" because it does not take the pixel format into account.
SetGraphicsDriver( GLMax2DDriver() )
Graphics( 800, 600 )

Local maxTextureSize:Int

Local tempSize:Int = 65536 'Start with a prohibitively large power-of-two size.
Repeat
	Local t:Int

	'Test if a dummy texture can be created with a large size.
	
	glTexImage2D( GL_PROXY_TEXTURE_2D, 0, 4, tempSize, tempSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, Null )
	glGetTexLevelParameteriv( GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, Varptr t )
	
	If t Then 
		maxTextureSize = tempSize 'Found the biggest size supported.
		Exit
	EndIf	
	
	If tempSize = 1 Then RuntimeError "Unable to calculate maximum texture size."
	If tempSize > 1 Then tempSize :/ 2
Forever

Print maxTextureSize 'This is the maximum size for the width and height.

End

Direct3D 9:
SetGraphicsDriver( D3D9Max2DDriver() ) 
Graphics( 800, 600 )

Local d3d9:IDirect3D9 = D3D9GraphicsDriver().GetDirect3D()

Local myCaps:D3DCAPS9 = New D3DCAPS9
d3d9.GetDeviceCaps( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, myCaps )

'Capability names obtained from 'pub.mod\directx.mod\d3d9.bmx.'

Print myCaps.MaxTextureWidth
Print myCaps.MaxTextureHeight

'The following is the maximum value that you can have in one dimension 
'of a texture when the other dimension is '1.' 
'For example, if the value is '4096,' you can make textures with a ratio of
'4096 : 1.'
'The ratio isn't necessarily the maximum width or height: You can have
'a maximum width or height of '8192' but a maximum ratio of '4096 : 1.'
'This would mean that you can have textures of actual sizes of 
'4096 x 1' or '8192 x 2,' but not '8192 x 1.'

Print myCaps.MaxTextureAspectRatio

'OpenGL does not have a 'maximum aspect ratio value,' so you can assume
'it is the maximum width or height supported: 'maxTextureSize : 1.'

myCaps	= Null
d3d9	= Null

End

Correct me if I'm wrong, but I thought that Blitzmax by default wasn't smart enough to retain the texture in between multiple drawing commands?

I have looked at the source to verify the following.
Independently of the driver used, Max2D keeps the last texture used between multiple drawing commands.
Consecutive DrawImage() calls with the exact same image only bind and enable the texture for that image on the first call. So it would be wise to order one's drawing calls so that repeated images are drawn consecutively and benefit from this.
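For instance (a hypothetical sketch - 'grass' and 'water' stand for any two TImages), grouping repeated draws like this binds each texture only once instead of swapping back and forth on every call:

For Local i:Int = 0 To 9
	DrawImage grass, i * 64, 0 'Texture bound on the first call, then reused.
Next
For Local i:Int = 0 To 9
	DrawImage water, i * 64, 64 'A single texture swap, then reused again.
Next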

Also, I'm pretty sure BlitzMax resizes images for OpenGL.

If you load an image that is already of a power-of-two size and that size is supported by the hardware, no scaling is done, regardless of the driver.
If one of the dimensions of the image is not a power-of-two, the texture that will contain the pixmap is padded up to the next highest power-of-two value. The pixmap is still not scaled - the texture that contains it is just bigger (with unused data). There's no visual difference from the original image, and collisions still work as usual.
The only occasion where a pixmap is scaled (that is, where what you loaded differs from what is displayed) happens with the OpenGL driver: if the image you loaded or created from a pixmap has a size (power-of-two or not) that exceeds the maximum supported by the hardware, the pixmap is scaled down to an acceptable size. With the Direct3D driver, it tries to create the texture anyway and throws an error if that fails.
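As an aside, the rounding involved is a simple next-power-of-two calculation. A minimal sketch of the idea (the drivers use a similar Pow2Size() helper internally):

Function Pow2Size:Int( n:Int )
	Local size:Int = 1
	While size < n
		size :* 2
	Wend
	Return size
End Function

Print Pow2Size( 1000 ) 'Prints 1024.
Print Pow2Size( 60 )   'Prints 64.
Print Pow2Size( 64 )   'Prints 64 - already a power of two, so it is preserved.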

Many years ago, some cards/drivers couldn't deal with a width and height that were very different.
A 1024x1024 texture might work, but not 1024x64.

For non-square textures (textures with differing width and height, although both are power-of-two), it seems that graphics cards from the GeForce 2 series, its peers and earlier do require that the width and height of a texture be equal. But those also have other strange requirements, such as maximum texture sizes of 256x256 etc.
But those are very old cards (2002 and earlier).
If your game uses power-of-two, non-square textures with either of the official graphics drivers for Max2D, it should be supported by most systems from 2003 onwards, which is quite a broad audience.

Further reading:
- Ogre3D forum post
- GameDev Net forum post
- http://www.hard-light.net/forums/index.php?topic=973.10;wap2


xlsior(Posted 2014) [#10]
For the OpenGL driver only, if the image exceeds the supported hardware size then it is downscaled to acceptability.


Of course, that would mean muddy/blurry sprites in that case, I presume.


Calibrator(Posted 2014) [#11]
Thanks folks, especially Kryzon for providing the test routines.

I modified the OGL one a bit to make it easier to test it on other machines:

SetGraphicsDriver GLMax2DDriver()

Graphics 800,600

DrawText "GLMax2DDriver",50,100

Local tempSize:Int = 16384    'Start with a prohibitively large power-of-two size.
Local maxTextureSize:Int

Repeat
	Local t:Int

	'Test if a dummy texture can be created with a large size.

	glTexImage2D( GL_PROXY_TEXTURE_2D, 0, 4, tempSize, tempSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, Null )
	glGetTexLevelParameteriv( GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, Varptr t )

	If t Then
		maxTextureSize = tempSize 'Found the biggest size supported.
		Exit
	EndIf

	If tempSize = 1 Then RuntimeError "Unable to calculate maximum texture size."
	If tempSize > 1 Then tempSize :/ 2
Forever

DrawText "MaxTextureSize = " + maxTextureSize, 50, 150    'This is the maximum size for the width and height.

DrawText "(Hit a key to close the window)",50,400

Flip

WaitKey
EndGraphics

End


A GeForce 770 on Win7 reports 16K (very likely the maximum - increasing the initial tempSize value doesn't change it), while an old XP PC from 2006 with an Intel 945GM Express chipset yields 2K.

Given that my game is still right at the beginning of development I can comfortably live with a 2K maximum...


ImaginaryHuman(Posted 2014) [#12]
Just one thought... if you are targeting desktops, pretty much the minimum desktop resolution now is at least 1024x768 (although higher is more popular), and displaying such a resolution REQUIRES a texture size of at least 1024 (at least from what I've seen - I don't think I've ever encountered a resolution larger than the max texture size)... so it would tend to suggest 1024 is at least very widely supported. I don't think you have to worry about anything smaller.


therevills(Posted 2014) [#13]
and pretty much the minimum desktop resolution now is at least 1024x768

Not quite true... there are quite a few laptops which have a maximum resolution height of 600...


As to the original question, I would have thought 1024x64 would be blown up to 1024x1024 by BMax... To be safe I think I would split it into smaller images - "GrandMa" hasn't updated her PC in decades.


ImaginaryHuman(Posted 2014) [#14]
Look at Unity's web player stats or Steam's hardware survey - there's probably less than 1% of users with anything less than 1024x768.


GfK(Posted 2014) [#15]
there's probably less than 1% of users with anything less than 1024x768.
It's actually 1.5%. But that can add up to a hell of a lot of potential customers.


Gabriel(Posted 2014) [#16]
If your game uses power-of-two, non-square textures with either of the official graphics drivers for Max2D, it should be supported by most systems from 2003 onwards, which is quite a broad audience.

I don't think that's what Floyd was referring to. There was an issue on hardware which does support NPOT textures, where they were only supported up to a maximum ratio of 8:1 or 1:8 width/height.

Since 1024 to 64 is a 16:1 ratio, that would cause a problem if the issue still exists. Like Floyd, however, I have no idea whether it's still an issue. I pack all of my odd-shaped images into 2048x2048 texture atlases for performance reasons. I doubt it's an issue on modern hardware, but I wouldn't want to say so with 100% certainty.


col(Posted 2014) [#17]

I would have thought 1024x64 would be blown up to 1024x1024 by BMax...



Nah...

From what I remember, pixmaps and textures are handled in this way...

ALL textures and pixmaps are initially loaded and stored in Max2D as pixmaps.
When it comes to drawing a pixmap, it's pretty much a byte copy to the display surface.


EDIT: This isn't explained very well - Kryzon's explanation is clearer and correct.
Now when it comes to textures (TImage) it's a completely different story :)
When it comes to drawing a TImage, remember that the graphic file was loaded as a TPixmap and now needs to be uploaded to the GPU as a texture. If one, or both, of the pixmap dimensions is SMALLER than its nearest power of 2 (images are not down-sized, only up-sized if needed), then a copy of the pixmap is created and rescaled to those nearest power-of-2 dimensions, and this copy is then uploaded to the GPU to be used as the texture that you'll end up seeing. So if you load an image of 1000x60, it will be resized as a pixmap to 1024x64 and that temporary pixmap is passed to the GPU. You may be thinking about aspect ratios at this point, but that's taken care of by the texture being displayed on a two-triangle quad at the size of the original pixmap dimensions.

And after that...

Whether mipmaps are generated or not depends on the flags used, and if they are, the mipmaps are generated manually as separate pixmaps too. As I'm sure you know, each mipmap is the same image reduced to half the width and half the height of the previous one. The mipmaps keep being halved until they are 1x1 pixels. Each mipmap is loaded into the appropriate GPU texture level as it's generated.
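As a rough sketch of that halving (just an illustration printing the mipmap chain for a 1024x64 texture, with each dimension clamped at 1):

Local w:Int = 1024, h:Int = 64
Repeat
	Print String( w ) + " x " + String( h ) '1024 x 64, 512 x 32, ... down to 1 x 1.
	If w = 1 And h = 1 Then Exit
	w = Max( w / 2, 1 )
	h = Max( h / 2, 1 )
Forever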

This all happens during the first DrawImage call that uses the TImage. The newly created GPU texture is stored with the TImage and used in any further calls with that TImage - i.e. its texture is not created every frame, just on the very first one. EDIT: Actually, the texture is saved in a TImageFrame, which is driver-specific; the TImage instance is queried for the appropriate TImageFrame, which is then drawn using the appropriate driver.


Kryzon(Posted 2014) [#18]
Hello.

There was an issue on hardware which does support NPOT textures, where they were only supported up to a maximum ratio of 8:1 or 1:8 width/height.

Since 1024 to 64 is a 16:1 ratio, that would cause a problem if the issue still exists. Like Floyd, however, I have no idea whether it's still an issue. I pack all of my odd-shaped images into 2048x2048 texture atlases for performance reasons. I doubt it's an issue on modern hardware, but I wouldn't want to say so with 100% certainty.

Since the Direct3D 9 driver has that 'MaxTextureAspectRatio' capability field, we can assume that hardware supporting D3D9a and above does not have that limitation - or at least, you can verify whether it does (that field would then report a value of '8').
In any case, when Max2D is unable to create a texture for an image, the OpenGL driver throws a RuntimeError and the Direct3D 9 driver writes the error to the StdOut stream (that is, it writes to the console).
The error can be intercepted.

If one, or both, of the pixmap dimensions is SMALLER than its nearest power of 2 (images are not down-sized, only up-sized if needed), then a copy of the pixmap is created and rescaled to those nearest power-of-2 dimensions,

This is incorrect. EDIT: It's correct, actually. Both dimensions are always converted to a power-of-two value, but if a dimension is already a power of two, it is preserved (see the Pow2Size() function on both drivers).
So it's not any smaller value that is modified - the value would also have to be a non power-of-two.

An image can be downscaled (in that you lose detail) with the OpenGL driver if the ideal power-of-two dimensions found for it (whether preserved from the original image dimensions or computed as the next nearest values) are so large that they're not supported by the hardware.
GLMax2D uses that "proxy texture" mechanism to test if the texture can be created, and if it can't - even with a perfect power-of-two size - the pixmap is scaled down.

So if you load an image of 1000x60, it will be resized as a pixmap to 1024x64 and that temporary pixmap is passed to the GPU. You may be thinking about aspect ratios at this point, but that's taken care of by the texture being displayed on a two-triangle quad at the size of the original pixmap dimensions.

To make sure that no confusion is being made here: what both drivers do in this case is take the original 1000x60 pixmap, paste it in the top-left corner of a new blank 1024x64 pixmap, and then convert this new pixmap to a texture whose UV values capture only the 1000x60 region. The original image resolution is preserved, as only the 1000x60 portion is drawn. The rest of the pixels are not visible.
If you load an image with a size of 256x512, for example, it would yield a pixmap and consequent texture of 256x512. The image is already sized as a power-of-two.
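To make that padding step concrete, here is a rough sketch of what the drivers effectively do with the pixmaps ('panel_1000x60.png' is just a hypothetical example file):

Local source:TPixmap = LoadPixmap( "panel_1000x60.png" )
Local padded:TPixmap = CreatePixmap( 1024, 64, PF_RGBA8888 )

padded.ClearPixels( 0 )      'Blank padding pixels.
padded.Paste( source, 0, 0 ) 'Original pixels pasted into the top-left corner.

'The 'padded' pixmap becomes the texture, but the image's UV coordinates only
'cover the 1000x60 region, so nothing is scaled and the padding never shows.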

For more clarification, the following is what happens on both drivers when you load an image that is not a power-of-two:



col(Posted 2014) [#19]
We are saying the same thing; you worded it better. If a pixmap dimension isn't a power of 2, then it is scaled up to its nearest power of 2 - this applies to both dimensions. The dimension is always scaled up, though, never down, via the Pow2Size() function.


So it's not any smaller value that is modified - the value would also have to be a non power-of-two.


I don't understand what you're querying here? Pow2Size() checks if the value is smaller than its nearest-but-higher power of 2 - is that not what I've written?



So if you load an image of 1000x60, it will be resized as a pixmap to 1024x64 and that temporary pixmap is passed to the GPU.


I stand corrected here, what you show in your pics is the correct method.

In my defense, I was recalling from memory :P After checking, it turns out I was wrong!


Kryzon(Posted 2014) [#20]
Hello.
I must have read it wrong; on that point you are correct indeed.

Regards.


TomToad(Posted 2014) [#21]
Wait! You mean it isn't all done by magic? :D

Actually, I knew that graphics cards had a size limit, especially older ones. I never even thought of the possibility of an aspect ratio limit. Am I right in thinking that if I have an anim strip that is 32 pixels wide but 2048 pixels long, I should be OK if it is loaded as 32x32 pixel frames? I believe BMax divides the image into several individual images with LoadAnimImage() - or is it something I need to consider if I want my programs to run on older hardware?


GfK(Posted 2014) [#22]
Right, I think I'm going to stick with my tiledImage class. I think we're all agreed that splitting it into 128x32-pixel tiles is mostly safe, yes? (The original image is actually 1024x96.)

I figure if somebody hasn't updated their PC in the last fifteen years, then they're not likely to be buying games for it anyway.


Derron(Posted 2014) [#23]
Why split it into 128x32?

I thought 2x 512x64 + 2x 512x32 would do too. Remember to minimize draw calls and texture swaps, so use as few portions as possible.

That "tiledImage" class should take care of it by itself.

bye
Ron


GfK(Posted 2014) [#24]
Well, currently all the cells have to be the same size since it just uses LoadAnimImage. I could push it to 256x32 but then I'm right on the 8:1 ratio.
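For reference, that approach boils down to something like this (a hypothetical sketch for the 1024x96 panel split into 128x32 cells, i.e. 8 x 3 = 24 frames; "panel.png" is a placeholder name):

Local panel:TImage = LoadAnimImage( "panel.png", 128, 32, 0, 24 )

'Reassemble the panel by drawing each cell back at its original position.
For Local i:Int = 0 To 23
	DrawImage panel, (i Mod 8) * 128, (i / 8) * 32, i
Next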

If you want to demonstrate how the tiledImage class could handle cells of varying sizes whilst taking into account scale and image handle position, then I'm all ears.


Derron(Posted 2014) [#25]
This must be something I have found years ago somewhere here on the forums.

It takes care of scale - feel free to add the image handle.




I do not use this code currently, as I minimize texture swaps - which gets foiled by this approach.

EDIT: Some of the code was replaced by individual functions - it should be no problem to rewire those parts with functions of yours; I don't know if the original code contained something there.


bye
Ron


ImaginaryHuman(Posted 2014) [#26]
If it was me I wouldn't bother splitting it at all, I'd just use the 1024 texture. Why try to support 15-year old pc's? Most game releases these days barely even support anything older than 5-10 years.