Scrolling Background

BlitzMax Forums/BlitzMax Programming/Scrolling Background

Sean Doherty(Posted 2005) [#1]
I am making a top-down shooter and I would like to make a scrolling background. Is there an easy way to do this with a single bitmap, where I can just specify an offset and the bitmap would wrap, or do I have to create a bunch of tiles?


Perturbatio(Posted 2005) [#2]
Unless every part of your background needs to be unique, tiles are the better option.


Dubious Drewski(Posted 2005) [#3]
You could cut a large bitmap into tiles and you'd get the
effect you want, and every part of it could be unique. It
would just take a lot of video memory.


smilertoo(Posted 2005) [#4]
Using one big bitmap is a terrible way of doing scrolling; use tiles.


Will(Posted 2005) [#5]
I think what you want is the TileImage(image:TImage, x, y) command.

These people are right if you have a complex project or other things slowing it down, but if this is a really simple game, TileImage should be fine.
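The wrap arithmetic behind a tiling draw like that is simple modulo math. A rough sketch (Python for illustration only, not BlitzMax; the function name is made up):

```python
def wrap_draw_positions(off_x, off_y, img_w, img_h, screen_w, screen_h):
    """Positions at which to draw one image so it wraps/tiles across the screen."""
    # Normalize the scroll offset into [0, img_w) x [0, img_h).
    ox, oy = off_x % img_w, off_y % img_h
    positions = []
    y = -oy
    while y < screen_h:
        x = -ox
        while x < screen_w:
            positions.append((x, y))
            x += img_w
        y += img_h
    return positions
```

For a screen-sized image this yields at most four draw positions per frame, which is why a single wrapping bitmap can be cheap to draw.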


Sean Doherty(Posted 2005) [#6]
Is there a background tile-splitting routine around here?


RktMan(Posted 2005) [#7]
I'm kind of curious about people's thoughts on this subject.

So far all people have said is "it's a bad idea", without any information as to why.

I'm doing a scrolling shooter, where the background is a "map" and the player can go any which way within the bounds of the map.

I actually started off writing some code/routines to do tiles and a "scrolling viewport".

I ran into a couple of issues with this design:

a) I was using the technique others had described elsewhere of "tiling" the background, then doing a CreateImage() of the backbuffer to get a static image of the "viewport".

This was fast while the player was within the static image's limits, but when the player pushed the edge of the map and it had to rebuild, it was _dog_ slow.

b) I'm not an artist, and it seemed like the artwork would be a pain (trying to get individual tiles to line up together and look right, etc.).

So... I'll ask the question:

_Why_ is having a single large image as a map for your shooter a bad thing?

Strictly memory?

It can't be for performance - my scrolling routine now has _great_ performance when I'm scrolling over a large, static image, and slows to a crawl when I'm assembling tiles between frames.

I created a PNG just for a test, 4096x4096 in size. I don't have a lot of colors and stuff in it right now... but initially, the size on disk doesn't seem too unreasonable.

I would appreciate any definitive information on this subject.

Tony


smilertoo(Posted 2005) [#8]
Yes, it's mainly a memory issue. And the 4K image limit also limits map size if you use one image.


RktMan(Posted 2005) [#9]
What is the 4K image limit?


ImaginaryHuman(Posted 2005) [#10]
I expect that most graphics cards have some kind of maximum texture size limit. Some older cards can't do a texture bigger than 256x256. Apart from hitting those kinds of limits, if you can work around that, I don't see any reason why you can't have a bigger image and scroll it around.

Scroll techniques with OpenGL and such can be a bit trickier than the old-school methods. It's mainly because you don't have a hardware pointer that you can just change to scroll the screen, and usually the whole backbuffer has to be reassembled every frame because it gets trashed.

There are several different scrolling techniques.

One that you might consider is to have a texture (image) which is a little taller and wider than one full screen, e.g. 640+64 x 480+64 (or whatever the next nearest multiple of 64 is, depending on whether you break things up into smaller tiles). Then, when you scroll, let's say down and to the right, you just move the coordinates of where your `top left` is in the image. You of course then have to draw your image in four parts, letting things wrap off the right and left edges and the top and bottom edges. You then use the `hidden` extra strip down the edges to copy in strips of new graphics from main memory or from another texture. So if you are moving down and to the right, you replace the strips at the top and left with new graphics, and they eventually scroll into view. You keep the memory uploads to a minimum by only uploading the smallest strip possible each frame, which is important because the bus bandwidth is low and uploading (to the existing texture) uses the CPU.
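The `draw in four parts` step above is just rectangle arithmetic. A sketch of it (Python for illustration only; returns up to four (src_x, src_y, w, h, dst_x, dst_y) copy rectangles):

```python
def wrap_split(view_x, view_y, view_w, view_h, tex_w, tex_h):
    """Split a wrapping view into up to four source rectangles
    inside a tex_w x tex_h texture."""
    vx, vy = view_x % tex_w, view_y % tex_h
    # Horizontal spans: (src_x, width, dst_x)
    xs = [(vx, min(view_w, tex_w - vx), 0)]
    if view_w > tex_w - vx:                      # view runs off the right edge
        xs.append((0, view_w - (tex_w - vx), tex_w - vx))
    # Vertical spans: (src_y, height, dst_y)
    ys = [(vy, min(view_h, tex_h - vy), 0)]
    if view_h > tex_h - vy:                      # view runs off the bottom edge
        ys.append((0, view_h - (tex_h - vy), tex_h - vy))
    return [(sx, sy, w, h, dx, dy)
            for sy, h, dy in ys
            for sx, w, dx in xs]
```

When the view straddles both wrap edges you get all four parts; otherwise one or two.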

I don't think there's any problem with using a really big bitmap if that's what you want to do. It's the only way to go for games like Worms and Scorched Earth, for example. You can't really use repetitious tiling there.

As far as changing that superbitmap and storing changes goes, you would have to spool those off to main memory as they occur - or have your own software drawing routines which draw in main memory on an identical bitmap and then upload parts of it as it scrolls.


LarsG(Posted 2005) [#11]
RktMan: you don't have to create an image every time you go past the screen size...
If you do it that way, then I'm pretty sure you've misunderstood the way tiles are used and drawn to the screen.

I'm sure there is some tile-drawing code on these boards, but if you can't find any, I can probably help you with the basics of making a tile engine. (I think I've got some code lying around.)


RktMan(Posted 2005) [#12]
@LarsG :

I must have misunderstood, then. I actually based some of my work on an example from these boards. The example showed how to do a "tiled" map: it drew the tiles to the buffer, then created an image to be used statically until the boundaries of the "view" changed, at which time it would redraw the backbuffer with the changed set of tiles, capture the image, and store the result.

In my implementation, this seemed to be very fast when scrolling around on top of the captured image, but was brought to its knees when the map was scrolling, because it was doing a capture every time an edge moved past the image boundary.

This was without any optimization on my part, so I know it could have been better.

In any event, I quickly realized that this methodology wasn't quite what I wanted for my game.

I wanted a more colorful and diverse background for my scroller - a "map" for my top-down shooter that was more like a painting of the terrain, rather than an assembled set of tiles.

So, this is where I am now: trying to understand the pros/cons of having "one big image" as the "lowest layer" of my shooter.

I still don't understand the "texture sizes" issue.

I've seen folks talk about "texture size" and what video cards can handle. How does this affect me?

I'm pretty sure, for instance, that even with the "scrolling tiles" code I had working, I was exceeding 4K and/or 256x256 for the captured image, and this seemed to have no effect on how my code worked.

I assume that, generally, power-of-two sizes make things more efficient for the computer/video card, and that not exceeding the amount of texture memory on the video card is a good thing - but other than those issues, how important is it for me, the developer, to worry about the details of how a given video card deals with textures?

Thanks for all of the input so far.

Tony


ImaginaryHuman(Posted 2005) [#13]
Hey Tony,

I don't know about DirectX but I can describe from an OpenGL viewpoint.

Originally, OpenGL started off by supporting textures that are 64x64. That was considered the standard minimum allowable size for a given texture. I guess they based that on thinking it would be supported on as much graphics hardware as possible across the board. So what this means is that even if you want an image that is really small, smaller than 64x64, it has to be stored `within` a 64x64 texture - and generally the rest of the pixels will be wasted.

As graphics hardware advanced it tended to support bigger texture sizes, although usually they would still be powers of 2 and always square - probably related to how the hardware was set up and how it optimized based on knowing the texture size. So nowadays on new cards you can have much bigger textures, and maybe your card even supports 2048x2048 textures without any problems, but the same is not true of all cards. So while something might work with that size texture for you, it might fail for other people, especially on older gfx cards. Generally speaking, 256x256 is a pretty good size that you can trust most people's gfx hardware to support without things breaking. So if you really want to be backwards compatible and reach as many people as possible with the same code, you'd have to break things down into 256x256 `tiles`, i.e. lots of smaller textures rather than one big one.

As far as texture sizes go, powers of two are generally faster to deal with in binary and with simple math than unusual dimensions. Also, earlier OpenGL only supported square textures. Higher versions of OpenGL added support for rectangular textures, but that's not part of the OpenGL 1.2 basic command set supported by BlitzMax - you'd have to use `extensions` to the OpenGL system, which may or may not be supported by all gfx cards and platforms.

So, if you try to turn a pixmap into an image, or load an image, or create an image, and you provide dimensions that are not an exact power of two (from 64 upwards), BlitzMax has no choice but to use the `next biggest` power-of-two size so that your image can fit within it. And yes, it wastes the extra memory space. In particular, a 640x480 image has to be stored in a 1024x1024 texture, which wastes a lot of space.

To draw an image which is actually meant to be smaller than the texture it is stored in (like 640x480), BlitzMax just creates a quad (two triangles) on the screen and sets its texture coordinates to wherever the corners of the image are, so that the wasted-space part is ignored. You end up only seeing the image you created, and the wasted space is not displayed.

So you have to bear in mind that if you indirectly `cause` a texture to be used that is bigger than the image area that you want to work with, video memory is being wasted. Also bear in mind that especially older hardware does not support really big single textures, so like 2048x2048 and especially 4096x4096 may well not be supported on many systems.

Now, something that you want to try to do, if you can make or find routines that help you with it (I think there's something in the archives or other forums), is to use any `wasted space` on a texture to store other images, to use up the `real estate` as much as possible. So, say you had small bullet images: you could probably store 16 of them on a 64x64 texture. Then there's no wasted space, and it's not having to make a whole 64x64 texture for every single bullet frame. Then you just make a textured quad for whichever image you want displayed, with the appropriate `texture coordinates` within the texture. Again, those codes/forums may help.
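The `texture coordinates` for a frame packed into a grid like that fall out of simple grid math. A sketch (Python for illustration; the 16-pixel cell and 64-pixel texture sizes match the bullet example above but are otherwise assumptions):

```python
def frame_uv(n, cell=16, tex=64):
    """Normalized (u0, v0, u1, v1) texture coords of frame n in a grid
    of cell x cell frames packed into a tex x tex texture."""
    cols = tex // cell            # frames per row (4 here)
    px = (n % cols) * cell        # pixel position of the frame's top-left
    py = (n // cols) * cell
    return (px / tex, py / tex, (px + cell) / tex, (py + cell) / tex)
```

You'd feed those four values to the quad's corner texture coordinates instead of 0..1.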

You also want to bear in mind that in order for OpenGL to use an image to draw anything with, ie to put it on the screen, it has to be usually transferred from main memory to graphics memory. That transfer is done with the CPU as a basic memory copy - except it also involves some rearranging of the data to turn it into the correct format/size etc. That tends to be time intensive, and certainly slower than how the graphics card usually draws stuff. The same is true of DrawPixmap - it's using the CPU to draw `over` the graphics bus (between main mem and video mem), and generally speaking the graphics bus is somewhat slower than the speed at which the gfx card can draw stuff.

So you want to try to limit transfers between main memory and graphics memory as much as possible. If there is some way that you can use OpenGL only as a `display system`, rather than using it to draw things that you want to read back in, that will help with your speed. You may then have to create some other routines (or find some) that do some drawing in main memory, to pixmaps or arrays or whatever, which you can then read from with the CPU much more efficiently than if you tried to get that information from video memory. The thing is, if you wanted to `download` image data from video memory to main memory, such as with GrabPixmap, it again goes over the graphics bus (which is slow), using the CPU, and that is slower than the gfx card's drawing speed by quite a margin.

Also you would probably want to try to avoid having to `create` new images (textures) on the fly, because it has to do a memory allocation and initialization, plus probably an upload of data over the bus or a grabbing of data from the backbuffer. That takes quite some time. If you can set up a static/dummy texture to keep, over time, and then re-upload to it or re-grab to it, that is faster than making whole new ones. To do this you probably will have to bypass Max2D and work with OpenGL directly.

Because the image being displayed/drawn by OpenGL is somewhat separated from the CPU and main memory, off in video memory on the gfx card, you generally want to limit having to cross that bridge as much as you can. They have things like display lists and vertex arrays which are more advanced ways to avoid constantly `uploading` stuff. Also if you want to `sense` the images you create, in some way, like reading pixels, detecting pixel collisions, searching for colors, or drawing some kind of `permanent change` to a backdrop, you generally run into some problems. Reading data, with the CPU, means again that data has to be passed over the graphics bus which is slow. If you are going to be sensing the presence of boundaries with some kind of pixel reading, it's probably going to be faster to read that in main memory FROM a main-memory-based pixmap. Especially if you are going to be reading a lot of pixels, or doing full-screen changes to pixels in a way that you can't do on the gfx card, you should do it to a main-memory pixmap.

So you might want to think about what kind of engine you are going to make. If you are only going to use OpenGL as an OUTPUT system, a `rendering library`, and you have everything else you need stored in main memory for detecting for example a tile map or image boundaries, etc, then it should work pretty efficiently and effectively. If you want to get INPUT from the images you draw, or you want to take the backbuffer and read from it - ie to make a permanent change to a background or something, then you really ought to think of other ways to get the information that you want - preferably by keeping a copy of your bitmap/game world in a pixmap and using that for reading.

What you could do is then just think of OpenGL as a display device, and as you `scroll` the display, upload only the smallest amount of `new pixels` at the edges of the scroll area each frame. You can use the `offline` bitmap to store your permanent changes or destructed landscapes or whatever, and just upload those in small chunks (tiles) to the gfx card for display. That's one way of doing it.
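The per-frame cost of those edge uploads is just the two strips minus the corner they share. A sketch (Python for illustration):

```python
def strip_upload_pixels(dx, dy, view_w, view_h):
    """Pixels to upload per frame when the view scrolls by (dx, dy):
    a vertical strip |dx| wide, a horizontal strip |dy| tall,
    minus the corner that would otherwise be counted twice."""
    dx, dy = abs(dx), abs(dy)
    return dx * view_h + dy * view_w - dx * dy
```

At a few pixels of scroll per frame this is a tiny fraction of re-uploading the whole view, which is the whole point of the technique.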

If you don't need to store big changes, you could just keep your static background in one or more textures, but that does mean using up a lot of video RAM. Bear in mind that older or cheaper gfx cards generally have less video RAM. Mine has 64MB, for example, but my other iBook only has 2MB of graphics memory (it doesn't have much hardware acceleration, so it uses a software OpenGL device with main memory for textures - and is really slow).

I hope that sheds some more light on it.


Hotcakes(Posted 2005) [#14]
http://www.blitzbasic.com/codearcs/codearcs.php?code=1440
The only code you'll ever need. It splits large bitmaps into definable power-of-two sizes, and handles rotation and scaling as if it were one large image.


RktMan(Posted 2005) [#15]
@AngelDaniel

that was an _awesome_ post with a superb amount of detail.

thank you very much.

i've re-read your post a couple of times now, trying to digest your content and take some notes, so i can think through my design.

one thing i wanted to confirm, if you have any more characters left in your bit bucket for me is to confirm or deny something i saw posted somewhere else :

the difference between a TImage and a TPixmap is that a TImage is in video memory and a TPixmap is in main memory.

So, to some of your points: when manipulating one of these types vs. the other, is that where you have to be careful about mutations, because that is where the implicit copying of the data across the graphics bus occurs?

again, thanks loads for the information you posted, it was extremely helpful to me.

@Vanilla

thanks for the code link.

i'm digesting it as well.

cheers,
Tony


TartanTangerine (was Indiepath)(Posted 2005) [#16]
the difference between a TImage and a TPixmap is that a TImage is in video memory and a TPixmap is in main memory.

Actually, when you draw a pixmap the pixels are blitted to the backbuffer. When you draw an image, BMax creates a textured quad.

Also, if you are going to use a large image to contain all your tiles, do not use the built-in anim image commands. BMax will create a new texture for each of the image frames - THIS IS BAD. BMax will switch textures/surfaces every time you draw a different frame - THIS IS CRAZY.

You need one of these > http://www.blitzbasic.com/Community/posts.php?topic=51647
The above code will save you memory and increase the speed of your render.


Scott Shaver(Posted 2005) [#17]
Sean, you might take a peek at this:

http://www.scottshaver2000.com/forum/viewforum.php?f=17

It uses a map editor, and the maps can be displayed any place on the screen with any viewport size. Hope it helps. Maybe the code will give you some ideas.


xlsior(Posted 2005) [#18]
@AngelDaniel: Very informative, thanks for taking the time to write all that!


ImaginaryHuman(Posted 2005) [#19]
You're welcome.

I think Indiepath summed up the answer to the question about memory, and made a good point about trying to put all the `frames` of an animation into a single texture - the reason being that switching to a different texture for drawing takes extra time. And like Indie said, when you draw a pixmap, it uses a glDrawPixels() call which `uploads` the pixels to the backbuffer memory in video RAM, using the CPU. When you draw an image, the gfx hardware zaps it from video memory into the backbuffer, also in video memory. DrawImage can therefore be 10-100 times faster than DrawPixmap.

Image=Texture=Backbuffer=Hardware-Accelerated=Video Memory Based

Pixmap=Bitmap=Array=CPU-Processed=Main Memory Based

I think that just generally you have to think about what you need to do to make your engine work. If you don't need to read from the background and you don't need to keep a permanent change that's not part of a tile map system, things are pretty easy. OpenGL is highly geared towards DRAWING stuff, it's really quite poor with regards to READING stuff. If you want to read stuff you are most likely better off reading it straight from main memory, rather than transfer it - which means keeping two versions of your game world - one `backup` in main memory, possibly featuring only the minimal amount of pixels/information you need for your purposes - and one `displayable` in video ram that you use to make something to phsyically look at.