TileMap Engine finally modularized


_Skully(Posted 2009) [#1]
Man I tell ya, I will never make that mistake again! I had my editor and engine all intertwined, but I have finally completed the necessary code surgery to separate them...

Here is all the code necessary to load and display TileMax right now...

SuperStrict

Include "TileMax.bmx"
Global ScreenWidth:Int = 1024
Global ScreenHeight:Int = 768
Global ScreenDepth:Int = 32
Global screenRefresh:Int = 60

SetGraphicsDriver GLMax2DDriver()
Graphics ScreenWidth,ScreenHeight,ScreenDepth,screenRefresh
SetImageFont Null

Global TMax:tilemax=New tilemax
Global F:FPS=New FPS
TMax.LoadLevel("Media/Maps/default/")
TMax.Currentlevel.editmode=False

While Not KeyDown(KEY_ESCAPE)
	Cls
	TMax.DrawMap(0,0,ScreenWidth,ScreenHeight,%0111)	' Draw the three lower layers (bitmask selects layers)
	DrawText F.RunFPS(),0,0								' FPS readout
	If KeyDown(KEY_DOWN) TMax.FocusY:+1
	If KeyDown(KEY_UP) TMax.FocusY:-1
	If KeyDown(KEY_LEFT) TMax.FocusX:-1
	If KeyDown(KEY_RIGHT) TMax.FocusX:+1
	TMax.MoveOnMap(TMax.FocusMapTile,TMax.FocusX,TMax.FocusY)	
	Flip 0
Wend


I'd like to get an idea of how the engine itself is running, FPS-wise... I've included a compiled version of the code above in the zip file, called "fullscreen". Can a few people please give it a run?


_Skully(Posted 2009) [#2]
I just about spouted off about a bug in BlitzMax... but it was no bug at all...

I split the drawing operation to draw the top layer separately from the bottom layers like this:
SuperStrict

Include "TileMax.bmx"
Global ScreenWidth:Int = 1024
Global ScreenHeight:Int = 768
Global ScreenDepth:Int = 32
Global screenRefresh:Int = 60

SetGraphicsDriver GLMax2DDriver()
Graphics ScreenWidth,ScreenHeight,ScreenDepth,screenRefresh
SetImageFont Null

Global TMax:tilemax=New tilemax
Global F:FPS=New FPS
TMax.LoadLevel("Media/Maps/default/")
TMax.Currentlevel.editmode=False

While Not KeyDown(KEY_ESCAPE)
	Cls
	TMax.DrawMap(0,0,ScreenWidth,ScreenHeight,%0111)	' Draw background layers
	DrawText F.RunFPS(),0,0								' This is where the actors will get drawn
	TMax.DrawMap(0,0,ScreenWidth,ScreenHeight,%1000)	' Draw foreground layer
	If KeyDown(KEY_DOWN) TMax.FocusY:+1
	If KeyDown(KEY_UP) TMax.FocusY:-1
	If KeyDown(KEY_LEFT) TMax.FocusX:-1
	If KeyDown(KEY_RIGHT) TMax.FocusX:+1
	TMax.MoveOnMap(TMax.FocusMapTile,TMax.FocusX,TMax.FocusY)	
	Flip 0
Wend


And what I got was the front layer flickering. For the life of me I couldn't figure out what was going on until I had another look at my code. Because I use a recursive drawing technique, I need to know when a tile has already been drawn... to do that I set a level-wide "tick" value to MilliSecs() and then compare it against each tile when drawing. Well, apparently the map was sometimes drawing in under 1ms, so the second pass got the same "tick" value and hence didn't draw... wow... Max is fast! So fast I thought it was a bug LOL


_Skully(Posted 2009) [#3]
Ok, so I updated the fullscreen version to include a happy face drawn after the back layers and before the top layer, so you can see that working anyway :)

The only way I could rid myself of the flicker problem was to temporarily clear all the tiles' "tick" values prior to drawing... I know that's costing me a few CPU cycles, but I should be able to remove it once there is more going on.
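
For what it's worth, a per-level draw-pass counter would sidestep both the MilliSecs() collision and the per-frame clearing: bump it once per DrawMap pass and compare. A minimal sketch; the type and field names are stand-ins, not TileMax's actual ones:

Type TLevelSketch
	Field drawPass:Int	' bumped once per DrawMap pass

	Method BeginDraw()
		drawPass :+ 1	' unique per pass, even for two passes in the same millisecond
	End Method
End Type

Type TTileSketch
	Field drawnPass:Int	' the pass this tile was last drawn in

	' Returns True if the tile was already drawn this pass, and marks it.
	Method AlreadyDrawn:Int(level:TLevelSketch)
		If drawnPass = level.drawPass Then Return True
		drawnPass = level.drawPass
		Return False
	End Method
End Type

Calling level.BeginDraw() at the top of each DrawMap pass would then replace both the MilliSecs() read and the clearing loop.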


Pete Rigz(Posted 2009) [#4]
Runs at about 750fps here with a GeForce 8800 GTS / AMD X2 2.7GHz. You say you've modularized the code, but in that case shouldn't you be using Import instead of Include?


_Skully(Posted 2009) [#5]
lol... yes... but for some reason it's not building... there's another thread about that

So, for now I am including :(


_Skully(Posted 2009) [#6]
It's properly mod'd now... I had a pooched BMax installation somehow.


MGE(Posted 2009) [#7]
Have you tested your code using a projection matrix yet? If so, how do you combat the artifacting lines between tiles when rendered? Thx.


_Skully(Posted 2009) [#8]
Not sure why I would need a projection matrix???

{edit} Is this an aspect-ratio fix?


Pete Carter(Posted 2009) [#9]
It runs at about 750fps-ish with a Core 2 Duo 2.4GHz and an Nvidia 8600GT on XP.


_Skully(Posted 2009) [#10]
Them are some nice numbers!

Any particularities or weirdnesses?


TaskMaster(Posted 2009) [#11]
Runs at about 300fps on my notebook.

One thing I noticed is that the smiley face thing goes behind some of the tiles. Is it supposed to do that?


_Skully(Posted 2009) [#12]
Yes... there are four layers: bottom, collision, background, and foreground. The tiles that go in front are from the foreground layer.

The first call to DrawMap draws the bottom three layers, and then the last call draws the foreground after the smiley is drawn.

If you run the editor you can turn each layer on and off to see what is on what layer.
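
For anyone following along, the layer mask presumably works along these lines; a sketch with stand-in names and a Print in place of real drawing, not TileMax's actual code:

SuperStrict

' Layer order taken from the post above; the constant names are stand-ins.
Const LAYER_BOTTOM:Int = %0001
Const LAYER_COLLISION:Int = %0010
Const LAYER_BACKGROUND:Int = %0100
Const LAYER_FOREGROUND:Int = %1000

Function DrawLayers(mask:Int)
	For Local layer:Int = 0 Until 4
		If mask & (1 Shl layer) Then Print "drawing layer " + layer
	Next
End Function

DrawLayers(%0111)	' bottom + collision + background
DrawLayers(%1000)	' foreground only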


kfprimm(Posted 2009) [#13]
Import "TileMax.bmx" will work fine. It'll save you the time of having to recompile the engine each time you test the program. Nowadays, I've gotten completely away from using Include.


_Skully(Posted 2009) [#14]
I work on it as an Import, but I test the build every now and again to make sure it works :) I agree that having to build each time is annoying, but in the end it will be a mod...


MGE(Posted 2009) [#15]
"Not sure why I would need a projection matrix???"

Well, the days of not supporting widescreen modes should be long gone, so you need a way to run your game on widescreen monitors so it looks good. It's no longer acceptable to just pick a fixed resolution and force the end user into playing a game that looks squashed.

Using a projection matrix (code in the forums, do a search) allows you to have a virtual resolution for your game; it will scale automatically to whatever physical resolution you select, and the aspect ratio will look fine. The problem is that in a tile map engine, since the entire screen goes through a GPU scaling/filtering process, tiles end up being displayed with faint artifacting lines around them, depending on the scale factor of the projection matrix. Adding borders between tiles can help, but I've never seen a 100% solution to the problem, especially when tiles have alpha-smoothed edges so they blend in nicely with other tiles when overlaid, etc.
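
That border idea can be sketched as a 1px gutter of duplicated edge pixels around each tile, so the bilinear sampler has the tile's own pixels to bleed into. A sketch only, assuming square tiles and plain Max2D pixmaps; none of this is TileMax code:

' Pad a tile with a 1-pixel gutter of duplicated edge pixels before
' packing it into an atlas cell, so filtering never samples a neighbour.
Function PadTile:TPixmap(tile:TPixmap)
	Local size:Int = tile.width		' assume square tiles
	Local padded:TPixmap = CreatePixmap(size + 2, size + 2, PF_RGBA8888)
	padded.Paste(tile, 1, 1)		' tile body, offset by the gutter
	For Local i:Int = 0 Until size
		WritePixel padded, i + 1, 0, ReadPixel(tile, i, 0)					' top
		WritePixel padded, i + 1, size + 1, ReadPixel(tile, i, size - 1)	' bottom
		WritePixel padded, 0, i + 1, ReadPixel(tile, 0, i)					' left
		WritePixel padded, size + 1, i + 1, ReadPixel(tile, size - 1, i)	' right
	Next
	' corners repeat the nearest tile pixel
	WritePixel padded, 0, 0, ReadPixel(tile, 0, 0)
	WritePixel padded, size + 1, 0, ReadPixel(tile, size - 1, 0)
	WritePixel padded, 0, size + 1, ReadPixel(tile, 0, size - 1)
	WritePixel padded, size + 1, size + 1, ReadPixel(tile, size - 1, size - 1)
	Return padded
End Function

When drawing, the texture coordinates then address the inner size x size region, leaving the gutter for the filter.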

I haven't seen a tile engine correct this problem yet, so the game ends up using a fixed resolution, centred in the middle of the screen. That is indeed a viable solution that works, but you end up not being able to run the game using the full screen.

Anyway, since it appears you're going all out with this project (I stopped short on mine because I could never solve the problem) I thought you might have had a solution to it. If you have time, you might want to research it and see if you can come up with a solution to allow a tile engine to work with a projection matrix.


_Skully(Posted 2009) [#16]
Ok... well... I'm not sure I will suffer the same problem, since you give TileMax the upper-left corner plus the width and height to display, and it just fills that area with map... so if your screen is wider, it will fill it. It actually starts the drawing operation at screen centre.

TMax.DrawMap(0,0,ScreenWidth,ScreenHeight,%1000)	' Draw foreground layer


The only thing I could see a projection matrix doing is correcting for aspect ratios, but it sounds like it's a headache in its current incarnation.


_Skully(Posted 2009) [#17]
MGE,

Does that sound like it would eliminate this issue?


MGE(Posted 2009) [#18]
Unfortunately, no. It's basically a problem with the way the GPU filters/smooths the edges of the tiles when things are scaled. Does your tile map support scaling or rotation? You may see the artifacting lines during that as well.

Granted, for hobbyist development it really won't matter. But if you're developing a game for a portal where they force you to deal with "running the game at desktop resolution" or "making the game work in widescreen modes", etc., you'll come up against these issues.

One thing I didn't try was using tiles that were 8bit with no alpha edges. That might solve the problem but it limits colors, etc.

Again this was all so frustrating I said the hell with it. lol... Maybe GreyAlien can jump in with his thoughts since he's worked with BFG for quite a while now and I think at one point he was dealing with similar issues.


_Skully(Posted 2009) [#19]
Hmm.... since Max2D uses 3D to render, I wonder if it's possible to get hold of a quad and rotate it at the 3D level... that would certainly fix these issues. I've followed the Max2D code down pretty low, but I haven't figured out what it's rendering with yet to know if it's possible.

Any scaling or rotation done to individual tiles will produce artifacts... what needs to happen is the tiles need to be rendered and then the whole thing scaled/rotated
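
A rough sketch of that draw-then-scale idea in plain Max2D, for illustration only; GrabImage round-trips through main memory every frame, so a real version would want a proper render-to-texture path. The virtual size here is arbitrary:

SuperStrict

' Draw the frame unscaled at integer coords, grab it, then redraw the
' grab scaled to the full screen as a single quad.
Graphics 1024, 768

Local virtualW:Int = 800, virtualH:Int = 600

While Not KeyDown(KEY_ESCAPE)
	Cls
	' ... the whole tile pass would go here, at integer coords ...
	DrawRect 100, 100, 32, 32	' stand-in for the tilemap
	Local frame:TImage = GrabImage(0, 0, virtualW, virtualH)
	Cls
	DrawImageRect frame, 0, 0, GraphicsWidth(), GraphicsHeight()	' scale as a whole
	Flip
Wend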


Grey Alien(Posted 2009) [#20]
I realise this is an old thread, but I just found it via Google. I too have noticed the same problems when using a projection matrix on a tile-based game, and it looks yucky. I was hoping the projection matrix scaled everything AFTER it was drawn, but it doesn't; it scales everything as it is drawn, and thus you get artifacts. The only solution would be to draw it all and then use some GFX card jiggery-pokery to scale (or indeed rotate) it as a whole.

The same problems occur when SCROLLING a tilemap at non-integer coordinates (for smoothness), as I found out a couple of years back. At the time we talked about using "meshes" (to make the graphics card know that the tiles are all JOINED to each other and to render them appropriately without gaps) to solve the problem, but it never went anywhere due to my lack of knowledge and general lack of interest. Unfortunately that's where I currently am.


_Skully(Posted 2009) [#21]
Hi Grey Alien,

This is the code I'm using ...

Type TVirtualGraphics
	Global virtualWidth!, virtualHeight!
	Global xRatio!, yRatio!

	' Sets up a virtual resolution by replacing the projection matrix.
	' The ?Win32 blocks are interleaved with the If/Else so that on Windows
	' the GL path becomes the Else branch, while other platforms compile
	' only the GL path.
	Function Set(width# = 640, height# = 480, scale# = 1)
		virtualWidth = width
		virtualHeight = height
		xRatio = width / Double(GraphicsWidth())
		yRatio = height / Double(GraphicsHeight())

	?Win32
		Local dxVer:Byte
		Local D3D7Driver:TD3D7Max2DDriver = TD3D7Max2DDriver(_max2dDriver)
'		Local D3D9Driver:TD3D9Max2DDriver = TD3D9Max2DDriver(_max2dDriver)

		If TD3D7Max2DDriver(_max2dDriver) <> Null
			dxVer = 7
		EndIf
'		If TD3D9Max2DDriver(_max2dDriver) <> Null
'			dxVer = 9
'		EndIf

		If dxVer <> 0	' a DX driver was set, otherwise it's GL
			Local matrix#[] = [2.0 / (width / scale), 0.0, 0.0, 0.0, ..
				0.0, -2.0 / (height / scale), 0.0, 0.0, ..
				0.0, 0.0, 1.0, 0.0, ..
				-1 - (1.0 / width), 1 + (1.0 / height), 1.0, 1.0]

			Select dxVer
				Case 7
					D3D7Driver.device.SetTransform(D3DTS_PROJECTION, matrix)
				Case 9
'					D3D9Driver._D3DDevice9.SetTransform(D3DTS_PROJECTION, matrix)
			End Select
		Else
	?
		glMatrixMode(GL_PROJECTION)
		glLoadIdentity()
		glOrtho(0, width / scale, height / scale, 0, -1, 1)
		glMatrixMode(GL_MODELVIEW)
		glLoadIdentity()
	?Win32
		EndIf
	?
	End Function

	' Map physical mouse coords back into the virtual resolution.
	Function MouseX:Float()
		Return BRL.PolledInput.MouseX() * xRatio
	End Function

	Function MouseY:Float()
		Return BRL.PolledInput.MouseY() * yRatio
	End Function
End Type


DX9 is commented out... it still causes banding/artifacting/moiré etc., but not too bad... and it's going to happen no matter what is done. Apart from drawing the image to a 3D texture and scaling it that way, I suppose.


ImaginaryHuman(Posted 2009) [#22]
Your code looks pretty much like the projection matrix code I've seen floating around. I guess it depends on whether you want to support OpenGL and/or DirectX. In OpenGL the glOrtho() command is what sets up the projection. Note that regardless of whether you set up a 2D projection or a 3D one, the only difference is the way the resulting matrix influences the position of vertices further away from the camera. I.e. 3D really just means `with perspective foreshortening`. A 2D orthographic projection still uses the same matrix, the same math, the same hardware, the same processes; it simply doesn't apply a perspective transformation based on the Z axis.

The idea of drawing all tiles to exact integer coordinates and then grabbing that into/drawing it into a texture and then re-drawing it scaled up and stretched is one possible solution, but it's far from ideal. You want the graphics card to automatically sample your textures with a higher resolution when you're drawing to a higher resolution display, so that you see more detail. Ideally you would have your textures in a resolution that is either higher than or equal to the resolution they will be rendered as. Scaling down isn't as bad as scaling up, but equal resolution is best.

One thing I'm doing at the moment is working with an off-screen graphics buffer in main memory, composed of several pixmaps (or one large one). I make persistent changes to tiles in main memory using the CPU and then upload tiles to a group of largish textures in graphics memory - many tiles to a texture. This transfer is a 1:1 pixel mapping at integer coordinates. The textures together represent a mostly visible portion of a tilemap, the only difference being that the tiles are drawn as duplicate copies in the pixmaps - enabling them to be destructible, with preservation of changes on a per-pixel basis across the whole level. That said, a large tilemap (which in part is designed to save on memory consumption, besides optimizing asset generation and use) uses a lot more memory this way. But what I can then do is draw large sections of texture at once, maybe only 4 textures to cover the screen, and bilinear texture filtering will take care of most joins between tiles - except at the edges of textures. A double-buffered texture system at an offset could perhaps draw over the first buffer's edges. But anyway... so long as my graphics are the same resolution as the display is at after aspect correction, it looks mostly ok. If the resolution stretches, it may require that I recalculate/redraw all graphics to properly match the resolution, which may not be viable depending on your art pipeline.

One thing which comes to mind here is... IF you are able to re-generate all of your game graphics in-game (which might be a good idea if you really want 100% support of multiple resolutions/aspects, and I think something that will be the way of the future - which I am working on right now), you can regenerate them at a stretched size so that you draw them with a 1:1 pixel ratio and set the projection matrix to its default (i.e. no stretching at all). I.e. you move the stretching into the image-generation step, rather than the rendering step; you're simply drawing wider/narrower source media without adjustment. But this requires that you generate all media in-game, which is not likely for most games - but it is something I am doing.

Back to the main topic at hand, though, I scoured the internet trying to find references to triangle meshes and adjacent edges and antialiasing techniques etc and really did not find much, and nothing satisfactory, although I did find answers to a few questions I had of my own about antialiasing anomalies.

This wikipedia page is maybe one of the closest clues to how a tilemap mesh might be achieved: http://en.wikipedia.org/wiki/Triangle_mesh

It mentions basically the use of triangle strips. No idea about DX, but in OpenGL a triangle strip is where you define all of the points along a whole strip of joined-together triangles, like a ribbon, where after the first triangle you just define one more vertex and it completes an extra triangle, sharing 2 points with the previous triangle. Obviously this is not a grid of triangles as such, and there is really no way to define a grid mesh in the way that you'd think of it as all the vertices being joined together. You can bend the triangle strip around at the end and do another `row` of triangles but the outside edge is not joined to the edge from the previous row. Only the previous and next triangle in the strip are joined together.
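
A minimal immediate-mode sketch of that strip layout for one row of tiles, assuming a GLMax2D context; texture coordinates are omitted, since a single strip implies one texture (an atlas) across the whole row:

' One row of tiles as a single GL triangle strip. After the first two
' vertices, every further pair completes another quad, and neighbouring
' tiles share their edge vertices exactly.
Const TILE_SIZE:Float = 32.0
Const ROW_TILES:Int = 10

glBegin GL_TRIANGLE_STRIP
For Local i:Int = 0 To ROW_TILES
	Local x:Float = i * TILE_SIZE
	glVertex2f x, 0.0			' top vertex of this column
	glVertex2f x, TILE_SIZE		' bottom vertex of this column
Next
glEnd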

Another possibility mentioned there is where you use the vertex arrays of OpenGL. You can make a vertex array draw a triangle strip - same thing as doing it in immediate mode, but faster. But you can also define objects that `share vertices`, which allows you to reference a vertex more than once. Using glDrawElements() you pass not only the array of vertices for the triangle strip, but also an array of indexes. The indexes are used to associate a vertex with an element from the array, and you can re-reference previously used elements and skip around in the data randomly. This can create more efficient rendering if vertices are re-used a lot (otherwise glDrawArrays is faster). The thing is, since vertices are being shared, supposedly the entire mesh's coordinates are calculated first before being converted to triangles by the GPU, which MIGHT (no idea if it does) produce correct seamlessness at the edges of tiles (where a tile is two triangles next to each other).
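
A sketch of that shared-vertex idea for a tile grid, again assuming a GLMax2D context; every interior vertex is stored once and indexed by up to four tiles, so adjacent tiles reference bit-identical edge coordinates. Grid size and tile size here are arbitrary:

Const GRID_W:Int = 4, GRID_H:Int = 3
Const TILE_SIZE:Float = 32.0

' (GRID_W+1) x (GRID_H+1) vertices, two floats per vertex
Local verts:Float[(GRID_W + 1) * (GRID_H + 1) * 2]
For Local vy:Int = 0 To GRID_H
	For Local vx:Int = 0 To GRID_W
		Local v:Int = (vy * (GRID_W + 1) + vx) * 2
		verts[v] = vx * TILE_SIZE
		verts[v + 1] = vy * TILE_SIZE
	Next
Next

' Six indices (two triangles) per tile, re-using the shared vertices
Local inds:Short[GRID_W * GRID_H * 6]
Local n:Int = 0
For Local ty:Int = 0 Until GRID_H
	For Local tx:Int = 0 Until GRID_W
		Local tl:Int = ty * (GRID_W + 1) + tx	' shared top-left vertex index
		inds[n] = tl ; inds[n + 1] = tl + GRID_W + 1 ; inds[n + 2] = tl + 1
		inds[n + 3] = tl + 1 ; inds[n + 4] = tl + GRID_W + 1 ; inds[n + 5] = tl + GRID_W + 2
		n :+ 6
	Next
Next

glEnableClientState GL_VERTEX_ARRAY
glVertexPointer 2, GL_FLOAT, 0, Varptr verts[0]
glDrawElements GL_TRIANGLES, inds.length, GL_UNSIGNED_SHORT, Varptr inds[0]
glDisableClientState GL_VERTEX_ARRAY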

From what I can tell, and I haven't tested it, nor have I found anything else that suggests otherwise, this use of glDrawElements or even basic triangle strips is the only thing that makes something `a mesh`. I don't know if there is any reason to believe that doing this would achieve anything other than faster rendering. I haven't found any written evidence yet that says that by doing this you'll get perfect sub-pixel precision at the edges of each triangle. However, if you take one large texture and you draw this mesh by varying the texture coordinates across the mesh, the texture will be interpolated using bilinear filtering and should at least produce perfect edge-blending of `tile images` WITHIN the edges of the mesh. Maybe. My concern is that this idea rests upon the notion that a texture drawn across triangle boundaries will render perfectly. And again I don't know that it would, or why it would. And even if it does, there still lies the same old problem of ugliness at the outer extremes of the mesh. Your textures might not be big enough (per hardware support) to cover the whole screen, so you might have to have some seams, thus some ugliness. What we really need to know is some evidence that using a `mesh` like this produces better edge results between triangles, even if the edges of the mesh are bad.

You might then try looking at using polygon antialiasing, which is where the edges of polygons are antialiased properly based on pixel coverage. Sounds good? Well, it has its issues. It's supported on most hardware, but not on older cards. It has the issue that in order to produce graduations between pixels it has to use some kind of blending, which means you might have to sort overlapping triangles from front to back (or vice versa) before drawing. Also at the pixel level, if for example you draw the edge of a triangle which covers 33% of a pixel, and you draw it in red, and then you draw another triangle on top of it, it's going to blend the top triangle's color with the previously blended triangle's color, producing an odd result. This is due to the fact that the *results* of the calculation are stored in the pixel, rather than information about which *parts* of the pixel are covered. There is no sub-pixel accuracy, and as soon as you render something you lose track of which value was contributed by which part of the triangles drawn. There's no way around that using this method.
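
For reference, the classic GL recipe looks like this; the blend function is why draw order matters:

' Classic GL edge antialiasing: coverage is blended into the framebuffer
' immediately, which is why overlapping edges need sorting first.
glEnable GL_POLYGON_SMOOTH
glEnable GL_BLEND
glBlendFunc GL_SRC_ALPHA_SATURATE, GL_ONE	' the canonical polygon-smooth blend
' ... draw triangles sorted front to back ...
glDisable GL_POLYGON_SMOOTH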

So then it makes you think about full-screen antialiasing. At least 16x16 samples would be needed for 256 levels of antialiasing. This would look mostly accurate, but requires a tonne of fill-rate, i.e. draw the screen 256 times and then merge pixels from all of those 256 images down into one pixel value - something that might have to involve floating-point math and maybe shaders. And then it would be even better if the buffers were in floating-point format, which is also slower and uses even more memory. Not to mention how much video RAM is needed for 256 screens!

One conclusion I'm coming to is that perfect antialiasing is 100% impossible, because we're working with a flawed medium: a screen that has been digitized into limited, restricted, isolated parts - pixels. The very fact that there is such a thing as `resolution` at all means there is going to be error. We can make the error smaller, but not get rid of it.

I would like someone to post a screenshot of what they think is the problem regarding artefacts when scaling/scrolling tiles. Does it only apply when a tile has a straight edge? Does it only apply when two tiles are drawn next to each other at floating point coords?

Sorry this is such a long post... it's not resolved to my satisfaction yet. I'm going to have to run some tests on this over the weekend.


Grey Alien(Posted 2009) [#23]
Yep, I'm using similar code with similar side-effects. The only option seems to be a mesh or putting everything on one texture then scaling.


ImaginaryHuman(Posted 2009) [#24]
Scaling a predrawn texture is going to introduce a new problem. A lack of texture resolution. Mipmapping might help a bit quality-wise but if you stretch a texture to fill a widescreen display for example, it's going to lose some per-pixel quality and show some blurriness and previously crisp edges are going to start to be offset to subpixel coordinates. Also there is no guarantee that a large enough texture is supported by the hardware to cover a whole screen.

And just to clarify, there's nothing unusual or wrong about the above code posted and referenced, the only thing in GL that defines the matrix is the glOrtho() command which is being used correctly. There is no other way to do that part. So it's not like `other code` would work better.

I think if you are wanting to really support the whole idea of a) multiple resolutions, b) multiple aspect ratios, c) multiple screen sizes, d) perfect 1:1 pixel mapping, e) perfect sub-pixel accuracy within images, f) perfect sub-pixel accuracy at the edges of images, then you have to kind of rethink your whole approach.

To avoid all scaling and aspect-ratio adjusting and projection matrix changes/weirdness, you need to be able to just use a 1:1 ortho projection with no scaling at all. That means that to solve the problem of aspect ratios/screen sizes/zooming etc. you cannot modify the projection. And yet you are still left with varying resolution and the need for in-game zooming. The solution to that is to regenerate all of your graphics in realtime or near-realtime during the game, ideally using fully scalable/procedural vector graphics. I.e. instead of changing how they are displayed, change how they are generated in the first place. You can generate new graphics that compensate automatically for *the current* aspect and resolution. Then drawing is a 1:1 pixel mapping. Although vector graphics don't handle the idea of pre-drawn per-pixel bitmap art very well.

But this still doesn't deal with issues introduced by floating point coordinates. As soon as you use floats, you go sub-pixel, which means you need realtime antialiasing techniques, and these same problems with edges. Back in the Amiga days you never had subpixel accuracy, everything pretty much HAD to be at integer coordinates. All of the 2D tilemap games were integer coords. And if your game graphics don't need to be stretched you can still render to integer coords. Your scrolling can be at integer coords too. The only reason you'd run into subpixel problems or adjacent tiles having a problem is if you try to draw something at a floating point coordinate. Is it needed? And if it is, how can it work? I still gotta think about that part.


Grey Alien(Posted 2009) [#25]
For me, I'm not currently planning on higher res than 1024x768 and I still don't support widescreen (groan away), and as I'm targeting cards that can handle a rather conservative 1024x1024 texture (i.e. almost all), grabbing the screen to a texture and then zooming it as a whole would actually work for me if it was fast.

True, in the Amiga days it wasn't needed because most people fixed their scrolling at 50Hz (on PAL systems) or 60Hz (NTSC), so it always looked smooth. Of course, as you know, on PC we can't rely on *any* fixed Hz now, so we have to use delta time or equivalent, and so we end up with horrible jiggling if we draw at integer coords (because the scrolling speed is out of sync with the screen Hz), or we draw at floating-point coords and it looks lots smoother BUT there's a problem with tile edges. I think that pretty much sums it up.
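
That trade-off is easy to demo; a minimal sketch (the speed constant is arbitrary):

SuperStrict

' The scroll position accumulates in floats under delta time; drawing
' snapped to Int judders, drawing at the raw float is smooth but puts
' edges at subpixel coordinates.
Graphics 640, 480

Local scrollX:Float = 0
Local lastTime:Int = MilliSecs()

While Not KeyDown(KEY_ESCAPE)
	Local now:Int = MilliSecs()
	scrollX :+ 0.1 * (now - lastTime)	' 0.1 pixels per millisecond
	lastTime = now

	Cls
	DrawRect Int(scrollX) Mod 640, 100, 32, 32	' integer snap: crisp but jiggles
	DrawRect scrollX Mod 640, 200, 32, 32		' float coords: smooth but subpixel
	Flip
Wend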

I hope we are not derailing this thread too much _Skully, I guess we really just need one thread about this whole topic...


_Skully(Posted 2009) [#26]
No problems here.. I like to see the variety of options available to deal with this issue.

Obviously the easy solution is to just set the resolution and be done with it... whether in windowed mode or not. Then everything is nice and clean... However, stand-alone apps can't get away with that as easily as, say, a browser app.

It's really too bad BMax doesn't run in a browser... this problem doesn't exist there, and distribution is easiest :P


ImaginaryHuman(Posted 2009) [#27]
I would think the problem does exist in a browser, especially because the size of the browser window changes dramatically and is totally inconsistent. If you want `full screen` in the browser, i.e. to use the whole available space, then you have the same problem.

Gray - good point about the impact of integer coordinates on timing/juddering due to out of sync frequencies. So ideally it would be best to get float positioning working properly - see that other thread in blitz programming where I just proposed a solution.