Widescreen & Black Bars


MGE(Posted 2007) [#1]
So you've coded your game in the standard 4:3 ratio, but you want to center it on a widescreen display (16:10, etc.) and draw black bars as needed.

It's all very doable until you start drawing graphics outside of the game area. Normally this would all be handled by the card clipping everything at the screen edges. But in this scenario the graphics would not be clipped automatically until they went off the physical screen, not when they left the game area.

I read this in another thread: "If you want your game to play at a fixed size, then simply draw the game in the centre of the display and slap some black bars where necessary."

That's where I'm stumped. The most elegant solution would be to use the SetViewport() function, but that seems to be broken, or at least not 100% GPU/cross-platform compatible.

Another easy solution, though it adds a little overhead at render time, is to use DrawRect() to draw the bars at the end of your render loop.
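
i.e. something like this at the end of the render loop (just a sketch - the 800x600 play area and the centred offsets are my assumptions for illustration):

' Sketch: black out everything outside a centred GAME_W x GAME_H play area.
Const GAME_W:Int = 800
Const GAME_H:Int = 600
Local ox:Int = (GraphicsWidth() - GAME_W) / 2
Local oy:Int = (GraphicsHeight() - GAME_H) / 2
SetColor 0, 0, 0
DrawRect 0, 0, ox, GraphicsHeight() ' left bar
DrawRect ox + GAME_W, 0, GraphicsWidth() - ox - GAME_W, GraphicsHeight() ' right bar
DrawRect 0, 0, GraphicsWidth(), oy ' top bar
DrawRect 0, oy + GAME_H, GraphicsWidth(), GraphicsHeight() - oy - GAME_H ' bottom bar
SetColor 255, 255, 255 ' restore for normal drawing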

Or am I overlooking a stable, obvious solution? Thanks.


ImaginaryHuman(Posted 2007) [#2]
You have two options which you've already touched upon. Either you draw black bars over the edges of the viewable area to cover up any misdrawn stuff, or you set up the graphics API to do clipping - in OpenGL it'll be setting up a Scissor window and enabling scissor tests. SetViewport does this for you, so if that does not work on your graphics card then you have little choice but to draw some black rectangles. Doing so should NOT be a particularly big performance hit - it's just a couple of quads with no texturing.

Another option is to make sure you clip the drawing yourself in software, but that requires extra work.

Another question is, if you want it to display in a 4:3 ratio but the user has chosen a 16:10 ratio mode, why not override this and choose the closest 4:3 mode which has approximately the same pixel resolution? ie if they choose 1440x900 then you want something close to 900 pixels of height, so 1024x768 is not too far off, and 1152x864 is the 4:3 mode at roughly that height. Then, so long as it is not STRETCHED and has the right ratio, you would not have to worry about making black bars.
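
Something like this would do it (untested sketch; wantHeight is whatever height the user's widescreen choice had, and Nearest43Mode is just my name for it):

' Sketch: pick the 4:3 mode whose height best matches the requested one.
Function Nearest43Mode:TGraphicsMode(wantHeight:Int)
 Local best:TGraphicsMode
 For Local mode:TGraphicsMode = EachIn GraphicsModes()
  If mode.width * 3 <> mode.height * 4 Then Continue ' keep 4:3 modes only
  If best = Null Or Abs(mode.height - wantHeight) < Abs(best.height - wantHeight)
   best = mode
  EndIf
 Next
 Return best
End Function

So if they chose 1440x900, Nearest43Mode(900) should hand back 1152x864 or 1024x768, depending on what the card reports.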


Gabriel(Posted 2007) [#3]
I would use SetViewport. In fact, although I don't use Max2D, I *do* use the direct equivalent of SetViewport. If it doesn't work, it sounds like a bug and should be fixed, because I've never come across the inability to set up a viewport properly in any other API.

If you have code/a GPU/whatever that breaks with SetViewport, then report it and hopefully it can be fixed. Viewports are definitely the way to go IMO - not to save drawing two black bars, but because you want to *not* draw all the stuff that's behind them.


ImaginaryHuman(Posted 2007) [#4]
When you do SetViewport you're not really using the viewport to crop the graphic - a viewport is a definition of an area of the screen and how its coordinates translate to the world coordinates of objects. It is possible for drawing to occur outside of the viewport using the same scaling of coordinates. To stop that you have to use a `scissor rectangle` test so that it prevents drawing of pixels outside the viewport area. BRL has confused the term viewport - it should really be more like `CropToViewport` or `ViewportScissor` because it's doing two things. In GL at least you should NOT get drawing outside the viewport when the scissor test is working.


Gabriel(Posted 2007) [#5]
I'm not sure what you're saying. In the first sentence, it looks like you're saying that Max2D doesn't clip and in the second sentence, it looks like you're saying that it does.


Grey Alien(Posted 2007) [#6]
I'll be drawing two big fat rectangles. To be honest it won't slow down any half-decent card. I'm not relying on the viewport.


MGE(Posted 2007) [#7]
Thanks for the comments, much appreciated. I agree with GA: drawing the rects seems to make the most sense, especially since we know it will work cross-platform as well. ;)


Sledge(Posted 2007) [#8]
Is this even an issue? Users will be able to set whether their video driver stretches or clips 4:3 fullscreen software in their display settings.


MGE(Posted 2007) [#9]
It's only a non-issue if you, the programmer, don't mind the potential for your screen to be stretched. ;)


Torrente(Posted 2007) [#10]
And what if, like Sledge says, they choose for their driver to clip fullscreen software? Do you have a way to check for this, or will you now have the automatically clipped space along with your black rectangles?


MGE(Posted 2007) [#11]
I could be wrong, but I think the general idea of using this technique to avoid potential stretching is: first get the desktop screen ratio; if it's widescreen, check the available screen modes and switch to the nearest widescreen mode that your game fits in; then center your game on that screen and draw any black bars as needed.
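
The centring part would be roughly this (sketch; DesktopWidth()/DesktopHeight() are placeholders for however you query the desktop mode - stock BlitzMax doesn't hand you that directly, so you'd need an OS call or a third-party module):

' Sketch: centre a fixed 800x600 game in whatever mode was picked.
Const GAME_W:Int = 800
Const GAME_H:Int = 600
Local wide:Int = DesktopWidth() * 3 > DesktopHeight() * 4 ' wider than 4:3?
' ...choose and set a suitable mode based on wide, then:
Local ox:Int = (GraphicsWidth() - GAME_W) / 2
Local oy:Int = (GraphicsHeight() - GAME_H) / 2
SetOrigin ox, oy ' game code keeps drawing at 0,0..800,600 as before
' ...and draw the bars (or bonus art) in the ox/oy margins each frame.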

But I think I'm going to use the extra space if there is any, and render some graphics, fx, etc., instead of black space. Another way to think of it is to just code your game with two modes in mind, normal and widescreen, then go through the various screen modes available (snippet taken from the docs):
Print "Available graphics modes:"
For mode:TGraphicsMode=EachIn GraphicsModes()
 Print mode.width+","+mode.height+","+mode.depth+","+mode.hertz
Next

Put a few compatible modes in a list and let the end user configure how they want to play the game.


Grey Alien(Posted 2007) [#12]
Some users won't have a clue though. Depends on the game audience I guess. For now I'm doing 4:3 with bars based on reading the desktop res. This is what BFG do in their games, according to Emmanuel.


ImaginaryHuman(Posted 2007) [#13]
SetViewport `sets a viewport`. A viewport is NOT a constrained area within which graphics are drawn. A viewport is a matrix transformation from world coordinates to window coordinates. It has nothing to do with cropping the graphics operations. That's why you also have to use a cropping feature such as a scissor window. In BlitzMax SetViewport does both - it sets up a viewport transformation matrix and switches on the scissor window to clip graphics. You must have both to prevent drawing outside the area.
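
In raw GL it's the equivalent of doing both of these (sketch; assumes you're on the GLMax2DDriver so the Pub.OpenGL calls hit the current context, and remember GL measures y from the bottom of the screen):

Import Pub.OpenGL ' at the top of your source

' Sketch: the two-part setup that a clipping viewport needs in GL.
Local x:Int = 50, y:Int = 50, w:Int = 200, h:Int = 200
glViewport(x, GraphicsHeight() - y - h, w, h) ' coordinate scaling only
glEnable(GL_SCISSOR_TEST) ' the actual per-pixel cropping...
glScissor(x, GraphicsHeight() - y - h, w, h) ' ...over the same rectangle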


MGE(Posted 2007) [#14]
"For now I'm doing 4:3 with bars based on reading the desktop res."

Are you going to use the desktop res for your game as well? Or just to calculate the screen ratio? Some of these desktops run in a pretty high res.


TomToad(Posted 2007) [#15]
"Is this even an issue? Users will be able to set whether their video driver stretches or clips 4:3 fullscreen software in their display settings."

Not all video cards support clipping (aka pillarboxing) in the drivers. My laptop does, and so I have no problems with 4:3 games, but my desktop computer doesn't, so I need the software to do the clipping for me. If it isn't supported in the software, then it'll be stretched on my computer in fullscreen mode.
Eventually I'll upgrade my system, and when I do, I'll look for a card that supports pillarboxing. But it'll be a while before that happens since money is a little tight for me right now.

@MGE Developer, you could always give the user a choice. You can have the program default to using Max2D viewports, but if the user is having problems with the graphics, then they can set an option that'll draw black rectangles instead.
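
ie something like this in the main loop (sketch; useViewport would come off the options screen, and DrawGame/DrawBlackBars stand in for your own routines):

' Sketch: default to the clipping viewport, fall back to drawn bars.
If useViewport Then SetViewport ox, oy, GAME_W, GAME_H ' let the card clip
DrawGame()
If Not useViewport Then DrawBlackBars() ' cover the spill ourselves instead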


Gabriel(Posted 2007) [#16]
"SetViewport `sets a viewport`. A viewport is NOT a constrained area within which graphics are drawn. A viewport is a matrix transformation from world coordinates to window coordinates. It has nothing to do with cropping the graphics operations. That's why you also have to use a cropping feature such as a scissor window. In BlitzMax SetViewport does both - it sets up a viewport transformation matrix and switches on the scissor window to clip graphics. You must have both to prevent drawing outside the area."

You just contradicted yourself again, so I still can't grasp what you're saying. I think you're saying that BlitzMax terminology is different from OpenGL terminology, which isn't surprising since OpenGL isn't the only game in town. Or indeed in BlitzMax.

"And what if, like Sledge says, they choose for their driver to clip fullscreen software? Do you have a way to check for this, or will you now have the automatically clipped space along with your black rectangles?"

Check for what? There's nothing to check for. If what Sledge says is true, it doesn't affect you because you're not running in a 4:3 resolution, you're running in a widescreen resolution and faking 4:3.


ImaginaryHuman(Posted 2007) [#17]
It can be a bit confusing. `ViewPort` should mean a `port` out of which you `view` the world, the presumption being that you can only see *through* the portal and not around it. Kind of like a window: you can see the world through a window, but not through the wall around the outside of the window.

The same idea of a viewport is implemented in BlitzMax as SetViewport - ie it's supposed to define a `window` area within the wall (the `screen` real-estate) through which you view a graphical world, and supposedly you can ONLY see through that window. You should not be able to see any of the areas outside of it. However, to make that happen in OpenGL there are actually a combination of steps necessary, under the hood, to make it work, because OpenGL's idea of what a viewport is differs. OpenGL's viewport is more of a scaling operation to convert from the area of the world that the camera sees to the size of the window that you want to view it through, and has nothing to do with stopping graphics from being drawn. The OpenGL docs specifically say that setting the viewport (the OpenGL viewport) does not necessarily prevent objects being drawn outside of that area. That's why OpenGL has something called a Scissor Window, which is actually closer to the idea of a BlitzMax viewport, where you say I only want you to draw pixels that are within a rectangular area. Every pixel that is about to be drawn has to be tested to see if it lies within this area and if so it is drawn, otherwise it is not drawn. So BlitzMax's SetViewport is actually setting up OpenGL's version of a viewport AND setting up a scissor window to do the clipping part of it. It's a two-part process. I have NO idea how DirectX does it.

To elaborate...

In GL, you start by setting up a camera lens. The camera lens determines how wide the field of view is and therefore how telescopic the lens is. Wide-angle lenses are able to see areas of the game world quite far apart from each other, while a more telephoto lens really narrows down to a small field of view, which gives the impression that a small object is much bigger than it is.

To set up the camera lens you have to set up a Projection Matrix. The projection matrix defines what size area of the game world the camera lens is able to see. More specifically, you have to tell it what world coordinates are visible at the edges of the lens's view. In Max2D where an orthographic projection is used, you don't have to worry about huge 3D game worlds so you usually set it up to say the camera can see coordinates from 0 to ScreenWidth horizontally and from 0 to ScreenHeight vertically. Then you draw an object with world coordinates - let's say the camera can see from 0 to 800 horizontally and from 0 to 600 vertically - if you draw something centered at 400,300 it will be right in the middle of the camera's view. It just so happens that because the coordinates that the camera lens is looking at are exactly the same coordinates that the screen can display, there is a direct correlation between world coordinates and screen coordinates, so no other conversion is necessary. In the case of 2D, the translation from camera coordinates to screen coordinates is an `identity matrix`, ie it does nothing to change them. Most people who set up a 2D projection will use the same coordinate range for the game world as for the number of pixels on the screen that they want it to display as.
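
In GL that whole 2D setup boils down to this (a sketch of the standard ortho setup with a top-left origin like Max2D's - not necessarily byte-for-byte what Max2D does internally):

' Sketch: a `lens` that sees exactly 0..800 x 0..600, mapped 1:1 to pixels.
glMatrixMode(GL_PROJECTION)
glLoadIdentity()
glOrtho(0, 800, 600, 0, -1, 1) ' left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW)
glLoadIdentity()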

However, for 3D things are a bit more complex. Usually you will not want to use a coordinate system of 0..800 x 0..600 because that's going to give you very big numbers very quickly, and if you want a really big game world you might even get an overflow in the coordinates. More often people will say that the top left corner of the camera lens can see whatever is at -1.0,-1.0, and whatever it can see in the bottom right corner is at 1.0,1.0. The coordinate 0,0 is then the center of the camera lens. Because this coordinate system does not directly map onto the number of pixels in the screen, there has to be a conversion.

Firstly a `modelview` matrix is applied to all coordinates to move the actual game world in front of the camera - so that it appears that the camera was moved to look at a position within the game world. Then the `projection matrix` is applied to the coordinates to convert from a 3D coordinate system to a flat 2D coordinate system - the amount that it alters these coordinates is determined by the sense of perspective required. The amount of perspective depends on the field of view of the lens, plus math that assumes you're looking at a 3D space - a `perspective projection`. Then you need something to convert from those `flattened` coordinates to the coordinates on the screen. That is exactly what GL's version of a `ViewPort` does. The viewport matrix is applied to convert from post-perspective-flattened coordinates into screen coordinates.

The way that the viewport matrix is defined, you are basically saying that no matter what world coordinates the camera lens is able to see, and no matter how big an area that might be within the game world, SCALE IT to fit into a specific area of the screen. So although the camera lens might be able to see a world from -1.0,-1.0 to 1.0,1.0 in world coordinates, you might tell it to `project` everything within that space onto the screen to fit within a window 200 x 200. Regardless of the apparent `size` of objects within the game world, or the sense of size given by the world coordinates, the size of the viewport window (200x200) determines the final size of how big that area looks to you on the screen. Within that 200x200 area, at the top left of that area you will see whatever object is at -1.0,-1.0 in world coordinates, and at the bottom right of that area (at 200,200 relatively speaking) you'll see whatever is at 1.0,1.0 in world coordinates. The world defines a coordinate system, the camera lens projection defines what area of the world you can see at one time, and the viewport determines over what pixel area on the screen that view will be stretched to fit. It's kind of like making a dynamically sizeable photo-sensor within a digital camera, where instead of being fixed at say 2 megapixels you can make it whatever size you want. You're telling GL to `record` what it `sees` through its lens, to a specific-sized area of your screen, measured in pixels.

Now, here's where the difference between a viewport and a clip-window comes in. The only thing that the viewport matrix does is convert coordinates. It's a math operation. It has absolutely no clipping `code` because, by default, in the case of a viewport which is the size of the actual screen, you don't need to do any clipping. The clip operation is extra GPU processing time. You will notice that in the BlitzMax code for SetViewport, when the viewport is the same size as the screen it switches off the clipping functionality to save GPU time (ie it switches off the scissor-window test).

Let's say you set your projection matrix to show you an area -1.0,-1.0 to 1.0,1.0 within the game world. That means the camera lens can only see that amount of space. Then you open a display which is, say, 800x600 in 4:3 aspect ratio. If you create an OpenGL `Viewport` you have to define not only its size but also its position. Let's position it at 50,50 and let's say it is 200x200 pixels. The viewport is in `window coordinates` so is always measured in whole integer pixels. Although you've told it to squeeze the view that the camera lens sees into an `eye` the size of 200x200 pixels, you have NOT told it to only draw within that area. You've only said that to scale from 3D coords to 2D coords you have to scale the coordinate system itself by whatever amount is the ratio between 200x200 and -1.0x1.0. This *amount of stretching* is set in the viewport matrix as a math operation. So now if you draw an object in your game world at, say, 2.0,2.0 you would *think* that it is outside of the camera lens and therefore is going to be outside of the viewport and not drawn. Not so. 2.0,2.0 in world coordinates would translate into 350,350 in window coordinates, *based on* the scaling set up in the viewport. It is entirely possible that when you try to draw that object at 2.0,2.0 it will actually draw outside of the initial viewport `window`, in a position which is equally scaled by how much the viewport scales the coordinates. This is all just math operations; it's not computer code making decisions about whether or not pixels lie within or without a given rectangle. The coordinate system is extrapolated out beyond the viewport window that you defined. The only purpose of defining the size of the viewport window is to define how much, in relation to the world coordinate system, after the 3D coords have been flattened into 2D, those coords get scaled. This can apply *anywhere* on the screen, regardless of whether it is inside or outside of the little window that you specified as your example of how much things should be scaled to output.

When you say `I want my viewport to be 200x200`, you are really saying, `as an example of how much I want my coordinates to stretch, and assuming that -1,-1 is at 0,0 on my screen, I want 1,1 in world coordinates to show up at 200,200 on my screen.`. You could just as easily say `I want 2,2 in world coordinates to show up at 400,400`. That would give you the appearance of exactly the same sized game world. You could say `I want 0.5,0.5 in world coordinates to show up at 150,150 on my screen`. These all mean the same thing in terms of how much *stretching* is going on. It doesn't matter really what size you're specifying. You're not telling it to clip to that specific window area, you're telling it that, as an example of the ratio, AT those coordinates, you will find the corresponding world coordinates that are in that corner of the view. From this it figures out the `scale factor` and forgets all about what the original coordinates were that you gave it. It stores the scaling as a scale operation in the viewport matrix. The viewport matrix is then used to size *all* coordinates regardless of where they are on the display, and totally forget about what area window you gave as your example of scaling.
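
Written out as code, the viewport step really is just arithmetic (sketch; n is a flattened coordinate in the -1..+1 range, and NDCToWindow is my name for it):

' Sketch: GL's viewport `transform` is only this scaling - no clipping.
Function NDCToWindow:Float(n:Float, vpOrigin:Float, vpSize:Float)
 Return vpOrigin + (n + 1.0) * 0.5 * vpSize
End Function

Print NDCToWindow(1.0, 50, 200) ' 250: the far edge of the 200x200 viewport at 50,50
Print NDCToWindow(2.0, 50, 200) ' 350: the 2.0,2.0 object lands at 350,350 - outside

Nothing in that math refuses to produce the 350; only a scissor test stops that pixel actually being drawn.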

In OpenGL, unless your viewport window (scale factor) is the same size as the total pixel dimensions of your screen (or canvas/context), there is every possibility that anything that would seem to be outside of the camera's lens's view can and will be drawn outside of the viewport. The size of the lens only defines how much of the world can be seen IF the viewport is fullscreen-size. Otherwise the size of the lens is, like the viewport, just a kind of scaling operation. Things outside the lens's view simply have their coordinates translated by as much as the things inside the view do. The thing that determines whether the object ends up on the screen, after all is said and done, is whether the coordinates, after making it through the modelview matrix, the projection matrix, and the viewport transformation (which is applied after the projection), happen to lie within the maximum screen/canvas coordinates.

To stop things from drawing outside of a given area, which can be the same area as you defined for the viewport but *does not have to be*, you define a separate thing called a Scissor window. The scissor window is a rectangular area measured in pixels, in screen coordinates; you are saying to draw pixels only if they are within that area. When scissor `testing` is switched on (glEnable(GL_SCISSOR_TEST)) and the scissor window is defined, *every single pixel* that is about to be drawn has its coordinates tested to see if it is within that scissor window area, and if so it is drawn, otherwise it is not drawn. This is the *only* way to actually crop the output of rendering operations. Normally you would set the scissor window to be the same position and size as the viewport you defined, so that then you can *assume* that anything outside the viewport will not be drawn. But this is a mangling of the purpose of the viewport and is trying to make it seem as though the functionality of the scissor window is also the functionality of the viewport. This is what BlitzMax has done. It sets both a viewport matrix and a scissor window and passes it off as being a clippable-viewport. But there is really nothing to stop you from clipping areas of the screen which are either within or without or overlapping the viewport. If you wanted to you could tell it to only draw within a tiny rectangle within the viewport window, or a rectangle that overlaps one side of the viewport and extends beyond the viewport. Since BlitzMax is trying to make things `simpler` or easier for you, it assumes that the scissor window is anchored onto the position and size of the viewport window, so you only ever have to deal with one `crop window` rather than the more complex underlying setup which comprises a crop window and a coordinate-space-scaling operation.

Whether a given graphics card will draw outside the GL *viewport* in spite of the scissor window is uncertain - some cards allow it and some don't. That's why, to be on the safe side, you really need to have the scissor window used, along with scissor testing being switched on, along with a viewport being defined to translate to window coordinates. BlitzMax sets the viewport matrix and the scissor window for you in the hope that it will always prevent pixels being drawn outside the viewport - which happens to be the same position and size as the scissor window. But apparently this does not always work on all cards, and maybe in DirectX it's a whole different ball game.

Using SetViewport you are doing everything you really can do to stop those pixels being drawn undesirably. The only other thing you can do is to draw some black bars, or set a resolution in an aspect ratio that you can make full use of.

Speaking of aspect ratios, the modelview matrix should be used to scale the size of the model rather than scaling the projection matrix, because there are apparently some odd anomalies which will show up when you start to do things like lighting. By scaling the model you can make it seem as though you are viewing a stretched world and therefore can emulate different aspect ratios. So long as you know the ratio that you made your graphics in, e.g. a 4:3 mode, you can figure out how to convert that to show at the right proportions in a 16:10 mode, for example. Then when you change the size of the lens to say that it can see a bit more of the game world, everything gets scaled on the fly to keep exactly the right proportions. There are several scaling/matrix operations going on, for every pixel, *every* frame.
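
The proportion sums themselves are short (sketch; assumes square pixels, ie the mode's pixel ratio matches the physical screen ratio - which is exactly the assumption the next paragraph picks apart):

' Sketch: how much 4:3 artwork needs squashing to look right on this mode.
Local artAspect:Float = 4.0 / 3.0
Local modeAspect:Float = Float(GraphicsWidth()) / Float(GraphicsHeight())
Local xScale:Float = artAspect / modeAspect ' less than 1 on a 16:10 mode
' applied on the modelview side, per the above: glScalef(xScale, 1.0, 1.0)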

Then we get into all the problems of knowing whether there is a correlation between the aspect ratio of the pixels, ie ScreenWidth in pixels divided by ScreenHeight in pixels, and the aspect ratio of the amount of physical space that those pixels consume. If your display modes are all displayed with the same pixel aspect ratio as the physical aspect ratio then it's pretty easy to figure out how to translate to get the right proportions. But if you start getting modes that are stretched so that the pixel ratio is different to the physical ratio, then you have more of a problem. If you knew the physical size of the display hardware in its native resolution, then - provided the pixel ratio is in proportion - you could derive the physical aspect ratio of the screen and compensate for it in software. But if there is some difference in the ratios then you have no way of telling. You'd need to either access the display driver, or get information from the manufacturer, or have the user physically measure the visible screen area, so that you could figure out how to stretch your aspect ratio to match.

Detecting the desktop resolution is a solution that works maybe most of the time, since we are assuming there is a direct mapping of 1:1 between the pixel ratio and the physical proportions of the screen, but if this is not the case then you have to find a more advanced solution - or let your game show at a funky stretched resolution. Obviously you could screen out display modes that are not the same ratio as the desktop mode, but then you're limiting user choices in an attempt to avoid a problem that you can't see a solution to. It seems the only way to handle all situations is to have the user tell you the measurements of their physical display, measured with a ruler, for each resolution, and to then convert that to the right ratio in software. You could also auto-detect more common aspect ratios and assume they will be right and then allow the user to stretch the display with on-screen controls. It's obviously better overall to automate as much as you can, without shutting out the user's freedom.

Another method I am using is to ask the user the dimensions of the display, and then to ask them whether all their display modes are in proportion to each other - or to have them measure a different mode which should have a different ratio, and then use that user input to correctly scale all resolutions. But really at the moment there is no 100% trustworthy solution which does not involve the user having to get involved, mainly because graphics card drivers are not reliable enough to tell you about the display, BlitzMax is not accurate enough to tell you about the aspect ratios of display modes, and none of these things are clever enough to tell you whether the user scaled their screen with the controls on their monitor. With an LCD/TFT screen it's somewhat easier or more predictable, but not with a CRT monitor. It all just depends on how much you want to get the user involved and how much you want to simplify things. The question is, do you narrow down their options to avoid the unusual scenarios, or do you open up to all resolutions and then try to provide compensation/calibration tools?

But anyway, rambling on ... I hope this is detailed enough to make the distinction between viewports and scissor windows and all that a bit clearer. It's a complicated matter.


MGE(Posted 2007) [#18]
wow, thanks IH for the info, very informative and does make things clearer. Now if you don't mind, please go write a module that does everything you mention so the rest of us can just plug it in and go. ;) lol.. If it ends up working in both dx/ogl, I'd pay for it!


dmaz(Posted 2007) [#19]
[edit]text removed...


ImaginaryHuman(Posted 2007) [#20]
I have no idea how DirectX works and to be honest am not very interested in finding out. I like OpenGL. I am writing an engine/set of modules, part of which includes setting up cameras and lenses and all this fun aspect ratio stuff, but it won't be ready for consumption for some time yet.

There are some projection matrix modules floating about by other people which would cover most of the stuff you would want to do.


Grey Alien(Posted 2007) [#21]
"Are you going to use the desktop res for your game as well? Or just to calculate the screen ratio? Some of these desktops run in a pretty high res."
Calculate screen ratio. If a matching mode can't be found, choose the desktop res (which may give the game black bars across the top and bottom if it's a very high res!).


MGE(Posted 2007) [#22]
GA - cool, we're on the same wavelength about how to handle this scenario. ;)


Grey Alien(Posted 2007) [#23]
+ on lots of things it seems. That's what happens when you make a framework, you have to learn every last detail about a particular topic in order to choose the "best" or most "compatible" way for your framework. That's why it takes AGES, but phew, the knowledge I've gained is good :-)