Alpha Question

jfk EO-11110(Posted 2010) [#1]
Hi everybody. I've got a question about alpha transparency.

Is it possible to have an alpha value lower than 0.002? When I try this, it is rounded to zero and the mesh becomes invisible. You might think 0.002 or 0.001 doesn't matter, since both are practically invisible to the human eye, but this is about multiple quads in front of the camera, e.g. 1000 quads, where each one should have an alpha of 1.0/1000.0.
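
Something like this, in case it helps to picture it (untested sketch; MakeQuad is just a throwaway helper, not a built-in command):

Graphics3D 800,600,32,2
SetBuffer BackBuffer()

camera = CreateCamera()
CameraClsColor camera, 255, 255, 255   ; white background so the darkening is visible

n = 1000
For i = 1 To n
	quad = MakeQuad()
	PositionEntity quad, 0, 0, 1 + i*0.01
	EntityColor quad, 0, 0, 0
	EntityAlpha quad, 1.0 / n          ; 0.001 - seems to get rounded down to fully transparent
Next

While Not KeyHit(1)
	RenderWorld
	Flip
Wend
End

; helper: a single 2x2 quad facing the camera
Function MakeQuad()
	mesh = CreateMesh()
	surf = CreateSurface(mesh)
	v0 = AddVertex(surf, -1,  1, 0)
	v1 = AddVertex(surf,  1,  1, 0)
	v2 = AddVertex(surf,  1, -1, 0)
	v3 = AddVertex(surf, -1, -1, 0)
	AddTriangle surf, v0, v1, v2
	AddTriangle surf, v0, v2, v3
	Return mesh
End Function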

Any ideas? I tried to combine a 50% translucent texture with entity alpha, with no luck so far. Maybe there's another hack?

Note that even if 0.002 could be used, that would only allow 500 quads; and for some reason the whole thing seems to be limited to 256 levels anyway. Is there an 8-bit alpha limit?


Yasha(Posted 2010) [#2]
The entity's alpha itself is a single-precision float as you would expect, but I have the feeling that it gets "translated" to a single byte somewhere along the line.

Consider: a pixel can contain an ARGB value. This means a texture can only hold 8-bit alpha precision. By analogy, when the mesh is rendered to the scene, it's converted (by magic!) to pixels which are bound by the same constraints (alpha is preserved when rendering, even though it doesn't really work properly with Z-ordering). Regardless of the actual alpha value of the mesh, it will therefore be compressed into whatever the final scene display can handle.
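
You can see the cutoff land exactly where you describe if you assume the float simply gets scaled to 0..255 and rounded somewhere along the line (that's a guess on my part, not something documented):

; assumed conversion: scale the float to 0..255 and round to the nearest integer
a# = 0.002
b = Floor(a * 255.0 + 0.5)
Print b                      ; 1 -> just barely survives
a = 0.001
b = Floor(a * 255.0 + 0.5)
Print b                      ; 0 -> fully transparent, the mesh vanishes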

Going over to the actual issue you're working on... had you given any thought to mixing quads of multiple colours? I'm not sure if you could multiply the range up (i.e. somehow create 16-bit range) because that implies a reset to zero somewhere along the line for one of the colours, which is probably impossible... but by alternating two or three with an additive instead of alpha blend (actually.... by analogy with additive or multiply blend you could reach the same conclusion about alpha's precision limits) you could get 512 or 768 levels of precision, which is a small improvement...?
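
Something along these lines, maybe (untested; MakeQuad is the same kind of single-quad helper as in your snippet above, and the exact blend behaviour is an assumption on my part):

n = 768
For i = 0 To n-1
	quad = MakeQuad()
	PositionEntity quad, 0, 0, 1 + i*0.01
	EntityFX quad, 1            ; full-bright, so lighting doesn't change the colour
	EntityBlend quad, 3         ; additive blend
	Select i Mod 3
		Case 0
			EntityColor quad, 1, 0, 0    ; adds 1/255 to the red channel
		Case 1
			EntityColor quad, 0, 1, 0    ; ...to green
		Case 2
			EntityColor quad, 0, 0, 1    ; ...to blue
	End Select
Next
; reading a pixel back and summing r+g+b then gives roughly 3*255 = 765 distinct levels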


Robert Cummings(Posted 2010) [#3]
Drivers can discard alpha pixels that have too many layers at certain thresholds, I'm not sure if this is what you're experiencing.


jfk EO-11110(Posted 2010) [#4]
Rob, this might be it. Even mixing with fog seems to limit it to 8 bits.
Hey Yasha, you just gave me "new hope". Could be something. E.g. instead of making all layers just black, I could use them in full red, full green and full blue sequences, resulting in 3 times more brightness levels. AFK to try it...


jfk EO-11110(Posted 2010) [#5]
Ok, update. I tried several things; it seems like everything that is based on alpha is strictly limited to 8 bits. But after all I kind of managed to get 9.5 bits, aka 768 levels.

I have to say, this alpha thing was only needed to create some kind of z-buffer substitute, so the goal was to do a depth render somehow. After all those failures with blend modes, alpha, fog, autofade etc. (I even used a sequence of RenderWorlds with individual AmbientLight and CameraRange settings), I had this idea, most likely not new: go through all vertices of the scene and set their color to the distance to the light / shadow mapper. This worked surprisingly well. BUT then I still had to display this buffer in a way that allows more than 256 shades of grey. So I added those tween-tones, e.g. 303031 and 303131 between 303030 and 313131. Watching the result magnified in a graphics app clearly showed a difference between it and the ordinary 256-shades-of-grey image. BUT it also showed that the greyscales are dithered by default, making it rather useless for depth determination, or at least adding just another problem.
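
The vertex pass was basically this (cleaned-up, untested sketch; the light position and maxDist are whatever fits the scene):

; paint every vertex of a mesh a grey proportional to its distance from the
; light at (lx,ly,lz); maxDist# is the largest distance expected in the scene
Function ColourMeshByDepth(mesh, lx#, ly#, lz#, maxDist#)
	For s = 1 To CountSurfaces(mesh)
		surf = GetSurface(mesh, s)
		For v = 0 To CountVertices(surf) - 1
			; vertex position in world space
			TFormPoint VertexX(surf,v), VertexY(surf,v), VertexZ(surf,v), mesh, 0
			dx# = TFormedX() - lx
			dy# = TFormedY() - ly
			dz# = TFormedZ() - lz
			d# = Sqr(dx*dx + dy*dy + dz*dz)
			grey = d / maxDist * 255.0
			If grey > 255 Then grey = 255
			VertexColor surf, v, grey, grey, grey
		Next
	Next
End Function
; the mesh also needs EntityFX mesh, 1+2 (full-bright + vertex colours),
; so the render shows the raw greys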

It was a nice experiment, but I think it doesn't make much sense, since at the end of the day it simply isn't fast enough. I guess shaders are the way to go.


Yasha(Posted 2010) [#6]
Why do you need it to be in shades of grey? Why not use the full colour-space? If you're going to use vertex colours I can't see any reason why you would be limited to 8 bits; you've got 24, same as the real depth buffer! You could either increment a single RGB value proportional to distance, or you could handle each value separately and increment green at intervals of 256 red, etc, each one mod 256 so they roll over to zero...
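
Something like this, say (untested, names made up; dist would be the same per-vertex distance you're already computing):

; pack the depth into the three colour channels, 24 bits in total
Function DepthToRGB(surf, v, dist#, maxDist#)
	d = dist / maxDist * 16777215.0     ; scale into 0..2^24-1
	If d > 16777215 Then d = 16777215
	r = d Mod 256
	g = (d / 256) Mod 256               ; integer division
	b = (d / 65536) Mod 256
	VertexColor surf, v, r, g, b
End Function

; decoding after the render, per pixel:
;   argb = ReadPixelFast(x, y)
;   r = (argb Shr 16) And 255
;   g = (argb Shr 8) And 255
;   b = argb And 255
;   d = r + g*256 + b*65536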

The moral: no reason to be using grey for this; that's where your precision is going! The eventual image will be a bit psychedelic to look at, but it doesn't need to be parsable by human vision anyway, so who cares? (This is pretty much what a real Z-buffer looks like.)

Although yeah, colouring individual vertices on a complex scene is difficult performance-wise.


jfk EO-11110(Posted 2010) [#7]
I found it not very heavy compared to other tasks, like reading buffers or writing to textures.

At first I thought so too: 24 bits, yeah. But now I guess it's not so easy. Remember, it's vertices. They will fade from one vertex color to the next one across the entire triangle, which will be different from the depth-relative psychedelic pattern we want for 24 bits. At least I think so. (Got code?)


Yasha(Posted 2010) [#8]
Hmmm. I hadn't considered the possibility of triangles crossing order-of-magnitude boundaries. Guess that wasn't such a great idea...

I have one more idea: a supplemental software renderer? A software render takes time, but if you only rendered the Z values, I guess it would still be faster than any of the other methods presented so far. You'd still use a hardware render for the actual display image with textures, special effects and whatnot.
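
Very roughly, the depth-only part could look like this (untested sketch; it assumes the triangle's corners have already been transformed and projected into screen coordinates, and all the names are made up):

Global ZW = 256
Global ZH = 256
Global zbuf = CreateBank(ZW * ZH * 4)     ; one float per pixel

Function ClearZBuf(far#)
	For i = 0 To ZW*ZH - 1
		PokeFloat zbuf, i*4, far
	Next
End Function

Function Min3#(a#, b#, c#)
	m# = a
	If b < m Then m = b
	If c < m Then m = c
	Return m
End Function

Function Max3#(a#, b#, c#)
	m# = a
	If b > m Then m = b
	If c > m Then m = c
	Return m
End Function

; rasterise one screen-space triangle, keeping only the nearest z per pixel
Function RasterTriZ(x0#,y0#,z0#, x1#,y1#,z1#, x2#,y2#,z2#)
	area# = (x1-x0)*(y2-y0) - (x2-x0)*(y1-y0)
	If Abs(area) < 0.0001 Then Return

	; bounding box, clamped to the buffer
	minx = Floor(Min3(x0,x1,x2))
	maxx = Ceil(Max3(x0,x1,x2))
	miny = Floor(Min3(y0,y1,y2))
	maxy = Ceil(Max3(y0,y1,y2))
	If minx < 0 Then minx = 0
	If miny < 0 Then miny = 0
	If maxx > ZW-1 Then maxx = ZW-1
	If maxy > ZH-1 Then maxy = ZH-1

	For y = miny To maxy
		For x = minx To maxx
			; barycentric weights of this pixel
			w0# = ((x1-x)*(y2-y) - (x2-x)*(y1-y)) / area
			w1# = ((x2-x)*(y0-y) - (x0-x)*(y2-y)) / area
			w2# = 1.0 - w0 - w1
			If w0 >= 0 And w1 >= 0 And w2 >= 0
				z# = w0*z0 + w1*z1 + w2*z2
				off = (y*ZW + x) * 4
				If z < PeekFloat(zbuf, off) Then PokeFloat zbuf, off, z
			EndIf
		Next
	Next
End Function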

Of course this is basically only one step removed from using shaders anyway... not to mention that any machine with a powerful enough CPU for this to be a sensible proposition will likely have a reliable GPU for shaders too. Still, it's a potential "pure DX7" solution requiring minimal external libraries (software rendering really only needs some direct memory access stuff for fast pixel writing).


jfk EO-11110(Posted 2010) [#9]
This might be a solution, although I never heard of something like that for Blitz3D. For now I'll try and see if FastExt can do anything, e.g. with the perspective mapping mode.


Yasha(Posted 2010) [#10]
"VirtualGL" by Nate the Great is a reasonably-fast example. I think it'd need a lot of optimisation though... I was overestimating the performance a bit.

One other problem I just realised is that to software render a scene would still require knowledge of all the vertex positions. So you're back in the same boat as with stencil shadows: either use only segmented meshes, or write a replacement animation engine (which was my own eventual choice).

Sorry, seems my ideas aren't very good today.


jfk EO-11110(Posted 2010) [#11]
I think it might be best to use a language with shader access, even with only shader model 1. Some operations can be done really quickly with pixel shaders, using the RISC architecture of the shader chips and optimized asm.

But it's nonetheless fun to try things in plain Blitz3D. For example, I just wrote the reverse version of the towel mesh experiment mentioned in the other shadow thread:

Now the towel is projected to the z-buffer from the player's point of view. Each quad of the towel is vertex-colored with a unique color (in theory allowing 24-bit unique indices). Then a render is taken from the light's point of view. Now I only have to check all pixels and use the found RGB values directly as indices to paint certain texels of a shadow map (those indices that are never found must be in shadow). But there's a simple yet persistent problem: some quads are not detected from the light's perspective, due to steep normals and/or limited render size. This results in an ugly moire artefact.
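
The encoding / readback part looks roughly like this (cleaned-up sketch; the quad count is just an example, and the quads get EntityFX 1+2 so the index colors come through unlit):

Global totalQuads = 4096          ; however many towel quads there are
Dim lit(totalQuads)

; give every vertex of a quad's surface the quad's index, packed into RGB
Function ColourQuadWithIndex(surf, i)
	r = (i Shr 16) And 255
	g = (i Shr 8) And 255
	b = i And 255
	For v = 0 To CountVertices(surf) - 1
		VertexColor surf, v, r, g, b
	Next
End Function

; after RenderWorld from the light's point of view:
Function MarkLitQuads(width, height)
	LockBuffer BackBuffer()
	For y = 0 To height-1
		For x = 0 To width-1
			argb = ReadPixelFast(x, y, BackBuffer()) And $FFFFFF
			If argb < totalQuads Then lit(argb) = True
		Next
	Next
	UnlockBuffer BackBuffer()
	; every index that never shows up in lit() is in shadow
	; (index 0 should be reserved for the background / clear color)
End Function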

There are several ways to get perspective mapping kind of working, but with every method there are problems with moire patterns and rounding errors. I'm still trying to find something that will fix it.

Meanwhile I had another idea: instead of setting vertex brightness relative to the vertex's distance to the camera, how about setting the vertex color to rgb = x,y,z? Ok, 8 bits again, but other than that, imagine the possibilities. A simple render will tell you the world location of every pixel. Then again, it's almost the same as using a lookup table, like the one in my little sample code in Rob's "Todays Shadows" thread. I am still thinking about a hi-res z-buffer using "rainbow colors"...
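
Roughly like this, assuming the whole level fits into a known bounding box (untested):

; vertex color = world position, 8 bits per axis, relative to a bounding box
; starting at (minX,minY,minZ) with edge length size#
Function ColourVertexWithPosition(surf, v, mesh, minX#, minY#, minZ#, size#)
	TFormPoint VertexX(surf,v), VertexY(surf,v), VertexZ(surf,v), mesh, 0
	r = (TFormedX() - minX) / size * 255.0
	g = (TFormedY() - minY) / size * 255.0
	b = (TFormedZ() - minZ) / size * 255.0
	VertexColor surf, v, r, g, b
End Function
; a ReadPixelFast on the render then gives the approximate world position of
; whatever is visible at that pixel, quantised to 256 steps per axis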