Why does this code draw colors with double value?
BlitzMax Forums / OpenGL Module
I cannot figure out why this OpenGL code turns all glColor() calls into double their value. If I do glColor4b($80,$80,$80,$80) i get white pixels drawn. If I do $04,$04,$04,$04 I get $08 pixels draws. Everything gets doubled. I cannot understand why. What am I doing wrong or missing? Any ideas/suggestions greatly appreciated. I know it's not the reading of the pixel that is in error. It's the actual drawing of it. |
This seems to be entirely a problem with glColor4b(): glColor4f(0.5,0.5,0.5,0.5) does draw mid-grey. Just need to work out why.
This would explain it: "Signed integer color components, when specified, are linearly mapped to floating-point values such that the most positive representable value maps to 1.0, and the most negative representable value maps to -1.0. (Note that this mapping does not convert 0 precisely to 0.0.) Floating-point values are mapped directly." So glColor3b and glColor4b take SIGNED bytes: the most positive value is $7F, and it is $7F (not $FF) that maps to 1.0, so every byte is effectively scaled by roughly 255/127, i.e. about double. Note also that $80 as a signed byte is actually -128, not 128. What you have to use for normal unsigned byte hex values is glColor3ub or glColor4ub (ub for unsigned byte). Good to know. I think I used `b` in some other test programs without realising it!