How is Modulate2X Working Underneath?
Blitz3D Forums/Blitz3D Programming/How is Modulate2X Working Underneath?
| ||
I was hoping I might get a reply from Mark or Simon on this one, but maybe Tom or one of the others who has helped with Blitz3D's development over the years might know this too. I want to exactly emulate the blending mode that Blitz3D calls Modulate2X. I don't know for sure whether Blitz does anything a little differently from the general way of Mod2X, but if it does, I want to do that too, since I would like to be able to use my lightmaps from Gile[s]. Essentially, TrueVision3D doesn't natively use this blend mode for meshes (only landscapes, for splatting, I guess) and I'm going to see if I can write a simple pixel shader to emulate it: just a simple shader which combines a base texture on texture layer 0 with UV set 0 and a modulate2x lightmap on layer 1 with UV set 1. But there's precious little good information on modulate2x around as far as I can see. I know what it does, but not how it does it, so to speak. I know that anything over 128,128,128 brightens the base texture and anything below darkens it, but I'm not sure exactly what math is going on to achieve this. So what exactly is Blitz doing here?
| ||
It just multiplies the result by 2, so something like this gives the same result:

float4 texColor  = tex2D( diffuseSampler,  In.uv0 );
float4 lmapColor = tex2D( lightmapSampler, In.uv1 );
return ( texColor * lmapColor * 2.0f );
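Outside of shader land, the same per-channel math can be sketched like this (a Python illustration with a hypothetical helper name, assuming colours as 0.0-1.0 floats):

```python
def modulate2x(base, lmap):
    """Per-channel modulate2x: multiply the two colours, double, clamp to 1.0.
    Hypothetical helper mirroring the HLSL above."""
    return tuple(min(b * l * 2.0, 1.0) for b, l in zip(base, lmap))

# A mid-grey (0.5) lightmap leaves the base texture unchanged,
# which is why 128,128,128 is the "neutral" lightmap value:
print(modulate2x((0.8, 0.4, 0.2), (0.5, 0.5, 0.5)))  # (0.8, 0.4, 0.2)
```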
| ||
Is it not something more like this?

If Tex2R <= 128.0
	;multiply blend
	NewR = Tex1R * Tex2R / 128
Else
	;additive blend
	NewR = Tex1R + ( Tex2R - 128 ) * 2.0
	If NewR > 255 Then NewR = 255
EndIf

Where Tex1 is the diffuse and Tex2 the lightmap. I may be wrong, but I thought it used multiply blending when < 128 and additive when >= 128.

Stevie
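A quick numeric check (a Python sketch; both formulas are transcribed from this thread, on 0-255 integer channels) shows the piecewise multiply/add idea gives quite different results from a plain multiply-by-two:

```python
def mod2x(a, b):
    # plain multiply-by-two blend on 0-255 channels, clamped
    return min(a * b * 2 // 255, 255)

def piecewise(a, b):
    # the multiply-below-128 / additive-above-128 guess from the post above
    if b <= 128:
        return a * b // 128
    return min(a + (b - 128) * 2, 255)

# A bright lightmap texel over a dark diffuse texel, for example:
print(mod2x(50, 200), piecewise(50, 200))  # 78 194
```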
| ||
No :) |
| ||
In which case, can you explain how you darken the diffuse texture by multiplying by 2.0?
| ||
Sure :) If you assume color ranges go from 0.0 to 1.0, then you will see the following is true:

colorA = 1.0
colorB = 0.2
finalColor = colorA * colorB
finalColor = 1.0 * 0.2
finalColor = 0.2

This is regular multiply (also known as modulate) blending. Now consider the following:

colorA = 1.0
colorB = 0.2
finalColor = colorA * colorB * 2.0
finalColor = 1.0 * 0.2 * 2.0
finalColor = 0.4

This is what modulate2x does; modulate4x does the same thing, except the multiplier is 4 instead. And if colorB is above 0.5 you will see the following:

colorA = 1.0
colorB = 0.6
finalColor = colorA * colorB * 2.0
finalColor = 1.0 * 0.6 * 2.0
finalColor = 1.2

Now 1.2 is out of the color range, so it is clamped to 1.0 when output to the screen.
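The three worked cases above can be reproduced in one small Python sketch (hypothetical helper name; `mult` defaults to the 2x multiplier):

```python
def blend(a, b, mult=2.0):
    # modulate-Nx: multiply the two colours, scale, clamp to the displayable range
    return min(a * b * mult, 1.0)

print(blend(1.0, 0.2, 1.0))  # plain modulate -> 0.2
print(blend(1.0, 0.2))       # modulate2x    -> 0.4
print(blend(1.0, 0.6))       # 1.2 clamps    -> 1.0
```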
| ||
Ah .. now that makes sense .. cheers Fred! |
| ||
Thanks Mikkel, that's simpler than I expected, and you even presented it as shader code, many thanks for that. I'm sure I can put this into a simple little shader. |
| ||
Well, I've never done any shader programming before, but it wasn't too hard. With Mikkel's code there, a few tutorials and a little help from some guys on the TV3D forums, I managed to get the shader working correctly. I'm guessing there's not much interest in the code here, being as B3D doesn't support shaders, but what the hey, if anyone wants the code, here it is (Public Domain; I can't very well do anything else since I needed help to write it):

//--------------------------------------------------------------//
// Modulate 2X Multitexturing
//--------------------------------------------------------------//
//--------------------------------------------------------------//
// Pass 0
//--------------------------------------------------------------//
float4x4 matViewProjection : WorldViewProjection;

struct VS_INPUT
{
	float4 Position     : POSITION0;
	float2 TexCoordSet0 : TEXCOORD0;
	float2 TexCoordSet1 : TEXCOORD1;
};

struct VS_OUTPUT
{
	float4 Position     : POSITION0;
	float2 TexCoordSet0 : TEXCOORD0;
	float2 TexCoordSet1 : TEXCOORD1;
};

VS_OUTPUT vs_main( VS_INPUT Input )
{
	VS_OUTPUT Output;
	Output.Position     = mul( Input.Position, matViewProjection );
	Output.TexCoordSet0 = Input.TexCoordSet0;
	Output.TexCoordSet1 = Input.TexCoordSet1;
	return Output;
}

texture BaseTexture < string filename = "base.tga"; >;
texture LmapTexture < string filename = "lmap.tga"; >;

sampler BaseTex = sampler_state { texture = <BaseTexture>; };
sampler LmapTex = sampler_state { texture = <LmapTexture>; };

float4 ps_main( float2 uvset1 : TEXCOORD0, float2 uvset2 : TEXCOORD1 ) : COLOR0
{
	float4 baseColor = tex2D( BaseTex, uvset1 );
	float4 lmapColor = tex2D( LmapTex, uvset2 );
	return ( baseColor * lmapColor * 2.0f );
}

//--------------------------------------------------------------//
// Technique Section for Default_DirectX_Effect
//--------------------------------------------------------------//
technique Modulate2x
{
	pass Pass_0
	{
		VertexShader = compile vs_2_0 vs_main();
		PixelShader  = compile ps_2_0 ps_main();
	}
}
Make sure your model has two UV sets. If you want to use just one UV set for both textures, change:

float4 lmapColor = tex2D( LmapTex, uvset2 );

to

float4 lmapColor = tex2D( LmapTex, uvset1 );

and you should probably also change

float4 ps_main( float2 uvset1 : TEXCOORD0, float2 uvset2 : TEXCOORD1 ) : COLOR0

to

float4 ps_main( float2 uvset1 : TEXCOORD0 ) : COLOR0

It's not strictly necessary, but there's no point passing parameters you're not going to use to the pixel shader. You *might* have to change a semantic here or there for different engines, but I think those are pretty standard semantics I use. Also bear in mind that the shader compiles for Shader Model 2.0 only. It's not possible to compile it to 1.4 or 1.1 without changes, and I'm not familiar enough with the shader models and the differences between them to make those changes. Plus my game requires Shader Model 2.0 to run, so I don't care :P

Hope it's of use to someone. Or if nothing else, maybe it'll make someone else think, hey, that shader programming's pretty easy if that yutz Gabriel can pick it up in a day. And it's not too bad, at least the basics aren't. I won't be writing an offset bumpmapping shader just yet. ;)
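For anyone doing this on the CPU instead (say, pre-baking a lightmap into a texture), the same blend on 0-255 byte channels comes out as below. This is a sketch with a hypothetical helper name, assuming the usual divide-by-255 normalisation with rounding:

```python
def modulate2x_byte(base, lmap):
    # (base/255) * (lmap/255) * 2, rescaled to 0-255, rounded and clamped
    return min((base * lmap * 2 + 127) // 255, 255)

# 128 is the near-neutral lightmap value on the byte scale
# (128/255 is just over 0.5, so the base shifts by at most one step):
print(modulate2x_byte(200, 128))  # 201
```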
| ||
Hi,

Congratulations on writing your first shader :) But instead of passing the texture coordinates individually, I would recommend passing a struct with the variables. It's a lot easier if you want to expand it to do more things. And there is nothing in the shader that would make it incompatible with vs/ps 1.1, so you can compile it to that instead if you like. But vs/ps 2.0 can generate the same shader so it's faster (i.e. fewer instructions), in most cases anyway. Here is a slightly tweaked version:

Interestingly, for this shader PS 2.0 (5 instructions/2 registers) generates a more complex output than PS 1.1 (3 instructions/1 register)... so sticking to PS 1.1 is probably better in this case.
| ||
Cool. Thanks for the optimization tips; I'm still at the bottom of the food chain with shader programming, so I appreciate all the tips I can get. I actually came across an error compiling the shader for 1.1, but that was probably because I was testing it on a model with one UV set (I believe 1.1 doesn't allow you to use the same UV set twice; at least, that was the error EffectEdit gave me). But thanks for the correction there. I'm probably going to need all the speed I can get, so if 1.1 is faster, great!
| ||
interesting and informative read... thx guys... fredborg... --Mike |