Strange behavior on different GPUs with Direct3D9

Rone(Posted 2008) [#1]
Hi,

I'm now a step further with Direct3D. I restarted the implementation of the render interface and ran into some strange Direct3D behavior...

I created a cube and a sphere for testing, but on my GeForce 7950 GT some tris are black... one side of the cube and also about 30% of the sphere. On Klepto's ATI X800, however, everything is fine.
Then I tested it on my laptop with an ATI X300: only the sphere is rendered and the cube is completely missing...

What results do you get?
Click here!

This is the type for renderables that produces this behavior (maybe I'm doing something wrong, but it's fairly simple):




DStastny(Posted 2008) [#2]
One suggestion is to error-check your DX9 calls. You are just calling them all and assuming they work; they all return HRESULTs for a reason, and checking them will help you locate the error. The other suggestion is to enable the debug runtime and use a debugger or tool capable of trapping its output.
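For example, a minimal sketch of that kind of checking (assuming your wrapper exposes the call and returns the raw HRESULT, and that the usual constants are in your headers; 0, i.e. D3D_OK, means success):

' Check the HRESULT instead of ignoring it; log and bail so the failure
' shows up right next to the debug runtime's output.
Local hr:Int = Direct3DDevice9.SetRenderState(D3DRS_LIGHTING, False)
If hr <> D3D_OK
	DebugLog "SetRenderState failed, hr = " + hr
	Throw "SetRenderState failed"
EndIf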

Most likely you are getting an error somewhere and it is a setup issue. When developing the Max2D DirectX9 driver I had a lot of trouble bouncing between Nvidia and ATI, and in every case there was an issue in the DX9 debug log; sure enough, something was coded wrong once I cross-referenced the Windows documentation.

Just as people find with OpenGL support, different drivers are more or less tolerant of buggy code.

Doug


Rone(Posted 2008) [#3]
Thanks, I will try that.
btw: we are using your Max2dDirectx9 driver as the base for the d3d9 part of miniB3d...


Dreamora(Posted 2008) [#4]
The cube is fine; on the sphere a quarter plus 6 triangles are missing, so I would guess it's the last few "quads" you add.

One thing to add: why do you use a Byte Ptr and assign byte arrays to it?
Why not just MemAlloc() if you are not interested in the array object (TArray instance) at all...


Rone(Posted 2008) [#5]
Do you mean using this:
Field verts:Byte Ptr = MemAlloc(vert_array_size)

instead of this:
Field verts:Byte Ptr = New Byte[vert_array_size]
?
Hmm, I didn't think about it because I assumed it made no difference... but I will change it.


Dreamora(Posted 2008) [#6]
Well, the difference is that the latter is handled by BlitzMax's GC, the former isn't.

So you have control over the allocation and the deallocation (MemFree) without any BlitzMax interference.

I don't know how big the difference is otherwise.
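As a minimal sketch of the manual route (the type and method names here are made up for illustration):

Type TVertexStore
	' Manually allocated block: the GC doesn't track it, so you free it yourself.
	Field verts:Byte Ptr

	Method Create(vert_array_size:Int)
		verts = MemAlloc(vert_array_size)
	End Method

	Method Destroy()
		If verts <> Null Then MemFree(verts)
		verts = Null
	End Method
End Type

One thing to watch with the array version is that assigning New Byte[...] straight to a Byte Ptr field keeps no reference to the array object itself, so as far as I know the GC is free to collect the backing memory behind your back.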


DStastny(Posted 2008) [#7]
@Rone, let me know if you find anything borked in the driver. I know the texture creation code around TImageFrame could probably be more robust. The way I implemented mipmapping just assumes the card can handle it.

Also, you might want to try to understand how I used AddDeviceObject and SendMessage to communicate with objects that have been bound to the Direct3D9 device, to inform them of resets of the device. That might be where your problem is: if the device is lost, the DX9 driver will attempt to tear down and rebuild the Direct3DDevice with a call to Reset, and if your object doesn't respond to the message it will put DX9 into a funky state. If I remember right, this is one area that is hugely important to manage correctly. Your TDX9Renderable really needs to respond to that.
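Purely as a hypothetical sketch of the idea (the real protocol, message values and method names are in the DX9 driver source, so check AddDeviceObject/SendMessage there rather than trusting these):

' Hypothetical: BlitzMax objects already have a SendMessage method, and the driver
' can use it to notify bound objects around a device Reset. The message strings
' below are made up; the driver defines the real ones.
Type TDX9Renderable
	Method SendMessage:Object(message:Object, context:Object)
		Select String(message)
			Case "device_lost"
				ReleaseDefaultPoolResources()   ' release D3DPOOL_DEFAULT buffers before Reset
			Case "device_reset"
				RecreateDefaultPoolResources()  ' recreate them against the restored device
		End Select
		Return Null
	End Method

	Method ReleaseDefaultPoolResources()
		' release vertex/index buffers created in D3DPOOL_DEFAULT here
	End Method

	Method RecreateDefaultPoolResources()
		' recreate those buffers here (D3DPOOL_MANAGED resources survive the reset on their own)
	End Method
End Type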

Doug


ziggy(Posted 2008) [#8]
Tested here with an Intel DX9-compatible card; the application just shows a window and closes immediately. Nothing to see here.


Rone(Posted 2008) [#9]
@ziggy, same here on a GF7200... which graphics card do you use exactly?
Just for information, because there is apparently completely different behavior between the ATI X300, ATI X800, GF7200 and GF7950 GT...

@Budman, thanks for the info. I have not really looked at your
DX9 driver yet, just changed the initialization a little bit in order to get it working... I will look into it after work.


ziggy(Posted 2008) [#10]
It is a Mobile Intel 945 Express Chipset Family.


Dreamora(Posted 2008) [#11]
Sounds like the basic init is totally broken...
Intel's integrated graphics adapters might not be that good, but they normally behave the most "standard" way on DX9 (unlike NVIDIA and ATI with their optimizations and their fixed pipeline emulated through shaders on the X1000+ and GF7000+).


Rone(Posted 2008) [#12]
Hmm, it seems the problem comes exclusively from the D3D9GraphicsDriver initialization. If I use only a very simple initialization it works fine:
Direct3D9 = Direct3DCreate9( $900 )

If Not Direct3D9 Then
	Throw "error creating d3d9 interface!"
EndIf

PParams = New D3DPRESENT_PARAMETERS

PParams.SwapEffect       = D3DSWAPEFFECT_DISCARD
PParams.hDeviceWindow    = hwnd
PParams.Windowed         = bWindowed
PParams.BackBufferWidth  = w
PParams.BackBufferHeight = h
PParams.BackBufferFormat = D3DFMT_A8R8G8B8

' non-zero if device creation failed
hr = Direct3D9.CreateDevice( 0, D3DDEVTYPE_HAL, hWnd, D3DCREATE_SOFTWARE_VERTEXPROCESSING, PParams, Direct3DDevice9 ) <> D3D_OK

If hr Then
	Throw "error creating d3d9 device!"
EndIf

Direct3DDevice9.GetBackBuffer( 0, 0, D3DBACKBUFFER_TYPE_MONO, BackBuffer )


The missing tris come from using D3DCREATE_HARDWARE_VERTEXPROCESSING, even though _Direct3D9.CreateDevice returns D3D_OK.

But when using D3DCREATE_SOFTWARE_VERTEXPROCESSING in the d3d9 driver, it crashes after a few seconds with an unhandled memory exception at Flip... _SwapChain.Present(Null, Null, Null, Null, flags)?!

So I think I'll write my own small TGraphicsDriver in order to continue quickly... apart from that, I don't know what the problem with D3DCREATE_HARDWARE_VERTEXPROCESSING is.


Dreamora(Posted 2008) [#13]
Are you sure that $900 for the DX version is correct?
I thought it was 3x (depending on the DX SDK you use, of course; see the _xx in the DLL name).
$900 would be the major version, not the DX runtime version...


Rone(Posted 2008) [#14]
I am a little bit confused, because I have always passed DIRECT3D_VERSION, which is $900. All my books and samples use $900 too... Budman's DX driver also uses it.

But the DX reference really says D3D_SDK_VERSION must be passed, which is 32 on my installation...
Create an IDirect3D9 object as shown here:

LPDIRECT3D9 g_pD3D = NULL;
    
if( NULL == (g_pD3D = Direct3DCreate9(D3D_SDK_VERSION)))
    return E_FAIL;

/*==========================================================================;
 *
 *  Copyright (C) Microsoft Corporation.  All Rights Reserved.
 *
 *  File:   d3d9.h
 *  Content:    Direct3D include file
 *
 ****************************************************************************/

#ifndef _D3D9_H_
#define _D3D9_H_

#ifndef DIRECT3D_VERSION
#define DIRECT3D_VERSION         0x0900
#endif  //DIRECT3D_VERSION

// include this file content only if compiling for DX9 interfaces
#if(DIRECT3D_VERSION >= 0x0900)
/*
...
*/
#ifdef D3D_DEBUG_INFO
#define D3D_SDK_VERSION   (32 | 0x80000000)
#define D3D9b_SDK_VERSION (31 | 0x80000000)

#else
#define D3D_SDK_VERSION   32
#define D3D9b_SDK_VERSION 31
#endif



Dreamora(Posted 2008) [#15]
I don't know.
It confused me as well, but before DX 9.0c there was never a situation where roughly 10 concurrent versions existed and you needed to switch which runtime DLL you use (which makes a difference in how the shader compiler behaves, etc.).


DStastny(Posted 2008) [#16]
The version number is just the version of the header you are using; it has no impact except to expose different features. The version used by my driver is from an older SDK, actually from Mark's own header. Not the problem.

@Rone

Here is the initialization code in my driver:
' try to create different devices
		' falling back to full software vertex processing if necessary
		If _Direct3D9.CreateDevice( 0,D3DDEVTYPE_HAL,_FocusHWND,D3DCREATE_PUREDEVICE|D3DCREATE_HARDWARE_VERTEXPROCESSING|D3DCREATE_FPU_PRESERVE,_FocusPresentParams,_Direct3DDevice9)<>D3D_OK
			If _Direct3D9.CreateDevice( 0,D3DDEVTYPE_HAL,_FocusHWND,D3DCREATE_HARDWARE_VERTEXPROCESSING|D3DCREATE_FPU_PRESERVE,_FocusPresentParams,_Direct3DDevice9)<>D3D_OK
				If _Direct3D9.CreateDevice( 0,D3DDEVTYPE_HAL,_FocusHWND,D3DCREATE_SOFTWARE_VERTEXPROCESSING|D3DCREATE_FPU_PRESERVE,_FocusPresentParams,_Direct3DDevice9)<>D3D_OK			
				    DXLog "failed To create device _D3dDev9"
					_DestroyDirect3DDevice9()
					Return Null
				End If
			End If		
		EndIf	



As you can see, it is a fall-through model, so it tries to create the most compatible device for the card.

Simple question: does rendering work in Max2D?
I.e., the sample applications I ship in the ZIP.

Thanks
Doug


Rone(Posted 2008) [#17]
Yes, DirectX9 Max2D works fine.

The posted .exe was built with the initialization unchanged, so the first attempt with HARDWARE_VERTEXPROCESSING is automatically used... I only commented out _PresentParams.EnableAutoDepthStencil = True so that the screen doesn't stay black...
_Direct3D9.CreateDevice(0,D3DDEVTYPE_HAL,_FocusHWND,D3DCREATE_PUREDEVICE|D3DCREATE_HARDWARE_VERTEXPROCESSING|D3DCREATE_FPU_PRESERVE,_FocusPresentParams,_Direct3DDevice9)

returns D3D_OK, but produces errors on most cards. And that happens with my simple initialization too... very strange.

_Direct3D9.CreateDevice(0,D3DDEVTYPE_HAL,_FocusHWND,D3DCREATE_SOFTWARE_VERTEXPROCESSING|D3DCREATE_FPU_PRESERVE,_FocusPresentParams,_Direct3DDevice9)
also returns D3D_OK and renders correctly, for a few seconds. Then the application crashes at _SwapChain.Present(Null, Null, Null, Null, flags)... I can't understand this ;)

I don't really want to write a new graphics driver, because yours looks good and I think the problem must be somewhere else.
I also can't find any error in my TDXRenderable type.


DStastny(Posted 2008) [#18]
Well, if the 2D is working, the problem lies somewhere else, as the Max2D driver sets up vertex and index buffers and drives them fine. I have a feeling it's the creation of your vertex/index buffers; some combination of the flags or the mechanism for filling them is causing the fits.

D3DDevice.CreateVertexBuffer(vert_array_size, D3DUSAGE_WRITEONLY, D3D_VERTEX_FORMAT, D3DPOOL_MANAGED, VertexBufferObj, Null)
D3DDevice.CreateIndexBuffer(tri_array_size, D3DUSAGE_WRITEONLY, D3D_INDEX_FORMAT, D3DPOOL_DEFAULT, IndexBufferObj, Null)
		


Looking at this, you are creating the vertex buffer in the managed pool and the index buffer in the default pool.

Not having my SDK docs handy: Managed means that if the device is lost, DirectX will manage recreation across a reset of the device. Change them both to Managed.

Also, check that creation succeeds:
If D3DDevice.CreateVertexBuffer(vert_array_size, D3DUSAGE_WRITEONLY, D3D_VERTEX_FORMAT, D3DPOOL_MANAGED, VertexBufferObj, Null) <> D3D_OK
	Throw "Failed To Create VB"
End If

If D3DDevice.CreateIndexBuffer(tri_array_size, D3DUSAGE_WRITEONLY, D3D_INDEX_FORMAT, D3DPOOL_MANAGED, IndexBufferObj, Null) <> D3D_OK
	Throw "Failed To Create Index Buffer"
End If


Next, your method UpdateVBO.

I don't understand the flags with the and/or stuff, but I have to assume there is some purpose for that inside your implementation.

I am concerned that you are writing to a buffer even if the lock fails, so restructure it like this.

Don't hold two locks at the same time.

Local VertexBufferStart:Byte Ptr
Local IndexBufferStart:Byte Ptr

If VertexBufferObj.Lock(0, no_verts * VERTEXSIZE, VertexBufferStart, D3DLOCK_NOSYSLOCK) = D3D_OK
	MemCopy(VertexBufferStart, verts, no_verts * VERTEXSIZE)
	VertexBufferObj.Unlock()
End If

If IndexBufferObj.Lock(0, no_tris * 3 * INDEXSIZE, IndexBufferStart, D3DLOCK_NOSYSLOCK) = D3D_OK
	MemCopy(IndexBufferStart, Byte Ptr(tris), no_tris * 3 * INDEXSIZE)
	IndexBufferObj.Unlock()
End If



FYI, this was the biggest problem I had with Nvidia and ATI: different cards and drivers caused really strange behavior. Originally, in version 1, I held the locks open and pumped in tris until the buffer was full, then unlocked and rendered. I forget which, but one vendor always either crashed outright or failed to render if a lock was held across a Windows message pump, and due to the structure of Max2D the only way to fix it was to create a memory buffer and then lock, fill, unlock, render. So I wound up creating my own system-memory triangle buffer. It took a performance hit but gained stability.

Thinking about it now, I know why you're getting working results with a SOFTWARE_VERTEXPROCESSING pipeline. The vertex buffers are kept in system memory until rendering, so any kind of funky memory behavior from your locking/unlocking, with the types of buffers you're creating, is hidden because you're always dealing with system memory. But with hardware vertex processing you're actually messing with GPU memory directly, hence the strange behavior.

Hope this helps. Your questions have actually made me look at Max for the first time in a long while (well, since the reflection bug in 1.26/8??? caused a problem for someone using the Max2dDx9 driver).

I had actually started writing a Max3D driver model at one time, similar to what you guys are doing, but some frustrations with the limitations of Max as a language (lack of system-level thread safety, awful debug capabilities, and having to expand all 3D math inline to avoid function-call overhead) made me table it to see if BRL tries to address these limitations.

I will be more than willing to help you guys any way I can. If I can get my old machine up and running, I'll see if I can make my vertex/index buffer objects available to you so you can just deal with filling them and rendering them. I had most of these nuances pretty well under control, and they plugged right into the Dx9 driver so it managed their state, creation and destruction. Right now my full-time job is killing me time-wise, not leaving much time for fun.

Doug


Rone(Posted 2008) [#19]
Thanks for your help.
I have made your suggested changes and played around a bit with the flags, but I still have the same problems...

The reset_vbo flags I use in UpdateVBO come from miniB3d and are set if the vertex data changes... but here they are only provisional; reset_vbo <> 0 would be enough.
Instead of that, I will adapt the locked range according to the changed VB or IB index.

BTW: I don't think locking, filling and unlocking the vertex buffer every frame can be a solution; they should only be updated if the vertex data changed. That also isn't feasible with many big meshes ;) ... or did I get you wrong?


Dreamora(Posted 2008) [#20]
Requesting a lock is more of a performance killer than updating the mesh, unless you push a massive amount of data through it.

So a few large objects are better than a lot of small ones.

If you have highly dynamic objects, handling meshes the way Blitz3D does, sending them again every frame instead of having them reside on the GPU, is most likely the better solution, given the cost of a lock compared to the cost of a send.


DStastny(Posted 2008) [#21]
@Rone,

I am doing some reading of the SDK while drinking my morning joe.

When a vertex buffer is created, CreateVertexBuffer uses the usage parameter to decide whether to process vertices in hardware or software.

If CreateDevice uses D3DCREATE_HARDWARE_VERTEXPROCESSING, CreateVertexBuffer must use 0.
If CreateDevice uses D3DCREATE_SOFTWARE_VERTEXPROCESSING, CreateVertexBuffer must use either 0 or D3DUSAGE_SOFTWAREPROCESSING. For either value, vertices will be processed in software.
If CreateDevice uses D3DCREATE_MIXED_VERTEXPROCESSING, CreateVertexBuffer can use either 0 or D3DUSAGE_SOFTWAREPROCESSING


Now this is interesting, since the documentation is somewhat ambiguous: it also says to use the WRITEONLY flag or possibly suffer a performance hit.

Try changing the D3DUSAGE_WRITEONLY to 0.
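In BlitzMax terms, roughly this (softwareVP here is a made-up flag you would set according to which CreateDevice fall-back actually succeeded):

' Match the CreateVertexBuffer usage to the vertex processing mode of the device,
' following the SDK rules quoted above.
Local usage:Int = 0   ' hardware vertex processing: plain 0 (add WRITEONLY back if the debug runtime asks for it)
If softwareVP Then usage = D3DUSAGE_SOFTWAREPROCESSING

If D3DDevice.CreateVertexBuffer(vert_array_size, usage, D3D_VERTEX_FORMAT, D3DPOOL_MANAGED, VertexBufferObj, Null) <> D3D_OK
	Throw "Failed To Create VB"
EndIf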

The other thing, and here I am guessing, is that you're using a static VB, not a dynamic buffer like I use in the Max2D implementation.

Are you changing the object while running, or is this a simple test like with the cube?

Doug


DStastny(Posted 2008) [#22]
One final thing before I head off to work, just looking at your code.

You use EnableVB to create the buffer. If the underlying memory describing your VB changes, you're not resizing the VB. That would be a problem if you change it after creating it. Not necessarily the cause of this problem, but down the road it would be.

Doug


Rone(Posted 2008) [#23]
Unfortunately I am at work at the moment. Later I will test changing the D3DUSAGE flag depending on the D3DCREATE flags.
This is just a test; UpdateVB is only called once. I will post a complete sample with my own D3D initialization, which also has trouble with HARDWARE_VERTEXPROCESSING but works with SOFTWARE_VERTEXPROCESSING.

Hmm, I think EnableVB automatically resizes the vertex buffer to vert_array_size, which is increased when adding a vertex.


DStastny(Posted 2008) [#24]
If you have something encapsulated outside of everything else, I'll debug it for you :)

Doug


Rone(Posted 2008) [#25]
So, here's the sample with the latest changes. Instead of using my Direct3D type for initialization, you can also use your driver by just commenting out:

If _flags & GRAPHICS_DEPTHBUFFER
	'_PresentParams.EnableAutoDepthStencil = True ' true if we want z-buffer
	'_PresentParams.AutoDepthStencilFormat = D3DFMT_D16
End If
If _flags & GRAPHICS_STENCILBUFFER
	'_PresentParams.EnableAutoDepthStencil = True ' true if we want z-buffer
	'_PresentParams.AutoDepthStencilFormat = D3DFMT_D24S8
End If


Warning, it's ~1200 lines of code because of TMatrix and TSurface from miniB3d.
TSurface is completely adapted to TRenderable, which is normally an interface implementation...


Edit:
Hmm, if I create the window with CreateWindowExA instead of using a MaxGUI window, the program also crashes at device.Present when using D3DCREATE_SOFTWARE_VERTEXPROCESSING... same as with your D3D9GraphicsDriver :)

Edit:
OK, _PresentParams.EnableAutoDepthStencil = True also works fine... I just have to clear the z-buffer ( Field ClsMode:Int = D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER ) :)


DStastny(Posted 2008) [#26]
I found it :)

It's a bug in the way you are allocating and locking the index buffer.

I am trying to figure out exactly what it should say but thought I would share what I found.

The reason it sometimes works and sometimes doesn't depends on the logic that resizes the index and vertex buffers.


Doug


Rone(Posted 2008) [#27]
awesome...big thanks :)


DStastny(Posted 2008) [#28]
I changed the d3dx to one I had, so you will need to change it back to the one you want to use.

Changes that I remember doing :)
If D3DDevice.CreateIndexBuffer(tri_array_size * INDEXSIZE, D3DUSAGE_WRITEONLY, D3D_INDEX_FORMAT, D3DPOOL_DEFAULT, IndexBufferObj, Null) <> D3D_OK Then
	Throw "Failed to create IB"
EndIf


You need to multiply by INDEXSIZE, since tri_array_size is the number of elements, not the number of bytes. I also added D3DUSAGE_WRITEONLY back; the DirectX9 debug runtime complained it needed to be there, counter to the documentation.

Method BeginScene()
	Direct3DDevice9.Clear(0, Null, D3DCLEAR_TARGET, clsColor, 0, 0)
	Return Direct3DDevice9.BeginScene() = 0
End Method


You were expecting this to be true if successful. All DirectX calls return zero on success; for WinAPI calls returning an HRESULT, 0 means success.

There is a bunch of debug code I put in to track down the bad index buffer size.

Let me know if it now shows what you expect.

Doug







DStastny(Posted 2008) [#29]
This has been an interesting exercise. There is still one debug message I am seeing that I don't like, but it doesn't seem to have much impact, nor can I explain it.

[5424] Direct3D9: (WARN) :Stream 0 stride and vertex size, computed from the current vertex declaration or FVF, are different, which might not work with pre-DX8 drivers 


I think this is due to the fact that your FVF has 2 texture coordinate sets and it knows you're not using textures. It does not seem to affect behavior.
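For reference, the warning just means the stride handed to SetStreamSource differs from the size D3D computes from the FVF; a sketch with made-up numbers (your actual FVF and VERTEXSIZE may differ):

' FVF with position, normal and two 2D texture coordinate sets:
' 3*4 (xyz) + 3*4 (normal) + 2*(2*4) (uv0, uv1) = 40 bytes per vertex.
Const D3D_VERTEX_FORMAT:Int = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX2
Const VERTEXSIZE:Int = 40

' The stride passed here should be the same 40 bytes the FVF implies,
' otherwise the debug runtime prints the stream 0 stride warning.
D3DDevice.SetFVF(D3D_VERTEX_FORMAT)
D3DDevice.SetStreamSource(0, VertexBufferObj, 0, VERTEXSIZE)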

I would suggest you manage the index buffer the same way you manage the vertex buffer, with direct memory manipulation instead of the slicing.

The approach you are using is OK for static geometry, unless you have lots of tiny meshes, in which case you would want to merge the vertex and index buffers.

It will be awful for any kind of dynamic or animated mesh, though. For that you need a global dynamic buffer similar to what I used in Max2D.
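For completeness, a dynamic buffer is usually created and refilled roughly like this (a sketch only, not the actual Max2D driver code; DynamicVB is a made-up name):

' Dynamic VB: D3DUSAGE_DYNAMIC goes with D3DPOOL_DEFAULT, so it also has to be
' released and recreated when the device is reset.
If D3DDevice.CreateVertexBuffer(vert_array_size, D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY, D3D_VERTEX_FORMAT, D3DPOOL_DEFAULT, DynamicVB, Null) <> D3D_OK
	Throw "Failed To Create dynamic VB"
EndIf

' Refill every frame; D3DLOCK_DISCARD lets the driver hand back a fresh block
' instead of stalling until the GPU is done with the previous contents.
Local p:Byte Ptr
If DynamicVB.Lock(0, no_verts * VERTEXSIZE, p, D3DLOCK_NOSYSLOCK | D3DLOCK_DISCARD) = D3D_OK
	MemCopy(p, verts, no_verts * VERTEXSIZE)
	DynamicVB.Unlock()
EndIf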

I also would probably not use the Dx9Max2d driver as the base, as it manages the BeginScene/EndScene states. Use the base driver:

SuperStrict
Framework pub.win32
Import "DX9Graphics.bmx"
Import brl.GLGraphics
SetGraphicsDriver(D3D9GraphicsDriver(),GRAPHICS_BACKBUFFER|GRAPHICS_DEPTHBUFFER)



Function Clear(dwColor:Int)
	D3D9GraphicsDriver().Direct3DDevice9().Clear(0,Null,D3DCLEAR_TARGET|D3DCLEAR_ZBUFFER,dwColor,1.0,0)	
End Function

Function BeginScene:Int()
	Return   D3D9GraphicsDriver().BeginScene()=0
End Function

Function EndScene()
   D3D9GraphicsDriver().EndScene()
End Function

Graphics 640,480,0

While Not KeyHit(Key_Escape)
  Clear($FFFF0000)
  If BeginScene()

	EndScene()
  End If
  Flip 0
Wend



You really don't want to mix in Max2D rendering, be it DX9 or OpenGL. Max2D's rendering and state management is not very efficient, which will greatly affect 3D performance in complex scenes.

Thanks for posting. If you guys are interested, I would like to show you some work I had done that is similar to what you're attempting; I haven't explored it fully, but it might be something you would be interested in.

I will be making some changes to the Dx9Max2d library based on some things I have seen after looking at it again while trying to figure out your problem.

Doug


Rone(Posted 2008) [#30]
Thanks again!
Fundamentally a stupid mistake on my part ;)
It works fine now :)

OK, at first I will use the base driver, but mixing Max2D with miniB3d must definitely be supported, for compatibility with the original miniB3d... compatibility is generally difficult, because there are some aspects we would design differently, and the same goes for the new features.

It would be nice to see your work and I am definitely interested...


Rone(Posted 2008) [#31]
@Budman, have you ever had problems while debugging a d3d9 application?
Since I started using textures, the application ends with 'Process complete' when I click on the debug tree or perform a debug step, but otherwise the program runs solidly...

BTW: I thought about mixing 2D and 3D and came to the conclusion that you're right. A single-surface 2D system on top of miniB3d, like Draw3D, would be a better solution.


Dreamora(Posted 2008) [#32]
And what do you intend to do with Windows users of Max2D? You know that it is DX7, so you cannot and will not be able to intermix it, and I don't assume you plan to base your "core feature set" on something unofficial which does not work across all machines. And I would not assume that DX9 will ever happen officially (and even if it does, DX7 is still present and will be used on many machines)...

It would be simpler and more stable to replicate the Max2D function set within your code as a submodule, but in a way that is compatible with your own rendering part. At least unless you intend to write a DX7 driver as well.


Rone(Posted 2008) [#33]
@Dreamora, I do not really understand what you mean, because Windows users of miniB3d are currently bound to OpenGL if they want to use Max2D for a HUD or something...

Generally it was just an idea that has been in my head for some time... and a DX7 implementation is not planned at all from my side, because I think a complete D3D9 implementation will still take some time. I am also a little unhappy with the RenderInterface design, so it will be enhanced as well... And I'll have to see how time-consuming my studies will be next semester ;)


DStastny(Posted 2008) [#34]
@Rone
For debugging, I haven't noticed the behavior you're talking about; however, I rarely use the built-in debugger. I use debug logging to debug DX code in BMax, since the BMax debugger is crap (that's being polite). It's 2008 and I have to put DebugStop in and recompile my code. Give me a break.

Do you have the DirectX debug runtime from the SDK activated? If not, I can't recommend it enough. Get the SDK, use the applet to enable the debug runtime, and use this:
[url]http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx[/url] (DebugView) to capture the driver's debug spew. It will find pretty much everything you might be doing wrong.

Doug


Rone(Posted 2008) [#35]
Thanks, DebugView really is a relief. :)
However, the debugger crashes after calling
Local hr:Int = D3DDevice.CreateTexture(WIDTH, HEIGHT, level, usage, Internal, D3DPOOL_MANAGED, Texture[i], Null)
but apart from that, textures work well...

The texture loading method looks like this
(it's done in almost the same manner as in your driver):



DStastny(Posted 2008) [#36]
@Rone, I don't see anything in particular, but I am wondering about the array of textures and whether the debugger is choking on that. I know that when reflection was added it caused problems with arrays of objects that extend IUnknown. So maybe the debugger is having fits when it inspects the array of textures.

I assume you are single stepping through this? And it makes the call and then crashes the IDE?

Doug


DStastny(Posted 2008) [#37]
That's really weird. I can duplicate it. Not sure what the deal is. Guess I need to do some debugging :)

Doug


DStastny(Posted 2008) [#38]
It's a similar bug: it's the array of IDirect3DTexture9 textures.

I changed your code to just use a single texture and it's fine.

Unless BRL fixes it, your code is not the problem. To work around it, you will need to create your own simple container wrapping the textures if you want to manage the array.

It might be a good idea anyway, as you can put all the code to manage locking etc. inside that class.
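Something along these lines would do it (hypothetical names, and assuming the texture interface type is exposed as IDirect3DTexture9 in your headers):

' Wrap each texture in a plain BlitzMax object and keep an array of those,
' so the debugger never has to walk an array of COM interfaces directly.
Type TTextureSlot
	Field tex:IDirect3DTexture9
End Type

Type TTextureContainer
	Field slots:TTextureSlot[]

	Method Init:TTextureContainer(count:Int)
		slots = New TTextureSlot[count]
		For Local i:Int = 0 Until count
			slots[i] = New TTextureSlot
		Next
		Return Self
	End Method

	Method Get:IDirect3DTexture9(i:Int)
		Return slots[i].tex
	End Method

	Method Put(i:Int, t:IDirect3DTexture9)
		slots[i].tex = t
	End Method
End Type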

Doug


Rone(Posted 2008) [#39]
Unbelievable... thanks a lot.

It works now with a texture container.
And I got lighting and materials working, too.

Next I think I will paste it into our render interface and test some miniB3d samples :)
If I'm not overlooking something, miniB3d should then run completely on D3D9! :) :)


DStastny(Posted 2008) [#40]
Can't wait to see it!

Doug


Rone(Posted 2008) [#41]
Hi,
if you are interested, here is the current source.
Any improvements are welcome :)

The material handling is not yet working as it does in miniB3d. Possibly I'm setting some wrong render states; it seems that sometimes only the shadowed areas are colored. The specular component doesn't work yet either. So it still needs some improvement... ;)

Here is the link to the .exe and the textures:
Click here!




Rone(Posted 2008) [#42]
It seems the code is too long for one post, is that possible? So here's the rest:




ToM C.(Posted 2008) [#43]
Hi Rone,
on my RADEON 9600 TX everything is working fine.

The only thing I had to change was the LoadLibraryA("d3dx9d_33") to "d3dx9_36".
Will it be possible at some point to automatically use whichever correct DLL is installed?
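One way that could work is simply trying the runtimes you support in order until one loads (a sketch; the exact list of version numbers is just a guess):

' Try the installed d3dx9 runtimes from newest to oldest;
' LoadLibraryA returns 0 when a DLL is not present.
Function LoadD3DX9Lib:Int()
	Local names:String[] = ["d3dx9_36", "d3dx9_35", "d3dx9_34", "d3dx9_33"]
	For Local n:String = EachIn names
		Local lib:Int = LoadLibraryA(n)
		If lib Then Return lib
	Next
	Return 0
End Function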


Yahfree(Posted 2008) [#44]
The exe doesn't do anything; I see a window for a split second, then it quits.


Dreamora(Posted 2008) [#45]
If _36 is present, _33 should be present as well; otherwise you're on Vista and forgot to run the DX web installer, which installs the roughly 10 missing versions of DX9.0c... :)


Rone(Posted 2008) [#46]
@Yahfree: Which graphics card do you use?

I have just implemented our miniB3d interface, basically copy/pasting the above code... it works fine on ATI, but there are still compatibility problems with Nvidia cards.


ToM C.(Posted 2008) [#47]
@Dreamora: I don't use Vista. The DLL which is present is "d3dx9_33", not "d3dx9d_33". Maybe Rone is using a DX developer edition and that is what the "d" stands for. I don't know.

Before that change in the code I had the same problem as Yahfree when running the exe.


Dreamora(Posted 2008) [#48]
Yup, d3dx9d_33 specifies the debug DLL instead of the retail one, which would be what you first pointed out. I missed the "d" there and therefore only answered the version part.
That definitely needs to be fixed, as it means a crash on most systems and kills performance. (On top of that, it can affect shaders and lead to anomalies.)