BlitzMax 3D engine + Modeling Software


Bobysait(Posted 2016) [#1]



About the engine
It's a BlitzMax engine built as a standalone module.
- It uses its own version of the glew module and glgraphics, and is built as a real Max2D driver,
so any (or almost any) 2D command works on top of the 3D (with blending, alpha, etc.).
- It won't be free (but not that expensive - around $40 with lifetime updates. Once you buy it, it's yours for life).
- Still in development, but nearing the end (it's currently in the beta test stage while I build some demos to show off the engine and write some samples to ship with it when it is released).

[Latest demo video - last updated 2016-10-28]
An almost entirely procedurally generated landscape





About the modeling software (not currently in development - it will be continued once the engine is released)
It's kind of a 3ds Max replica, but very limited in features; it is made to create b3d models/scenes without the need for an expensive package (like the thousands you'd pay for Discreet/Autodesk software).

- supports common primitives with parametric detail settings (cube, sphere, torus, cone, cylinder, grid ... more to come)
- import/export b3d, with support for animations and bones (it won't support any other format as long as I don't have the specs; the most common ones, like FBX, are not open)
- material and texture editor
- realtime rendering mode
- normal and depth map rendering
- outline mode
- transformation : move/rotate/scale relative to world/parent/local/screen matrix

- 3D powered by the BigBang engine (a GLSL 3.1 engine for BlitzMax) that will be distributed as a module to owners of the software (and yep, I haven't found a name or logo for the modeling software yet)

The main Blitz3D features the engine doesn't support:
- per-vertex lighting (I only support per-pixel lights)
- flat-shaded rendering (it could be done in OpenGL immediate mode, but since the engine only uses shaders for rendering, it would require splitting all triangles to get this kind of render. It's still possible to achieve, but unless there is more demand for it, I won't implement it)
BTW, is there anybody who used the flat mode ? (except for polymaniacs)

[Editor screenshots]














PS: the FPS shown on the status bar is not what you can achieve with the engine, only the frame-limited FPS - the editor is made to consume as little CPU as possible; only in rendering mode does it go up to 45 fps, just to test scene fluidity.
In game (without frame limiting), it's generally faster than the Blitz3D engine ... it depends on many things (like graphics card drivers; a few features are slower on BigBang due to OpenGL restrictions, while almost all the others are faster).


Update 2016-05-01
WebGL port
Online demo available here
(ECMAScript 5 - works in almost all browsers, except Safari on iPad 2)

And another demo here
(ECMAScript 6 - not available in all browsers yet; ES6 is still too new, but will soon be supported everywhere.)


Doggie(Posted 2016) [#2]
I can't quite make out if this is a 3d modeler or a game engine...
I suppose both. Could you post more about character creation and rigging?
That's what I'd want it for most. I won't have any use for the engine since it
is for BlitzMax which is over my head, so to ask me what it would be worth
to me depends on how good it is at character creating and animating and
buildings,props etc that I can use with Blitz3d. Just off the top of my head
I might buy it within a range of $20.00 to $40.00. Not saying it might not
be worth more than that but my budget would preclude paying more.
On a side note, it's really cool to see new programs like this once again.

DOG


RemiD(Posted 2016) [#3]
Congrats on the 3D modeling software, but I am not interested in a modeling software at the moment (I don't want to learn a new tool now that I am good at using another...).

However, I may be interested in using your BigBang engine in the future if it has the same capabilities as Blitz3D, a similar syntax, and if it is stable enough and "future proof".

A few questions :

What are the advantages of using your Bigbang engine compared to the Blitz3d engine ? I suppose the possibility to use per pixel lighting and shaders ? (bumpmapping ? precise glow of some pixels ? other ?)

Is it compatible with most graphics cards (not too old of course), and is it compatible with windows only or also with mac osx ?

Can you explain briefly how we are supposed to use this 3d engine with Blitzmax ? (how to install)

Can you provide a few code examples to demonstrate how to use it with Blitzmax ? (also to see if the syntax is similar to Blitz3d)

Thanks,


Bobysait(Posted 2016) [#4]

Could you post more about character creation and rigging?
That's what I'd want it for most.



At the current stage, there is no tool dedicated to one specific task; it's designed like 3ds Max: same commands to turn/move etc., the camera has the same "off-center orbit around the pivot" style, it uses polygons instead of triangles, but it's intended to be generic. So there is everything you need to build and animate a character, but nothing specifically designed for building and animating a character ... if you see what I mean.
Of course, if there is demand for specific features in the future, I'll add them.

"Tell me what you want, and I'll teach you how to live without it" :)


Is it compatible with most graphics cards (not too old of course), and is it compatible with windows only or also with mac osx ?


At the moment, it's Windows only.
It should be cross-platform, but since I own neither a Mac nor a Linux machine, I can't test it for now.
I'm currently installing Ubuntu 15 on a dual-core 2x2.5 GHz PC (a machine just old enough, with an HD 4780).
The Mac version will come later, when I get a Mac :)
What is sure: the engine uses OpenGL only, with GLSL 330, so it won't be compatible with "very old" graphics cards, but it should be compatible with more than 95% of machines in use, according to the Steam survey for example (though I assume the Steam survey is restrictive and not that accurate, as it essentially surveys "gamer" machines ^^). Most graphics cards from 2010 or later are GLSL 3.3 capable, so only hardware older than about 6 years won't work.
(PS: a 2005 machine can still take a GLSL 3.3 capable graphics card, which can be found in any store for 20 euros ... so it's not really a big deal; those customers either aren't 3D customers or they would already have updated their hardware.)
The BigBang engine is not made to run small Blitz3D prototypes, but bigger projects.
> it supports lots of entities with lots of surfaces, with bump and specular, and can use a space optimizer for rendering LOD (the space optimizer is released as an external module)



What are the advantages of using your Bigbang engine compared to the Blitz3d engine ?


- The first advantage is the BlitzMax language vs the Blitz3D language.
- Almost all the Blitz3D commands are wrapped to the BigBang engine, so Blitz3D code has roughly a 95% chance (very approximate) of translating fully and running without modification.
The only addition: "Import mdt.bigbang" at the top to import the module.
I'll get back to the advantages later; let's talk about how to install and use it:

- copy/paste the module into the mod directory
- create a new bmx file for your project
- add <Import mdt.bigbang>
- optionally specify the graphics driver, if anything needs to be done before the creation of the 3D window
> SetGraphicsDriver (BigBangDriver())

That's all; what comes next is what you would write in Blitz3D.

For example, this is a simple scene of a camera turning around a cube



This is the same using bigbang on blitzmax
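
A minimal sketch of what the BigBang/BlitzMax version might look like, assuming the Blitz3D-like wrapper commands behave as in Blitz3D; only "Import mdt.bigbang" and BigBangDriver() come from this thread, so treat the rest as an illustration rather than verified API:

' Hedged sketch: a camera orbiting a cube, written with the Blitz3D-style commands
' the engine says it wraps. Only Import mdt.bigbang and BigBangDriver() come from
' this thread; everything else follows the Blitz3D convention.
Import mdt.bigbang                 ' the BigBang module (not publicly released)

SetGraphicsDriver BigBangDriver()
Graphics3D 800, 600

Local cam   = CreateCamera()
Local light = CreateLight()
Local cube  = CreateCube()

AmbientLight 0.2, 0.2, 0.2         ' colors are floats in [0.0, 1.0], not bytes [0, 255]

Local piv = CreatePivot()          ' pivot for the camera to orbit around the cube
EntityParent cam, piv
PositionEntity cam, 0, 1, -5

While Not KeyHit(KEY_ESCAPE)
	TurnEntity piv, 0, 1, 0        ' orbit the camera around the cube
	RenderWorld
	Flip
Wend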



The main difference is the color format > I chose float [0.0, 1.0] while Blitz3D uses byte [0, 255].
The result is not exactly the same because BigBang uses per-pixel lighting and enables emissive, specular and ambient for each light.
And what makes a big difference:
- the framerate.
This is a simple scene test with the same code as above, just swapping the model for:

First value is Blitz3D FPS, second is BlitzMax with BigBang FPS:
* a simple cube (12 tris): 2000, 2000
* a 32-segment sphere (3968 tris): 1000, 1900
* a 64-segment sphere (16128 tris): 500, 1850
* a 96-segment sphere (36480 tris): 275, 1850

So the BigBang engine does not suffer from large amounts of triangles, while Blitz3D's framerate drops as fast as the triangle count rises.


Now, what is fun with BlitzMax is the object mode.

This can replace the code from the FPS test above; it's faster to write:

CreateLight(2).SetDiffuse(1,.8,.6).Move(20,20,-20)


this is the light setup.

and this is a turnentity in object
Local piv:BEntity = CreatePivot() ' or BPivot.Create() works too.
piv.turn(0,120,0)
piv.move(0,2,8)
piv.scale(2,1,3)


the blitz3d code for this :
piv = createpivot()
turnentity(piv, 0,120,0)
moveentity(piv, 0,2,8)
scaleentity(piv, 2,1,3)


So, there are many ways to code the same thing; it's up to the user to choose the mode that best fits their style.


===============================================================================================

About the engine spec :
===============================================================================================
===============================================================================================
note:
> "Blitz3D-like" = same behavior, and same (or almost the same) syntax
> "*=" same feature as Blitz3D
> "*+" addition Blitz3D doesn't have
> "*-" not implemented (yet?)

===============================================================================================

[Material/Texture] : Blitz3D-like
------------------
*+ native bump map support (just use the texture blend mode TEX_BUMP or TEX_BUMPFACE > Bump Face multiplies the texture normal with the vertex normal)
*+ native specular map support (texture blend TEX_SPEC/TEX_SPECULAR)
*+ native displacement map support (TEX_DISPLACE, cumulative with bump: TEX_DISPLACEBUMP, which will offset the texture coords for the next layers)
*+ native per pixel lighting
*+ Frame Buffer support (mesh and textures stored in VRam > faster access)
*+ Render to texture using frame buffer


[Entity System] : blitz3d-like SceneGraph
--------------------
*= nested nodes
*= quaternion rotations


[Rendering] : Blitz3D-like
--------------------
(though the rendered output doesn't look exactly like Blitz3D, because of per-pixel lighting and some specifics that Blitz3D doesn't handle)
*+ post-rendering effects using shaders (for outline, glow etc.)
* the render can export color (normal rendering) and output to other textures: depth map and screen-space normals (as seen in the screenshot above)
* multiple wire modes: disabled/enabled/default/backface, customizable for each brush (it's not a "full scene wireframe", but it can behave like one if the brush is set to default and the camera to Wireframe(true))
> the backface wire mode draws front faces filled and back faces wired (pretty useful for editors)


[Camera] : Blitz3D-like
---------------------
*+ Can render in different ways (a short usage sketch follows this list)
> camera.RenderEntity(entity, true) = renders the entity (and its hierarchy, if recursive=true) from a single camera (you would use this for shadows, for example)
> camera.RenderWorld() = render everything for a single camera
> RenderWorld () = render everything with every camera
> create and render renderlist -> you can manually set what you want to render (in addition to the space optimizer, this is the best way to render very large scenes with large amounts of models and stuff)
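
A hedged sketch of how these calls might combine for a two-pass (e.g. shadow) render; the procedural forms RenderEntity(camera, entity) and RenderCamera(camera) are quoted later in this thread, and the variables here are placeholders:

' Sketch only: combining the render calls listed above; nothing engine-specific is
' verified beyond the command names quoted in this thread.
Local mainCam  = CreateCamera()
Local lightCam = CreateCamera()          ' e.g. a camera placed at the light
Local caster   = CreateCube()            ' some shadow-casting entity

RenderEntity lightCam, caster            ' one entity + its hierarchy, from one camera
RenderCamera mainCam                     ' the whole scene, from the main camera only
RenderWorld                              ' everything, with every camera (Blitz3D style)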


[Mesh/Surface] : Blitz3D-like
--------------------
*+ All surface properties are independent and sharable!
> you can share the vertex coords with another surface and have a specific vertex normal array for each surface.
So you can copy entities (which will share memory for everything) and specify a UV set for the second layer, so the whole scene can be lightmapped without the need to duplicate meshes and surfaces.


[Collision/Picking] : Blitz3D-like
--------------------
*+ You can create your own picking method by extending the default PickingDetector, CollisionMethod and/or CollisionResponse classes (the callbacks used to check and validate a collision, and what happens when it happens).
So there is everything you need to create a "real" physics engine without too much effort, as the main structure is already there.
I haven't included a physics engine because the more generic it is, the slower it runs... with all the tools to build one, it will fit the scene better if you create exactly what you need; of course the collisions built into the engine already work like Blitz3D's (I haven't benchmarked them, but I managed a big scene with thousands of entities moving with collisions at a very decent framerate).


[Animations] : blitz3d like
------------------
*+ with editable/sharable keymaps


Stevie G(Posted 2016) [#5]
This sounded great Bobsait, until I read this part ;)

BTW, is there anybody who used the flat mode ? (except for polymaniacs)


It would be interesting to know what isn't there from the B3d command set to know how much effort it might take to convert.


Bobysait(Posted 2016) [#6]
Yep, sorry for that, No offense, I think you're the only one I know that have a use for flat shaded polygons :)

GLSL doesn't allow the OpenGL immediate-mode path that enables flat shading.
Normals are computed in the shader, not via the OpenGL state.
So, to integrate a flat mode, it would require "exploding" the triangles, i.e. 3 vertices per triangle, then setting the vertex normals to the triangle normal (everything needed for this is already in the engine). It's possible, but not optimized at all since it drastically increases the vertex count; on the other hand, the engine can handle a massive amount of vertices, so it's not such a big deal either ...

Another way to do it (which would require a small modification to the engine source, but would probably be the best way) would be to use dFdx / dFdy, which have been present since early versions of GLSL, to override the vertex normal in the shader when flat mode is specified ...
It shouldn't be too hard to add; I think I'll take a look this evening.
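
For illustration, here is the core of that dFdx/dFdy trick as a GLSL fragment snippet kept in a BlitzMax string constant (the style used later in this thread for shader sources); the variable name vWorldPos is a placeholder, not the engine's actual shader input:

' Hedged sketch of the dFdx/dFdy flat-shading trick. The GLSL is generic; vWorldPos
' is a hypothetical interpolated world-space position, not a BigBang variable.
Global FLAT_NORMAL_GLSL:String = ..
	"#version 330~n" + ..
	"in vec3 vWorldPos;~n" + ..
	"out vec4 fragColor;~n" + ..
	"void main() {~n" + ..
	"    // face normal from screen-space derivatives of the interpolated position~n" + ..
	"    vec3 flatNormal = normalize(cross(dFdx(vWorldPos), dFdy(vWorldPos)));~n" + ..
	"    // ...use flatNormal in place of the per-vertex normal for lighting...~n" + ..
	"    fragColor = vec4(flatNormal * 0.5 + 0.5, 1.0);~n" + ..
	"}~n"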

ps :
@Doggie

I can't quite make out if this is a 3d modeler or a game engine...
I suppose both.



BigBang is the 3D engine for BlitzMax (a module you import with "Import mdt.bigbang").
The software is a modeling tool made with BlitzMax that uses the BigBang engine to render the 3D.

You don't need the tool to create 3D with the module, and you don't need the module to use the tool. They are two separate products.
But, since 3D engines need nice polished models to make beautiful games, it's better to have a fully compatible pack :)
Everything the tool supports, the engine is capable of > i.e. WYSIWYG
(What You See Is What You Get)

See: when using 3ds Max + Blitz3D, most of the stuff you create in 3ds Max is impossible to use in Blitz3D; only the basic features carry over (materials with color and textures, skinned meshes with bones, etc.).
What you see in 3ds Max is far from what you get when loading the model in Blitz3D. The difference between the two makes the workflow really nasty and wastes a lot of time testing colors, camera depth and so on...

Here, the tool + the engine is the best way to be sure that what you export will be 100% compatible with your game and will look exactly like what you modeled.


About the price:
It will probably depend on how many people are ready to pay for it. The more potential users, the lower the price. But I will probably start with something around 70 euros (85$ or so?) for the pack, or 40 euros (around 45/50 $) per product > so you'll save a bit by buying the two at once.
As said, if there are more users, I will lower the price accordingly.
Of course, each product works standalone > the modeling tool will produce b3d for Blitz3D or Irrlicht or any other engine that supports b3d,
while the bmax engine can use standard b3d files coming from Fragmotion, UUnwrap3D, 3ds Max (+ the b3d pipeline, if you have an old Windows XP to run the old 32-bit 3ds Max version :/ ), etc.
It can also work without b3d, as Blitz3D does, and build everything procedurally.
But that's obvious stuff.


Stevie G(Posted 2016) [#7]

So, to integrate a flat mode, it would require "exploding" the triangles, i.e. 3 vertices per triangle, then setting the vertex normals to the triangle normal (everything needed for this is already in the engine). It's possible, but not optimized at all since it drastically increases the vertex count; on the other hand, the engine can handle a massive amount of vertices, so it's not such a big deal either ...



In order to get proper flat shading in B3d you need to unweld all the triangles and calculate per triangle normals anyway. All my models are set up this way. I'm pretty sure I don't even use Fx 4 (flatshading) at all so it will probably work as expected.

I'll wait to see what the finished command set will be but this looks like excellent work.


Bobysait(Posted 2016) [#8]
@Stevie G:
I didn't mention it, but you can also texture the mesh with a bump map, so it will replace the vertex normal or mulltiply it. (in multiply mode, with some flat shaded style normal map, it could be pretty nice actually)


KronosUK(Posted 2016) [#9]
Will it do shadows or post processing effects?


RemiD(Posted 2016) [#10]
Also, do you include ready-to-use shaders or code examples for effects like per-pixel lighting, per-pixel shadows, bump mapping, per-pixel glow, reflections (cubemapping?)


Also, you haven't talked about terrains; can you please give us some info?


Also, about support: do you plan to provide it long term, even if you are not paid? This is a big issue with many game engines, where the creator either disappears or simply lacks the skills or the willingness to fix bugs/compatibility problems after some time (look at what happened with Xors3D or Nuclear Basic...)


Steve Elliott(Posted 2016) [#11]
A good point from RemiD. If it were me I would make sure my users were supported and happy - because I've been let down in the past myself.


KronosUK(Posted 2016) [#12]
+2. Nonetheless still quite excited by this announcement. I've been getting the programming itch again lately.


Bobysait(Posted 2016) [#13]
@KronosUK
* Shadows are not included for a simple reason: for me, shadows are like a physics engine;
they're third-party stuff that can't be generic enough to work in every scene and
won't be efficient if not adapted to the game/application.
But I will provide a library (for free) as an extension to the 3D module to generate realtime shadows (maybe cascaded shadow maps).

There is already an SDK to create post-processing effects (like the outline in the screenshots above; it's a full-screen post-process).


@RemiD :
Per-pixel lighting needs nothing; it's incorporated into the engine as the replacement for lighting.
Just create a light (like you do in Blitz3D); when rendering (using RenderWorld or RenderEntity etc.) it computes the lighting in the fragment shader.
So there is no hard work needed to get it working :)
For bump, I have made some samples for some scenes.
You should know that bump mapping isn't hard stuff in the engine (nor is the specular map):

Local NormalTex = LoadTexture("My Normal Texture File.png")
TextureBlend NormalTex, BLEND_BUMPFACE

Local SpecTex = LoadTexture("My Specular Texture File.png")
TextureBlend SpecTex, BLEND_SPECULAR

EntityTexture MyEntity, NormalTex, 0
EntityTexture MyEntity, SpecTex, 1


Then render the scene, it will bump the light and use specular map for shininess



Also, you haven't talked about terrains; can you please give us some info?



Ok, that's the deal of the moment : I didn't make any terrain engine.
I have never used terrain with blitz3d except for some demos ...
I will !
I already have an idea of the technology I'm going to use
More info to come ...



Also, about support: do you plan to provide it long term, even if you are not paid? This is a big issue with many game engines, where the creator either disappears or simply lacks the skills or the willingness to fix bugs/compatibility problems after some time (look at what happened with Xors3D or Nuclear Basic...)


This is the engine I use; it has already fully replaced the Blitz3D engine on my computer. So yep, it's a long-term engine that I will push to the limit until GLSL 3.3 becomes as obsolete as DirectX 7 engines are now, and at that point I will update it so it stays up to date with newer technologies.
At the start, it was an Android engine I made (as Android 4 and higher only support GLSL shaders, I had to create a shader-based engine). I ported it to BlitzMax, and I must admit I've had so much fun with it that I've left the Android version behind (I will publish it when I figure out how to create Android projects with bmx-ng).

In the end, I've already been working on it for 4 years; it has been in beta for half a year, and I'm still adding stuff to make it more and more complete ...


PS: I'm curious why nobody has asked something like "Why use your engine instead of miniB3D?"
Happy I don't have to answer that :)


KronosUK(Posted 2016) [#14]
Thank you for your answer.

I can live without a dedicated terrain function as long as I can build my own meshes through the engine. CreateMesh/CreateSurface etc.

Hopefully you can/will release a list of the full command set at some point.


RemiD(Posted 2016) [#15]

This is the engine I use; it has already fully replaced the Blitz3D engine on my computer. So yep, it's a long-term engine


This is a good thing.



I'm curious why nobody has asked something like "Why use your engine instead of miniB3D?"


The answer for me is simple: I am ready to pay a reasonable fee to not have to think about fixing bugs/compatibility problems. The time you spend doing one thing you cannot spend doing another.
miniB3D seems functional, and OpenB3D seems functional too, but the programmers who currently fix/improve those engines may not be here forever, and I don't really want to do it myself. So if somebody else will do it, good for me.
The other thing is that I know you have used Blitz3D since at least 2005 (if I remember correctly), that you are skilled, and you seem to still enjoy making video games after many years when others have stopped and disappeared, so I think that is a good sign.


My suggestion would be to provide a free trial with a time limit, so that we can try it (but limit the time an executable can run to 5 minutes or less, so that it encourages users to buy it). Up to you.


Bobysait(Posted 2016) [#16]
The command list will be on the site mdt.bigbang.free.fr (for now, until I migrate to a "real" server) as the online documentation.

For now, it would actually be faster to list the unsupported commands :)

BTW, I didn't mention it: it works perfectly with Max2D, as the BigBangDriver is an extension of TMax2DDriver with all the overridden methods working.
It also works perfectly on a MaxGUI canvas (as you can see from the screenshot of the modeling tool); it uses shaders to render the 2D as well. I haven't benchmarked the 2D module, but it's pretty fast too. Maybe I should compare it to the glmax2d one; there could be some surprises.
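
A minimal sketch of how that canvas setup might look; apart from SetGraphicsDriver(BigBangDriver()) and the module import, which come from this thread, everything here is stock BlitzMax/MaxGUI:

' Hedged sketch: rendering through the BigBang driver into a MaxGUI canvas.
Import MaxGUI.Drivers
Import mdt.bigbang                          ' the BigBang module (not publicly released)

SetGraphicsDriver BigBangDriver()           ' the BigBang Max2D/3D driver

Local window:TGadget = CreateWindow("BigBang canvas", 100, 100, 800, 600)
Local canvas:TGadget = CreateCanvas(0, 0, 800, 600, window)
CreateTimer 60                              ' drive redraws at ~60 Hz

While WaitEvent()
	Select EventID()
		Case EVENT_TIMERTICK
			RedrawGadget canvas
		Case EVENT_GADGETPAINT
			SetGraphics CanvasGraphics(canvas)   ' draw 2D/3D into the canvas
			' ... update the scene, RenderWorld(), Max2D overlay ...
			Flip
		Case EVENT_WINDOWCLOSE
			End
	End Select
Wend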


Matty(Posted 2016) [#17]
I think you've done a good job. I won't use it since I don't use blitzmax yet and am not really involved in as much 3d gamedev as I used to be but it looks good.


Flanker(Posted 2016) [#18]
Nice! I could be interested too if the syntax is close to Blitz3D's. Did you implement mesh creation commands like Blitz3D?

About the modeling software, does it handle polygon extrusion, vertex moves and smoothing groups like 3ds Max?

Also, will you provide a limited demo version so we can test it?


Bobysait(Posted 2016) [#19]
mmm....
1/ Yes
2/ Yes
3/ Yes

:)


Pingus(Posted 2016) [#20]
Quite interresting, especially if it works with MAX2D. Can 2D be rendered 'behind' the 3D ?


Bobysait(Posted 2016) [#21]
what do you mean by "behind the 3D" ?




This produces this:




Yue(Posted 2016) [#22]
Hi, is this a video game engine? Sorry, but my English is very bad and the translator does not help much.


Steve Elliott(Posted 2016) [#23]
Yes, game engine, based on Blitz 3D - but for Blitz Max.


Bobysait(Posted 2016) [#24]
It's not a "game" engine, but a "Graphics" engine.
It provides a SDK for creating and rendering 3D
Game engines are more "high level" and build on top of 3D engines.

BigBang is more like miniB3D. You can use it to create a game engine, like you would use Blitz3D, and then, a game.

Just imagine you have from low-level to high level :

Graphics hardware
- 3D API (OpenGL/Direct X)
-> provide interfaces to manage the hardware

- Graphic engine
-> provide interfaces to create complex stuff
like converting data to models/textures and interface the rendering

- Game Engine
-> provide interfaces to the game developper to create specific stuff like behaviors for IA, physics etc ...
most of what will transit next will be data as scripted code / xml / lua etc ...

- Game
-> provide interfaces to the user (the gamer)
It feeds the game engine with data like worldmaps / xml / User interface / etc ... so the player ... can play

BigBang is the 3D Graphics Engine

Then, the modelisation is a modelisation tool, it's not an engine, it helps to create models. It's not part of the Bigbang module, it just contributes to make it more complete
So the whole pack (module + tool) can look-like a "suit" for creating a game engine :)


Naughty Alien(Posted 2016) [#25]
..interesting movement in BMX 3D area..i like it..so, when this BigBang is going to be available for some tests?


Bobysait(Posted 2016) [#26]
A demo will come this week (so you can try something made with the engine and check that it works for anyone who meets the minimum configuration).
At the same time, I'm building the docs for the engine; when that's done, I'll compile the sources for Windows with some restrictions (Mac will come later, as I don't own a Mac for the moment, and the Linux version will come as soon as I've installed it).

The restrictions will be:
1 > a small cinematic showing "powered by BigBang" (probably with a "demo" mention; it should be 5 to 10 seconds max) whenever you launch the graphics mode (then nothing, I hate watermarks).
The cinematic will appear only in Release mode (Debug mode won't show it unless you ask for it > because when we want to code something, even with a demo version, we don't want to see a logo again and again and again ...).

2 > When you run a program, a counter/timer on RenderWorld will increase up to 120 seconds and/or 120000 loops, maybe less ... then when one or the other counter has reached the max, RenderWorld will be skipped.
So you'll be able to test your stuff for 2 minutes, then you'll have to do without the 3D.


I think the demo version could be released this week as a compiled binary module (without sources).
The engine will be for sale when I've got all the extra stuff done and made any modifications asked for while testing the demo version (it will be kind of a "beta test").
The release date will be announced as soon as:
- the doc is complete
- samples are available for the extra stuff
- the support site is ready (for bug reports etc.)
- the payment method is validated ( > I'm thinking about PayPal)

-------------------------------------------------------------------------------------------------------------
Then, as the modeling software doesn't seem to be the main point of interest, I will wait a bit on it.
I think I'll make a pack of the two products when they are both ready for release.
The license will work like this:
- buy one or the other product (3D engine module and/or software)
- buy the other later: you'll get it at a reduced price (if you still have a valid license key for the first product)
So people who want to buy the engine while the software is not yet for sale won't regret it.

-------------------------------------------------------------------------------------------------------------

Now, enough talking about money, let's talk about the timeline:
- For now, I'm on the doc and a demo (not very pretty, I'm not a very good designer, but it shows everything that's possible)
- next week (somewhere between the 15th and the 20th of February) I'll release the closed-source compiled module of the restricted demo (it doesn't sound great put like that :D)
- for the two weeks after that, I'll work on the payment site and all the stuff required for a "real" release version of the module
And from the demo release to the <release> release, it will be beta-test time. So you'll be invited to test it as hard as you can ;)


RemiD(Posted 2016) [#27]
@Bobysait>>I can provide fully functional and recent code for PayPal (receive the IPN, make sure the IPN is valid, analyze it, and depending on the transaction state and whether the data is valid and not a duplicate, automatically send an email to the seller (with info about the transaction) and to the buyer (with a receipt + instructions to download and use the digital product)).
I am ready to trade it for a reduction (for me) on the price of your 3D engine :D
Let me know if you are interested (my email address is in my blitzbasic profile).


Bobysait(Posted 2016) [#28]
I sent you a mail, see you on facebook, we'll talk about this.


RemiD(Posted 2016) [#29]
@Bobysait>>I don't use facebook, we can use skype to talk about this, i sent you an email with my skype pseudo...


Bobysait(Posted 2016) [#30]
@RemiD -> I don't use Skype ... I use Facebook ^_^


Naughty Alien(Posted 2016) [#31]
:)


Flanker(Posted 2016) [#32]
Then, as the modeling software doesn't seem to be the main point of interest, I will wait a bit on it.
For me it's almost as interesting as the engine, so your idea of buying one product after the other with no extra cost compared to the full pack is fine. I learned 3D with 3ds Max but I don't own a license; I tried to switch to Blender but I find it counterintuitive and can't get anything done. So an alternative to 3ds Max would be nice, since I don't need all the advanced functionality, just the modeling features (OK, maybe it would be nice to have CSG, but I guess it's not on your todo list :). Will it be possible to export a model to .3ds or .obj? And is there a UV mapper?

Anyway, I can't wait to test the engine demo.


Steve Elliott(Posted 2016) [#33]
Bobysait, what did you use to write the Modelisation Program?


Bobysait(Posted 2016) [#34]
@Flanker:
There will be a UV mapper; at the moment it's still at a very early stage. Coordinates can be edited by hand, one vertex at a time ... boring and slow :)
I will add multi-selection and some functions like scale, rotate, etc.

For the export format, there is only import/export as b3d (and the internal format, which uses polygons instead of triangles -> for surface extrusion and normal smoothing).
I will add more export formats (like obj and 3ds) later; they are not fully supported by Blitz3D anyway and are just convenience formats for importing old assets from software that doesn't support b3d. Once you have them in Cosmos (oh yeah, that's the modeling tool's name for the moment ...) you'll only deal with b3d.
So, to be honest, it's not in my plans to deal with other formats.

Concerning CSG, boolean operations could be added, but I don't know of any software that doesn't make a mess of it. So, like the rest, it will be part of an upgrade/update later.


@Steve Elliott:
The whole software is made with BlitzMax: MaxGUI + the 3D engine + some extra BlitzMax stuff of my own.

Pingus(Posted 2016) [#35]
Boby, I meant rendering a 2D background (bitmap) then rendering a 3D scene on it and of course another 2D overlay.


RemiD(Posted 2016) [#36]
@Bobysait>>Can you explain briefly how we can draw 2D stuff (text, plots, lines, rects, images) using BlitzMax + your 3D engine?
And can we use different buffers like in Blitz3D? For example, select a buffer (ImageBuffer, TextureBuffer, BackBuffer), draw only on that buffer, then use CopyRect to copy an area of one buffer to another (like in Blitz3D), and then display the result on screen?


Naughty Alien(Posted 2016) [#37]
..this is very exciting..so, source code for 3D renderer part will not be available? Im asking simply, in case that official support is no longer available (many times happen before)..


Bobysait(Posted 2016) [#38]
Just a bit of reshuffling in the code and the engine should be ready for a test release in demo mode: 2 minutes of rendering max, or 7200 loops (60 loops per second for 120 seconds).

@Pingus:
Mixing 2D, images or anything else works the same way.

Here is a sample with an image drawn before the RenderWorld (a TImage object as background) and some 2D after (a rect and some outlined text).
As mentioned, you just need to remove the clear-color flag from the cls mode
> it works the same as Blitz3D: you can mix 2D and 3D if you disable the cls color with CameraClsMode(camera, False, True)



This is how it looks.




@RemiD :
Just like you would do it with blitz3D but with the max2d stuff.
so you use DrawImage/DrawRect/DrawPoly etc ... before or after the renderworld (according to the clscolormode)


And if we can use different buffers like in Blitz3d ? for example, select a buffer (imagebuffer, texturebuffer, backbuffer), and draw only on this buffer, and then use copyrect to copy an area of a buffer to another buffer (like in Blitz3d), and then display the result on screen ?


Actually, buffers in BigBang don't really work like Blitz3D's.
As the technology is not the same, neither is the behavior.
First, don't expect to CopyRect to an image in BlitzMax; I don't know of anything that allows that (not with direct functions).
You can copy pixels through the image's pixmap, but it's slow and not straightforward.

All this stuff is pixel manipulation; as the engine lets you get the pixmap of a texture and of a framebuffer, you can do what you want with the pixmap.
Just to note: I don't think it will be fast.

Anyway, the engine allows rendering to a buffer directly, but a buffer is not a texture; it contains a texture (which you can access via getters).
I will provide both samples and a wrapper to simplify this, as at the moment it's not fully "user-friendly".

For example, this is how to render to an offscreen buffer and texture a cube with the result:
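
A hedged sketch of such a setup, stitched together from calls quoted elsewhere in this thread (create3Dbuffer, BRenderer.set, SetBuffer BackBuffer()); argument orders and behavior are assumptions:

' Hedged sketch: render the scene into an offscreen buffer bound to a texture,
' then show a cube textured with that render. Calls follow snippets quoted in
' this thread; treat them as illustrative only.
Local cam  = CreateCamera()
Local cube = CreateCube()

Local mytex:BTexture   = CreateTexture(512, 512)            ' receives the offscreen render
Local myBuffer:BBuffer = create3Dbuffer(512, 512, mytex)

EntityTexture cube, mytex                                   ' cube shows the previous offscreen pass

While Not KeyHit(KEY_ESCAPE)
	TurnEntity cube, 0, 1, 0

	BRenderer.set(myBuffer)                                 ' offscreen pass: updates mytex
	RenderWorld

	SetBuffer BackBuffer()                                  ' normal pass to the screen
	RenderWorld
	Flip
Wend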


note that the BufferTexture function has not been wrapped for the moment, so it needs to be accessed in object mode
> BufferTexture:BTexture(Buffer:BBuffer)
Buffer.getTexture(GL_COLOR_ATTACHMENT0);

It's not "that" awfull, but it's not the most user-friendly way :)

That's how it looks




Naughty Alien(Posted 2016) [#39]
..also, i would suggest to make it 5 minutes limiter...2 minutes is somewhat too short..


Bobysait(Posted 2016) [#40]
5 minutes shouldn't be a problem ;)


Bobysait(Posted 2016) [#41]
Old style shadow map >
+ render the screen from above with fog enabled
+ texture the plane with the texture

(this is definitely not the good way to make shadows, but ... it still works for simple scene)





RemiD(Posted 2016) [#42]
About buffers: if it is not like Blitz3D, is it at least possible to write/read a texture fast enough (like with the Blitz3D buffer system) and to copyrect a part (or the whole) of a texture (or of what is displayed on screen after RenderWorld) to another texture?
This can be useful for creating procedural textures for screens, windows, mirrors, premade cubemaps...


(about your shadows, the light must not go through the casters... and it seems the shadows are not applied to the other casters...)


Bobysait(Posted 2016) [#43]
The shadows are only projected onto the plane, so it's anything but accurate :)
I was just trying to show a quick render-to-texture sample.

@About buffers:

As I said, it's not the same technology. With the engine, you can set the texture of the buffer being rendered to.
So whatever you want to write to a texture, you can:
1/ use the texture's pixmap
Local pix=Texture.lock()
' -> make what you want from the pixmap like you would with a LockImage()
' [...]

Texture.unlock() ' to refresh the texture pixmap


2/ render directly what you want
'- initialization -
'> create or load a texture (any texture works)
Local mytex:BTexture = LoadTexture(file)   ' or CreateTexture(width, height), etc.

'> create a buffer bound to that texture
Local myBuffer:BBuffer = create3Dbuffer(width, height, mytex)

'(or use the existing one, if you're working with the multi-renderer)
' myBuffer = MultiRenderer().renderBuffer()

Repeat
	' [...]
	'- rendering -
	'> before rendering, set the buffer to use
	BRenderer.set(myBuffer)

	'> render the stuff to be drawn on the texture
	RenderWorld()

	' [...]
Forever

The texture is now updated with what the camera sees.


(It's sort of like Blitz3D's SetBuffer in syntax, even though it's absolutely not the same thing, and SetBuffer doesn't work for render-to-texture in the original Blitz3D anyway.)

I repeat: if you want to copyrect, use the pixmap, probably with something like GrabPixmap, or use per-pixel copy/write.
You can do it; BlitzMax has no "CopyRect" function, but you can make your own easily.
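
For instance, a small CopyRect-style helper can be built from standard brl.pixmap calls (PixmapWindow and TPixmap.Paste); this is plain BlitzMax, nothing BigBang-specific:

' A minimal CopyRect-style helper using only brl.pixmap: copies a w*h region of
' src starting at (sx, sy) into dst at (dx, dy).
Import brl.pixmap

Function CopyPixmapRect(src:TPixmap, sx:Int, sy:Int, w:Int, h:Int, dst:TPixmap, dx:Int, dy:Int)
	' PixmapWindow creates a virtual sub-pixmap; Paste blits it into the destination
	dst.Paste(PixmapWindow(src, sx, sy, w, h), dx, dy)
End Function

' usage: CopyPixmapRect(srcPix, 0, 0, 64, 64, dstPix, 16, 16)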


Bobysait(Posted 2016) [#44]
Unoptimized shadow map using depth comparison
> requires 2 renders (one in light space, one in camera space)

Here the shadow map resolution is 2048x2048 for a 1920x1200 screen.
It runs at 250 fps, but the scene is not complex.

Also, the bias used is far from perfect; you can see some artifacts.
But whatever, it works, and that's the first time I've implemented a shadow shader (fun to say), so I'm a bit proud to have gotten it working with very experimental stuff :)




BTW, it's coded as a post-process FX, so once it's done it works like a small library and is easy to set up.

I'll show you the full source of the fx, so you can see how it is set up.
(Note that the shader sources are hard-coded, but they could be loaded from file; I always prefer keeping them in code, using const strings, it prevents naming issues.)



Then this is the initialization

' this effect requires the multi-renderer (which exports color, depth and normals in a single pass)
' the light rendering could be done with a different renderer that I removed a long time ago ...
' maybe I should redo it, as it exported the depth only
MultiRenderer().enableDepthOut()
MultiRenderer().enableNormalOut()
	
' create 2 buffers for depth from light source and main camera source
Local mainBuffer:BBuffer	=	MultiRenderer().CreateBuffer( shadowsize,shadowsize );
Local shadowbuffer:BBuffer	=	MultiRenderer().CreateBuffer( shadowsize,shadowsize );
' create the fx post-process (set the first texture as the depth layer from the mainBuffer)
'  3D Buffers are like this : layer0 = color (normal render), layer1 = depth, layer2 = screenspace normal
Local fx:TFxShadow			=	TFxShadow.Create ( BufferTexture(mainBuffer,1), shadowsize,shadowsize );
	' set the second texture as the shadow buffer depth map
	fx.setTexture( BufferTexture(shadowBuffer,1), 1, shadowsize,shadowsize );


Then, create 2 cameras and set them for the fx
' 0.7 power > dark shadows (0.0 is no shadow visible, 1.0 is full black)
' 0.0002 > bias for depth test
fx.setParams(0.7,.0002,cam, camlit);


and in the loop
' set multirenderer to use the main buffer
MultiRenderer().Set(mainBuffer);
' render the scene on the offscreen framebuffer
cam.render();

' set the multirenderer to use the shadow buffer
MultiRenderer().Set(shadowbuffer);
' render from light source
camlit.render();

' apply the shadow Fx (compute the 2 depthmaps to extract the shadows)
fx.render()

' return to backbuffer
SetBuffer BackBuffer();

' draw the main rendered scene (the first texture from the frame buffer)
' I used a rect texture but it also works with a non pow2 texture
local vh:int = shadowsize * (Float(GraphicsHeight()) / GraphicsWidth());
drawTextureRect BufferTexture(mainBuffer,0), 0,0,shadowsize,vh, 0,0,GraphicsWidth(),GraphicsHeight()

' draw the shadowmap on top with shade mode, so it only draws darkened areas
SetBlend SHADEBLEND
drawTextureRect fx.getOutputTexture(), 0,0,shadowsize,vh, 0,0,GraphicsWidth(),GraphicsHeight()



And here it is.
So, to sum up:
- init by creating the 2 buffers and the fx + set up the fx
- render using the multi-renderer
- draw.


Flanker(Posted 2016) [#45]
Nice one, looks good! Performance seems quite nice for 2048x2048. How does scene complexity affect the shadows, given the two renders? What's your setup, graphics card?


Bobysait(Posted 2016) [#46]
2 renders on 2048x2048 textures, yep; more entities to render will affect the framerate, as will the textures in the scene - not that much actually, but the scene in the previous screenshot is so low-poly that it's not a good sample for framerate; it also has no textures at all, no animation and no collisions. The next screenshot will show a more standard scene.


- My setup -
* CPU: 6 cores, 3.1 GHz
* Radeon HD 7980
* 8 GB RAM, 1333 MHz
* Windows 8.1
So, it's far from the best, but it's certainly farther from the worst.
Good for me, not very good for testing performance on low-end machines.


RemiD(Posted 2016) [#47]
This looks better indeed; however, it seems there are some inaccuracies (for example, the shadow in the center does not seem to be applied to the cube).

Can you show the same scene but with walls and a ceiling? (like a room)

What happens when there are multiple lights and the lighting of one light brightens the shadow cast by another light?

About the complexity of the scene, it is always possible to create low-detail casters...

Also, what happens if a surface uses transparent texels and opaque texels, like the small branches/leaves of a tree or the texel shape of a particle?
With stencil shadows it is not possible to cast such a shadow (only the shape of the triangles/surfaces is considered), but with your system I suppose it is possible?


Bobysait(Posted 2016) [#48]
The shadow system here uses the depth, which takes masked pixels into account (but no alpha! -> we can't store alpha in a depth buffer). So it should be able to cast a tree's shadow.

A light can't "brighten" a shadow, as the shadow is computed "after"; if I deferred the lighting pass it could happen ... but I won't, because it would involve too much work. I'm just trying to make a minimalist SDK for now, not a "big shadow library" :)
Anyway, as it's shader based, anything we want to do should be possible if we export what we need and know how to use it.

BTW, the light is "omni" but the shadow is rendered in the camera frustum only. So in a room it will not cast onto all the walls unless we do a cubemap-like process to render all 6 views from the light. Once again, it's possible ... but too much work for a demo.
It will probably (certainly) come later.


some inaccuracies (for example, the shadow in the center does not seem to be applied to the cube)


Yep, as mentioned, the bias is not perfectly set up, so there is some offset which creates a gap between entities and their projected shadows.
I've already fixed that in the current version I'm working on (it works OK with an animated entity, by the way; I'll add a tree to cast shadows from its leaves, just for the purpose).


RemiD(Posted 2016) [#49]
This looks nice, i am curious to see your next demo.


Bobysait(Posted 2016) [#50]
BTW, for coding my shaders, I use Notepad++ with a <user defined language> specific to GLSL + my engine variables
(I actually added some stuff to a GLSL setup I found ... I don't remember where).

You can get it here (save it, don't open it ... or copy/paste it ... well, it's up to you):
> http://mdt.bigbang.free.fr/bigbang/notepad_glsl_bigbang.xml

This is what it looks like:




The <attributes> </attributes>, <main> etc. are extra tags parsed when the shader is loaded; they are extracted as variables which are then used to build the shader, and attribs are automatically collected.
The same goes for functions, which are re-interpreted as BScriptFunction.

The <fx.stuff> entries are pseudo-internal variables from the BFxProgram; they give access to data from the parent shader (like textures, pixel size, UVs etc.).
<bb.depth> is a function that re-interprets a depth map exported from the rendering; it is defined in the core of the engine.
(As it's a special encoding format > depth to color, it requires a specific function to rebuild the depth from the color.)

Steve Elliott(Posted 2016) [#51]

I use notepad++ with a <user defined language>



notepad++ is very good :)


angros47(Posted 2016) [#52]
Well, I guess I have to ask it, so here is my question: why someone should use your engine instead of OpenB3D?

OpenB3D already provides Blitz3d like features, including quaternion rotations (not available in minib3d); and it also provides shaders, like your engine.

Plus, OpenB3D has some features that you said your engine lacks, like:
- terrains
- shadows
- CSG
- particle system
- support for .3ds, .x and .md2 files

Last but not least, OpenB3D is 100% free, your engine will be more expensive.
And the full source code of OpenB3D is available, so there will always be someone able to fix/improve it (this is in reply to RemiD)

Oh, and if anybody wants to try it, they won't suffer a 2-minute limit; they can test it for two minutes or for two weeks.


RemiD(Posted 2016) [#53]
@angros47>>Also, I realize that miniB3D and OpenB3D are more than just 3D graphics engines, so in a way they are more complete than this.
But Bobysait seems to be creating this 3D graphics engine for himself anyway, so if he decides to make it available and some decide to use it, why not.
More choices of functional and stable 2D/3D graphics engines with a Blitz3D syntax is beneficial for every coder who likes the Blitz3D programming style, so it's all good!

(I will probably take a look at and try BlitzMax+miniB3D and FreeBASIC+OpenB3D in the future, but for now Blitz3D still works, so I will continue to use it until its last breath! Thanks for what you do on your side :) )


Bobysait(Posted 2016) [#54]
@Angros:
When you stop sounding sarcastic as hell, maybe you'll get some answers.


steve_ancell(Posted 2016) [#55]
@Bobysait:
I notice at the start of this topic you mention "flat mode"; is that a kind of make-3D-look-like-2D thing, a cartoonify sort of effect?


Naughty Alien(Posted 2016) [#56]
..where to get that OpenB3D thing, and is it working with BMax ??


RGR(Posted 2016) [#57]
@Naughty Alien
http://aros-exec.org/modules/newbb/viewtopic.php?post_id=79976
It seems to be a minib3d portation to FreeBasic with several added functions + shader + shadow + physics etc. made by angros47
Interesting project (thumbs up). And still supported!

http://www.proog.de/home/freebasic/tools-libs/27-3d-programmierung-mit-openb3d-und-freebasic
http://freebasic.net/forum/viewtopic.php?t=15409&postdays=0&postorder=asc&start=0&sid=a24f7f2664ee3fe94501e44716cea02f

V1.1
Added:
-3d actions: ActMoveBy, ActTurnBy, ActMoveTo, ActTurnTo, and many more
-Physics features: CreateConstraints and CreateRigidBody; with the last one, it's possible to define position, scale and rotation of an entity by linking it to four other entities (usually pivots): if that entities are controlled by constraints, the main entity can be controlled in a physically realistic way
-Particle system: emitters can be made with CreateEmitter (even an "emitter of emitters" is possible)
-if a light is created with a negative light type, it won't cast shadows
-DepthBufferToTex: with no args it uses the back buffer, otherwise it can render the depth buffer from a camera (like in CameraToTex)
... plus many more


angros47(Posted 2016) [#58]
@Naughty Alien
The official OpenB3d site is:
https://sourceforge.net/projects/minib3d/

The site with the wrapper for BlitzMax is:
https://github.com/markcwm/openb3d.mod

The official thread for the BlitzMax version is:
http://www.blitzbasic.com/Community/posts.php?topic=102556
http://www.blitzbasic.com/Community/posts.php?topic=103682
http://www.blitzbasic.com/Community/posts.php?topic=105082

@Bobysait
Why do you call me "sarcastic"?

You were the one who asked for it:
ps : I'm curious to know why nobody asked something like "Why using your engine instead of minib3d ?



Bobysait(Posted 2016) [#59]

"Oh, if anybody wants to try it, he won't suffer the 2 minutes limit, he can test for two minutes as well as for two weeks. "



Seriously, I think I deserve a little more respect than that.

Now, the way I want to distribute my engine is none of your business, and you're not entitled to comment on it.

And, damn it, I won't let you post links to your engine. If you want info about what I have to sell, ask for it, but here you're like someone on the Porsche stand trying to sell his Ferrari stuff ... Do I really need to explain why you're being impolite here?


steve_ancell(Posted 2016) [#60]
you're just like on the Porsche stand, trying to sell your Ferrari stuff

LOL That's one way to look at it I suppose! :D

I think this has all spiralled into a whirlwind. I see Bobysait's point and I see angros47's point, but although OpenB3D may be doing the same as Bobysait's engine, maybe his engine could be easier to use in a wrapper kind of sense. May be a good idea to compare them and weigh-up the pros and cons. ;)


angros47(Posted 2016) [#61]
Actually, it's like we are both on a VW stand (since this is the Blitz3D forum, not your engine's forum), and you are trying to sell Porsche stuff while I promote Ferrari stuff.
On YOUR forum, you can forbid me from posting links to my engine, but this is not your forum. You are free to post links to your engine in the OpenB3D thread, just as I can post links to my engine in your thread. You are the one using somebody else's forum to promote a commercial product.

Anyway, I won't post more links in this thread, if that's what you want.

I actually do want one piece of info: you said your product has many advantages over Blitz3D (first of all, the BlitzMax language with OOP); I am just asking what advantages it has over OpenB3D and miniB3D.


Bobysait(Posted 2016) [#62]

maybe his engine could be easier to use in a wrapper kind of sense. May be a good idea to compare them and weigh-up the pros and cons



People will compare this engine to others later for sure, but most of the engines available are like modeling software: you can pay a lot or nothing; in the end it's just a matter of how it feels in use.

I made the engine work in many ways (object-oriented or not) and the commands easy to use
(I must admit this is partly thanks to the Blitz3D command set, which is IMHO the best one you can have for 3D manipulation).

While that's better for prototyping, it could be a bad thing for large stuff like a real game engine ...
The engine solves this by leaving everything (or almost everything) accessible to the user.
We have the ability to fit the engine to our needs and make it useful for more professional stuff.


A non-exhaustive list of extra stuff you can achieve easily, that is lacking in Blitz3D/miniB3D and makes the engine ready for big projects:

1/ Surfaces are defined by vertex and triangle "arrays" that are not tied to the surface but are independent objects
(I say arrays for convenience; they are real objects that automatically update data on the fly and send internal arrays to the graphics card, or store data in graphics card memory for faster access).
- A surface can be used the Blitz3D-like way, using classic commands like "AddVertex", "VertexNx", "VertexNormal(surf, index ...)" etc.
- They can also be accessed independently as objects: surface.GetCoords() / surface.GetTriangles() / surface.GetNormals() / surface.GetTangent() etc. (there is also a plain command for each accessor, GetSurfaceCoords(surface) etc., if you want to program without objects).
-> each array in a surface is accessible and sharable independently.
There are so many ways to use this feature that I will just list the most relevant that come to mind:
You can share vertex coords from one surface with another surface:
Surface1 = createsurface(mesh)
 addvertex (surface1, x,y,z, u,v)
 vertexnormal (surface1, index, nx,ny,nz)
 ' [...]
 ' addtriangle ...

Surface2 = createsurface(mesh2)
  addSurfaceCoords(surface2, GetSurfaceCoords(surface1)) ' add the vertex coords from surface1 to surface2
  addSurfaceNormals(surface2, GetSurfaceNormals(surface1)) ' add the vertex normals from surface1 to surface2
  ' [...]
  ' do not add UV set 1


This links the coords/normals etc. of surface2 to surface1's arrays, except for UV set 1; it won't duplicate any vertex data.
Then you create a specific UV set (coord set 1 > for lightmapping) for each surface, which won't be shared between surfaces:

 VertexTexCoords(surface1, 0, tex1_u0,tex1_v0, 1)
 VertexTexCoords(surface1, 1, tex1_u1,tex1_v1, 1)
 VertexTexCoords(surface1, 2, tex1_u2,tex1_v2, 1)
 ...
 VertexTexCoords(surface2, 0, tex2_u0,tex2_v0, 1)
 VertexTexCoords(surface2, 1, tex2_u1,tex2_v1, 1)
 VertexTexCoords(surface2, 2, tex2_u2,tex2_v2, 1)
 ...

So each surface has its own UV set.
Now you can lightmap all meshes without consuming extra memory just to store duplicated vertex data.



2/ Renderers are customizable
You can extend a renderer to handle what you need (the way brushes are rendered, or textures blended, or ...).
This feature is probably useless for many users, because the standard renderers available are pretty complete.

There are 3 main renderers available (there will be 5 in the end):
-> one that only renders geometry (without depth, normals, textures, lights etc.), which can for example generate masks for entities if you want to do some glow in a post-fx ...
or can serve as a base to extend, to add an alpha mask or depth only ... or whatever you need.
-> one albedo renderer (the "standard" renderer that renders geometry with normals, depth, textures, bump, lighting etc.)
-> one multiple-output renderer (albedo, depth and normals exported to separate textures)
The two others to come are for faster access to a single layer:
-> depth only
-> normal only
Those two renderers can be useful for rendering only normals and/or depth from the light's view, for example (required by many post-process shaders).

And, to keep things simple, switching from one renderer to another is as easy as setting the current buffer:
' set the multi-output renderer as the current renderer
SetRenderer (MultiRenderer())

' or in object
MultiRenderer().Set()

' or use the global renderer as the current renderer
SetRenderer (GlobalRenderer())

' ... in object
GlobalRenderer().set()

... etc ...


Then, you can render what you want.

You don't need to hide/show stuff to render separate lists of entities; the engine allows rendering an entity and its hierarchy directly:
RenderEntity (camera, Entity) renders the entity using the specified camera (recursing into the entity's hierarchy is optional [True by default])
RenderCamera (camera) renders the full scene like a RenderWorld, but only with the specified camera
RenderWorld() renders everything with all cameras (= the Blitz3D-like command)

You can also define lists to render:
AddRenderList(camera, Entity) > adds the entity to the camera's render list (recursing into the entity's hierarchy is optional [True by default])
RenderList(camera) > renders all entities in the camera's render list

All those commands also exists in object
camera.render([optional entity])
camera.renderList()
Camera.addRenderList(entity[,recursive])
etc...


3/ The main collisions are accessible and extendable: you're not limited to the sphere vs sphere/polygon/box methods; you can add as many collision styles, collision types and collision
responses as you need. Each collision type is then identified by a simple integer
-> as in Blitz3D: 1 for ellipsoid/ellipsoid, 2 for ellipsoid/polygon etc. You'll have more collision types available, and setting an entity is as easy as the line below (a hedged setup sketch follows):


4/ The engine provides a special type for shader-based post-FX.
Post-FX are for 2D compositing and extra effects (for example, the screen-space normals and depth map exported by the multi-output renderer can be used to create an outline around entities, or a glow effect).
Of course, I can't provide a specific post-FX for every effect because there are tons of them and most of the time you're looking for something particular, but I provide some ready-to-use useful effects and an SDK to create your own post-FX easily.



Just to note, for the second shadow screenshot above, it took me at most 10 minutes to create the shader and render my first fully working scene from scratch.
All I had to do was:
- create a camera for the light rendering
- render the scene from both the main camera and the light camera view
- create a post-FX extension with a small shader source that takes ...


So, for the moment, I still have to finish it, and things will get clearer when it's done and ready for tests :)
But for a first release it should already be pretty solid, and I don't feel the need to justify the existence and the price of the engine. I leave it to the user to decide whether it's what he is looking for or not.

I hope this will clarify some questions.
Cordially.


Bobysait(Posted 2016) [#63]
@Angros :
Whatever you think, I'm not here to start a war with you, but you're off-topic.
This is not my forum (as you mentioned, but I never pretended it to be), it's still my topic, and it is related to the Bigbang engine, not OpenB3D.
I don't want to be aggressive, so could we just stop with that?

Then to answer :
> I won't list the advantages vs OpenB3D.
I just present the features of my engine.
First: it's not an engine that enters into competition with an open-source one.
Then: I have never used OpenB3D, so I don't even know what it contains and I can't compare.

As for MiniB3D, all I know is that, even though it is a good community project, there are bugs that have never been solved, and there are differences in transformations compared to Blitz3D that I can't accept in an engine I would use (that's just a problem for me; it doesn't have to be a problem for anyone else ... but it might be).
And from experience, I have already asked about broken stuff in MiniB3D and never got any answers.
I didn't really expect an answer, and I don't blame the authors for not answering, but it tells me that an open-source project is just an open-source project.
It's nice to share and extend, but when you need something done, you are often better off doing it yourself, and as soon as an update comes you have to reapply every hotfix you added. That's a lot of constraints, which also applies to most GitHub projects, and I don't want to have to deal with it; it's generally worse with forked sources, which are rarely updated as often as the original, so you spend time switching between versions.

Then, as mentioned in a previous post, I don't need to explain why someone should use a shareware engine instead of an open-source one, because it's a big discussion that is not related to the fact that I sell something, and it's too large a discussion to be had here.
And anyway, it's just not the way I work, same as for many other people.
And I don't expect nor encourage open-source guys to understand or accept that. It's just my opinion.

So, all you'll have here, is what the engine is capable of, not what it is capable of "against another engine".


angros47(Posted 2016) [#64]
Thank you for the infos.


degac(Posted 2016) [#65]
@BobySait

I like the name 'BigBang'.

I'm waiting to see some demos of the engine and of the 3d-editor.
Just to clarify, I don't want to see the 'classic' teapot :), I would like to see a 'complex' scene (shadow, shaders etc) to have an idea of the potential/speed etc of the engine itself.

A question: the editor will export 'scenes' to use in the engine, directly or via some sort of ready-to-use functions?


Bobysait(Posted 2016) [#66]

A question: the editor will export 'scenes' to use in the engine, directly or via some sort of ready-to-use functions?



Both the engine and the modelisation software are made as standalone tools, so I haven't done anything in that direction yet, but it shouldn't be a problem.
The output b3d format is already an extension of the standard b3d, with many extra chunks (like "LITE" for lights, "CAMR" for cameras or "COLL" for collisions/pick modes, etc. ... which are already implemented in the b3d loader, so it recreates the scene "as is" in the modelisation software and vice versa).
BTW: loading the exported b3d in Blitz3D will just ignore the extra unsupported chunks, so the scene will be loaded without lights, cameras, etc. ... but will work as a legacy b3d would.

Then, all entities are accessible via FindChild (by name) or GetChild (by ID), so a scene should be easy to extract from the b3d.

Anyway, you're right, it would be a good thing to add a specific scene manager, and I will implement it later, so it could for example manage scripted stuff (like texture animations or other things).

For the demo sample, I'm creating a scene big enough to have lots of entities, textures, animations, collisions and effects, so it will be easy to see and tune parameters to check what runs well and what doesn't.
Obviously, we can't build an engine that will render extreme graphics on a low-end config ... I did my best to optimize the engine, but as long as there is no magic involved, it will still be limited by the hardware running it :)


Steve Elliott(Posted 2016) [#67]
Personally I do find it really annoying when somebody is determined to do well and charge some money for their hard work - but then somebody else suggests a free piece of software, that might or might not be supported, and may contain some bugs. But hey, it's free! Meanwhile the other side is putting in some professionalism and support.

True, this site belongs to neither of you, but why hijack another's thread? Just seems like sabotage.


Ploppy(Posted 2016) [#68]
Wow, looks as though you've worked hard on this one. Looking really good bobysait, keep at it mate...


degac(Posted 2016) [#69]
@Bobysait

Thank you for the info about the 'extended' B3D model.
An interesting approach for saving/loading 'world/scene'.


markcw(Posted 2016) [#70]
This looks like a promising engine, Bobysait. I hope you do well with it; you've obviously worked really hard at it.

You're competing with Leadwerks more than OpenB3D; the advantage I see here is that users can use BMX instead of Lua. You will need to wrap Newton or some other physics engine, of course.

I have a suggestion: if you kept a core module of the engine closed source (like BMX did with bcc) and open-sourced the rest, that would be more appealing to some people.


Naughty Alien(Posted 2016) [#71]
I agree with munch.. I'll be more than happy to go back and mess around with BMax if a relatively decent 3D engine exists for it.. now I've started messing around with this OpenB3D thing, but I wouldn't mind paying for a decent, nice 3D engine for BMax.. and I hope you will use Bullet instead of Newton..


RemiD(Posted 2016) [#72]
+1 for Bullet physics engine.


Bobysait(Posted 2016) [#73]
I hope I will too :) bullet is pretty good


While it supports a lot of triangles/vertices, I broke something in the matrix system (I had a "do not update if not required" rule that I switched to "update any time something moves") and the framerate drops very fast on huge scenes. I need to fix this!

In the meantime, scenes with a large amount of surfaces per mesh render pretty fast (fps was limited to 30 in the capture; I don't like having my graphics card burning).


So, at the moment, it's better to have a lot of surfaces/triangles than a lot of entities ... which is good for big scenes, and bad for physics and animation ^^


RemiD(Posted 2016) [#74]
This is extreme :D

Maybe let the user define the complexity/details in the scene because some of us may not have a graphics card as good as yours to run this...


Flanker(Posted 2016) [#75]
Are you limited in number of vertices/triangles per surface like Blitz3D ?

Also, when can we test it ? :p


Ploppy(Posted 2016) [#76]
I agree, very smart indeed. I can't help thinking there should be an Assassin's Creed guy running along those rooftops ;) Nice work, Bobysait, looks as though you have a powerful engine there.


Bobysait(Posted 2016) [#77]
I limited surface triangle and vertex indices to signed Short ... so a surface can't exceed 32767 triangles and 32767 vertices ...

I did it to preserve Blitz3D compatibility (and it also uses half the memory to store the index data).

But I know this limit is really annoying ... I should probably convert this to Int, since the engine really shines with large amounts of data much more than with lots of small pieces.
This capability comes from the fact that BigBang can (and, any time it's possible, "should always") store data in graphics memory.

I also implemented a "find" function in the texture class to try to recycle textures with the same specs (from LoadTexture only -> it requires a path), so it doesn't duplicate textures from materials that are already loaded (otherwise it can break everything if you load too much data ... I experienced it when reaching 2 GB of video RAM; my graphics card supports up to 3 GB, but as 3ds Max was open with the "Venice" scene, which is very big, it probably consumed a lot of memory too).

And it's also more convenient to load multiple b3d files than a single 500 MB one ... and in this case the materials are almost all the same, but the textures should not be loaded more than once (static meshes, static textures; there is no point in loading them twice or more).


For the release, I have to deal with the timeline, and I have to say:
I'm late!
I have made a parser to export the doc, but there are still almost 1000 functions/methods/types to document and I have only documented 500 so far (which was already a big step).

I'm also optimizing the shaders and I rebuilt the scene graph matrices, so there are some issues I should solve easily, but it will take a bit more time.

To be honest, I need to stop myself from trying to make it better and better and better ... 'cause there are always things to improve, but that should wait for updates, as the engine already works.

So, I promise the demo release on March 1st.
And the "release" release will come 2 weeks later, once the demo has been tested by you (so I can fix any remaining bugs before selling anything).


@Ploppy : I made the engine to support my own project, which needs to load a big city (not Venice; actually it's <Le Mans>, and it's not related to racing cars at all, it's just an MMO survival zombie game ... yep ... one more) with a dark cartoon style which requires some shader effects (outline, AO and some more).
So it's pretty robust for this kind of scene, while it will probably perform worse for small games that require lots of small entities.

I will later add support for more engine setups, so it can be tuned to what we need and optimize the render accordingly (large scenes and/or lots of entities).


Ploppy(Posted 2016) [#78]
Why don't you create an option for the user to choose the vertex/triangle index size between 16 and 32 bit? It could be an idea to make this configurable. That way if you wish to use more than the 32767 indices you can, and if you wish to economise memory using 16 bit indices you can also. Just a thought...


Bobysait(Posted 2016) [#79]
That's doable.

Actually, it just requires 2 or 3 lines of code to add the switch.
(probably a flag on CreateSurface)

[Edit]
Or maybe I should keep it on 16 bit and automatically switch whenever we add a vertex index above the SIGNED_SHORT limit.
(And by the way, the limit of a BlitzMax Short is 65535, not 32700 and some bananas ... as they are unsigned shorts [0..65535])

Worth mentioning: the b3d file format uses INT to store vertex indices, so there is no problem using Int instead of Short.
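To make the idea concrete, here is a rough sketch of the automatic promotion (my own illustration, not the engine's code): indices stay in a Short array while every value fits in 0..65535, and the array is rebuilt as Int the first time one doesn't.

Type TIndexArray
	Field shorts:Short[] = New Short[0]
	Field ints:Int[] = New Int[0]
	Field use16:Int = True
	Field count:Int

	Method Add(index:Int)
		If use16 And index > 65535 Then Promote()
		If use16
			shorts = shorts[..count + 1]   ' growing one at a time just to keep the sketch short
			shorts[count] = index
		Else
			ints = ints[..count + 1]
			ints[count] = index
		EndIf
		count :+ 1
	End Method

	Method Promote()
		ints = New Int[count]
		For Local i:Int = 0 Until count
			ints[i] = shorts[i]   ' BlitzMax Shorts are unsigned, so values copy straight over
		Next
		use16 = False
	End Method
End Type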


Flanker(Posted 2016) [#80]
I've been testing Ploppy's Hybrid/Hardwired, and modern shaders are really impressive. Playing with marching cubes and an ocean grid... as you can render faster with less entities, the vertices/triangles "no limit" can be useful in these cases, to "group" meshes. I'm not planning to write a game with extreme graphics, but I like to test what an engine is capable of :p


Bobysait(Posted 2016) [#81]
You're right, it's better to have big meshes split into chunks (for fast spatial partitioning) than to have many frustum tests and matrix multiplications.

- I added the automation for the triangle indices, which now switch when required (or on demand, but that should not be needed unless we want to save a little time when loading a surface -> otherwise the buffer needs to be rebuilt to accept integers)

- I'm thinking about a terrain engine based on chunked LOD (as I've already done one for Blitz3D).
The heightmap would be a texture object, and once sent to the vertex shader it would deform the vertices of the terrain patches.
So we could also translate/rotate/scale the texture in realtime.
I don't know if there is a need for this, but I thought it could be awesome ... or at least very fun :)

But, one step at a time!
I'll deal with the terrain engine once the module is fully downloadable, so I'll keep the idea in a corner of my head.


RemiD(Posted 2016) [#82]
I agree that it can be useful to be able to have many vertices and triangles in a surface, but if you go over 32768, maybe there will be problems with some old graphics cards (or with some recent but low end graphics cards), don't you think ?
I suppose the best way to know is to test it on different computers.


Bobysait(Posted 2016) [#83]
There is no reason such a limit exists on the graphics card side.
The only limit is the video memory.

On Android (old versions up to 4.0, I think) you can only send unsigned shorts as triangle indices, but desktop OpenGL drivers don't have this limit.

I only added the limit because of Blitz3D (to keep the behaviour compatible), and because it keeps memory usage low.

As long as it stays disabled (the default) simply by not using big surfaces, there won't be any problems.
And hardware that can't handle int index arrays won't be able to create a GLSL 3.3 context anyway, so whatever the problem would be, it's solved :)


Ploppy(Posted 2016) [#84]
My own technique involves storing the mesh info in a dynamic scratch buffer resident in main memory. It is by default 1000 entries in size (for vertices: x,y,z, nx,ny,nz, ux,vy, ux2,vy2, rgb; for triangles: 3 unsigned ints containing vertex references) for each newly created 'surface', for both the vertex buffer and the triangle buffer which make up the surface info for a mesh. Each vertex/triangle added to the surface is inserted into the already created buffer. If the newly added vertex/triangle is of a higher index than the buffer's current size, the buffer is increased in size by a further 1000 entries. Giving a vertex/triangle buffer some leeway means less memory manipulation during mesh creation, meaning a significant gain in speed. Once a render is called, the mesh is 'considered' to be finished as far as modifications/additions go. If the mesh's surfaces have just been created or modified since the last render, the buffers will then be shrunk down to their actual size so as not to waste memory.

If a modification or creation has been made on a surface, the actual real buffers will then be created from these scratch buffers. These buffers are the ones that are sent to the graphics adapter. Once they have been created and sent they cannot be modified, since they reside in the gfx adapter - this is why I keep a scratch version in resident memory (very much like in b3d). So next, all vertex and triangle index info is copied over to the newly created buffer destined for its one-way trip to Nvidialand/Atiland. If the vertex count is above 65535, the triangle index buffer created will automatically be configured for 32-bit. Once these buffers have been created, if there is no further modification to be made, the buffers will not be recreated on a new render.

Of course, if the executed program is not actually modifying/creating meshes but rather loading them from a file without need of further modification, there is no special need to retain the 'scratch' buffers once the actual buffers have been created. Although in DirectX it is possible to create buffers that are to some extent 'shared' by the processor and the gfx adapter, the use of these kinds of buffers is slower than write-only buffers. So to make the most of the gfx adapter's power I prefer to use this technique. I'm guessing that with OpenGL, similar considerations apply to memory management/access concerning buffers in general.
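A compact sketch of that grow-then-shrink lifecycle (my own illustration, not Hardwired's actual code): the scratch array grows in blocks of 1000 entries while a surface is being built, then is trimmed to its real size once the surface is considered finished.

Const FLOATS_PER_VERTEX:Int = 11          ' x,y,z, nx,ny,nz, u,v, u2,v2, rgb

Type TScratchVerts
	Field data:Float[] = New Float[1000 * FLOATS_PER_VERTEX]
	Field used:Int

	Method AddVertex(v:Float[])           ' v holds the 11 components
		If used + FLOATS_PER_VERTEX > data.Length
			data = data[..data.Length + 1000 * FLOATS_PER_VERTEX]   ' grow by another 1000 entries
		EndIf
		For Local i:Int = 0 Until FLOATS_PER_VERTEX
			data[used + i] = v[i]
		Next
		used :+ FLOATS_PER_VERTEX
	End Method

	Method ShrinkToFit()                  ' called once the surface is "finished"
		If data.Length > used Then data = data[..used]
	End Method
End Type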


Bobysait(Posted 2016) [#85]
Yep, I use the same technique.
Arrays are stored in a bank (one bank per data type) and are resized by 4096 bytes (not entries) when required.

When the RenderWorld needs to bind the array, the GL buffer is created after the bank has been optimized to fit the real size (if the dynamic flag is not set; otherwise it keeps the bank as it is for dynamic surfaces).

By the way, I also reduce the vertex data size by independently enabling/disabling the vertex color/tangent/UV0/UV1/bones ...
So the default surface contains only an array of coords, one of normals, and that's all. Other arrays are built on demand. For example, if we use surface.color(index,r,g,b,a) it will enable and build the vertex color array if it does not exist and fill it with default values (1.0,1.0,1.0,1.0 for color).

This way, the city of Venice is half loaded (I didn't load the other half because it takes time to export from the obj format ... and I haven't finished exporting it yet) and it "only" uses 500 MB of RAM/VRAM, diffuse+normal textures included.

(Yep, I know it's already a huge amount of RAM, but there are already 3.5 million triangles in the scene, some 8 or 10 million vertices at 36+ bytes per vertex, and some 200 MB or more of ... textures)

About OpenGL memory management, there are different modes to send and store data on the graphics card, depending on whether you need to update the array frequently or not, so you can load it as static or dynamic, with read and/or write access.
Once it's stored in graphics memory, I keep the RAM buffer for dynamic meshes; otherwise I would have to read the data back from the graphics card, modify the vertices/triangles, then upload from RAM to VRAM again ... that takes too long, so keeping the RAM buffer lets me modify whatever I need (with VertexCoords etc ...) and it gets updated automatically when the arrays are bound to the graphics card in the RenderWorld. It works the same for textures, using Lock/Unlock to update the pixels of dynamic textures, or WritePixel without Lock/Unlock, but that's slow as hell :)
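On the GL side, that "static or dynamic" choice is just the usage hint given when the buffer is created (generic OpenGL below, not engine-specific code): the data sits in a bank in system RAM, gets uploaded once for static surfaces, and for dynamic surfaces the bank is kept around so only the modified range is re-uploaded.

Local bank:TBank = CreateBank(3 * 3 * 4)     ' 3 vertices * 3 floats * 4 bytes
PokeFloat(bank, 0, 0.0)                      ' ... fill in the rest of the positions

Local vbo:Int
glGenBuffers(1, Varptr vbo)
glBindBuffer(GL_ARRAY_BUFFER, vbo)

' static surface: upload once, render many times
glBufferData(GL_ARRAY_BUFFER, BankSize(bank), BankBuf(bank), GL_STATIC_DRAW)

' dynamic surface: same call with GL_DYNAMIC_DRAW, then after editing the bank:
' glBufferSubData(GL_ARRAY_BUFFER, 0, BankSize(bank), BankBuf(bank))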


Ploppy(Posted 2016) [#86]
Yes, you can also use dynamic buffers (for texture manip too) with dx, but the same speed problem is there as for opengl. This is why I effectively keep a scratch buffer in system ram and prefer using an exclusive one way buffer like you, you get more raw power out of the gfx adapter that way.


Bobysait(Posted 2016) [#87]
I forgot to mention the fog modes.

3 modes are available and they can be combined:

1 is the linear depth fog
2 is the height fog (it doesn't care about the camera position/rotation)
4 is the radial fog (only if 1 is active; it replaces the linear fog)
so 1+2 is linear fog + height fog
1 + 4 is radial
1 + 2 + 4 is radial + height
and 4 alone is ... nothing ^^ (4 only works if 1 is enabled)

Here is a screen where I exaggerated the height fog and set it to (10, -1.0) -> you can set the far value lower than the near value; it reverses the fog direction :)
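A tiny sketch of how the flags combine (the command names below are Blitz3D-style guesses on my part, not confirmed in this thread, and "cam" is assumed to exist; only the flag arithmetic is the point):

Const FOG_LINEAR:Int = 1
Const FOG_HEIGHT:Int = 2
Const FOG_RADIAL:Int = 4

CameraFogMode(cam, FOG_LINEAR | FOG_HEIGHT)   ' hypothetical name: depth fog + ground mist
CameraFogMode(cam, FOG_LINEAR | FOG_RADIAL)   ' hypothetical name: radial fog
CameraFogRange(cam, 10.0, -1.0)               ' hypothetical name: far below near reverses the height fog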




RemiD(Posted 2016) [#88]
The height fog looks nice, like some kind of mist on the ground.

I suppose that these buildings are built procedurally ?
But maybe not the church/temple and maybe not the shape of the city ?


Bobysait(Posted 2016) [#89]
It's an "obj" file I downloaded from tf3dm.com
The full scene is 4.5 millions vertices - 1.5 Million triangle
I bumped it in a dirty way, just to see how it runs.

I finished the obj loader, so I can load directly obj files.
It works pretty well, but I have to create a very big buffer to load the data temporarily (obj file format is really a crapy format)

So, it consumes lot of ram just while loading (then the memory is released on exit on the loading process)

I unleashed the beast for a moment just to know how it runs well ... and it runs like a charm :)



and a link to a big image (I don't want to break the horizontal rule, so it will be just a link to the image)

Big image


And this last one is ... completely exaggerated ... I loaded the city 9 times (40 million vertices, 13.5 million triangles).
Mega Stress Test


Finally, Venice is not Venice without water.


RemiD(Posted 2016) [#90]

It's an "obj" file I downloaded from tf3dm.com
The full scene is 4.5 millions vertices - 1.5 Million triangle


Ah ok... I thought that you were building everything procedurally ;)

Have you checked if the meshes are optimized or not (sometimes some exporters export 3 vertices for each triangle as if each triangle was unwelded, but most of the time this is not necessary because some triangles can share vertices... (depending on their normals and depending on their vertices uv coords, as you probably already know...))



my own project which needs to load a big city (not Venice, actually, it's <Le Mans> and it's not related to cars running at all, it's just a MMO survival zombi game


I suppose that you already know the tv series "the walking dead" ? (about surviving a zombies invasion) (quite good imo)
If you like this kind of movies, maybe take a look at "z nation" and "survivors" (also quite good imo)


Flanker(Posted 2016) [#91]
Seems to run well :)
I like the fog modes, especially the height fog which can be useful.


Bobysait(Posted 2016) [#92]

Ah ok... I thought that you were building everything procedurally ;)



As soon as I have finished one last part of the engine, I will.
(I'm currently adding support for polygon modes > GL_LINES/GL_POINTS/GL_TRIANGLES [already included as the default "driver" for vertex arrays] / GL_QUADS / and ... maybe ... GL_POLYGON)

At least, when GL_QUADS is done, it will save a third of the vertex indices (4 indices for 2 triangles vs 6 indices with the standard GL_TRIANGLES mode).
It's not that the engine suffers from the polygon count, but ... you know ... optimization is optimization :)

And it should enable smaller files.


I also started to integrate a part of the shader for texture projection mode
(it overrides the vertex UVs to use the screen position of the vertex, so the applied texture appears "flat").
I don't remember why it's useful, but I do remember it's really useful ... you'll tell me ^^
In the end, it will just be a texture flag ("TEX_PROJECTION" = 2048 [alias: TEX_PROJ]); the projection can then be set with the SetOrigin/SetScale/SetRotation methods of the texture (those are BTexture methods, not Max2D commands).


I suppose that you already know the tv series "the walking dead" ? (about surviving a zombies invasion) (quite good imo)
If you like this kind of movies, maybe take a look at "z nation" and "survivors" (also quite good imo)



Obvious is Obvious :)
but ... I have never seen the last one, "survivors", I'll have a look ;)


steve_ancell(Posted 2016) [#93]
All I can say is holy f**k!, you're doing a grand job there. ;)


Bobysait(Posted 2016) [#94]
Thanks, I do my best (or at least I try to do the best I can while I'm really tired)

RemiD >post #47
This looks better indeed, however it seems that there are some inaccuracies (for example, the shadow at the center does not seem to be applied on the cube)



I missed this one :
There is no error in the shadow projection, it's just your "point of view" that misses the actual position of the cubes in 3D space.
The primitives are not stuck to the floor, most of them are flying, so the cube that does not receive the shadow is not an error (as Bill Gates would say: "it's a feature"); it's just higher than the shadow projection.

And by the way, while not optimized, the small shadow sample can't produce errors other than bias offset. It's not based on geometry projection, but on depth comparison.


RemiD(Posted 2016) [#95]

There is no error on the shadow projection, just your "point of view" that miss the actual position of the cubes in 3D space.
The primitives are not stuck on the floor, most of them are flying, so the cube that does not receive the shadow is not an error


Indeed, i see that now, the cube does not touch the floor ! So, all is well. :)


Bobysait(Posted 2016) [#96]
I'm a bit late; I had to reinstall Windows to ensure future clean and safe releases. (I also had to move some furniture at home ... which took me one more day and gave me nausea for a day ... too much dust.)

So, actually, I still have to:
- rebuild the internal maths for entities (I replaced them last week with a vector library I made; it's accurate but not as fast as the previous code. So I'll go back to the previous version and keep the vector library as what it is: a library!)
- I rebuilt the b3d loader (including a BinaryLoader abstract class for loaders, which will make future file formats easier to include > I plan to add an obj importer at least). It's probably working fine, but I want to be sure it won't break on some particular models.
The goal of a loader is to "try" to load, and if it fails, just return null (and let the next loader try).
It should never crash while trying to load, so I really need to add some safety checks on the b3d offsets.
- I modified the PostFx and some shaders to get more flexibility for our own custom shaders. It also needs some testing to make sure I didn't break anything, so I just have to test the 4096 different shader flag combinations ... just a matter of a day or two -_-

Then, it should be ready for demo/test next week !

And BTW, I updated my graphics drivers ... I didn't know it, but the latest AMD drivers can detect OpenGL/DX applications and capture video.
It will save me some headaches getting external software to work (like Fraps or Bandicam or whatever).


Bobysait(Posted 2016) [#97]
Thanks to Filax, I'm preparing a demo with some stuff maybe people here will recognize :)

I'm trying to get a borderlands pre-sequel feeling.




Flanker(Posted 2016) [#98]
Hmm... Tachyon Storm space ship ? ^^


RemiD(Posted 2016) [#99]
@Flanker>>(If you know about Tachyon Storm and you are from France, maybe you also were a member of the old blitz3dfr forum ? If yes, not sure who you are, but this is good to see that some ancient blitzbasic users still continue to code and make games :D)


Flanker(Posted 2016) [#100]
@RemiD Yes I've been a member of blitz3dfr but not very active because totally noob at that time and playing more with DarkBasic (the dark side ^^). Bobysait, Filax, Eole, Patmaba, Wako, Seihajin... beside Tachyon Storm there was also a multiplayer FPS being written. That's pretty much all I remember, it seems to be ages ago now. My nickname was Gead I think. What was yours ?


RemiD(Posted 2016) [#101]
@Flanker>>oh i see, so you were probably a member before me, i started to use Blitz3d spring 2009 and became a member at this time (my username was "Ranko").


Naughty Alien(Posted 2016) [#102]
..Filax is very very kind fella..i really miss that guy..


Bobysait(Posted 2016) [#103]
Yep, Filax ... R.I. ... Hey ?! he's not dead :)

Actually, he works on assets for Unity.

BTW, the Weird screen of the month :
(Atlantis model by Alesk - an other old blitz3dfr member)




Flanker(Posted 2016) [#104]
@RemiD It was more around 2006 for me. "Ranko" tells me something because I've been visiting the b3dfr forums from time to time after that.

@Bobysait It looks good, as a Borderlands player I can confirm the color effects look like the pre-sequel :)


RemiD(Posted 2016) [#105]
I don't really like this graphics style (the black outline), but i am curious to test the demo ! :)


RemiD(Posted 2016) [#106]
@Bobysait>>Can you explain how the lighting is calculated (for per pixel omni lights and ambient light), i mean the formula you use.

For example a brief description of what i consider for lighting (for vertices lighting or for texels lighting) :
The map is separated in groups
Each group is lighted by one or several lights (omnilights)
Each group is lighted by one ambientlight with a color (AmbientR%,AmbientG%,AmbientB%)
Each group has a different light resistance (GroupLightResistance#)
Each light can light one or several groups
Each light has an intensity (LightIntensity#) and a color (LightR%,LightG%,LightB%)

To calculate the lighting of a group :
To begin, the group is colored in black (0,0,0)
Then, for each light impacting the group, i calculate the color of each vertex/texel by taking into account: the lightintensity and the grouplightresistance (used to calculate whether a vertex/texel is near enough to be lighted or not, and how much it will be lighted, because of the falloff depending on the distance), the vertex/texel normal (used to calculate the amount of lighting that the surface receives depending on its orientation relative to the light), and the lightcolor
Then i add the ambientlight color

That's the idea

Thanks,


Bobysait(Posted 2016) [#107]
Each light (omni/directional/spot) has these parameters:
- diffuse color
- emissive color
- specular color
- ambient color

Each component has its own intensity
Actually, it's computed in the render pass in the fragment shader.
So: it's limited (in some way) by the light count (it's up to the user to use as many lights as he wants, but the more lights, the slower the render).

With up to 8-10 lights it's very fast. I didn't test with more because I had no need for more, but I will (just to know).

The formula is not <copyrighted>, but hey, it's the engine :)
All I can say is that it's a very common implementation of some kind of Phong lighting.
For each pixel, I compare the distance to each light source and use a dot product between the light direction and the fragment normal to determine how much the pixel is illuminated, then I reflect the vector against the camera Z axis to get the specularity.
Since a light has all its color components available in the shader, depending on how the light affects the fragment it will interpolate between diffuse (directly lit) and emissive (normal opposed to the light source) + ambient, and finally the specularity is applied on top. And of course the final color is faded with distance and added to the fragment color.

There is also a global lighting that doesn't care about normals and distance, which is applied before any other lighting -> the "AmbientLight".
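To make that description a bit more concrete, here is the usual per-light term written CPU-side in BlitzMax for readability (a generic Phong-style illustration only, not the engine's shader; the specular and emissive parts described above are left out):

Function LightTerm:Float(nx:Float, ny:Float, nz:Float, lx:Float, ly:Float, lz:Float, dist:Float, range:Float)
	' diffuse: dot product between the fragment normal and the normalized light direction
	Local diffuse:Float = Max(0.0, nx * lx + ny * ly + nz * lz)
	' falloff: fades the contribution with distance, reaching zero at the light range
	Local falloff:Float = Max(0.0, 1.0 - dist / range)
	Return diffuse * falloff
End Function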


- hidden lights are not rendered.
- a list of lights is generated on each render based on the entity lists generated, so using camera.Render() instead of RenderWorld lets the user use only the lights he wants if he partitions the space.
+> Actually, I encourage using it this way > a space manager ships with the engine to manage entities on a 2D grid (arbitrarily set on the XZ plane, because it's the most common use of 3D space, where Y is just the jump height, the terrain elevation, building floors, etc. ... anyway, it's always the best case for anything that is not space-opera-like).
The world is split into areas, and it provides a class to extend in order to automatically attach/detach/remap an entity to a node of the "space optimizer".
Here is some sample code of what can be done to optimize the rendering:
' update the space optimizer at the camera position
' it generates a list of entities (or any kind of object actually, the SpaceOptimizer class is a generic class that must be extended)
MySpaceOptimizer.update(MyCamera.x(1),MyCamera.z(1),MyCamera.rangeFar())

' get the generated list
local list:TList = MySpaceOptimizer.renderlist()

' add the entities to the render list
For local ent:BEntity = eachin list
   MyCamera.addrenderlist(ent)
next

' render the list
MyCamera.renderlist()


So, using this lets you add only the lights you want to the render list.
And for global lights (like a sky light), we can add them to the render list between the For/Next and the camera.renderlist call.

As I said previously in the topic, the engine provides a basic wrapper for "Blitz3D" users, so it can be used as an "easy" and "ready to use" engine, but it can also be used with more complex stuff to fit the needs of big projects.

--------------------------------------------------

Whatever it is now, I want to defer the lighting to a 2D pass, using the depth and normal buffers. It will allow more lights and more specific (more user-defined) lighting effects, like cel shading.
But for the moment, I'm stuck on an issue while trying to disable GL blending on a specific buffer.
Rendering to multiple buffers is required to split the different passes (albedo/depth/normal), which makes post-fx possible (the standard renderer is a bit faster but only exports the albedo with lighting already applied).
The MultiRenderer is only used by people who want some extended stuff (like the outline and AO on the screens above); for basic rendering it's not required :)

So the problem right now is that the blending (add/multiply modes) affects all rendered buffers (for GL users, I'm talking about glEnable(GL_BLEND) + glBlendFunc ...). And this causes the depth and normal to be brightened or darkened ... and it just produces wrong results.
I want to disable the blending for those specific layouts:
layout (location = 1) out vec4 bb_out_fragDepth;
layout (location = 2) out vec4 bb_out_fragNormal;

bb_out_fragDepth exports the depth to the depth texture and bb_out_fragNormal exports the normal.

All I know is that glDisablei could probably help me do this, but I can't get it working.
First, I'm not sure it's supported with OpenGL 3 ... then, the old (probably deprecated by now) function glDisableIndexedEXT causes me some trouble ... it's poorly documented, and it seems nobody on the whole internet has ever used it in an open-source project, so I can't find the information I need to understand what the index stands for.
Some use it with 0, 1, 2, ... values, some say it must be used with a buffer index ... and a buffer index can be so many things that I can't test everything just to see if something works ... chances are I would burn my card before getting it working, so that's not an option ^^
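A sketch of the approach that should apply if the index really is the draw buffer index, which is how EXT_draw_buffers2 (promoted into core OpenGL 3.0 as glEnablei/glDisablei) defines it for GL_BLEND - not verified against this engine, and it assumes the GL binding exposes those entry points; per-buffer blend functions via glBlendFunci need GL 4.0:

glEnablei(GL_BLEND, 0)     ' keep blending on the albedo output (location 0)
glDisablei(GL_BLEND, 1)    ' no blending on the depth output (location 1)
glDisablei(GL_BLEND, 2)    ' no blending on the normal output (location 2)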



ps : And BTW, I implemented the cube_map mode for textures, but with some restrictions:
- It can't be used (for the moment) with realtime generated textures (because the format is not the same and I need more work to get this working)
- It is limited to one cubemap texture per material. (Cubemap textures use samplerCube while regular textures use sampler2D. It may not sound like much, but in the engine it's really a problem, because we can't create a generic shader with a variable texture count. So for 2 identical materials, the first with one texture and the second with two, two different shaders have to be used. I managed this, but if I add samplerCube on top, I can't produce as many shaders as there are combinations of textures.)
Then, the cube map texture is <always> computed after any other textures.
(I had the choice to put it before or after; I chose after, because it allows multi-texturing with normal maps that affect the cube mapping.)

So, at the top of the small "todo" list is disabling that blending.
Once it's done, I'll compile all that and release the demo (I know I'm a bit late, but I had to deal with some real-life time sinks > new furniture, paperwork, etc. ... well ... life in all its glory ^_^).

Lots of info you didn't ask for and that won't be of any use, but I have a headache I wanted to share :)


RemiD(Posted 2016) [#108]
mmm... headache shared successfully ;)


Steve Elliott(Posted 2016) [#109]
lol, perhaps less questions bar one...When is the demo out so I can see for myself? ;)


Bobysait(Posted 2016) [#110]

So, on top of the small list of the "todo" is the disable of blending.
Once it's done, I'll compile all that and release the demo (I know I'm a bit late, but I had to deal with some life time waste > new furnitures, administration etc ... well ... life in all its glory ^_^).



Done !

Tomorrow I'll run a test to render the lighting as a post-process, and ... it will be the first demo to come!
Without the blending issue, it will be a lot easier :)

Before I forget: don't expect a release this week. I have lots of things to do; I have to prepare a big weekend for my two-year-old princess, so there will be lots of people and I have tons of stuff to buy ... I'm pretty sure I could finish a demo, but I prefer not to bet on that.


Bobysait(Posted 2016) [#111]
I managed to get deferred lighting working.
90 dynamic lights running perfectly (of course it doesn't produce shadows).

At the moment, I've only done the "omni light".
The other light types will be finished within the day.

It also comes with some restrictions, but I won't go into the details for the moment; just know that it requires exporting a light mask > to exclude FX 1 entities from the lighting pass. And currently I don't really know how to deal with entity-blended entities ... is a blended entity supposed to receive light? To me it seems like a self-lighting or self-shadowing object receiving light ...





Bobysait(Posted 2016) [#112]
I forgot Specularity, it's added now.

This is how Blend_Add looks like on alpha spheres





feeble1(Posted 2016) [#113]
It looks absolutely amazing!


3DRCzy(Posted 2016) [#114]
Das Crazy.


Filax(Posted 2016) [#115]
Naughty Alien : Hey :) Glad to ear this :) You miss me too :)

Hello to all members :)

Filax


RemiD(Posted 2016) [#116]
Oooh a ghost from the past... Hello Filax :)


Steve Elliott(Posted 2016) [#117]
Hi Filax :)


Naughty Alien(Posted 2016) [#118]
Hey Filax, my main man :) very happy to see ya here..i hope everything in life is good for you..last time you have 'reported' was you moving to some new place and from pictures you post, it looked like a ruins of some sort :) hehehe


Ian Thompson(Posted 2016) [#119]
Bonjour Filax! :)


angros47(Posted 2016) [#120]
I can't tell for sure by looking at these images, but ... the spheres that are behind an alpha sphere look like they are not lighted, don't they? I've read somewhere that this is the only drawback of deferred lighting. Anyway, the images look good (in many other engines no more than 8 lights can be used at the same time).


Pingus(Posted 2016) [#121]
Will there be support for VR headsets (Vive, Oculus...)?


Bobysait(Posted 2016) [#122]

I can't tell for sure by looking at these images, but ... the spheres that are behind an alpha sphere looks like they are not lighted, don't they? I've read somewhere that this is the only drawback of deferred lightning. Anyway, the images look good (in many other engines no more than 8 lights can be used at the same time)


Actually, they are lighted, but not by the deferred lighting.
I've made the lighting in two passes.
The first one is the standard lighting method, but it only uses the main (global) lights (ambient and sky light); it renders fast and gives a global lighting to all objects.
The second pass is the deferred one, which lights with all the scene lights.
> this pass has the restriction that it can't light objects occluded by alpha objects (that's why I added the first pass on top, so all objects are still lighted, at least a bit).

And yep, it's a problem with deferred lighting: since we can't export alpha layers for "each" object (or we could, but the cost is too high), anything that writes to the Z buffer writes at 100%, so the deferred light pass only affects the front-most pixels.

I assume it shouldn't be a very big problem in the end, as we generally don't use that much alpha (except for specific stuff).
And if we do, there is still the standard method, which works very well but should be limited to a lower number of lights (like 1 to 10 ... actually, there is no real limit except the graphics card: my old laptop graphics card, a GF 9700M, only accepts up to 48 lights; one more and it fails to compile due to too many shader attributes).


So, it's really just a feature that has some restrictions, and as it's neither the only nor the "official" way to produce lighting, in the end it's up to the user whether he has a need for it :)

With good tweaking of the scene, it should fit perfectly with a big city scenario with a huge number of lights, using the SpaceOptimizer to choose which lights to render.


Will there be a support for VR headsets (Vive, Occulus...) ?



mmm ... I don't own a VR headset, so, as I can't integrate something I can't test myself, at the moment it won't be supported (at least not "officially").
I just suppose it could be added by someone who knows how it works. After all, it's a BlitzMax engine; anything can be added by anyone.


RemiD(Posted 2016) [#123]

it's not the only nor the "official" way to produice lighting, in the end, it's up to the user if he has a need for this


Yes it would be good, if you don't force one way to render the lighting...

In your post #1 you mention that your graphics engine does not support per vertex lighting. You mean like directx 7 vertex lighting ?
But can i use vertex colors in order to precalculate the per vertex lighting in a scene ?


Bobysait(Posted 2016) [#124]

In your post #1 you mention that your graphics engine does not support per vertex lighting. You mean like directx 7 vertex lighting ?
But can i use vertices colors in order to precalculate the per vertex lighting in a scene ?



The engine could support vertex lighting, but computing the lights in the fragment shader instead of the vertex shader makes almost no difference to the framerate.
The big difference is: vertex lighting ... sucks ^^
It lights with errors. So I removed it to only support per-pixel lighting, because it's easier for a lot of the stuff that depends on it (like bump mapping, specular, etc. ... as they change the way the light actually lights the pixels, the lighting must be done "after" the texture sampling, so it has to be done in the fragment shader. And for this purpose, I don't want to maintain two separate lighting paths just to keep an old rendering method -> shaders made vertex lighting obsolete).

But yes, you can use vertex colors to fake the lighting.


Bobysait(Posted 2016) [#125]
Sorry for the lack of news; I've already been ill for a week (with a temperature from 39.5 to 41°C, I'm currently unable to think about anything).

I've spent the last week finishing the documentation pages (but not the documentation sheets).
When I get better, I'll finish the doc with samples. The demo and the rest should come very soon now.


Bobysait(Posted 2016) [#126]
I'm a bit ... a very big bit late. I'm currently (still) building the documentation and a WebGL port of the engine.

It's not finished yet, but you can see a
small demo online here of the w.i.p. WebGL engine port

* use arrows or ZQSD to move
* mouse down + move to rotate the view
-> click the "Swap to US" to use a QWERTY config (WASD to move)

- advantages of the WebGL engine :
> nothing to install
> really fun and interactive website
> works on multiple platforms, as it's WebGL in a browser :
- Windows (chrome, firefox, IE 11, Edge, Opera)
- Windows Phone 10 (windows phone 8/8.1 not tested)
- IPad
- Android 5 and above (does not work on 4.4 and below)
- Mac ? not tested on Safari

- drawbacks :
> while pretty fast for what it is, it is still a bit slow due to JavaScript, which is far from a fast language.

You can change the canvas resolution with the list of buttons (which are not buttons, by the way) on the left of the canvas.
For slow machines, decreasing the resolution can help get smoother animation, while for fast machines it will just give a blockier image without really affecting the framerate (it can, by the way, lower the framerate because of the canvas rescaling, but the canvas is rescaled at any resolution in the demo; I didn't invest a lot in that part, it's just for the demo).

There is still a lot of work to do on it.
Just so you know: the WebGL engine will be free for every user who buys the BlitzMax engine, as a bonus.


At the beginning I just wanted to add realtime online examples to the documentation, but since I ended up with a pretty cool engine working in JavaScript, I thought it would be a shame if it was only used for documentation purposes.

As usual, feel free to comment or not, I don't come very often, so, don't expect a fast answer but I'll do what I can ;)


Guy Fawkes(Posted 2016) [#127]
This is nice & all, but aren't you forgetting to mention http://threejs.org/ ?

~GF


Bobysait(Posted 2016) [#128]
This is nice & all, but aren't you forgetting to mention http://threejs.org/ ?



... you should stop smoking weed, guy ^^
There is nothing related to three.js here; the WebGL engine I use is mine, built from scratch in pure JavaScript, so why in the hell should I mention it?


RemiD(Posted 2016) [#129]
@Bobysait>>This looks promising, well done :)
(since it is the summer season i will probably not code as much, but i will probably try later)


AdamStrange(Posted 2016) [#130]
works on mac safari, but no mousedown rotate. - I'm using a trackpad


Bobysait(Posted 2016) [#131]
Oh yes, I forgot to mention, I didn't implement touch events, so it can be manipulated only on PC at the moment.
Thanks for the feedback Adam.


Bobysait(Posted 2016) [#132]
Second version of the demo: lots of optimizations.
It should run much faster (and the materials are different).

(I have uploaded it on the index of the website)

> http://mdt.bigbang.free.fr


Ian Thompson(Posted 2016) [#133]
Looks good, the 1st version works on iPad Pro, 560FPS @ 2160p! No joy with the 2nd demo, just a blank canvas.


Naughty Alien(Posted 2016) [#134]
..is this a BMax engine or what? I thought this was a BMax project..


Blitzplotter(Posted 2016) [#135]
@BobySait, looks very good here, great work ;)


Blitzplotter(Posted 2016) [#136]
@BobySait, looks very good here, great work ;)

This is weird, just got this when I tried to post:

A mandrill error occurred: Mandrill_Invalid_Key - Invalid API key
Fatal error: Uncaught exception 'Mandrill_Invalid_Key' with message 'Invalid API key' in /home/mandrill-api-php/src/Mandrill.php:153 Stack trace: #0 /home/mandrill-api-php/src/Mandrill.php(132): Mandrill->castError(Array) #1 /home/mandrill-api-php/src/Mandrill/Messages.php(80): Mandrill->call('messages/send', Array) #2 /home/blitzbas/public_html/send_email.php(90): Mandrill_Messages->send(Array, false, 'Main Pool') #3 /home/blitzbas/public_html/Community/_createpost.php(123): send_email('VirtPDox@yahoo....', 'New Post in "Bl...', 'http://www.blit...') #4 /home/blitzbas/public_html/Community/_createpost.php(98): SendMail('VirtPDox@yahoo....', 'New Post in "Bl...', 'http://www.blit...') #5 {main} thrown in /home/mandrill-api-php/src/Mandrill.php on line 153


Hmmm, might be a firefox peculiarity - firefox didn't let me get a confirmation of my post - but chrome seems to show it no bother.


Bobysait(Posted 2016) [#137]
@Ian Thompson :
Damn, I've got no clue about this; I don't have any iPad/iPhone/iMac/iThing... for testing (maybe it's time to invest).

@Naughty Alien :
Bigbang was originally created for an Android project I had (and still have), but as it's very time-consuming to develop on a mobile device (because of the compile -> send -> launch -> try again on error cycle), I decided to make a BlitzMax port of my engine. Then I wanted an online view of the samples I produce for the documentation, and so the latest child was born: the WebGL port.
Anyway, it started as an embedded graphics engine for a single Android project; it's only with BlitzMax that it became a real "engine", and for now it is still a BlitzMax project! :)

As mentioned above, the webgl engine is just a "bonus"

@Blitzplotter :
Any time I post anything I get the same message, but in the end the post goes through despite the error (with Google Chrome).


RemiD(Posted 2016) [#138]
@Bobysait>>It works well on my laptop (Windows 7), but i see a pink screen on my tablet (Android 4.1.2)


RemiD(Posted 2016) [#139]
Oh yes i also get the weird error message.


RemiD(Posted 2016) [#140]

A mandrill error occurred: Mandrill_Invalid_Key - Invalid API key
Fatal error: Uncaught exception 'Mandrill_Invalid_Key' with message 'Invalid API key' in /home/mandrill-api-php/src/Mandrill.php:153 Stack trace: #0 /home/mandrill-api-php/src/Mandrill.php(132): Mandrill->castError(Array) #1 /home/mandrill-api-php/src/Mandrill/Messages.php(80): Mandrill->call('messages/send', Array) #2 /home/blitzbas/public_html/send_email.php(90): Mandrill_Messages->send(Array, false, 'Main Pool') #3 /home/blitzbas/public_html/Community/_createpost.php(123): send_email('VirtPDox@yahoo....', 'New Post in "Bl...', 'http://www.blit...') #4 /home/blitzbas/public_html/Community/_createpost.php(98): SendMail('VirtPDox@yahoo....', 'New Post in "Bl...', 'http://www.blit...') #5 {main} thrown in /home/mandrill-api-php/src/Mandrill.php on line 15




Bobysait(Posted 2016) [#141]
Android 4.4 and below don't really support WebGL natively, so it won't work there.
By the way, on some devices you "have to" activate it in the browser.
For example with Chrome for Android, you have to enable it by going to the URL chrome://flags, checking the WebGL support flag, then restarting the browser. It may run ... or not.

But consider that in just a few years Android 4.1 became obsolete, so with Android 5.0 fully supporting WebGL, it's a glimpse of the near future: almost everybody will have at least Android 5.1 on their tablet :)
(we change our phone and/or tablet more often than we change a computer, or even just the graphics card)

don't know why I say all that ...

mmm ...

@RemiD : thanks for the feedback ^_^


Pingus(Posted 2016) [#142]
What does the code for that WebGL engine look like? Is it based on BMax? Sounds very promising.


Blitzplotter(Posted 2016) [#143]
Anytime I post anything I got the same message, but in the end the message is posted whatever the error it gives (with google chrome)


Yeah, same here. I didn't want to detract from your topic, sorry for the mini-de-rail - still looks good ;)


Bobysait(Posted 2016) [#144]
How looks the code for that webgl engine ? Is it based on bmax ? Sounds very promising.



It looks like Blitz3D, or more exactly, it looks like the BlitzMax version of the engine mixed with the Monkey application design.
There is the OOP stuff and a basic wrapper to access all members/functions/methods in a static way.
It's still JavaScript syntax, but in the end it's quite easy to translate Blitz code; at the same time, because of the WebGL requirement for the render loop, I've decided to pack the user code into an extended class.

So it looks like this :

(it's a working demo showing some ... "smarties")


and this is a simple PHP file that initializes and shows the application


It's still JavaScript, but the main overhead is removed or simplified as much as possible, so the user no longer needs to deal with lots of boilerplate just to draw a cube.
I also translated some basic functions like Rand, Rnd, Cos, Sin, etc. ... and String stuff like Instr, LSet, RSet, Left, Right, Mid, etc. ...
The JavaScript String class provides some very good and safe functions for that, but ... do we have the Blitz spirit or not?!

The demo works online here, with those exact same files as posted here -> Smarties WebGL Demo -

One thing to mention:
the engine is coded in ECMAScript 6, so it's not supported by all browsers (IE 11 for example won't run it, nor will Edge for some reason).
The demos I posted in the previous post are transpiled to ECMAScript 5, which is more compatible, using "gulp" (it's easy to install and free to use; note that gulp runs on Node.js, which needs to be installed on your system).
It's not part of the engine, and it's up to the user to do the same or not.
The last demo (the one in this post, with the "smarties") is not transpiled, so it won't run on some browsers. Be aware of that :)

And, for what it's worth, ECMAScript 6 will soon be supported by almost all browsers, as it's the new standard for JavaScript.
It allows new syntax, like "class", which really helps keep a nice code style.


ps : on the smarties demo
- use Left-Mouse down + mouse move to rotate the player
- use arrow keys to move.


KronosUK(Posted 2016) [#145]
Any news on this?


jhone(Posted 2016) [#146]
Hello, this is amazing. Are you planning on selling this? Because I would be really interested :)


Bobysait(Posted 2016) [#147]
Sorry, I've encountered so many troubles these last months (and some holidays :p) that I made almost no progress on the engine (actually, I had to step back to a previous version, which annoyed me, but ... that's what to expect when a baby comes along and wants to share some fun with her father's job ^_^).
I've been on it since last week (got my workstation working again), so it should be ready in a few weeks now.
(I also bought an old Mac that doesn't support GLSL 3.0 for now and needs some tweaks until I can get it working; then I'll update the module for Mac.)

@jhone : yep, once it's at least at the "beta" stage, I'll publish a demo version.

ps : And for those who might want to know ...
My baby is a genius; she finally found a way to format my Linux computer (don't know how ...), then she just unplugged the Windows computer, before deciding it was a good idea to give it some chocolate :)

Curiously, the computer didn't much appreciate that ...


Hardcoal(Posted 2016) [#148]
hehe cute


RemiD(Posted 2016) [#149]
A suggestion for your 3d engine :
It would be good if you could separate the systems (turn/move/translate, collision detection and reorientation/repositioning, update of joints/bones/skinned vertices, rendering), because with Blitz3d this is not the case and in some cases this is annoying and limiting...


Also, once you are ready to release a first version for testing, it would be good if you could post a brief tutorial on how to install Blitzmax, the other necessary things, and your engine, so that beginners with Blitzmax are not lost (i have not used it much)


degac(Posted 2016) [#150]
Ok, a baby is a valid excuse for this delay :)


Kryzon(Posted 2016) [#151]
Hello.
Fantastic work. Best of luck with this and the family.


Bobysait(Posted 2016) [#152]

It would be good if you could separate the systems (turn move translate, collisions detection and reorientation repositionning, update of joints/bones/skinnedvertices, rendering), because with Blitz3d this is not the case and in some cases this is annoying and limiting...



You mean : Having a collision/animation system that does not affect the transformations ?

What's the purpose of this ?
At least, the engine can generate "no response" collisions, so it only returns the info about the collision, without affecting the entities.
Then you can do what you want with the collected info (like ... building your own collision response).

I don't understand the part about the bones/skin etc ... what do you mean ?


RemiD(Posted 2016) [#153]
No i mean separate systems which can be used or not and not merged like updateworld+renderworld

Because depending on the game/tool you don't necessarily need all systems...

And also because i have noticed that with Blitz3d you can structure your loop all you want, it does not matter because you don't know how Blitz3d manages it behind the scene. Which is bad and limiting imo.

A few examples that i have noticed :
- if i want to detect collisions in one part of my code, and then update animations in another part of my code.
- if i want to position/rotate my entities in the world (considering what happened in the previous loop) and then later create/activate colliders/collidables (that's why i ended up using only linepicks and pickables)
- if i want to update the positions/orientations of joints/bones/skinnedvertices depending on the animation/pose even if the entity is out of the camera fov


Bobysait(Posted 2016) [#154]
Ok, so it's already done, from the start.
It was originally designed like this to enable threading of animations or collisions (with greaaaat care from the user!).
UpdateWorld still exists, but it only calls the 2 functions UpdateAnimations() then UpdateCollisions().

While UpdateAnimations and UpdateCollisions can be called as standalone processes, they both call the respective methods on the "root" entity.
In most scenes/games, the best way to handle animations and collisions is to call them only on a stack of entities that are actually in the player's area (animating a tree 5 km behind the camera doesn't mean anything ...).
For the collisions, it's also better to update according to the player's location (a small sketch follows below) :
- update the player's own area in realtime
- update the proximity areas once every 50-100 milliseconds
- update the areas that are too far away once every 100-1000 milliseconds
So, once again, a useful module comes with the engine: the SpaceOptimizer class. Its main purpose is exactly what is mentioned above :)
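The sketch of that cadence: UpdateArea() below is a hypothetical placeholder standing in for "run the animations/collisions for this group of areas via the SpaceOptimizer"; only MilliSecs(), RenderWorld() and Flip are real commands here.

Local nextProximity:Int = MilliSecs()
Local nextFar:Int = MilliSecs()

While Not KeyHit(KEY_ESCAPE)
	UpdateArea("player")                 ' the player's own area: every frame
	If MilliSecs() >= nextProximity
		UpdateArea("proximity")          ' neighbouring areas: every ~100 ms
		nextProximity = MilliSecs() + 100
	EndIf
	If MilliSecs() >= nextFar
		UpdateArea("far")                ' distant areas: every ~1000 ms
		nextFar = MilliSecs() + 1000
	EndIf
	RenderWorld()
	Flip
Wend

Function UpdateArea(which:String)
	' placeholder: in a real project this would call UpdateAnimations/UpdateCollisions
	' on the entities returned by the SpaceOptimizer for that group of areas
End Function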


ps :

- if i want to update the positions/orientations of joints/bones/skinnedvertices depending on the animation/pose even if the entity is out of the camera fov


I don't think Blitz3D does that. Animations and orientations are computed for all entities regardless of the "in view" state. The only state that stops them is the "EntityHidden" state.


RemiD(Posted 2016) [#155]

UpdateWorld still exists but it only calls the 2 functions UpdateAnimations() then UpdateCollisions.


updateworld is a nonsensical and limiting function; i suggest forgetting about it and replacing it with 2 different functions (or letting the coder decide which functions he wants to use).
i also suggest adding a feature for colliders (ellipsoids, capsules) : the possibility to activate or deactivate the collider each frame, before the collision detection and response.



I don't think Blitz3D does that.


then you are not a Blitz3d guru of level 10, only maybe of level 9 ;) (level 10 would be Mark Sibly)
See : http://www.blitzbasic.com/Community/posts.php?topic=106670 #127 #131 #133 #142


Bobysait(Posted 2016) [#156]

UpdateWorld is a nonsense, limiting function; I suggest forgetting about it and replacing it with 2 different functions.


I won't. It's not useless, it's actually pretty useful for fast prototyping.
Most of the time, you don't need a very accurate collision or animation system, and UpdateWorld does the job perfectly.

If you need more control, then you have full access to UpdateCollisions and UpdateAnimations independently, and all the stuff inside those functions is also accessible for more "low-level" control.
So, everything is in the hands of the user, according to their needs.


then you are not a Blitz3d guru of level 10, only maybe of level 9 ;) (level 10 would be Mark Sibly)



I don't know what you did to get your code not working, but looking at the Blitz3D sources, I can confirm all animations are done in UpdateWorld regardless of the camera position.

The only check performed to enable/disable the bone update is "enumEnabled", which generates a list of "not-hidden" entities (the entities that are not hidden with "HideEntity" - nothing related to the camera frustum).

So your model can be off screen and not visible, but the bones are still updated.
The "bug" in the animation system concerns the main pivot of the model and its global bounding box, which is not updated by the animation frames, so something that moves too far from its pivot can be culled because the pivot is outside the camera frustum. But anyway, even in this special case, the animation is still performed and the bones are updated. It's really only EntityVisible that returns a false result.

Maybe that's what you're talking about; otherwise, I just don't understand your point.

ps : if that is what you were talking about, you just have to debug "EntityX/Y/Z/Yaw/Pitch/Roll" of an animated bone that is not rendered.
You'll see it's being updated on each frame (of course you need to "UpdateWorld" and "Animate" your model).
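For reference, a plain Blitz3D test of this (nothing engine-specific; the model file and bone name are placeholders for whatever rigged .b3d you have at hand):

; Blitz3D : the bone keeps moving even though the mesh is behind the camera and never drawn
Graphics3D 800, 600, 32, 2
cam = CreateCamera()
mesh = LoadAnimMesh("zombie.b3d")          ; placeholder model
PositionEntity mesh, 0, 0, -20             ; behind the camera -> never in view
bone = FindChild(mesh, "Bip01 L Hand")     ; placeholder bone name
Animate mesh, 1
While Not KeyHit(1)
	UpdateWorld
	RenderWorld
	Text 10, 10, "bone Y = " + EntityY(bone, True)   ; changes every frame anyway
	Flip
Wend
End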



I also suggest adding a feature for colliders (ellipsoids, capsules) : the possibility to activate or deactivate the collider each frame, before the collision detection and response.


To disable an entity before the collisions -> ResetEntity (like in Blitz3D), but ... is there a need for this ?
I mean, if you require collisions on an entity, why would you want to disable it right before the collisions ?
If you're looking for "detecting collisions" without the collision response being applied, then you're probably looking for a "no-response" collision method.
As mentioned in a post above, you can use a "no-response" collision, so your entity won't be affected by the collisions while you still get the data of the collisions that happened
(useful for detecting "entity on navigation mesh", for example).
And, if required, you can also implement your own collision response by extending the CollisionDetection class.
> all data is accessible as vectors/matrices/... or with entity-like commands, so it's up to the user to use whatever best fits their needs
> You just have to set the DeltaPosition/DeltaRotation/DeltaScale at the end of the method, and it will be taken into account when the collision pass finishes (which updates the entities according to the generated responses)

The collision system is organized like this:

1 - Generate the list of all entities that have moved between the previous collision call and the current one (entities that own a collider, of course)
2 - Loop until the entity's "deltas" are 0.0 (all moves have been consumed by responses)
{
2.1 - detect collisions from emitters on receivers ( AKA : the entities set with the first ID in your Collisions(COL_ID_EMITTER, COL_ID_RECEIVER, COL_METHOD, COL_RESPONSE) )
2.2 - apply the response for each collision event (it calls an abstract method implemented by the default CollisionDetector classes, or by any extended class the user writes)
}
3 - at the end : update the entity with the new positions/rotations/scales

The user can set their own method for :
* 2.1 : the collision detection -> the engine implements the basic ones (sphere/sphere, sphere/tri etc ...)
* 2.2 : the collision response -> the engine implements the default modes (stop/slide/slide XZ)
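As a purely illustrative sketch of step 2.2, a custom response might look like the code below; every type, field and method name here is invented (the real Bigbang API is not published yet), it only shows the "override the response, write the deltas at the end" idea described above:

' Hypothetical BlitzMax sketch - TCollisionDetector, TCollisionEvent and their members are placeholders.
Type TBounceResponse Extends TCollisionDetector
	Method OnResponse(event:TCollisionEvent)
		' reflect the remaining movement on the collision normal instead of sliding
		Local d:Float = event.dx*event.nx + event.dy*event.ny + event.dz*event.nz
		event.entity.SetDeltaPosition(event.dx - 2*d*event.nx, ..
		                              event.dy - 2*d*event.ny, ..
		                              event.dz - 2*d*event.nz)
	End Method
End Type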

And for sure, all this is not mandatory !
It's just a possibility. Advanced users might want accurate stuff that really fits their game, for better performance or better accuracy/precision (because a generic physics system is never as good as a specific one), while beginners won't need that at all.
A simple "UpdateWorld" (or only UpdateCollisions) will do the job :)


But seriously, I think I really don't understand what you're looking for (maybe I'm just too tired to understand simple things).


RemiD(Posted 2016) [#157]
I will post 2 code examples later to demonstrate the problems I mean with joints/bones/skinned vertices not being updated when a rigged, skinned entity is out of the camera fov, and with the weird, unreliable behavior of ResetEntity...


Bobysait(Posted 2016) [#158]
As I'm rebuilding the collisions part, I'm a bit frustrated with Blitz3D :
At first I wanted to handle collisions the same way blitz3d does (which I managed to do ... but).
The problem is : blitz3d claims that the collisions are ellipsoid versus [ellipsoid/box...]
But in fact, it's not. It's clearly sphere versus sphere (or whatever else)
-> There is no ellipsoid involved, and if you scale the entities, you just don't really know how it will end up.

So, I'm currently implementing some different collider styles (sphere, box, cylinder, capsule ...) for both emitters and receivers

Now, what I'm wondering is :
Has anyone ever needed to set collisions for a receiver differently according to the emitter ?
(something like this)
Const ID_Player% = 1
Const ID_IA%     = 2
Const ID_Scene%  = 3
Collisions ID_Player, ID_IA,     1, 2   ; player/IA    -> sphere/sphere     - slide
Collisions ID_Player, ID_Scene,  2, 3   ; player/scene -> sphere/polygons   - slide XZ
Collisions ID_IA,     ID_Player, 2, 3   ; IA/Player    -> sphere/Mesh (???) - slide XZ
Collisions ID_IA,     ID_IA,     2, 3   ; IA/IA        -> sphere/sphere     - slide XZ
Collisions ID_IA,     ID_Scene,  2, 3   ; IA/Scene     -> sphere/polygons   - slide XZ

So the IA type would detect collisions with the sphere method against the other IAs, and with sphere/polygons against the player (which is probably a source of error, as the Player type already checks collisions against IAs with a sphere/sphere method).

As far as I'm concerned, after years of blitz3d use, I've never found any use for this ...
Do you ?

If not, then it's probably more useful to set the collision method on the colliders directly.
So, Instead of (blitz3d) :
* Collisions(id_emitter, id_receiver, method, response)
* EntityType (entity, collider_id)

We'd get (Bigbang) :
* Collisions (id_emitter ,id_receiver, response)
* EntityType (entity, collider_id, collider_mode)
-> where @collider_mode is one of the detection methods ("sphere", "box", "capsule" etc ...)
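For illustration, usage of that proposed form could look like this (all the constant names below are made up for the example):

' Hedged sketch of the proposed style - ID_* / COLLIDER_* / RESPONSE_* are placeholder constants.
EntityType player, ID_Player, COLLIDER_CAPSULE
EntityType npc,    ID_IA,     COLLIDER_SPHERE
EntityType level,  ID_Scene,  COLLIDER_MESH

Collisions ID_Player, ID_Scene, RESPONSE_SLIDE_XZ
Collisions ID_Player, ID_IA,    RESPONSE_SLIDE
Collisions ID_IA,     ID_Scene, RESPONSE_SLIDE_XZ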

It would be less versatile, but it could make the physics engine more flexible for incorporating a real physics system.

At the moment, I'm already implementing the second method, so, by default it will be as specified :
* Collisions (id_emitter ,id_receiver, response)
* EntityType (entity, collider_id, collider_mode)
And I think in the long run, the @Collisions function will be totally removed, because I think the materials and eventually the geometry object can store the response method (so we'll be able to set a "slide" factor for each material/collider ... it could be useful for setting up an ice block, or handling the bounciness of the ground etc ...).


But I'd really want to know if somebody has any use for the old blitz3d-system.


(and by the way, here is a small video ... spheres colliding and sliding then sleeping)
Some spheres intersect, but it's not due to the collision system; it's just the random positions at the start that left them overlapping.
250 spheres (all collisions are updated on each loop, it runs without any lag)



RemiD(Posted 2016) [#159]
What is important to me in a collision system :
(I use the term "collider" for a shape which turns/moves and provokes a collision)
(I use the term "collidable" for a shape which is static and acts as an obstacle)

a collider can be a sphere, a capsule, a box, or a combination of these primitives
a collidable is usually a low-detail mesh
each collider can be created/destroyed before the mainloop or in the mainloop
each collider can be activated/deactivated in the mainloop
each collidable can be created/destroyed before the mainloop or in the mainloop (preferably before)
each collidable can be activated/deactivated in the mainloop
each collidable must have a name (where I can put the group/list name and the entity index/handle)
the collision responses I find useful are stop (the collider is reoriented/repositioned outside the collidable, turned back and/or moved back) and slide (the collider is repositioned after having slid along the collided surface depending on its movement vector)

to define collision detection and response between a collider and a collidable, it could be :
collisions(colliderreference,collidablereference,detectionkind,responsekind)
or you could use something like with blitz3d :
collisions(colliderreference,collidablegroup,detectionkind,responsekind)

if you have functions similar to countcollisions() and collisionentity(), I suggest ordering the list of collided collidables by distance (first would be the nearest, last would be the "farest") (I know that "farest" is not the right word, but I don't like arbitrary exceptions so it will do :P)
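In today's Blitz3D terms that ordering can already be done by hand with CountCollisions/CollisionEntity; a minimal sketch (assuming "player" is an entity with collisions set up):

; Blitz3D : find the nearest collided entity after UpdateWorld
nearest = 0
bestDist# = 999999
For i = 1 To CountCollisions(player)
	dx# = CollisionX(player, i) - EntityX(player)
	dy# = CollisionY(player, i) - EntityY(player)
	dz# = CollisionZ(player, i) - EntityZ(player)
	d# = Sqr(dx*dx + dy*dy + dz*dz)
	If d < bestDist
		bestDist = d
		nearest = CollisionEntity(player, i)
	EndIf
Next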

that's all i can think for now...


Flanker(Posted 2016) [#160]
I've always been confused by the "ellipsoid" collision shape in the blitz3d documentation, because a 3D ellipsoid is a very complicated shape for collision and it would need 2 different radii to define, but we only have one (EntityRadius). So it seemed quite clear to me that it was a sphere, and I guess that sphere-to-sphere collisions are the simplest to compute.

Anyway, for your question about the collision response, I've never come across a situation where the collision response from A to B would be different from B to A.

Nice demo, keep up the work !


Just a question about OpenGL with Bmax : I've been playing with it, and it seems a little faster at rendering triangles than Blitz3D, but only a little... I'm quite surprised, because with Ploppy's Hardwired, with DirectX 9 or 11, it was incredibly fast. I thought that Blitz3D's old DirectX was emulated by the CPU, and that DirectX 9/11 and OpenGL were handled by the graphics card. For example, I tested a 3D performance demo (Unigine) with both DirectX and OpenGL and they ran at almost the same speed. So is there something I don't get ?


Bobysait(Posted 2016) [#161]
There is not much difference between DX and GL, but there's a major difference between GL 2 and GL 3, just as there is a huge difference between DX7 and DX9
-> VBOs and Shaders.

Each one makes the rendering really faster :
* shaders : I won't explain why, but the highly parallelized pipelines make the GPU incredibly faster at rendering with shaders than with the fixed pipeline
* VBOs ... what can I say : the data is uploaded to the graphics card memory and stays there, so there are no more transfers to do on each render of a static mesh. Blitz3D and its old DX7 need to send the data (triangle arrays, vertex arrays (coords, normals, colors, plus UVs and bones)) on each frame, which also requires synchronization with the graphics card.
So, it's really faster for obvious reasons.

Combined, it's night and day :)
All of it can be a pain to implement (due to more complex management of memory, shaders, programs, buffers etc ...) but it's worth it !

Then, to answer your question, maybe you just used the fixed-function stuff and did not try shaders and/or VBOs (without those, there is really not a big difference, and in the end it's just the optimization of the engine that determines whether you get a better framerate or not).

By the way, you can specify 2 radii for EntityRadius in Blitz3D (the X and the Y, with X used for both the X and Z axes), but it's really hard to set up to get exactly what you want ... and I assume it only works for ellipsoid vs polygon (ellipsoid vs ellipsoid and ellipsoid vs box only check collisions as spheres ...).
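For reference, the two-radius form in Blitz3D looks like this (the values and type IDs are just an example):

; Blitz3D : X/Z radius of 0.4 and Y radius of 0.9 ; the Y radius only matters for method 2 (ellipsoid to polygon)
EntityRadius player, 0.4, 0.9
EntityType player, 1
Collisions 1, 2, 2, 2    ; type 1 vs type 2, ellipsoid-to-polygon detection, full slide response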


Flanker(Posted 2016) [#162]
You're right, I didn't even know that there was an optional parameter for the Y radius on EntityRadius !! I'll try that.

So when we use blitzmax+opengl, what version of opengl is it ? I didn't try shaders with opengl because it's all quite obscure to me at the moment, so yes, I "send" each triangle every frame. I'm really interested to see examples of what is possible.


Bobysait(Posted 2016) [#163]
I've implemented vertex displacement :
* it uses a texture to move the vertices
Modifying the texture matrix modifies the displacement in realtime (using standard commands such as RotateTexture/PositionTexture etc ...)

Although it is limited to a single displacement texture per material, it is very useful for lots of effects such as realtime terrain deformation, water effects etc ...
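As a hedged example of what driving it could look like: only the PositionTexture-style call mirrors the command names mentioned above; the loader and the material call are placeholders, since that part of the Bigbang API is not published:

' Hypothetical sketch - LoadBTexture() and SetDisplacementTexture() are invented names.
Local disp:BTexture = LoadBTexture("water_noise.png")
waterMaterial.SetDisplacementTexture(disp, 0.35)     ' placeholder : displacement texture + amplitude

Local t:Float
While Not KeyHit(KEY_ESCAPE)
	t :+ 0.002
	PositionTexture disp, t, t*0.5    ' scrolling the texture matrix animates the displacement
	RenderWorld()
	Flip
Wend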

Here is a video of the procedural planet ported to Bigbang.
(it allows lots of features I couldn't do with blitz3d -> a higher texture size thanks to offscreen rendering, and of course there is now displacement for the water, plus a lot more polygons)



I zoomed in on 2 minor bugs in the video :
- the first one is not really a bug, but I want to be honest, so I show it ^^
-> as it displaces the vertices of the water sphere, it moves the seams, so we can see them (it can be fixed easily by using a good texture scale, so it moves the two sides of the seam identically)

- the second bug is an issue with my graphics card : it creates some black spots and lines, and it happens only with this card. I think it didn't like testing No Man's Sky ... since then, I have those dirty artifacts in 3D ...

So when we use blitzmax+opengl, what version of opengl is it ? I didn't try shaders with opengl because it's all quite obscure to me at the moment, so yes, I "send" each triangle every frame. I'm really interested to see examples of what is possible.


No idea which OpenGL version ships with the latest blitzmax, but I remember I couldn't do OpenGL 3.1 stuff with it, so I've added an updated version of OpenGL as a standalone module for blitzmax (it is part of the bigbang module).
Currently, it supports almost everything up to GLSL 450 (including geometry shaders, but I've not hooked them into the engine yet).
The only thing I'm pretty sure of is that the Mac OS version does not support OpenGL 3.
The glew module I've implemented will probably resolve this, but I still have not tested it : I have a mac, but it's stuck at Mac OS X 10.7.5 at the moment (and 10.7.5 does not provide support for OpenGL 3, so I need to update it to test the engine. I already have everything I need, it's just a matter of installation ... but I'm still as lazy as ever ^_^)


Rick Nasher(Posted 2016) [#164]
Pretty impressive stuff. Thanks for the explanation of the differences between gfx systems. It could perhaps explain why the latest versions of Unity run more slowly on my aging rig: they probably dropped the old stuff, so my out-of-date GPU has to do more work in another, non-optimized way, perhaps on the CPU.


Yue(Posted 2016) [#165]
@Rick Nasher

An older shader version is no longer supported by Unity, so my computer has to fall back to another shader version that is slower on older computers.


Bobysait(Posted 2016) [#166]
Building a terrain demo using vertex displacement in the vertex shader with a chunked-LOD algorithm.
(This setup will be released in full source code with the engine as demo code, but it currently lacks some features -> there is no way to use collisions with it for the moment; as the geometry is generated on the graphics card side, I need to combine the heightmap texture coordinates with the UVs of the patch to get the real height of the terrain at a given position. It shouldn't be too hard, but it's not done yet ^^)
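On the CPU side, reading the height back from the heightmap is essentially a bilinear lookup; here is a minimal BlitzMax sketch that only uses the standard pixmap module (the vertical scale and file name are assumptions for the example):

' Bilinear height lookup from a greyscale heightmap pixmap - no engine code involved.
Local heightmap:TPixmap = LoadPixmap("heightmap.png")
Local h:Float = TerrainHeight(heightmap, 0.5, 0.5, 200.0)   ' height at the center of the terrain

Function TerrainHeight:Float(pix:TPixmap, u:Float, v:Float, scaleY:Float)
	' u,v in 0..1 across the whole terrain
	Local fx:Float = u * (PixmapWidth(pix) - 1)
	Local fy:Float = v * (PixmapHeight(pix) - 1)
	Local x0:Int = Int(fx), y0:Int = Int(fy)
	Local x1:Int = Min(x0 + 1, PixmapWidth(pix) - 1)
	Local y1:Int = Min(y0 + 1, PixmapHeight(pix) - 1)
	Local tx:Float = fx - x0, ty:Float = fy - y0
	' red channel of the four surrounding texels, 0..255
	Local h00:Float = (ReadPixel(pix, x0, y0) Shr 16) & $FF
	Local h10:Float = (ReadPixel(pix, x1, y0) Shr 16) & $FF
	Local h01:Float = (ReadPixel(pix, x0, y1) Shr 16) & $FF
	Local h11:Float = (ReadPixel(pix, x1, y1) Shr 16) & $FF
	Local top:Float = h00 + (h10 - h00) * tx
	Local bot:Float = h01 + (h11 - h01) * tx
	Return (top + (bot - top) * ty) / 255.0 * scaleY
End Function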

The terrain is built from a few patches of different resolutions that are "copied" and placed at specific positions.
It then uses a copy of the heightmap texture (it does not duplicate the texture, it only creates a texture with a unique layer and shares the pixels of the original -> it's a feature of the engine) with specific UV coordinates for each patch. So in the end, there are only 10 patches (from 256*256 down to 1*1 quads), the 4096*4096 heightmap, and several copies (instances) of the texture and the patches.

So the memory usage is really very low, given that the terrain is built from a 4096*4096 heightmap and the whole terrain is 32 million triangles
(actually, with this demo, it only uses 200 MB of RAM with all the textures), and it builds pretty fast considering it's 32 million triangles ... and we can transform the terrain in realtime, like slicing it just by modifying the texture matrix (heightmap.SetOrigin(x,y) / heightmap.SetScale(scale_x,scale_y) / heightmap.SetRotation(angle)) or by writing pixels directly to the texture.



here we can see the patch distance for high to low resolution


and the final rendering (the color is generated by mixing gradient color layers according to the normals and the height of the terrain)


The sky is a small module I'll distribute with the engine :
-> it creates a sphere and updates its vertex colors in realtime
-> it's released with a preconfigured color template, changing with the seasons and the cycle of the day.
-> It uses a smart function of the engine that returns a vector representing the sun direction from a longitude and latitude on Earth; it's not perfectly accurate but pretty close to reality, while the colors are fully fantasy style (a rough sketch of that kind of computation is shown below).
-> it also modifies the sun light and sets the camera fog colors.
-> it uses a TimeDate module which is also distributed with the engine.
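As a rough idea of what such a sun-direction function can be built from (this is not the engine's code, just the classic declination / hour-angle approximation; BlitzMax trig works in degrees):

' Returns an east/up/north sun direction from latitude, day of year and local solar hour.
Function SunDirection:Float[](latitude:Float, dayOfYear:Int, hour:Float)
	Local decl:Float = 23.44 * Sin(360.0 * (dayOfYear - 81) / 365.0)   ' solar declination
	Local hourAngle:Float = 15.0 * (hour - 12.0)                       ' 15 degrees per hour from solar noon
	Local east:Float  = -Cos(decl) * Sin(hourAngle)
	Local up:Float    =  Sin(latitude) * Sin(decl) + Cos(latitude) * Cos(decl) * Cos(hourAngle)
	Local north:Float =  Sin(decl) * Cos(latitude) - Cos(decl) * Cos(hourAngle) * Sin(latitude)
	Return [east, up, north]
End Function

Local sun:Float[] = SunDirection(45.0, 172, 14.5)   ' 45°N, around the summer solstice, 14:30 solar time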




Steve Elliott(Posted 2016) [#167]
Looking good, maybe time to start a "part 2", this thread is getting very long now and lots of scrolling required :)


RemiD(Posted 2016) [#168]
looks nice...

what is the red stuff ? the active zone ?

I suppose that, by default, there is 1 height (vertex) per 1x1z unit ? What is the maximum width/depth of a terrain ?

can we dig into the terrain ? (modify the heights between xmin,zmin and xmax,zmax, like ModifyTerrain())


Bobysait(Posted 2016) [#169]
As mentioned above the screenshot ("here we can see the patch distance for high to low resolution"),
the red parts are the highest-resolution patches (the one(s) surrounding the camera).

maximum width/depth :
A terrain depends on the heightmap size; you can then scale it as long/large as you want (it won't create higher resolution).
The texture size depends on your graphics card -> usually 4096*4096 is the minimum supported on most "old" graphics cards (most recent ones can go to 8192*8192 or much larger; my GT 640, which is almost a low-cost and not really recent card, can handle 8192*8192 textures without problems, but the bigger the texture, the more video RAM is used ... so it also depends on hardware capacity).
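If you want to check what your own card allows, the limit can be queried directly with the stock BlitzMax OpenGL binding (a live GL context is needed first):

' Query the maximum texture size with Pub.OpenGL + BRL.GLGraphics.
Framework BRL.GLGraphics
Import Pub.OpenGL
Import BRL.StandardIO

GLGraphics 640, 480
Local maxTex:Int
glGetIntegerv(GL_MAX_TEXTURE_SIZE, Varptr maxTex)
Print "Max texture size : " + maxTex + " x " + maxTex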

can we dig in the terrain ?

Modifying the height at an x/z coordinate is possible; it's also mentioned in the post above ;).


Looking good, maybe time to start a "part 2", this thread is getting very long now and lots of scrolling required :)


I'll create a new one soon for the release; I'd better continue this one for the "wip" showcase.

ps : And by the way, you can use Ctrl+End on your keyboard to go to the bottom of the page


Bobysait(Posted 2016) [#170]
With specular and textures.




Steve Elliott(Posted 2016) [#171]

ps : And by the way, you can use Ctrl+End on your keyboard to go to the bottom of the page



And how do you do that from a mobile phone?


Bobysait(Posted 2016) [#172]
No idea, I almost never use my phone for browsing websites (or for anything by the way ...) ;)


Henri(Posted 2016) [#173]
You can also click the topic header link to go to the last post.

I'm not a 3d man myself, but I do appreciate what you are doing. When you are ready I'll surely try it out.

-Henri


Steve Elliott(Posted 2016) [#174]
Thanks Henri :)


Flanker(Posted 2016) [#175]
Nice demos Bobysait ! Now you need some trees :p


Blitzplotter(Posted 2016) [#176]
The big bang engine is working well within your demo bobysait, great work!


Bobysait(Posted 2016) [#177]
Really glad you enjoyed :)

For now, I'm starting with the grass (the trees are next).

The grass is generated on the fly (I'll add some different textures with an atlas, it should be nicer) using a mask generated from the terrain heightmap and a color template based on interpolating the height and normal of the terrain, so I get colors and shadows for the terrain, which I then use to color the grass quads.

(I'm also thinking of adding some wind ... but I'm a bit afraid it will kill the framerate, so I need to think about a good way to do it)




Not for old graphics cards, but keep in mind my graphics card is only a GT 640 (so it's probably comparable to any next-gen engine).


ps : If I'm building this long-running demo, it's essentially to create stuff to show, a working sample to ship with the engine AND to test that everything works (I already fixed lots of stuff while coding the last 5 demos ... some minor bugs, like an x instead of a y, and some more critical ones - like SetAlpha creating a new material per call ... which drastically increased the memory usage ^^).

So I hope that by the end of this demo I will have fixed almost all of the remaining bugs, and will have a nice demo to provide with full source code.


Bobysait(Posted 2016) [#178]
Updated video of the demo
- optimized and better rendered sky
- added lens flare
- added shading for the grass





Bobysait(Posted 2016) [#179]
I've started an algorithm for tree generation, so there should soon be trees in the demo ;)

(if it can be useful to anyone, I'll publish the source code. It's based on a free Unity script)




AdamStrange(Posted 2016) [#180]
brilliant grass. and love the new tree stuff!!


RifRaf(Posted 2016) [#181]
I don't visit nearly enough these days, so I missed this. How long do you estimate until it goes on sale ?


Bobysait(Posted 2016) [#182]
I plan to release the demo in a week or two at most.
The paid version will come a bit later, once the demo feedback is in
(I've already checked a lot of stuff, but we always miss a few things and never anticipate other people's workflows) and once I've worked out "how to sell it" :)
I'm really new to the indie business, so it may take some days to find a good solution for everybody (and obviously for me).


Rick Nasher(Posted 2016) [#183]
Looking good, hope indeed you manage to keep up FPS with the added wind. Likewise if water is added, plus animated, shaded entities and physics.

If you manage to juggle all that at relatively decent speed then you'd have basically a very viable ecosystem for Blitz to thrive on and may prove to be pretty profitable.


jfk EO-11110(Posted 2016) [#184]
Nice tree. Is there a LOD philosophy associated with it?


Flanker(Posted 2016) [#185]
Hello Bobysait, in the first post you wrote this :
- it uses its own version of the glew module and glgraphics, and is built as a real max2d driver
So any (or almost) 2D commands works on top of the 3D (with blending, alpha etc...).

Do you have a video example of this, a HUD or a menu on top of 3D using max2D commands ?


Bobysait(Posted 2016) [#186]
I don't do much 2D to be honest, so I have nothing very impressive built in 2D, but :
there are some screenshots in this topic that use 2D.
-> everything that is rendered with a post fx is actually an image (not a TImage but a BTexture, which in the end acts the same way) drawn with max2d colors, alpha and blend modes.
It respects the viewport, scale, origin etc ...
Regarding TImage : they can be rendered without problem, but it's mostly there for convenience ... it uses a (probably slow) method that converts them to a "legal" BTexture object.
I wanted to code without the blitz3d restriction of having a texture object different from an image object (where, to draw a texture you created or loaded, you have to create a camera and a quad to show it on screen); in BigBang, texture and image are the same BTexture object. You can set it on a material (brush) or draw it to the screen, and you can also set it as a render buffer and use the result to paint a mesh or apply screen effects etc ...

In the last videos, you can see the "progress bar"; it's pure Max2D.
On the screenshot of the terrain rendered in wireframe mode, the top-left HUD is made only with Max2D commands.

You can also see in post #21 text rendered before and after RenderWorld (it's the DrawText command),
and all the text used for debugging is actually rendered with DrawText after RenderWorld.

Now, I have to correct something I said :
- Bigbang is not just "compatible" with max2d, it implements it at the graphics driver level.
So every 2D command is rendered by bigbang, with 2D-oriented shaders made for the task.


ps : (this is totally unrelated to the previous topic)
I'm currently trying to rebuild the OpenGL module so it uses a low-level "Gl_Driver" object like you can have in Java (GLES.glFunction).
It won't work in parallel with Pub.OpenGL, but at least it won't report duplicates if we don't use Framework to drop the Pub.OpenGL import from the standard framework.
All this is internal blitzmax stuff, and it makes things harder when we don't want the end-user to have to replace original blitzmax files in order to use the engine
-> I want it to be easy to use and also to install, so just one core mod folder with the sub modules in it
[bigbang.mod, gl_driver.mod, etc ...] all in the same mdt.mod and no extra stuff outside
= to install, just copy the "mdt.mod" folder into blitzmax's mod folder.
So, digressions aside, about the GL stuff : it dispatches the GLEW API into several modules (one per version) that recursively extend the previous one, so we can load the best GL context that fits our needs by importing the "MDT.Gl_X_X" module (but it's not working yet, I'm still building the dump to extract all the modules at once).
By default, it will load the minimum required to run the engine (which is something like 2.1 ... or 3.0 if we don't use some extra stuff that requires later versions of GL).


ps2 : (And that will maybe make some guys smile a bit)

I've got a working OpenGl 4.1 context on mac os x 10.11 (el capitan) with blitzmax.
but some stuff needs replacing to get the engine working with it (mac os is a bit different from other platforms : it requires a core profile, and the problem with the "core" version is that it removes the deprecated stuff ...). Even though I only need a GL 3.2 context, from 3.2 to 4.1 we load the same profile (then it's the graphics card compatibility that makes the context 3.X or 4.X).

So, currently, I'm at a messy stage that I hope will end soon ... and end well. The BigBang engine fully works on Windows and Linux, but not yet on Mac OS X - but there is hope it will ... I'm working on it !


Bobysait(Posted 2016) [#187]
@jfk :
There should be LOD, as there is an update method that recomputes the mesh with new parameters, so it just needs a lower "MaxVertices" parameter to compute a lower-detail version of the same tree. For the moment, though, reducing that budget can stop the branch creation earlier, so the low version does not always look like its higher-detail version.
I'll fix that too.


jfk EO-11110(Posted 2016) [#188]
Ok, sounds interesting.


Bobysait(Posted 2016) [#189]
Soooooooooooo ...
Good news and bad news :
- I managed to create a graphics driver for a GL 3.2 (and above) core profile on mac os x 10.11 (el capitan), with the GL API loaded via glew 2.0
And ... after some hours of headache, I got my colored triangle rendered with shaders and VAO+VBO.
That's the good news ^^

bad news :
- it absolutely does not fit the engine's structure -> vertex buffers are not built the way the mac requires them to be.
So, I can't have the engine on mac unless I make some serious modifications to the core of the engine, including the way the vertices and triangles are set up AND the way I use IDs for shader attributes.
(the mac requires vertex array objects to store the "definition" of a chunk of vertices ... in other words, you define the vertex array object by feeding it the vertex buffer object, the vertex buffer data and (and this one is seriously problematic for the engine at this stage) the attrib pointers you use in the shader)
All this means I need to use static addresses for attribs if I want to use VAOs, and it also means I need to redefine the surface objects to generate VAOs including the different arrays (coords, colors, uvs, bones etc ...)

I think in the end I'll do it, but for the moment I'm a bit fed up with it, and will concentrate on the Windows and Linux parts.

Anyway, I have a working glew module available for mac if anyone is interested; it's a standalone pack of 3 modules (it only requires installing glew 2.0 -> easily done via homebrew in the terminal, "brew install glew" ... and it's done).

I remember the Leadwerks guy said he would pay for a mac driver ... maybe it's time to make some money :)


ps :
As it took me about 3 days to get it working, it's probably the most satisfying triangle I've ever made.
Here it is in all its glory !

(on the left, we can see an error from the glew initialisation, but actually it seems to be a standard glew error on core 3.2 and above -> some extensions have been removed, and as glew performs a GetString GL_EXTENSIONS it fails on some extensions that are not available anymore. So it's just an error to ignore)




Bobysait(Posted 2016) [#190]
Well, on second thought, I'll take back what I said, as I managed to create a render pass that better fits the engine's structure ... so maybe ... :)


Bobysait(Posted 2016) [#191]
Damn, I should have done this from the start !
I've built a prototype to make sure I could rebuild the module with some tweaking to incorporate VAOs, and ended up with a similar engine (less customizable, but with the same main structure as the bigbang engine itself).
Then I tested it with 5000 cubes spinning in a loop ... well, I got 40-50 fps and told myself ... "Not too bad, considering the CPU load of doing several matrix transformations on all those entities"
Then ... I remembered I was in debug mode :)
It can render 5000 rotating cubes (the most intensive transformation) at 250-300 fps :)
The first version of the engine was pretty fast, but probably only something like half as fast as the new one ^_^

There is still a lot of work to do to get everything working as it should, but it's a very nice surprise.


Flanker(Posted 2016) [#192]
Interesting, yes, because quite a lot of people here work on Mac. I just hope it won't make the release of the engine later than expected ^^


Bobysait(Posted 2016) [#193]

I just hope it won't make the release of the engine later than expected ^^



Well ... it could take just a bit longer (for once, it's for a very good reason), because I'm playing with geometry shaders and trying to rebuild the 2D part.
Currently it's a bit limited (it only deals with cubes at the moment), but it renders rotated/scaled/handled/colored/blended/alpha'd rects at a furious speed.
Like 100 fps for 30000 rects (while with the original Max2D I get something like 40 fps with 10000 rects in the same demo).
I also implemented an "Oval" shader that works very well, but I'd like to get it into the same geometry shader, just feeding it a "style" variable, so it doesn't swap programs constantly if you do something like this in a loop :
DrawRect 10,10,100,20
DrawOval 130,10,20,20
DrawText ...

Because to get it working fast, we need as few drawcalls as possible, so the 2D pool is not rendered when requested (while the original max2d uses immediate drawing); it only renders on Flip, or any time the current buffer is changed.
So, if 2 programs need to interleave, it will result in bad performance, and I want to prevent such things, at least for the 2D.
If we start drawing 2D, it should use a single program (which is not what is supposed to be better in general, as a shader should always be as small as possible, but in this case it is).
Then I need to rework the 2D structure and integrate a lower-level interface for it, so I can use a Poll2D() command in the buffer class (and at the moment the buffer class is more low-level than the 2D engine ... meaning that, like in C/C++, it can't use a 2D command because it doesn't exist yet).
To sum up, I've still got a bit of work on it, but I'm taking it seriously and I also want it finished as soon as possible, because :
1 - I don't really like people waiting on me, and it's a bad thing for a "commercial" approach.
2 - I need some money for things (a lot of hardware to replace and a home to build), so ... the sooner the engine is released, the sooner I can earn a bit of money and move on to more "commercial" stuff (games & other things).
3 - Probably obvious, but I've been glued to the engine for so long that now I want to stop digging and start playing with it too :p

With a better implementation, it could be used for things we barely touched here due to framerate issues (like 2d water simulation and so on)

Oh, and by the way, I'm working on an old Mac Pro 1.1 (the old early-2006 version ... bought for $150 on ebay, with an ATI Radeon HD 5870 and 16 GB of RAM to get it working on El Capitan with some tricks).
With the OpenGL 4+ drivers it works like a charm, so it should work on any decent/recent mac machine.


I'm finishing this and it should be ready for the demo release (not a "demo of the engine working", but the demo version of the working engine ... if you know what I mean).
I'll have some fun with the engine later, once it's up for release; I don't want to make you wait too much, it's just that I really want it to be solid, with as few bugs as possible.


Flanker(Posted 2016) [#194]
Sure, I understand, it's always better to take the right path from the start instead of changing everything after the engine is released.

It's just that at the moment I'm about to port my game from blitz3d to blitzmax, for calculation speed and threads, but I need a 3D engine... I tried "pure" OpenGL in blitzmax but everything has to be done from scratch... I tried OpenB3D and it works pretty well (but it's no longer supported). Now I'm waiting for BigBang to decide which engine I'll choose.


Bobysait(Posted 2016) [#195]
Implemented rect and oval in the same shader (with fill or line mode);
it works pretty well on both mac and windows, so I think I've got a solid base to work on.
It's really only missing the text and image functions (which are almost the same thing, actually).

A small video of 35000 particles (rendered with the DrawOval function + scale, alpha and a very small physics pass)




Yue(Posted 2016) [#196]
Very nice. The thing is, for me it will be very expensive: 40 dollars in Colombian terms is approximately half of my monthly salary. I wish you success with your project.


RemiD(Posted 2016) [#197]

I need some money for things (a lot of hardware to replace and a home to build)


I wish you the best man, but seeing the few people who have posted here and shown interest in this, don't expect too much...
But maybe this engine will revive the "Blitz3d flame" and the members of the old community will be pulled back here (once it is ready, maybe go talk about it, in a neutral way (like comparing the fps for the same scene with your engine and with another engine), on the forums of other engines (like UnityEngine, UnrealEngine, DarkEngine)). We'll see what happens...
In any case, I hope that you will not give up (and that Microsoft will not annoy you with future updates/mods), because you have already invested many years in this.
Good luck with the remaining steps !


RustyKristi(Posted 2016) [#198]
Very nice. The thing is, for me it will be very expensive: 40 dollars in Colombian terms is approximately half of my monthly salary. I wish you success with your project.


There are lots of free alternative options, Yue, and besides, you are also a Unity user, which is now "free" and even better in terms of performance.

If you really want to work with BlitzMax and the new product Bobysait is building here, any decent price will be well worth it, because the man has put time and effort into it; you can't complain there.

There's also OpenB3D, which angros47 is still working on.


Rick Nasher(Posted 2016) [#199]

Very nice. The thing is, for me it will be very expensive: 40 dollars in Colombian terms is approximately half of my monthly salary.

If it's any consolation: I make like 20 times that and still can hardly make ends meet due to high cost of living round here. :-(


Bobysait(Posted 2016) [#200]
Considering the minimum Colombian salary is around $250, I think you need a better job :)

Sorry about the price, but I can't (or don't want to) make it lower, and the price was more or less set by Blitz3D : take the original cost of blitz3d versus blitzmax (which had no 3D), add the price of the module, and we get the price of blitz3d plus taxes.
Anyway, I suppose in the end I could make you an offer, like ... you work for me on something, and I pay you with the module for "free" (which would be the wrong way to think about it, since you'd have worked for it ^^, but that's how it is in business, "buy 2, the third is free", which means : "the price for two is currently higher than usual, but with the extra third, the price is the same as before, except you would have bought only two before ... now you pay for a third one you didn't want" ... that's another <<free>> digression about capitalism ^_^)

Anyway, I've finished the 2D part of the module, including some functions I think were missing from vanilla, like DrawImagePart; here we have DrawTexture (similar to DrawImage), DrawTextureRect (draws a texture overriding its size) and DrawTexturePart (draws a part of a texture into an area -> so you control the position and size of the region of the texture to show, and the position and size of the area where you draw it).

Also, for better performance, pixmap fonts are loaded entirely (not char by char "when required"); this allows a single drawcall for all the text drawn with the same font. Otherwise it would alternate draw calls for each char (like vanilla does with glBegin/glEnd). The result is just drastically faster ... we're talking about a factor of 10 or more, so the benefit is really worth it (the downside : it uses more memory, but the engine is intended for middle/high-end graphics hardware, so we should assume at least 128 MB of video RAM to use the engine, which has been available on all cards or graphics chipsets for a decade, I think).

The real requirement is an OpenGL 3.2 capable graphics card. Older cards won't load the shaders. Simple as that.

I'm rebuilding a part of the 3D to match the new specs for the mac, and then we'll be ready for a release.
(I must admit the mac cost me extra time I hadn't planned for; as I had never used a mac before, I didn't know its opengl implementation was that poor ...)


Yue(Posted 2016) [#201]
Bobysait, I understand that your work has a price and you have to value your work.

I do not know how I could work for you, although if you need models I can create things that might interest you : trees, rocks, organic models, plants, to make a set of assets for the engine.


As for my salary, I am only a watchman in a building. ;)

Greetings. :)


RemiD(Posted 2016) [#202]
.


Bobysait(Posted 2016) [#203]
While I was implementing DrawText, I thought it would be a good addition to enable some text features for 2D.

So, the engine implements support for "Signed Distance Field" fonts
-> it keeps text clean at large scales

(on the screen, from top to bottom : unfiltered font, smooth font, signed distance field)


(and if you're wondering, yes, it's the Zelda ALTTP map in the background ... a 4096*4096 texture found on a rip website; it was only for 2D tests with tiles, viewport etc ...)
The background checker is only an 8-pixel texture with 4 squares (2 white, 2 grey); using the DrawTextureRect function and a scale on the texture to draw, we can draw patterns easily.


RifRaf(Posted 2016) [#204]
you should start a second topic.. this one is starting to take awhile to load all the images


Yue(Posted 2016) [#205]
Yes, new Topic please. :)


BlitzMan(Posted 2016) [#206]
Wait for a demo @Yue you might not even like it.
Seen things like this come and go like a dodo on this site.


Yue(Posted 2016) [#207]
@BlitzMan

Sorry, I do not understand, the Google translator does not help me much.

A demo?


Bobysait(Posted 2016) [#208]
Well, at the moment, I'm a bit stuck on a single thing related to the 2D
> The OpenGL 3 core profile does not provide immediate mode, and the mac implementation also removes the compatibility profile, so we can't use immediate mode at all.
Of course I've rebuilt the 2D to use modern stuff, but there is one performance issue if we use 2D in a specific way, which I must admit is causing me trouble :

In immediate mode, we can switch blend modes on the fly to render rects/ovals etc ... on top of each other; as it's immediate mode, there is no problem displaying alpha-blended geometry, and immediate mode is pretty fast.
A blend change always has a cost, but that cost is fairly acceptable.

Without immediate mode, we have to make a choice about the rendering method :
Method 1 - store everything in VBOs and render only on buffer changes (to prevent massive transfers on each state change -> it renders the stored geometry when the current buffer is modified).
This method works very well and is way faster than any other, but it brings a z-order issue (a famous problem in 3D -> order-independent transparency).
Because the geometry is stored in layers (one per blend mode, and some more for textures), when the layers are rendered using the depth test (each geometry is stored with its z-order as its depth component), an alpha geometry will write (or not) to the z-buffer, and will prevent the following layers from writing blended pixels that are farther away (and, as it's a transparent geometry, we should see the geometry underneath it).

Method 2 - render the geometry on each state change : this method renders without graphical issues, but the state change has a much higher cost than a simple glBlendFunc call, so it's really recommended to switch blend modes and textures as rarely as possible.

The first method is just not really workable, because alpha/shade/light blending produces the graphical issue explained above ... but it can be correct for solid/masked stuff.
So, it would be exactly what blitz3d provides for 2D ... but faster.
I think it would be a shame to have a blitzmax 3D engine that loses the powerful max2d alpha blend capabilities.
The second method would be the way to go, but at the moment I can't find a way to make the calls that occur on blend changes any faster ...

So, it will be very fast in most cases, but very slow in some cases (which can be frequent enough to be a problem).
For example : create some grey rects with an orange outline
SetBlend SOLIDBLEND
For n = 1 To 50000
    x = Rand(GraphicsWidth())
    y = Rand(GraphicsHeight())
    w = Rand(10,50)
    h = Rand(10,50)
    SetColor 255,128,000
    DrawRect x,y,w,h
    SetColor 80,80,80
    DrawRect x+1,y+1,w-2,h-2
Next

This runs pretty fast and renders correctly.


Now, the code below shows the same thing, but replaces the orange outline with a blended shadow :
For n = 1 To 200
    x = Rand(GraphicsWidth())
    y = Rand(GraphicsHeight())
    w = Rand(10,50)
    h = Rand(10,50)
    ' a shadow around the rect
    SetBlend SHADEBLEND
    SetColor 250,250,250
    DrawRect x-10,y-10,w+20,h+20
    ' the rect
    SetBlend SOLIDBLEND
    SetColor 80,80,80
    DrawRect x,y,w,h
Next

Even with only 200 rects, the framerate is horrible due to the alternating blend modes (immediate mode is nice in this configuration because, even if state changes have a cost, it's faster than computing VBO data for each call).
And this kind of configuration is probably very frequent (whether or not it's a bad way to do it).
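For completeness, the usual user-side workaround is to batch by blend state: compute the rectangles once, then draw all the shadows in one pass and all the solid rects in another, so only two blend switches happen instead of four hundred:

' Same visual result, grouped by blend mode : 2 state changes per frame instead of 400.
Local rx:Int[200], ry:Int[200], rw:Int[200], rh:Int[200]
For n = 0 To 199
	rx[n] = Rand(GraphicsWidth())
	ry[n] = Rand(GraphicsHeight())
	rw[n] = Rand(10, 50)
	rh[n] = Rand(10, 50)
Next

' pass 1 : all the shadows
SetBlend SHADEBLEND
SetColor 250, 250, 250
For n = 0 To 199
	DrawRect rx[n] - 10, ry[n] - 10, rw[n] + 20, rh[n] + 20
Next

' pass 2 : all the solid rects
SetBlend SOLIDBLEND
SetColor 80, 80, 80
For n = 0 To 199
	DrawRect rx[n], ry[n], rw[n], rh[n]
Next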


So ... if anyone has already had to deal with this in a modern OpenGL implementation, I'd be glad to hear about it.

And by the way, the same problem occurs with textures if we alternate them frequently.
-> the best way to deal with it is to use a texture atlas, to avoid changing textures.
In the blitzmax vanilla sources, there is a zombie sample that shows exactly what we should not do :
-> the zombie frames are split across several AnimImages
-> each zombie's shadow is rendered at the same time as its frame
so it switches the current texture a lot of times and makes the framerate bad (it is still playable, but the framerate really drops a lot from what it should be).
I've rebuilt the zombie code to make it work very well by :
- creating a big texture with all the frames on it (a texture atlas)
- rendering the zombies' shadows first with the SHADEBLEND mode
- rendering the zombies on top with the SOLIDBLEND (or ALPHABLEND) mode
so in the end it only uses 2 state changes for the zombies.
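A hedged sketch of that reorganized draw, assuming the frames have been packed into one atlas image (DrawSubImageRect is standard BlitzMax; the zombie list, its fields and the atlas coordinates are placeholders):

' Two passes, two state changes total - TZombie, "zombies", "atlas" and the frame layout are invented for the example.
SetBlend SHADEBLEND                      ' pass 1 : every shadow
For Local z:TZombie = EachIn zombies
	DrawSubImageRect atlas, z.x, z.y + 48, 64, 24, 0, 256, 64, 24   ' shadow sprite stored in the atlas
Next

SetBlend ALPHABLEND                      ' pass 2 : every zombie frame
For Local z:TZombie = EachIn zombies
	DrawSubImageRect atlas, z.x, z.y, 64, 64, z.frame * 64, 0, 64, 64
Next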

Whether or not it should have been done that way in the original code (for obvious reasons), it's still a different way of coding, so it could lead to big changes in existing max2D code.

So, those restrictions are a real problem for now, and I'll have to find a solution as soon as possible, otherwise the 2D part of the engine won't be really usable in a "generic" way (or at best would be very restrictive -> it works really well for large batches at once, like particle systems or lots of stuff using the same blend mode, which covers the most common usage of 2D, so it's not that useless).


RemiD(Posted 2016) [#209]
I don't understand everything that you wrote, but :
why don't you forget about "real" 2D and instead create a pixel precise lib to position textured shapes (like lines, rectangles, circles), all in one surface with one texture, with texture filtering deactivated ?


Bobysait(Posted 2016) [#210]

why don't you forget about "real" 2D and instead create a pixel precise lib to position textured shapes (like lines, rectangles, circles),


Because I want a "real" generic engine, not a framework with restrictive stuff (and this kind of tricks is not reliable for modern implementation)

Anyway, I found a good compromise by disabling depth writes for all the blended stuff.
So, solid and mask blends are accurate and work with the z-buffer, while alpha/light/shade blends only use the depth buffer for testing, but do not write to it.
It's not accurate if a transparent shape is supposed to be sandwiched between 2 "light blended" shapes, but it should fit almost every purpose.
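In raw OpenGL terms (generic GL, not engine-specific), that compromise is just toggling the depth mask between the layer groups:

' Pub.OpenGL-style pseudo-pass : solid layers write depth, blended layers only test it.
glDepthMask(GL_TRUE)       ' solid / masked layers : depth test + depth write
' ... render the SOLIDBLEND / MASKBLEND layers ...
glDepthMask(GL_FALSE)      ' alpha / light / shade layers : depth test only
' ... render the blended layers, back to front ...
glDepthMask(GL_TRUE)       ' restore the default for the next frame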


all in one surface with one texture, with texture filtering deactivated ?


In a way, that's what I do, except that it uses more than a single surface (and there is no such thing as a "surface", but ... you could imagine it's something similar).
It uses virtual 2D layers that hold a VAO and 4 VBOs per layer (with a layer per blend mode).

In the end, you won't notice how it works; you'll just use SetColor/SetAlpha/SetBlend/DrawRect/DrawOval/etc ...
All of the above is just internal stuff.

But I think I'll keep the 2D layer class public, so that we can have static 2D geometry (which is faster to render > and we often have a lot of static stuff in 2D ... like UIs, maps, HUDs etc ...); a layer is probably similar to a 3D surface.
Then we'd just have to hide/show the layer to enable or disable it in the 2D render pass.


RemiD(Posted 2016) [#211]
For your info, and if I understood correctly, the nuclearglory graphics engine is all 3D (so the 2D stuff is made using surface(s) and texture(s) with the appropriate texel scale, to get pixel precision if needed),
and Tank Universal 2 was made using this graphics engine, so it seems good enough to create demos/games (even if I like being able to work with images/pixels, everything can probably be reproduced using textures/texels, especially when texture filtering is deactivated).