What's the role of an engine?


Yasha(Posted 2013) [#1]
(Another in the "Yasha asks shockingly stupid questions" series.)


To expand on the title: where is the threshold between dynamic, flexible control of the pipeline on one side, and hardcoded/pre-authored effect sets and tools on the other? How do I place my project relative to that line, and how do I take advantage of knowing it to find suitable tools?

I come from a fixed-function background, having started out with B3D. In that context, having an engine with a bunch of preselected effects you can tag onto objects makes sense. Directly using the fixed functions of OpenGL is nightmarish (in my opinion), and anyway that's pretty much all the underlying system was, since effects were implemented directly in hardware. The engine composes the low-level effects into modular, object-level units and presents them to the user. OK.

After previous adventures in the "stupid questions" vein, I've finally got around to learning about shaders, and truly they are the work of the gods. They can do anything! Literally! Amazing! But this is where it gets baffling: next-gen superpowered AAA engines like Leadwerks and Unity have a feature list... consisting of pre-packaged effects. (In a way, the fact that they "have a feature list" mentioning effects at all seems to be the core of what I'm having difficulty with here.) So the new giants of the playing field have leveraged the awesome power of the programmable pipeline... to create a new fixed set of effects? So that we can just pick and choose from a bunch of predetermined tags for objects just as we did in fixed-function engines? (e.g. whether skinning is implemented on the CPU or the GPU hardly matters; imposing "one way to do it" on the user makes it effectively the same.)

I'm not knocking this by the way - if I wanted SSAO, bump mapping, or soft shadows, I'd be very grateful that someone had already written and optimised the shaders for them, and this is a good thing and worth paying for etc. etc.

The reason I'm thinking about this, of course, is that I'm not interested in those things, and in one of my own projects I want to experiment with radically non-traditional rendering techniques that absolutely, definitely require being able to use completely original vertex, geometry and fragment shaders, and have no use whatsoever for your mere mortal notions of "lighting" or "texturing" (let alone something so mundane as shadows!). However I'm still interested in the CPU-side benefits of an engine over raw OpenGL - entities arranged in a scene graph, camera objects with assigned viewports, etc. Logical space doesn't change much just because the rendering system does.

(I don't want to sound like some kind of special snowflake, but more that I'm just overwhelmed by the insane possibilities ready to be explored after finally breaking out into this brave not-actually-new world of programmable pipelines. Also that just using shaders to make "effects" faster is one hell of a wasted potential.)


Basically what it looks like to me is that a lot of rendering engines by definition seem to restrict and control access to the pipeline, because if they offered completely unrestricted shader authoring, they basically wouldn't be providing "features" so much as pre-written code snippets. And that by choosing to use a rendering engine there must be some amount of lock-in to its re-fixed-function pipeline (e.g. the fact that the description of Unity's features involves specifying "post-processing" effects as opposed to ...effects, or in fact uses the word "only", indicates there's a whole lot of stuff you can't do that you should be able to, like implement a different shadowing system).

So where's the middle ground if I don't want to work in raw frameworkless OpenGL, but also want a completely dynamic rendering pipeline? Implementing a simple scene graph ain't that hard, but there are a whole lot of related logical-space things it would be nice to have robustly pre-authored and available (e.g. cameras, meshes, the ability to tag "entities" with completely original individual pipelines).

Does the concept of an "engine" still apply at this level? Or does the notion inherently impose some level of fixedness and lock-in to the pipeline? What amount of customisability do the various current solutions actually offer? Does wanting to write all my own GLSL essentially defeat the whole point of a graphics engine? Does that mean "non-graphics" engines exist (or could exist) and that the graphical component of my program would need to be "raw"? etc.


I'm not specifically asking for tool recommendations here - although if you have 'em, sure, why not - but more for help with the floundering and the trying to work out concepts in a completely unfamiliar and unknown engine landscape. I am in an alien place, program-architecture-wise, and don't understand what I'm looking at any more!


Pre-emptive counterpoints:

-- I haven't actually shopped my code around and tried out many engines, working out the answers by experiment, because a) I'm not willing to invest weeks of time and mental energy learning e.g. Unity only to establish that it won't do what I want; and b) that won't actually help with the conceptual/philosophical questions at root here because all it does is demonstrate how Unity/Leadwerks was already built, not the ideas behind it.

-- I know what I want to do with the pipeline itself because it turns out shaders are easy to understand. The question is about the surrounding code and loading them into a framework, not the effects or shaders themselves.



Thank you for reading. This is probably a confusing and incoherent question. That's because I am very confused.


virtlands(Posted 2013) [#2]
I am in an alien place, program-architecture-wise,...

Good luck on it. 3D engines are alien to me too.


Kryzon(Posted 2013) [#3]
What kind of game are you trying to make?
Even if you're after an experimental, non-traditional rendering scheme, it all comes down to placing pixels on the screen (literally: the last thing a pixel shader does is define the pixel's color with an RGBA value - see the sketch just below).
You'll also need to play audio, solve physics and send network data; you can explore a lot in terms of content, but the way you deliver that content is mostly paved already - and not by any game engine, but by the hardware we're all using.
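
As a minimal illustration (just a sketch; the output variable name is arbitrary), a GLSL fragment shader can do nothing at all except emit that RGBA value:

    #version 330 core

    out vec4 fragColor;

    void main()
    {
        // Whatever else a shader computes, its end product is a single RGBA
        // value for the fragment (pixel) being shaded.
        fragColor = vec4(1.0, 0.5, 0.0, 1.0);
    }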

Regarding engines, it's a tradeoff. If you want reliability, if you want to be able to code something and have it running on several different targets with minimal custom programming, then you need to give up some freedom.

We can see a 'game' engine as a collection of other sub-engines, such as one for rendering, another for audio, physics, networking etc.
If I understood correctly, you want a 'game' engine where the rendering sub-engine is replaceable or at least extensibly modifiable.
The access you have to each sub-engine is heavily influenced by how proprietary that engine is. Leadwerks and Unity are proprietary engines, and so the extent to which you can alter their internals is very limited.
Unity (the Pro version) has the option of using native plugins (in a way, like a Blitz3D userlib). It's something you can consider.

Most of these special-effects that come packed with the engines, as you mention, are there to cater to the "I like shiny things" audience: developers who want to drag-and-drop effects onto their scene. This is very similar to the engine coming bundled with a pack of textures, models and sounds that you can build your projects with.
I think most of the shader-based special-effects from Unity are coded using their material system with a scripting language called ShaderLab. The relevant point here is that most of these special effects are "open" in that they consist of files with shader code, fixed-function parameters and custom settings that you can modify, extend etc.
Other effects, like the shadow-mapping, are to my knowledge not open: you can only turn them on or off and alter parameters. I believe this is because they are heavily optimized internally, so allowing modification would mean exposing part of the engine's internals.
Since Unity offers render-to-texture, you can turn off Unity's native shadow-mapping and create your own with shaders, and this can have almost any look you want.
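
The underlying idea isn't Unity-specific, by the way. In plain GLSL terms the second pass boils down to roughly the following (only a sketch with made-up names; the first pass simply renders the scene's depth from the light's point of view into shadowMap):

    #version 330 core

    uniform sampler2D shadowMap;   // depth rendered from the light's point of view (pass 1)
    in vec4 lightSpacePos;         // fragment position transformed by the light's view/projection
    out vec4 fragColor;

    void main()
    {
        vec3 proj = lightSpacePos.xyz / lightSpacePos.w;   // perspective divide
        proj = proj * 0.5 + 0.5;                           // map to [0,1] texture space
        float closestDepth = texture(shadowMap, proj.xy).r;
        // Small bias to avoid "shadow acne"; darken the fragment if something sits
        // between it and the light. How it's darkened (or stylised) is entirely up to you.
        float lit = (proj.z - 0.005 > closestDepth) ? 0.3 : 1.0;
        fragColor = vec4(vec3(lit), 1.0);
    }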

In a way, offering shader-based special-effect packages like these along with the engine is also interesting because you can learn a lot from them, even if you don't end up using them. Certain parts of special effects code can be useful to you, like rendering arbitrary data to a texture and retrieving data from that texture so as to compute particle systems etc. on the GPU.
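
To give a rough idea of what that looks like: a particle update can be written as a fragment shader that reads positions and velocities from floating-point textures (one texel per particle) and writes the new positions into another texture attached to a framebuffer. A sketch, with made-up names:

    #version 330 core

    uniform sampler2D positions;    // current particle positions, one texel per particle
    uniform sampler2D velocities;   // current particle velocities
    uniform float deltaTime;

    in vec2 texCoord;
    out vec4 newPosition;

    void main()
    {
        vec3 p = texture(positions, texCoord).xyz;
        vec3 v = texture(velocities, texCoord).xyz;
        // The "color" written out is really the particle's updated position,
        // stored in a float render target for the next frame to read back.
        newPosition = vec4(p + v * deltaTime, 1.0);
    }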

It's important for you to consider if what you're trying to achieve is something these engines already allow you to do.


Yasha(Posted 2013) [#4]
We can see a 'game' engine as a collection of other sub-engines, such as one for rendering, another for audio, physics, networking etc.
If I understood correctly, you want a 'game' engine where the rendering sub-engine is replaceable or at least extensibly modifiable.


This small comment actually helps enormously in terms of getting my thoughts in order and understanding the scenario. Thanks!

It also seems like it fits pretty closely with the architecture of the main engine that I was actually thinking about using, which is Canardian's beloved Urho3D. That's built out of a bunch of separate sub-engine modules and is open-source, so it should be possible (for a competent user) to both examine and take apart as necessary; and in fact it advertises a configurable rendering pipeline as one of its primary features. Unfortunately, when I was looking at it before, my baffled perspective meant I couldn't really understand how to take advantage of this, or what the description of the "configurable" pipeline actually meant (it's "ubershader+permutation based", for whatever that's worth). Will have to try again with an eye to how it can be reconfigured - perhaps into an Urho-based engine with a different render module. I will take the advice to pay attention to the shaders other engines let one use, for ideas and inspiration; that seems like a good way of assessing flexibility without needing to commit to actually building something.
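
(As far as I can tell, "ubershader + permutations" just means one big shader source that the engine compiles many times with different #define lines injected, so each object gets a stripped-down variant with only the features it needs. Something like this sketch - the define and uniform names here are only illustrative, not Urho3D's actual ones:)

    #version 330 core

    // One shared source; the engine injects a different set of #defines (after the
    // #version line) for each permutation it needs - with or without a diffuse map,
    // skinning, and so on - and compiles a separate program for each combination.
    #ifdef DIFFUSEMAP
    uniform sampler2D diffuseMap;
    #endif
    uniform vec4 materialColor;

    in vec2 texCoord;
    out vec4 fragColor;

    void main()
    {
        vec4 color = materialColor;
    #ifdef DIFFUSEMAP
        color *= texture(diffuseMap, texCoord);   // only compiled into permutations that define DIFFUSEMAP
    #endif
        fragColor = color;
    }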


rendering arbitrary data to a texture and retrieving data from that texture so as to compute particle systems etc. on the GPU.


Indeed, indeed. I found shaders themselves pretty incomprehensible up until I stopped thinking about games and graphics altogether and spent some time learning about GPGPU programming instead; but they're so obvious and simple to understand once you view all graphics as a special case of GPGPU programming (like software rendering is a special use of the CPU), rather than as an arcane way to generate blends and effects.


Gabriel(Posted 2013) [#5]
e.g. the fact that the description of Unity's features involves specifying "post-processing" effects as opposed to ...effects, or in fact uses the word "only", indicates there's a whole lot of stuff you can't do that you should be able to, like implement a different shadowing system).


There seems to be a misunderstanding here. Unity doesn't lock you into any of that. You can absolutely write your own shadowing system or lighting system if you want to. There is at least one user-created shadowing system for Unity. You can write your own geometry, vertex and fragment shaders as well - not just post-processing stuff. It's pretty loosely integrated too, so you can use as much or as little as you want. Want your own lighting system but also want to piggyback on the inbuilt shadows? You can do that.

There are Minecraft-like voxel systems and custom landscape/terrain solutions too.

I'd imagine this would be the case for something like Ogre too. Without knowing exactly what you want to do, it's hard to say, but it doesn't sound to me as though you need to write your own renderer. Most flexible engines should let you use their scenegraph, occlusion handling, etc. without restricting what you want to do in terms of geometry, lighting, shadows, etc.