Worklog for ImaginaryHuman

Creative Environment

New worklog(Posted 2009-03-25)
I decided to branch off into a new worklog dedicated to my new programming book which I am now focussing on. You can join the journey here:

http://www.blitzbasic.com/logs/userlog.php?user=8053&log=1671

I'll be coming back to this worklog about my long-term project sometime soon.

Back to it(Posted 2009-02-27)
Okay so it's been about a month since my previous entry, so here is an update.

For the past month I've been working hard to try to find a good compression algorithm. I've tried out many, many ideas, some better than others, some really quite bad, some that seemed unnaturally promising but turned out to be flawed. It's been really interesting, exciting even at times, but for now I am putting all this aside to get back to my game/editor project. After four false positives, where I thought I'd found something yielding amazing compression only to discover a bug, flaw, or mistake, it's time to let it go and get back to something more productive.

Compression is a really difficult area to make advances in. It's not like making games or open-ended applications. With compression you're trying to squeeze as much excess out of an image as possible to make it smaller. The whole aim is focussed around finite limitations and their reduction. There is a dead-end to that, because you can only compress data so far, and the approaches already available are quite capable. I found it difficult to compress any better than a combination of `filters` and zlib, i.e. like PNG, especially when I found that certain applications produced much smaller PNGs than those I was originally testing against. If I could find some kind of golden algorithm it would be like striking it rich overnight, but such gems are rare and seldom come by. So I am ceasing the search for now. I may come back to it if I am further inspired.

Now I have to read some of my own prior worklogs to refresh myself about exactly what it is I'm really working towards. I'm going to focus on getting more code written in the areas of graphics, object organization and scripting. I had wondered whether to stop the current path of quite advanced (and complex) engine features and focus on a much simpler, more specific piece of code for a single game, but while that is tempting I still think the bigger project will yield much better results in the long term. It might take a while to get there, but the creative potential of it will be much greater.

Compression(Posted 2009-01-24)
Ahhhhhhhhh, how lovely art thou, computational distractions.

Yes, the past week or two has had me consumed with one of the topics that I find fascinating - image compression. This harkens back to the summer of 1994, back when AMOS Basic was my language of choice on the Amiga, and before Blitz Basic had much impact (or even existed?). I spent most of the summer learning some basic compression techniques, trying to get my head around stuff like Huffman coding and sliding windows and all that. I made myself a little test-bed application where I could experiment with compression techniques. I wasn't doing anything very advanced, because back then I didn't understand the more advanced stuff, but I did come up with a simple compressor (which took AGES to run) based on a combination of run-length encoding and finding repeated strings. It didn't have any fancy dictionary or Huffman-like cleverness, but it did work and did compress most images at least a little.

Since then I've managed to understand some of the more advanced algorithms, except of course JPEG and JPEG 2000, which are still too mathematically intense for me. I can understand how formats like PNG work, and PNG seems to be a pretty good standard for lossless images. But I thought to myself, well, maybe there is a way to do even better lossless compression.

Lossy compression, where you lose some of the original clarity, color, definition or information in clever ways that you `least notice`, somehow doesn't have much appeal to me. If I could have my way I would do away with compression altogether and just go with pure RAW data everywhere. After all, compression arises from wanting the current storage space to hold more than it can, or to transfer more over a network in less time, etc. - trying to get ahead of where the technology is. However, I like quality more than quantity, so I tend to prefer my graphics (for example) to be kept in their pristine full-quality condition.

So I started recently thinking again about lossless compression, doing some research, coming up with tonnes of ideas to try. I made myself a test-bed application in BlitzMax to try out various algorithms and combinations thereof. There is some interesting stuff out there, and some interesting avenues that I have yet to explore. So far I've been using the normal `compress2` zlib compressor as my main back-end compression algorithm. I don't really want to get into trying to write yet another repetition-reducer at this point. Zlib (as used in png) does a pretty good job and I don't know that I'd be able to come up with anything that does what it does better.
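
For reference, the shape of that back-end pass is tiny. Here's a minimal sketch using the compress2 binding from BlitzMax's Pub.ZLib module - the worst-case buffer size is zlib's documented rule of thumb, and treat the exact signature as approximate if your module version differs:

' Minimal zlib back-end pass via Pub.ZLib's compress2 binding.
Import Pub.ZLib

Function ZlibPass:Byte[](source:Byte[])
    ' zlib's documented worst case: source size + 0.1% + 12 bytes
    Local destLen:Int = source.length + source.length / 1000 + 12
    Local dest:Byte[] = New Byte[destLen]
    compress2(dest, destLen, source, source.length, 9)   ' level 9 = best compression
    Return dest[..destLen]   ' compress2 updates destLen; trim to the actual size
End Function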

Where I do see potential is in the area of pre-processing. The way I see it, traditionally most people look at compression in terms of `I have this existing data, in this specific form, and I want to compress it`. Then we set about trying to find repetition patterns, runs of bytes, etc. based on what the data contains and how it is arranged. But very few compressors do anything to make the data `more compressible`. PNG has filters whereby it applies either a simple subtraction from the previous pixel, or the above pixel, or an average of adjacent pixels, or the Paeth filter, which is basically another prediction scheme (and which I find works worse than a simple subtract in most cases). All of these attempt to change the input data prior to sending it through the zlib compressor. And this is the kind of area which I think is open to advances.
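
Those filters are only a few lines each. Here's a sketch of the two I keep coming back to - the Paeth predictor straight out of the PNG spec, and the simple subtract-from-the-left filter that often beats it:

' Paeth predictor from the PNG spec: choose whichever neighbour
' (left a, above b, upper-left c) is closest to the estimate a+b-c.
Function PaethPredictor:Int(a:Int, b:Int, c:Int)
    Local p:Int = a + b - c
    Local pa:Int = Abs(p - a)
    Local pb:Int = Abs(p - b)
    Local pc:Int = Abs(p - c)
    If pa <= pb And pa <= pc Then Return a
    If pb <= pc Then Return b
    Return c
End Function

' The simple `sub` filter: each byte becomes its difference from the
' byte to its left. Done in reverse order so it is trivially reversible.
Function SubFilter(row:Byte[])
    For Local i:Int = row.length - 1 To 1 Step -1
        row[i] = (row[i] - row[i - 1]) & $FF
    Next
End Function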

Rather than focussing on how to compress x amount of data better without changing the data, we should focus on `how can I change this data so that it is NOT this data, so that it is more ideal data, which I CAN compress`. In other words, it's kind of like saying `instead of changing the compressor to compress better, let's change which image we're compressing`. You could even say it's like saying `I can't compress this image very well, but that other image with all the sequences of bytes and long stretches of 0's is much easier for me to compress, so let's compress that image`. The zlib compressor is only one part of the compression system; there have to be contributions from other areas as well.

So I've been focussing not on making a better zlib, but on changing zlib's perception of the data it's trying to compress. I asked myself, what is the ideal most-compressible kind of data, and how can I turn the current data into that data in a reversible way? What data can zlib compress extremely well, and how can I convert my data into it, reversibly, with as little extra data as possible? Basically, removing as much variability from the data as possible is the key. Random data is harder to compress than long stretches of the same values. The ideal image to compress would be one where every pixel is the same color, i.e. all 0's - all black - nothing.
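
To make that concrete, here is the kind of reversible transform I mean: a one-dimensional delta pass turns any run of identical values into a run of zeros, which is exactly the sort of input zlib loves. A solid-colour scanline becomes one leading value followed by nothing but zeros:

' Reversible delta transform: runs of identical values become runs of zeros.
Function DeltaForward(data:Int[])
    For Local i:Int = data.length - 1 To 1 Step -1
        data[i] = data[i] - data[i - 1]
    Next
End Function

Function DeltaReverse(data:Int[])
    For Local i:Int = 1 Until data.length
        data[i] = data[i] + data[i - 1]
    Next
End Function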

The ideal would be to find a transformation of the data which results in an all-black image, or something close to it, and then reverse it. Now, this is where we have a problem. If I can turn one image into all black pixels, then potentially many, many different images can be turned into that same all-black image - so how does a set of all-black pixels distinguish between them? How can one pure set of the same numbers decompress into countless multitudes of different images? You can't do it without at least some extra data. Somewhere there has to be data which says `this is how this image differs from all others`, hopefully in the most refined and minimal representation possible. Finding that minimal representation is the key. After all, a given image has a limited number of pixels, and therefore there is a limited number of images which those pixels can represent, so somewhere there is a limit and we don't have to distinguish between infinite images, just a very large number of them.

Still, this is tricky to do. I have tried many different approaches so far and most of them have ranged from poor compression to terrible compression to gross expansion. One thing you don't wanna do is make the original data bigger at the end of all this work! However, I have found a few nuggets so far, which could be the basis of something.

I have developed a simple technique which, as it stands, combined with a zlib compression pass at the end, produces smaller image files than PNG. Not a whole lot smaller, maybe about 10%, but that's not bad. It also does a pretty okay job with highly random data, shaving off a small percentage where PNG and even lossless JPEG 2000 are quite poor. I also found that there is a certain degree of benefit to running the resulting output back through the whole system again, more than once, for even better results. At best I managed to compress a 1024x768 24-bit image down from about 2.5 megabytes to about 1 megabyte, where the equivalent PNG was about 1.5 megabytes. That was after 3 passes. The fourth pass seemed to increase the size slightly, so there is a limit to how far it can go. But that it can improve over multiple passes at all is very promising, plus it is reasonably quick.
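
The multi-pass behaviour is nothing exotic, just a loop that keeps re-feeding the output while it shrinks. A sketch, with MyTransform standing in for my real pre-processing pass and ZlibPass being the zlib back-end from earlier:

' Placeholder for the real reversible pre-processing pass.
Function MyTransform:Byte[](data:Byte[])
    Return data   ' identity here; the real pass filters the data first
End Function

' Keep re-compressing while each pass still shrinks the data.
Function MultiPass:Byte[](data:Byte[])
    Repeat
        Local smaller:Byte[] = ZlibPass(MyTransform(data))
        If smaller.length >= data.length Then Return data   ' pass made it worse - stop
        data = smaller
    Forever
End Function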

I did have one false eureka moment, where I was accidentally outputting a copy of the source bytes instead of the compression codes I'd generated, and the resulting compression was fantastic, like 1:30 compression down to less than 100 kilobytes! For a moment I thought it was going to change the world as we know it, but alas it all came down to a simple typo! :-) There have also been some techniques that I thought for sure would work really well which turned out to be awful. Oh well. That's why it requires a lot of testing and trial and error, trying anything and everything just in case it might be the one thing that really does work. Sometimes you just really can't tell until you try it.

So anyway, I have numerous ideas still to explore. I have set myself a target of getting that image down into the few-hundred-k range. I think JPEG 2000 lossless compression is one of the best out there; it can usually get an image that size down to between 500k and 1 meg. I would like to do better. I would be thrilled to find a way to compress down to like 100k, because that opens up so many exciting possibilities - lossless video, for example.

I have looked at the Burrows-Wheeler transform, which is a block-sorting algorithm, but I have decided not to use it at all. I am not impressed enough with the compression results or the compressibility of its output data. I would like to come up with something unique and original. So for right now this is the side project which is consuming all my thoughts. I will get back to my bigger project sometime soon.

I am thinking that eventually this improved compression system will become incorporated into my creative environment and be useable in a wide range of products - storing images, game assets, movies, audio, meshes, etc.

Languages(Posted 2009-01-11)
Recently I have been researching and designing how to approach a `script language`. I have already been working on making scripts execute and adding some functionality like calling OpenGL commands etc., but there is still much of the core language to define. I've looked at assembly languages like Motorola 68000 assembler (which my popular Mildred graphics library was written in), and bytecode languages like the Java Virtual Machine's bytecodes. I also looked at a variety of higher-level languages like Java, JavaScript, Basics, Rebol, Python, Ruby, etc. There are lots of approaches out there and many advanced languages to learn from. I do not like languages that require the user to adapt to the computer in order to get stuff done. The syntax of many languages like Lua, C, C++, etc. can be so cryptic - no beginner could hope to grasp them quickly. So here's what I came up with so far.

My virtual machine executes `opcodes` just like a CPU does. Each opcode is the equivalent of a `bytecode`, possibly followed by any number of immediate parameters, and is also the equivalent of what assembly language represents as instructions - one assembly language instruction converts into one binary `machine code` instruction which the CPU can understand. The difference is that bytecodes run on a virtual CPU while real machine code runs on a real CPU.

So I have this virtual machine which executes my opcodes, and opcodes can be followed immediately by parameter data, all in a single stream. It's similar to regular assembler programs: opcode + immediate data = machine code.
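
In skeletal form that execution core is just a fetch-decode-execute loop over the opcode stream, reading immediate operands in-line. A minimal sketch - the opcode numbering here is invented for illustration, not my real instruction set:

' Minimal fetch-decode-execute loop over a single opcode stream.
Const OP_HALT:Int = 0
Const OP_PUSH:Int = 1   ' followed by one immediate value
Const OP_ADD:Int = 2    ' pops two values, pushes their sum

Function Execute(code:Int[])
    Local stack:Int[256]
    Local sp:Int = 0
    Local pc:Int = 0
    While pc < code.length
        Local opcode:Int = code[pc]
        pc :+ 1
        Select opcode
            Case OP_PUSH
                stack[sp] = code[pc]   ' read the immediate operand in-line
                sp :+ 1
                pc :+ 1
            Case OP_ADD
                sp :- 1
                stack[sp - 1] :+ stack[sp]
            Case OP_HALT
                Exit
        End Select
    Wend
End Function

' Usage: push 2, push 3, add them, halt.
Execute([OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT])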

Initially I am going to define a fairly low level `assembly language` to run on the virtual machine. This will let you do simple stuff like add two pieces of data together, read something from memory, push something onto a stack, compare two values, branch to something, etc.

Instead of higher-level languages `compiling down` to this low-level assembly-like format, I will use a live reverse-engineering to `view` the assembler program in the form of other languages. Essentially, the assembly language program `projects` the higher level program, not the other way around. This means there must be a direct `mapping` of assembler instructions to higher-level functionality. It will be possible, for example, to be writing some program in a basic-like syntax, suddenly decide that you know how to do something in a c-like language, and at the flip of a switch display the program as a c-like language. You would make the necessary changes in that language, then flip back to basic. I think it is important to acknowledge that the IDE environment has a lot to do with making a language useable, and as such it must be considered a major factor in the language. In some ways the IDE will HAVE to implement certain support behaviors in order for the language to work. Why should languages be incompatible and untranslatable between each other? Where is the harmony and cooperation there? We should all be on the same page.

The languages implemented on this first tier level would all be interchangeable, provided there is a direct correlation of functionality between the languages. It may also be possible to intermix commands from different equalized languages into the same program. Assembler programs could also be `viewed` in other ways, besides their native form, they might be remapped to another assembly language with a different instruction set, they might be viewed differently by an IDE, or they might be displayed in some graphical programming editor. I think it is important when designing such flexibility to have a view toward what kind of ways the user might want to create and modify programs, including how different IDE's would provide `help` to the programmer. I am envisioning a pretty nice IDE, as one of many possibilities, based on OpenGL with very fast responsiveness and many dynamic tools.

There would not be any compiler needed. Editing of programs would actually be done IN the assembly language, directly self-modifying programs at the opcode level and then displaying an interpretation of the changes. With a direct correlation between higher-level commands/syntax and lower-level opcodes it should be easy to basically `recompile` each instruction as it is modified. Compilation will be basically non-existent. There will be no need to translate a higher language into a lower one. Modifying a program immediately modifies its `executable` opcodes. The only way in which compilation might be considered is where one program indirectly executes another, and the act of in-lining the other program or `locking in` indirect references would be similar to creating a new executable. Another cool feature could be to be in a lower-level language, choose a command from a higher-level language and have its code broken down into the lower language for editing in-line - kind of an `insert code here`, similar to macros, which will also be a feature.

As well as allowing languages to be rapidly interchanged, based on a reinterpretation of the underlying assembler code, I plan for higher level commands to `break down` into lower level commands in-line. For example, a higher level command might say `open a screen`. If the developer wants more than the default kind of screen they might break it down into a lower-level program of which they may modify the parts they want. There could be a few levels of flexibility, whereby if you still aren't getting enough control to do what you want, you break the offending command down into lower-level code and modify it. You should be able to intermix code from multiple levels and also revert commands to a higher level if deciding against the modifications. This way a very very high-level `easy but generic` language can be used by the beginner, and broken down into more advanced flexibility as needed. New languages would need to define not just one command set, but also a breakdown of those commands into a sub-language, or several. A `language` then comprises multiple levels of languages all the way down to assembler.

Since I'm using a virtual machine I can control access and execution for security reasons if necessary, and prevent code from `doing bad things` if needed, all contained in a virtual environment. This should also provide protection against any kind of virus or malicious script. That said, what constitutes a `bad thing` is highly subjective depending on the person. Causing a pixel to be rendered onto an image undesirably could be considered malicious. Anything you do to any part of the system for any reason could be construed as malicious if the user sees it that way. Therefore all software is potentially malicious and you can't really stop ALL software from running. Trying to limit it is a pointless exercise, so you may as well let it all run and not be concerned. This of course then requires a real foundation of real trust.

So anyway, now the first step here is to define the main portion of the assembly-like/bytecode-like language upon which all other languages will be based and extended. It should be fairly simple to achieve since such languages typically have few simple/abstract commands.

One other thing I've been working on is writing a converter for the OpenGL glew library. BlitzMax already comes with GL 1.1 support and glew support up to GL 2.0. It includes a program `glew2bmx.bmx` in the glew module folder, which converts the standard glew.h header file into BlitzMax code. Coupled with glew.c and other sources it then provides access to and detection of OpenGL commands up to 2.0 plus extensions. So my task is to a) download the latest version of glew, which supports up to OpenGL 3.0 plus newer extensions, b) run it through the converter, c) store most of the output in glew.bmx, and d) use this output to build a wrapped script-language API.

My intent is to make ALL OpenGL commands and extensions available within script programs. I think this will be a good feature. This project isn't just about games, it's about a general environment for software. I am excited about creating higher level apps later which include support for things like script languages and OpenGL commands - imagine a graphics/imaging/animation application like gimp/photoshop/paintshop pro, where you can write scripts for every feature and use OpenGL to render whatever you like, to build shaders, to create 3D renderings, to do raytracing, image processing, whatever. I think that'll be pretty cool. It's going to be very open and flexible. A graphics application isn't just going to be a graphics app - an app is just one example of software running in the virtual environment, and so it can be expanded to do anything that any other software would do, beyond the scope of its design.

So for now, I gotta finish my API wrapper converter for GL, and test it, and also then put together the low-level assembler language for the virtual machine to run. Exciting stuff!

Update(Posted 2008-12-15)
Now that I have the script engine running as multi-threaded in a thread pool, where each thread has its own virtual processes, virtual cpu and virtual task scheduler, I decided to switch gears a bit and head into the more graphical and organizational areas of the system.

I took a step back to look at the whole system and laid out the main features that I want it to have (as a minimum), to get a better sense of the whole and how it fits together. Now I am starting to home in on selected areas and I decided to get into some of the graphics programming for a while.

I would very much like to get the scripts to where I can actually write useful scripts and have them execute, ie open a display and draw a simple editor in which the scripts can be edited. Then I could start to work on development within the environment rather than printing test results to the console.

So I have switched gears and am focussing now on graphics, the display system, and the camera system.

Initially I have been working on making the full OpenGL 1.1 API available within scripts. It isn't too difficult, it's just a matter of wrapping each function in the necessary code to interface it with the script engine. I am about 85% done. I am not too happy about the overhead of having to wrap functions in functions with glue code, but at the moment I don't see a more efficient approach. In future I think I will explore more optimization possibilities.
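
To show what I mean by glue code, here is the shape of one wrapper. TScriptCPU and PopFloat are hypothetical stand-ins for my engine's internals, just to illustrate the pattern of popping script arguments and forwarding them to GL:

' Hypothetical stand-in for the script engine's per-thread state.
Import Pub.OpenGL

Type TScriptCPU
    Field fstack:Float[256]
    Field fsp:Int

    Method PopFloat:Float()
        fsp :- 1
        Return fstack[fsp]
    End Method
End Type

' One wrapper per GL function: pop the script's arguments, forward them.
Function Wrap_glClearColor(cpu:TScriptCPU)
    Local a:Float = cpu.PopFloat()   ' arguments come off the stack in reverse
    Local b:Float = cpu.PopFloat()
    Local g:Float = cpu.PopFloat()
    Local r:Float = cpu.PopFloat()
    glClearColor(r, g, b, a)
End Function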

I want the full API to be available to scripts for `in-line GL calls`. This will offer the raw API for anyone that wants to do something entirely or partially custom from within a script. This creates the possibility of writing multiple `engines` in one environment.

Instead of opting for some idea of an `ideal` rendering engine, or art pipeline, I decided to make it possible to write/use any kind of pipeline you like. Then on top of that I plan to write one or more layers of more abstract, higher-level graphics APIs with more advanced functionality, which makes it a lot easier to harness the power of GL and do cool things with it without lots of code.

So if you want to get down to rewriting the whole graphics system you can, or you can use one that either I or some other party has created. One thing I don't want to get into is locking the user into some prefabricated `engine` structure that creates limitations and insists on things being done in one specific way. Freedom is the key.

Once I'm done wrapping GL 1.1, I am going to look at implementing all available extensions. I am particularly interested in some of the newer features like framebuffer objects, floating point textures, shaders, etc. Then at some point I will have a good look at GLSL. I think I will support standard GLSL shaders using the normal API, and then will create a higher-level `easy shader language` system (or several) to leverage the power of shaders without all the complex math code.

I am still working on the design of the first higher-level graphics layer, which will sit on top of OpenGL and do some cool things. In addition to lots of rendering and artistic tools, it's going to make it much easier to use OpenGL's functionality.

Beyond this I am not sure what would follow. I have a display system in place to handle fullscreen/windowed displays and eventually will have support for multiple gui systems. My aim is to make it possible to create creators, to edit editors, and to basically give to the user the same freedoms that the author has. That means providing the tools and `capabilities` to the user, rather than specific applications of those tools. And where applications ARE provided, they will be fully editable.

One thing that did come to mind recently is that writing an operating-system-like system sheds light on how specific normal applications are. They tend to be focussed around a particular task or group of tasks. So for doing graphics you have paint programs and image processors, for handling audio you have some other sample editor, then for browsing the web you have a web browser, etc... I think this approach of tying functionality to form, or making form follow the task, tends to fragment the user experience into lots of seemingly different pieces of software.

The fact is that many of these different tools do some of the same things that other tools do. There is a lot of overlap. An image is an image is an image, whether it's in a game or a browser or an image processor or a presentation. The freedoms to operate on a particular part of the system should be present throughout the system, not isolated to whatever workflow someone intended to `lock in`. I foresee a kind of `functionality soup`, coupled with a `pool of objects/data`, where you can use any functionality with any object as you see fit, rather than being refused because it's `outside the scope of the application`.

Competitive applications vie for attention, they steal resources, they don't play friendly together, they don't cooperate, they are closed off and they are quite rigid and fixed in their ways. Enough of that. We need something much more open, much more flexible, and much more focussed on sharing. A unified system where all parts are equal. That continues to be one of the main focuses of this project.

Threads(Posted 2008-11-29)
Okay so it's been about a month with not much to write about. I have been trying to find time to work on this project but other things have come up, or I get distracted. However, I did spend some time today on getting the script engine to be multithreaded and have succeeded!

It wasn't quite as difficult as I thought it might be. Basically now I create a `thread pool` whereby instead of creating one thread per task that you want to run, you separate the execution of tasks from the existence of the thread itself so that a given thread can run multiple tasks. Then it is only necessary to create as many threads as there are CPU cores, plus a few more to fill in the gaps when things like mutexes are locked or threads are blocked.
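
In sketch form the pool is just a shared, mutex-guarded queue of workloads, with each pooled thread looping on `take a task, run it`. Something like this, assuming a threaded BlitzMax build:

' Thread pool sketch (requires a threaded build).
Global queueLock:TMutex = CreateMutex()
Global queueCond:TCondVar = CreateCondVar()
Global tasks:TList = CreateList()

Function PoolWorker:Object(data:Object)
    Repeat
        LockMutex(queueLock)
        While tasks.IsEmpty()
            WaitCondVar(queueCond, queueLock)   ' sleep until work arrives
        Wend
        Local task:Object = tasks.RemoveFirst()
        UnlockMutex(queueLock)
        ' ...execute the task on this thread's own VirtualCPU...
    Forever
End Function

Function SubmitTask(task:Object)
    LockMutex(queueLock)
    tasks.AddLast(task)
    UnlockMutex(queueLock)
    SignalCondVar(queueCond)
End Function

' One pooled thread per expected core, plus spares for when threads block.
For Local i:Int = 1 To 16
    CreateThread(PoolWorker, Null)
Next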

I have it defaulting to a pool of 16 threads, which I think should cover 4-, 8- or even 16-core CPUs (looking to the near future), but I have also tested it working fine with 250 threads. Much beyond 250 and the app starts to crash, either due to not enough memory to support the threads or because the o/s doesn't allow it. But I'm sure this will be plenty for years to come.

In the thread pool, the operating system takes care of allocating timeslices (multiplexing) to various threads, and locating them on available hardware CPU resources. With more threads than CPU cores the o/s starts to shuffle threads around and restrict their run-times so that other threads get a chance. It appears to give quite a random distribution to which thread gets run when, but that's ok. It would be nice to be able to specify exactly which thread runs on which core at which priority/timeslices etc, but no matter.

The second way that multitasking occurs is that each thread is now treated as though it is a CPU, or VirtualCPU as I call it. Each VirtualCPU features a full script execution engine with a multitasking pre-emptive script scheduler. The scheduler (as said before) works with background timesliced tasks, or interruptive timer-driven tasks. A script acts as a `workload` for the thread to execute, while the script scheduler for each thread works to manage which scripts or parts of script get potential execution time. The thread itself is running its own scheduler to manage its own scripted tasks. Multiply that for multiple threads and you have multiple simultaneous script execution engines, each capable of multitasking within their own collection of scripts.

It's working :-)

One of the issues I had was that I wanted each Thread type (which contains info about that thread) to have a local variable which would be available to functions called by the thread. There is no such mechanism in BlitzMax. You can't have a local that is accessible to functions within its scope without explicitly passing the variable as a function parameter. Believe me, I've tried many different ways to make it work and I've come to realize it's made impossible by the structure of BlitzMax as a language - i.e. because functions cannot *directly* access local variables outside the function (and Var defeats the purpose), I had to settle for passing the variable. This added a 10% drop in performance for function call overhead. Oh well. Short of turning all functions into methods I have to settle for this.

That aside, turning my script engine/scheduler into a multithreaded solution also entailed using `Me.field` instead of just `field` to access the thread's scheduler data. More overhead, but hopefully not too much. It still performs pretty well, in the region of hundreds of millions of commands per second.

While I've made the schedulers thread-safe (as far as I can tell so far), I have not yet thought about how to deal with shared object allocation/changes. For example, one thread could be operating on an object stored in the global data-storage area while another thread's script tries to do the same. I'm not sure if that means having to give every *object* a mutex, or grouping objects together, or some other solution. More thought needed there. So while the schedulers are thread-safe (!) there is still a requirement that separate scripts/threads do not write to the same objects (unless I know what I'm doing).

Another thing I need to work on is getting commands from a single script to distribute workload across multiple threads. Normally one thread runs one script, but ideally you want to be able to say something like `image process this image using 16 threads`, which should split the operation into 16 parts which get distributed to 16 threads and executed in a timely/interruptive fashion. This raises quite a few issues, particularly with how to distribute the workload quickly, and how to make it work while each thread is trying to schedule its own workload. Something I'll have to ponder for a while.
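
The fork/join shape I have in mind looks roughly like this: slice the data into one range per thread, hand each slice to a thread, then wait for all of them. ProcessSlice stands in for the real per-range operation:

' Fork/join sketch: split one large operation across worker threads.
Type TSlice
    Field pixels:Int[]
    Field first:Int
    Field last:Int
End Type

Function ProcessSlice:Object(data:Object)
    Local s:TSlice = TSlice(data)
    For Local i:Int = s.first Until s.last
        s.pixels[i] = s.pixels[i] | $FF000000   ' e.g. force alpha opaque
    Next
    Return Null
End Function

Function ProcessParallel(pixels:Int[], threadCount:Int)
    Local threads:TThread[] = New TThread[threadCount]
    Local chunk:Int = pixels.length / threadCount
    For Local t:Int = 0 Until threadCount
        Local s:TSlice = New TSlice
        s.pixels = pixels
        s.first = t * chunk
        s.last = s.first + chunk
        If t = threadCount - 1 Then s.last = pixels.length   ' last slice takes the remainder
        threads[t] = CreateThread(ProcessSlice, s)
    Next
    For Local t:Int = 0 Until threadCount
        WaitThread(threads[t])   ' join: block until every slice is done
    Next
End Function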

My intention now is that my engine will support multithreaded computation on a large number of cores, supported by the kind of script language that truly takes advantage of parallelism. And this is where the problem arises. Ideally you want like one script which runs on as many cores as possible, which clearly flies in the face of the execution model - you're supposed to only have one script per thread since each thread is isolated and possibly running on a separate CPU core. Since CPU design has kept boundaries between cores instead of implementing parallelization of single-threaded code in hardware (maybe in future?), we have to work around it.

What I'd like to do is to find a way to make a single-threaded (intended) script do as much parallelization as possible on the fly, as needed. Perhaps that would mean changing the scheduling model to having a single scheduler for multiple threads, or perhaps a way to quickly distribute a parallelizable operation to multiple threads without messing up the normal flow. I'm sure things can be learnt from processor design or similar. My ideal would be that whichever commands are not totally dependent on the previous command's results, are executed in a different thread. But I think that would mean quite a lot of overhead shuffling thread tasks around and messing with scheduling.

Another part of the parallelization mindset is to think about what kinds of tasks can be broken down into smaller parts, which applies particularly well to stuff with lots of data or computation - image processing, artificial intelligence, physics, collision detection, decompression, etc. - i.e. larger tasks.

I kind of feel that trying to turn a thread-isolated script into a parallel-executed program flow is like fighting against the way the hardware works. The hardware doesn't want to treat programs as `blending` across boundaries, or dynamically distributing to different threads automatically on the fly. It's something that could be done in hardware, but I think trying to do it in software is going to be too expensive. There is probably some point at which the tradeoff can be made between redistribution overhead and the time saved by making it run in parallel.

Anyway, I'm glad it's working as it is right now, although there is quite a bit more to do to turn the script commands into more of a `language`. At present all I have are a number of commands which do specific things and nothing that ties them together - no structural constructs like ifs, loops, branches, etc., no variables or memory storage - so there are still quite a few of the basics to code in order for it to become a useable language. I'm just happy right now that it could run 250 threads without crashing ;-)

Coding threaded apps is quite interesting, and poses strange problems that I didn't expect. Like, you create a thread to do some work, and that thread then hogs the CPU for a bit before you get the chance to spawn another thread. So you can't always control what is going to happen when, or whether it'll be trying to do it while you wanted to do the same thing with the same data. It's tricky, but kinda cool.

I have a bit of work to do to make other parts of the system thread safe and to add a few commands that deal with script editing, but things are coming along. I will be quite happy when this stuff is done and working so that I can move on to more fun stuff, like graphics, and actually making games out of it. :-D

The way I'm approaching scripts now is that almost everything in the environment will be run by scripts, including what would seem to be the environment's operation itself. Editors, tools, utilities, applications, games, environments, all will be script-driven. This means that the amount of blitzmax code is reduced, needs to be less specific, and that I can work in a higher level script language to put `most` of the actual environment/products together. The Blitz app will be basically like a virtual machine that provides the execution functionality/script handling and lower level resource handling, and events. Most everything else will be an open book. My own editors and such will be on an equal level with stuff added by users, and users will have all of the same privileges and capabilities that I will have in creating those editors/products. That makes them optional, which means you can strip things down to bare bones if needed. It's really a blank canvas.

I would like to be where I can work on game design and editing systems, but that will come later. You have to lay a strong foundation first. I was thinking today that I might focus more on making `an editor` first before making a game, because that will in turn make things so much easier and more productive. Trying to just make a game without an editor wouldn't be much easier than writing specific blitz code. I want to get away from having to write lots of code and more towards being able to operate a user interface. So that's what I'm starting to think about now.

I have to finally add that I am feeling some discord between the potential benefits of writing one specific game and writing a whole game-editing environment with high re-useability. It's really the difference between short-term goals and long-term goals, and sometimes I wish I would just focus on the short term. In the short term I could write a half-decent game in project-specific, non-re-useable BlitzMax code, from scratch, and have it finished in a few months to a pretty good standard. Hopefully I'm not boasting ;-) ... I just feel that if you write games on a project-by-project basis without any thought to longer-term issues, you might well ship the first few games sooner, but you lose out in the long term. With an eye towards re-use of code, generalization of functionality, flexibility rather than hardwiring, creating editors rather than just games, and having an eye towards the future in everything you do, I believe it will pay off in the long term. And I think the longer-term goal suits my style better - I am very patient and have great determination to keep going. Ultimately, someday you'll be watching this space when I announce that I've finally finished my super-mega does-everything ultra-engine game/app/everything editor-system thingamie. Hopefully it won't be TOO long-term, because sometimes the draw of a short-term solution tempts me with more immediate satisfaction. Gotta stay the course, though. I'll get there.

Once you have a good editor and higher level tools, development is accelerated *considerably*, it becomes much faster to produce results, much easier to experiment with adjustments, and much quicker to see progress - faster even than the progress you'd see from writing specific one-off game code. I am looking forward to THAT! I think I can be much more productive and prolific in creating good products by using higher-level advanced tools than trying to start from the ground up every time. Software should be made to take advantage of the computational power and the possibility of automation wherever possible, which can really help in development. The future is looking exciting.

Onwards and upwards.

Events(Posted 2008-10-11)
Upon getting my script system up and running recently I have sort of sidetracked into some other stuff. It's a *relief* to get a big chunk of code to a working state and to see it doing what it's supposed to, such that even if it's not entirely finished I feel like ditching it for a while and doing something else. I need some variety sometimes to keep things interesting.

So lately I am starting to think about events. As usual, getting into a new area of the engine entails some philosophizing, visualizing the big picture, and thinking of the future. And so the question arises - what are events? What are they in traditional operating systems? How are they normally handled? Why? Is this the way we want to do it? Is there a better way?

When I think about it, an event is basically a occurrence of some `cause` which may lead to `effects` somewhere in the system. The cause could come from outside of the system, such as from the user, or from a networked computer, or even from some object interacting with some other object.

Cause and effect are closely connected. I would go so far as to say that really neither of two objects causes the other to do something; they are both equally involved in creating the appearance of `causer` and `causee`. An example is two objects touching - even if one appears to be stationary, they are both involved in the event of `colliding` with each other. And since they are both involved in producing the `event`, they are both causal and they are both effects. So where is the event? The event doesn't really exist; it's an illusion of an event.

Let's also consider something inspired by holograms, in which the whole is present in every part: when an event occurs, it is occurring everywhere, and to everything. As soon as the event exists, it exists everywhere, and all parts of the environment are experiencing it. In computer terms this would mean that as soon as the user clicks a mouse button, the `mouse-down` event is now everywhere - all objects and all `applications`, and all networked computers, and all objects and applications on all networked computers, must instantly reflect the presence of the mouse click.

Events are basically a way of describing `change`, and change changes everything. Events are also communication, and therefore are intimately related to `message passing`, and message passing is also related to telling something to do something, which in turn becomes execution of code. It's all connected. Therefore, abstractly, an `event` is `any change in the system`.

In part, when an event happens, the occurrence of the event is *itself* an effect. What is an event but the content of its form? Whether it `causes` anything else to occur after it is merely a matter of perception on the part of everything else. Therefore events do not necessarily `cause` further events. Instead, observers/onlookers freely choose to use the event for their own purposes, and in their own way.

What this leads to, then, is a model which is somewhat modeled on reality. In reality people choose to perceive events how they please, giving them their meaning and choosing how to interact or respond to them. Events do not `do anything` without the free will of the observer and they have no inherent meaning. Therefore the effects of events are voluntary agreements and exercises in causality, ie "I choose what happens, I am not a victim of others".

In a computer software environment this would mean that a) any change can be considered an `event`, b) every part of the system has access to every event instantly and at all times, c) every part of the system can choose how it responds or uses the event, d) all events are openly shared, and e) the response to the event becomes a new event itself for others to use: events are circular. If events are circular, nobody has exclusive ownership.
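
As a data structure that model is tiny: events go into one openly shared pool, and each observer walks the pool and decides for itself what an event means. A sketch:

' Shared event pool: events are never dispatched to a specific target.
Type TEvent
    Field kind:String
    Field data:Object
End Type

Global eventPool:TList = CreateList()

Function PostEvent(kind:String, data:Object = Null)
    Local e:TEvent = New TEvent
    e.kind = kind
    e.data = data
    eventPool.AddLast(e)   ' openly shared; no owner, no target
End Function

' Each observer opts in to what it cares about and ignores the rest.
Function ObserverUpdate()
    For Local e:TEvent = EachIn eventPool
        If e.kind = "mouse-down" Then
            Print "this observer chose to react to the click"
        End If
    Next
End Function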

To get events to be everywhere means that the network has to get involved. Events have to be transferable across a network, which would ultimately lead to simple game networking, like passing keystrokes between opponents to update avatar actions.

So overall, at the lowest level of the system are the hardware components - the local computer, the networked computer, the low-level network connection protocols, and input from operating system events. Above that there's the more abstract concept of pervasive, omnipresent objects. Within those objects you have various kinds of objects, including event objects, and then everything else is built on top.

An `event object` isn't so much a message that gets copied from one place to another, because that's inefficient; it's more of a sharing of data, i.e. we both look at the same data to share it and neither of us owns it. In other words, we all share access to event objects.

Permission to access an event isn't naturally exclusive. Rather, all events are open for anyone to look at. A typical operating system might distribute specific events to specific applications based on how the user-space is structured and what the app has asked the o/s to be notified of. I will allow that kind of thing to occur but from a different direction. All events are everyone's events but anyone can `opt out` from access, or can interpret them however they like. It has to start with everyone having freedom, you can't just add-on freedom afterwards or you end up with lots of fragmented pieces bunched together trying to look like full access. Freedom is a natural birthright.

Since anything can be an event, any object's relationship with any other object is basically an event. The relationship is the event. The relationship is the network traffic. The relationship is everything. If every object is an event, then every object has a script attached, and every object can make anything happen in the whole system. So in a way, events are really nothing more than `things happening`, which is what scripts do. Any object can make stuff happen.

The only thing beyond this is the use of interrupts. It would be really nice if BlitzMax had real interrupts, whereby you tell the operating system you want some piece of code to execute when the hardware or o/s detects something happening - a button push for example. This interrupt code totally pre-empts everything currently executing and hogs the CPU until it's complete, then returns things back to normal. It's nice for getting the system to run like clockwork.

However, Blitz doesn't have these - instead you have to ask the operating system to provide you with the triggering of your callback function. That's basically the same as checking for the presence of an event and calling a function. What's the point in having to ask the o/s to tell you what you already could know yourself?

Nevertheless, since my system runs in a somewhat abstract virtual world with my own virtual CPU and virtual operating system, I can also create virtual interrupts. I am still dependent on what the real underlying o/s can tell me, and the restrictions I have to abide by to get access to those things (like timer events), but once I do know that something is occurring, I can pre-empt the currently executing script(s) and call some other script(s) to act as interrupt handlers. In a way these would be even higher priority than an event handler function. I'm not quite sure yet what purpose these virtual interrupts would serve, but it is definitely something I want to add.
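
Mechanically a virtual interrupt is easy inside a virtual machine: the dispatch loop checks a pending flag between opcodes and, if it's set, saves the current script position and runs the handler script first. A sketch, with TVirtualCPU cut down to just enough to show the mechanism:

' Cut-down stand-in for the real execution engine.
Type TVirtualCPU
    Field code:Int[]
    Field pc:Int

    Method StepOneOpcode()
        ' ...fetch and execute code[pc]...
        pc :+ 1
    End Method

    Method RunToHalt()
        While pc < code.length
            StepOneOpcode()
        Wend
    End Method
End Type

Global interruptPending:Int = 0
Global handlerScript:Int[]

' Between opcodes, check for a pending virtual interrupt and
' pre-empt the current script to run the handler script first.
Function StepWithInterrupts(cpu:TVirtualCPU)
    If interruptPending Then
        interruptPending = 0
        Local savedCode:Int[] = cpu.code
        Local savedPC:Int = cpu.pc
        cpu.code = handlerScript
        cpu.pc = 0
        cpu.RunToHalt()            ' handler hogs the virtual CPU until done
        cpu.code = savedCode       ' then everything resumes as before
        cpu.pc = savedPC
    End If
    cpu.StepOneOpcode()
End Function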

So overall I think the main thing left to do with events, besides them generally being `anything a script wants to do`, is to interface between internal events and those generated by the o/s. When the user clicks a button I still have to deal with the o/s event and somehow turn it into a script action within the virtual world. Or in other words, the event hook function has to generate an `event object`, which may or may not be complete with some default (but open) script that actually does something right away.

I still haven't quite figured out how I'm going to route events over the network, but I have kind of a concept of `post-offices` in mind. I think so long as I can translate external events to internal objects then I can leave it up to scripts to decide how to use the events. I might provide some higher level functions to do a simple event system similar to traditional GUI's, ie which piece of the interface has focus, which part receives events in a hierarchical order of priority, etc, but that won't be mandatory. After all, we're still striving for complete openness here :-)

One thing I definitely DON'T want to do, is just decide on a bunch of standard events which everyone is forced to use, because as soon as you try to decide for the user what you think they should be using, or how they should think about events, you are imposing restriction on them. I might provide some standard events as options, but they won't be mandatory or in any way limiting. After all, anything can be an event, even things you never thought of.

Besides fiddling with events and other elements of operating system design, like handling messages, and thinking towards threads and multiprocessing, I've also been keeping this project on track - asking myself what I still want to achieve with it. I still want it to provide a platform for games and applications development, deployment and use. I also want it to be geared towards graphics applications, including image processing, design, editors and general artwork. I have been quite interested in computer generated art recently, particularly fractals and mathematically constructed images.

I also want it to be a general operating environment like an operating system would be, and to be fully unified across multiple networked machines. I was inspired by seeing the Taos/QNX operating system able to drag realtime-updating games from one desktop to another as if the two screens were controlled by a single computer. I am aiming for the same system-wide seamless overcoming of hardware separations.

I think overall I am leaning towards this being a kind of `microkernel operating system` (if it were one), because the core of the system itself is quite thin and most services will run in a variety of optional processes. It's the optionality of so much of it that means you can really strip it down to bare bones if necessary, and that you can switch off or eliminate whole chunks of higher-level o/s features if you don't want them.

Anyway, this is enough of a rambler for now. Back to it!

Threads of progress(Posted 2008-09-21)
BlitzMax now supports real operating-system `threads`, allowing a single application to execute multiple portions of code in parallel (if you have multiple CPU cores), or at least with the illusion of parallelism on single-core systems. What does this mean for this project?

My original design for this system involved the idea of instantaneous computing - that time and space are meaningless and everything has occurred in one single instant, where every single object executes in a fully parallel way. That would require a CPU core for every object, and ideally would take 0 time to complete. That's the ideal, albeit impossible to achieve but possible to approximate.

With that ideal in mind I tried to approximate parallel computing in a single non-threaded BlitzMax application by implementing a multitasking engine. I realized that I had to abstract the execution of the program itself by elevating the main processing to the level of scripts which run within the application, and to then try to get the scripts to execute in parallel. This required the design and writing of a VirtualCPU execution engine, a multitasking pre-emptive scheduler,`processes` and script programs, timer-based execution suited to framerate-reliant games and applications, a definition of `compilation`, and an expandable object storage structure based on holograms.

This was all in place, albeit in separate parts, and in the past month or so I have been working to integrate it all into a single system. This required modifications to much of the code to get it to work in a modular way. I also had to expand the definition of library functions to make it expandable like `modules`, which is now in place. I have also added a number of useful functions and redesigned a couple of parts. The VirtualCPU system is now more flexible too - I expanded it to support up to 64 timer-based processes rather than 32 - this should be more than plenty.

I wrapped it all together and put together a very, very simple demo: the set-up and execution of four timer-based processes and a simple `looping` one-instruction general program. There were a few bugs but I was very surprised to see it working correctly thereafter. Scripts are executing again!

The four timer-based processes execute once every 1000 milliseconds (1 second), and in any remaining time a generic process runs which increments a counter and then resets its own execution position back to the start. Very simple, but it's working.

Since the conversion to the new integrated system, it is running a little slower than it was. It works out to be about 220 million function calls per second, whereas it used to be about 237 million, and raw BlitzMax code is capable of about 240 million. In order to make the VirtualCPU modular I had to add a couple of layers of abstraction, which makes accessing various data structures a bit slower. I don't think it's the execution that's any slower - that's designed to be as fast as possible - it's the checking for the presence of timed processes and running the pre-emptive scheduler that's a bit slower.

Oh well, you sacrifice a little to get a gain in functionality. Every solution creates new problems. I am currently working to optimize this a bit, so hopefully I will gain back a few million functions per second. (This is on a single 2.0GHz Intel CPU, capable of roughly 2 billion assembly instructions per second, so it's taking about 10 CPU cycles per opcode - at the assembler level it should be about 6 cycles per opcode. But hey, at least my executing-script overhead is not much slower than raw Blitz executables.) I expect that normal script functionality will be somewhat slower than native Blitz code, but hopefully not too much. The idea is to make the script language high-level enough that there is less interpretation overhead and more time spent in pre-compiled functionality.

So anyway, just as I get the multitasking scripts functioning again, which in part was a reaction to BlitzMax not being able to do real threads, along come real threads. However, I designed the system with an eye toward the future, expecting the day when real threads would be supported and hoping that implementing them would be fairly easy.

Fortunately it would not be very difficult at all to give each individual thread a fully functioning execution engine, and to create enough threads to cover the number of available CPU cores. Basically each core would then be executing its own scripts, doing its own multitasking and scheduling, etc. This would, however, be more a matter of making better use of available CPU power than making the system able to multitask, because it can already do that without threads. Also it's not really possible to schedule o/s threads or to manage the CPU time available to each thread, so the script engine's own scheduler is still a benefit as it is.

Utilizing multiple CPU cores for extra computational power is an attractive possibility and I had been starting to think about multiple concurrently-executing processes as a solution. What with the hype around Google's Chrome browser using multiple processes, and given that my system is designed to be a kind of grid-computing networked environment, it's not hard to imagine running multiple instances of it in each process. It would then do some kind of memory sharing or network link to synchronize the systems, using exactly the same techniques for making it work over a network. This is a possibility, but also now it's a possibility to use o/s threads and just share the memory space within a single application. This might be preferable, but maybe processes have their safety benefits too - maybe I will do both. The ideal is that no matter how many instances of the system are a part of the network, it merges into a single unified transparent environment.

On a slightly different note, I am still aiming toward my first use of this system to be to create a shootemup game. Most of the ideas for the game are already in place and I think it will be a good test bed for the features of the engine. It's probably going to be a bi-directional scrolling shootemup (like Defender) but with very hectic gameplay and a tonne of stuff happening graphically.

On the technical front, I am aiming towards a truly massive particle system with many tens of thousands of objects, pervasive physics where the whole game world is updated regardless of which part is being viewed, advanced collision detection where *every* object can collide with *every* other object - utilizing my design for a dynamic bounding volume hierarchy, loads of cool animation features and effects, super-high-resolution resolution-independent graphics, 3D with shaders, and lots of other cool stuff. The design/style/thematic/story/artistic front is a secret for now ;-)

It's me, again(Posted 2008-07-30)
Hey folks.

Lately I've been focussing on writing code. It's actually a nice change to write code rather than to be in a design phase. Of course the design has to come first, but then once you know how things need to work you can dive into actually writing the program. Writing code is a different experience to designing. It's interesting how if you just sit there and start to write the program you actually make quite a bit more progress than you thought you would. You get into a kind of `flow` of development and one thing just leads to another quite nicely. I don't get a lot of time to work on this project but I do try to `optimize` the time that I do get and be productive - but not always. Sometimes a game just has to be played, or something else just has to divert my attention for a while until I can overcome my self-made mental obstacles.

I have been working to properly code what will be the `final working version` of the script execution engine. Originally I wrote the basic system using custom types for each object, but I don't like that approach. It may be convenient for the programmer but it is not necessarily efficient at execution time. I like arrays, and they are often faster to access. So I am spending time at the moment converting the type-based system to a parallel-array system.

Basically I have this thing which I call the holographic linked list, which is really just a linked list of types. Each type, a hologram, can potentially contain all objects of all types. In practice it's basically a type containing a number of `extensions` where each extension is a custom type containing various fields, many of which are arrays of data. Rather than have a custom type for each lowest-level instance of an object, I prefer to have multiple parallel arrays, one array for each Field, and to then reference them with an index number. This can be somewhat faster and easier than using pointers, although pointers are cool. Also it keeps similar data in contiguous memory for better cache access and stuff.
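
The difference is easiest to see in code: instead of one custom type per object instance, each field becomes an array and an `object` is just an index into those arrays. A sketch of one extension:

' Parallel-array storage: one array per field; an object is an index.
Type TExtension
    Field x:Float[]
    Field y:Float[]
    Field scriptId:Int[]
    Field count:Int

    Method Init(capacity:Int)
        x = New Float[capacity]
        y = New Float[capacity]
        scriptId = New Int[capacity]
    End Method

    Method Add:Int(px:Float, py:Float)
        x[count] = px
        y[count] = py
        count :+ 1
        Return count - 1   ' the index IS the object reference
    End Method
End Type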

So I'm converting this script execution thing from a type-based system to an array-of-fields based system, so that it can be an integrated part of the special linked list. The holographic linked list is basically built on the idea that everything can be in one place and vice versa, so it's sort of a `master` type able to contain every object type all in one place. This actually has the benefit of being able to tie together index references to arrays of one type of object with those of another type. So for example I have a dynamic bounding volume hierarchy, the bounding volumes of which may be tied into texture data, or file data, or maybe associated with sounds, or scripts, etc. I think it's a pretty tidy and efficient way to allocate and organize the system.

Also the important part is that it's meant to be compatible working across a network. If I want to go find object index 15 in HLink xyz, that HLink object might be on some other computer, and my local copy of it would be just a proxy or buffered copy of the remote data, so then there would be network activity involved. But by making sure that I use holographic-link references combined with array indexes across the board for all objects of all types, I can easily make sure that any object can reside on any computer - which is of utmost importance in laying the foundation for a pervasive cross-platform networked realtime filesystem (for when I get around to the whole multi-user thing). You have to really think about these things and plan them carefully when you want to do networking and such - if you want to do things which transcend boundaries then you have to make sure those boundaries are not being put in place by the design of your code or systems, otherwise it's next to impossible to modify later.
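
Something like this toy sketch is what I mean by references that don't care which machine the data lives on (all names are hypothetical, and the remote path is just stubbed out):

SuperStrict

' a holographic-link reference: an HLink id plus an array index;
' whether that slot is local or remote hides behind the lookup
Type THLinkRef
	Field linkId:Int
	Field index:Int
End Type

' pretend field-arrays for two local HLinks
Global localX:Int[][] = New Int[][2]
localX[0] = [15, 25]
localX[1] = [90]

Function ResolveX:Int(ref:THLinkRef)
	If ref.linkId < localX.length
		Return localX[ref.linkId][ref.index]   ' local: direct array access
	End If
	' remote: the real system would fetch a proxy/buffered copy over the network
	RuntimeError "remote fetch not implemented in this sketch"
End Function

Local r:THLinkRef = New THLinkRef
r.linkId = 0; r.index = 1
Print ResolveX(r)   ' prints 25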

I am part way through converting the script system to a parallel-array-based data storage design. It'll stay functionally much the way it is now, but with a few tweaks, and hopefully it will produce even faster script execution. To get it working I have to implement a Virtual CPU which executes programs, program objects and processes; a multitasking pre-emptive scheduler that supports timeslices and timer-based events; an event handler; libraries of functionality; and a few other glue pieces. All of them are needed before any script can run.

Just to recap, scripts are programs expressed in a simple and fast manner similar to assembly language. You have simple little instructions that do basic things, with immediate data following the opcodes. Kind of like a bytecode system, I suppose, except it doesn't start out as a higher-level language (but it could in future). The idea is to make the need for compilation obsolete by making the `executable` version of the program be essentially the same as the development version/sourcecode. Then I also have functionality which says `go to this other program and execute whatever functionality is stored at opcode # whatever` - it's kind of like a customized `view` of a program. If you were to collapse that view and lock it in, making local copies of those opcodes into a single program, that would basically amount to what's involved in compilation. To compile, then, is simply to copy opcodes from remote references and turn them into local in-line copies. These programs with a subjective perspective of other programs also allow an easy linked-list-style editing capability for otherwise array-based programs, which is nice and avoids shifting lots of data around during editing.
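
For illustration, the inner loop of a virtual CPU can be as simple as this little BlitzMax sketch (made-up opcodes, nothing like the real instruction set):

SuperStrict

Const OP_END:Int = 0, OP_PUSH:Int = 1, OP_ADD:Int = 2, OP_PRINT:Int = 3

Function Run(code:Int[])
	Local stack:Int[64]
	Local sp:Int = 0, pc:Int = 0
	While pc < code.length
		Select code[pc]
			Case OP_END
				Return
			Case OP_PUSH   ' immediate data follows the opcode
				stack[sp] = code[pc + 1]; sp :+ 1; pc :+ 2
			Case OP_ADD
				sp :- 1; stack[sp - 1] :+ stack[sp]; pc :+ 1
			Case OP_PRINT
				sp :- 1; Print stack[sp]; pc :+ 1
		End Select
	Wend
End Function

Run([OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_END])   ' prints 5

Presumably the real loop would also count down a timeslice and yield to the scheduler between opcodes, per the pre-emptive multitasking described above.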

Beyond programming I have been playing a few games lately, mostly 2D shootemups, and am getting together more ideas for what will be my first game - a mostly-2D shootemup. Plus procrastinating a bit. I'm hoping to make it a fun and yet impressive experience. There's a lot you can do in 2D that people will have never seen before.

It's nice to be doing coding at the moment - design is mostly on hold and I'm just knuckling down to focus on getting this script stuff converted. I have also started working on code for other parts like the bounding volume hierarchy, texture storage, etc. Lots to do.

And now some words about limitations. Over time you see developers striving to make their engines less limited - able to have infinite landscapes, or huge megatextures, or global lighting systems rather than specific ones. It's kind of backwards if you think about it, because we started out with the idea that we can only work within confines and strict limitations, and yet those restrictions have actually made things complicated and difficult for us.

How about if you were to simply start out designing for the UN-limited engine? How about being able to treat textures as one giant and potentially unlimited pool of pixel data? How about building your system from the start so that there is really no limit to how big your game world can be or how many sprites you're allowed? How about lighting and shadowing so integrated that you are no longer limited on how many lights you can have? We build UP to these ideals only because we've taken so long struggling through the limitation-based philosophies that prior systems have been based on.

Once you get into the realm of no limitations, things actually become quite a lot simpler - and simplicity is a good thing. It's easy to understand, easy to grasp, doesn't get in your way, doesn't hinder your creative flow, etc. In my system it's going to be simple to think of graphics as infinitely large textures where the boundaries and limitations imposed by the hardware are completely transcended. In my system you won't be thinking in terms of how big a tilemap you can deal with, but rather how to narrow down the size, because otherwise it would just be endless. When you have access to a system with no limits, you don't have to deal with all the complexities that would arise from those limitations. It is only because limitations themselves put a problem in the way that you then have to find an after-the-fact `solution` to try and overcome it. Get rid of the limits from day one. It's much easier to tell someone they can have whatever size images they like, rather than `oh by the way watch out for powers of 2, and older graphics cards, and maximum texture sizes, and watch out for ATI, and you should always do this and that to get best performance, etc`. How about throwing all that out the window and letting the system deal with how to handle the resources? The interface to the developer should be such that the engine they're using isn't limiting them. Infinite storage. Infinite space. Infinite freedom.
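
As a toy sketch of the `one giant pool of pixels` idea (tile size and names invented for illustration, coordinates kept non-negative for brevity): unbounded logical pixel coordinates map onto fixed-size, hardware-friendly tiles that are created on demand, so the `image` has no fixed width or height at all.

SuperStrict

Const TILE:Int = 256   ' arbitrary hardware-friendly tile size

Type TTile
	Field pixels:Int[] = New Int[TILE * TILE]
End Type

Global tiles:TMap = CreateMap()   ' sparse: tiles exist only where written

Function GetTile:TTile(tx:Int, ty:Int)
	Local key:String = String(tx) + "," + String(ty)
	Local t:TTile = TTile(MapValueForKey(tiles, key))
	If Not t
		t = New TTile
		MapInsert tiles, key, t
	End If
	Return t
End Function

Function WritePixel(x:Int, y:Int, argb:Int)
	Local t:TTile = GetTile(x / TILE, y / TILE)
	t.pixels[(y Mod TILE) * TILE + (x Mod TILE)] = argb
End Function

WritePixel(1000000, 42, $FFFF0000)   ' works however `big` the image is
Print "ok"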

Check out my Game Development Blog -ImaginaryHuman-

Triangle strips and stuff(Posted 2008-06-24)
I'm still working on the main spatial structures and how to store triangles and meshes. I don't know why it takes so long to design this stuff, but when I'm done considering all the possibilities I'm usually happy that I've settled upon a good solution. I decided that most of the useful spatial partitioning/object partitioning systems can be represented by bounding volumes and nested grids, so I'm using a combination of those two.

I'm currently trying to decide now how best to store triangle strips. 3D is kind of new to me and there's lots to consider. I've thrown out the use of `quads` in favor of just using triangle strips throughout, since they can be used to create quads but also other things. This makes it simpler to just support one basic geometry type. The question now is how to store multiple strips and access them in an OpenGL-efficient manner. Also to consider is making sure that whatever arrangement I go with will be good for a wide variety of uses including large landscape environments, particle systems, complex objects, curved surfaces and games in general. One thing I don't want to do is create a system which is good in a small range of uses and poor in others.
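
One candidate I'm aware of (an assumption on my part, not a settled decision) is to concatenate many strips into a single index array, repeating the last index of one strip and the first of the next so that zero-area `degenerate` triangles bridge them - the whole lot then draws in one GL_TRIANGLE_STRIP call. A quick sketch:

SuperStrict

Function JoinStrips:Int[](strips:Int[][])
	Local out:Int[] = New Int[0]
	For Local s:Int = 0 Until strips.length
		Local strip:Int[] = strips[s]
		If out.length > 0
			out :+ [out[out.length - 1], strip[0]]   ' two degenerate bridges
		End If
		out :+ strip
	Next
	Return out
End Function

' two quads expressed as 4-vertex strips
Local strips:Int[][] = New Int[][2]
strips[0] = [0, 1, 2, 3]
strips[1] = [4, 5, 6, 7]

Local line:String
For Local i:Int = EachIn JoinStrips(strips)
	line :+ i + " "
Next
Print line   ' 0 1 2 3 3 4 4 5 6 7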

But anyway, not much else news at the moment. Plugging away.

Check out my Game Development Blog -ImaginaryHuman-

Building Structures(Posted 2008-06-07)
I finished reading the entire OpenGL Red book (programming guide), Orange book (GLSL shading guide) and Blue book (API and guide). Good stuff! It's cool to take steps closer to the cutting edge technologies and I look forward to doing some very interesting things with them.

I also purchased a copy of `Game Programming for Teens`, 2nd Edition by Maneesh Sethi. It's a pretty decent introduction to programming in Blitz Plus for those who have not undertaken any programming before. I felt it was not entirely suited for *absolute* beginners, skipping over some fundamental basics too quickly, but overall it's a useful book for teenagers getting into game programming. I mainly wanted to see what it was like as a reference guide for when I write something similar in future. Who knows where that's heading, but I have some great ideas for some language learning systems in the near future.

Programming-wise I've been researching various kinds of spatial and object partitioning schemes, combing through published papers and websites and gathering together various nuggets. After reading the OpenGL books I am now more committed to full 3D support than ever, so I figured I should put aside my 2D stuff and rethink how to store and use data in 3D. I like the Bounding Interval Hierarchy which combines the benefits of Bounding Volume Hierarchies with KD-Trees and general Binary Space Partitioning, but for some purposes I also like nested grids and some other techniques.

So I'm going for kind of a flexible structure which can be adjusted to implement several types of organizational systems. Using a combination of bounding volumes (probably mainly axis-aligned) and interval arrays/grids, I should be able to support KD-Trees, BSP-Trees, BVH, BIH, Grids, Nested Grids, Sorted Lists, Quad-Trees, Oct-Trees and other approaches, all in a mix-and-match structure arranged pretty much however you like. There might be a bounding volume at the top of a tree encasing some nested grids, and each cell in the grid might contain bounding volumes, which may in turn contain small bounding interval hierarchies, which might in turn contain an octree or quadtree, etc.
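
In sketch form the mix-and-match idea might look something like this (names invented, and 2D bounds only for brevity): every node has an axis-aligned bound and may subdivide either as a list of child volumes (BVH-style) or as a regular grid of cells, where each cell is itself a node.

SuperStrict

Const NODE_LEAF:Int = 0, NODE_VOLUMES:Int = 1, NODE_GRID:Int = 2

Type TSpaceNode
	Field minX:Float, minY:Float, maxX:Float, maxY:Float
	Field kind:Int = NODE_LEAF
	Field children:TSpaceNode[]   ' used when kind = NODE_VOLUMES
	Field cells:TSpaceNode[]      ' used when kind = NODE_GRID
	Field cols:Int, rows:Int      ' grid dimensions

	Method Cell:TSpaceNode(cx:Int, cy:Int)
		Return cells[cy * cols + cx]
	End Method
End Type

' a grid at the top; any cell could hold a volume list, an octree, etc.
Local root:TSpaceNode = New TSpaceNode
root.kind = NODE_GRID
root.cols = 4; root.rows = 4
root.cells = New TSpaceNode[16]
Print root.kind   ' 2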

I could go for a pure single solution, like using only bounding volumes and fixing the number of children per node, which would be okay, but I think some scenarios need different kinds of structures for best efficiency. None of them are perfectly suited to all cases. Also I wanted to make sure that, while fully supporting 3D, it would also provide full support for 2D algorithms and structures, like tilemaps of equally-sized tiles, which would be quite strangely represented as a tree and would be better in an equally spaced grid. Also there are some scenarios where a bounding volume might not be the ideal way to store certain types of polygons, and similarly with most other structures - there's almost always some side-effect and price to pay. If I was choosing a single method it'd probably be bounding volumes, but it needn't be strictly fixed, and adding some flexibility needn't have any significant performance impact.

Another thing I've been considering is animation. It is normal, in the world, for everything to always be moving and dynamic. Fixed static scenes are the exception and should not be treated as the rule. BSP trees, for example, are pretty nice for static geometry but really suck when it comes to dynamic scenes - they're just not flexible enough to adapt dynamically to change. Specific solutions for specific scenarios will always be the fastest approach but don't necessarily provide the most freedom, because once you get specific you also become fixed and limiting. A system able to deal with a fully dynamic environment must be designed to be highly flexible *itself*, otherwise its rigidity will limit and inhibit the dynamics of the environment.

I have come up with a cool system which I believe will be great for highly dynamic scenes, like giant particle systems or lots of objects moving around with physics. It also takes advantage of temporal and spatial coherence - ie multiple objects moving in a similar direction to nearby neighbors, and objects being in a similar place to where they were in a previous frame. With dynamic update schemes and what I'm calling `Morphic` object tracking, I think it's going to be really nice for handling the high dynamism of games and animation. I've written some code but it's not complete yet.

One thing I would like to do which I think will be valuable, as seen in a number of computer science papers, is to create a test-bed profiling application within which to play with various structures and ways of implementing them, to benchmark them against each other for various purposes, and to come up with some `evidence` of which will be best for particular scenarios. This will also help me to polish and tweak the system for best performance, not to mention helping with optimizing the code.
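
The test bed needn't be fancy to start with. A minimal sketch of the sort of timing harness I mean, using BlitzMax's MilliSecs() (the inner loop is just a stand-in workload):

SuperStrict

Function TimeIt:Int(name:String, reps:Int)
	Local start:Int = MilliSecs()
	Local sum:Int = 0
	For Local r:Int = 0 Until reps
		For Local i:Int = 0 Until 100000   ' replace with the structure under test
			sum :+ i
		Next
	Next
	Local elapsed:Int = MilliSecs() - start
	Print name + ": " + elapsed + "ms over " + reps + " reps (checksum " + sum + ")"
	Return elapsed
End Function

TimeIt("dummy workload", 10)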

Then once the basic system is in place I should be able to start looking into how to use it for things like occlusion queries, collision detection, object representation, particle systems and animation.

One thing I did become aware of after reading through the OpenGL material is that OpenGL is not perfect - but not because it's OpenGL. It's because DirectX and OpenGL and many other systems are all trying to pile together many, many fudges and techniques and tactics to make the most of a fundamentally flawed approach. Triangles might seem nice to work with in hardware but they are not a natural representation. The sheer number of procedures that you have to go through to turn triangles into believable realities is astronomical. It is clear that a ray-tracing system is probably closer to life. But it is also clear that we are still not able to just do everything with raw brute force, simply because that power is not there. We would not need any special algorithms or techniques if the platform we're working with didn't have limitations in terms of processing power. Instead we're left trying to scrape together scraps of CPU or GPU time, trying to optimize, trying to find ways to squeeze an elephant out of a mouse. It's disappointing, actually. But what can you do? Gotta work with what you've got.

With that in mind, I do hope to focus on trying to create a fully dynamic system where all aspects of the system are dynamic. Dynamic real-time lighting, dynamic animation, dynamic rendering, dynamic audio, dynamic gameplay etc. That usually means you need lots of CPU/GPU power, so we'll see how it goes - undoubtedly I will have to come up with some clever compromises along the way.

Check out my Game Development Blog -ImaginaryHuman-

Books!(Posted 2008-05-19)
Yay! My new OpenGL Red and Orange books arrived on Saturday. I have so far read about a third of the Red book. There's a lot to learn, but I already read version 1.1 of the Red book online so I know the basics and the older features. v2.1 is very well written and much easier to read than the old online version. It doesn't get into complicated hard-to-understand math too much, or rely on it very much, which is great, and the tone of the writing has become more relaxed and casual. Every question I could possibly have is answered pre-emptively, which is nice.

I'm making lots of notes along the way and also have a number of ideas for cool ways to exploit the potential of new features in games and applications. I am planning some kind of `performance testing`, calibration, feature-detection thing to run on the user's computer initially, and to then use that information to provide as up-to-date support for features as possible - ie if they have the whole GLSL shader support and OpenGL 2 features then I want to use them; they're very handy and will provide a lot of cool possibilities. Then if they don't have the features, or there are different ways of doing things which give better performance, the system will adjust to use whatever is the best technique for a given effect. That way it can adapt to being as efficient as possible based on useful test results. This would double as a way to make the system backwards compatible with older versions of OpenGL.
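
The feature-detection half of that can start out very simply. A rough sketch, assuming BlitzMax's GLMax2D driver and the standard pub.opengl constants (the extension I test for here is just an example):

SuperStrict

' a GL context has to exist before querying strings
SetGraphicsDriver GLMax2DDriver()
Graphics 640, 480

Local version:String = String.FromCString(glGetString(GL_VERSION))
Local extensions:String = String.FromCString(glGetString(GL_EXTENSIONS))

Print "OpenGL version: " + version
If extensions.Find("GL_ARB_shader_objects") >= 0
	Print "GLSL-era shaders look available - use the fancy path"
Else
	Print "falling back to the 1.1-style path"
End If

The calibration half would then be a matter of timing the candidate techniques on the actual machine and picking winners.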

I think I want to support GL 2.1 as `normal` and then provide backward compatibility for older systems, ie some features become less capable/less efficient or not available, rather than hold back the better technologies. I hope to be able to still keep standard BlitzMax OpenGL 1.1 the minimum requirement for graphics. It might not be possible to do some things very fast at all in 1.1 but I'll try to find a way. Trying to support multiple system architectures at once is also a bit tricky but it's a good challenge.

I am also thinking that I might as well go the whole way and make this a 3D system. After all, OpenGL is very much geared towards 3D and it does make it quite simple to do most things. Although I personally am most likely to create 2D games with the system in future, I can see the benefit of using 3D elements in those games so I might as well start out on the right foot. I'm going to read the entire Red and Orange books and also I have the blue-book superbible on the way - I'll read most of that too, and then will sit down and put together a new revised design for the whole system. I think I need to get more organized and keep better track of what I'm envisioning doing, so maybe I'll draw up some diagrams or charts too.

I know this probably sounds like another `not making any real progress, just spinning my wheels and not producing a product` kind of speech, but I have accumulated some good code already and will only need to modify it slightly to meet the new design. I wish I could put up a nice screenshot of something doing something, but there's quite a lot of groundwork to be laid still. I aim to eventually have a basic script editor running once enough of the API has been developed. Trying to write a system which can itself be used to write other systems is kind of like a chicken-and-egg scenario: you need the system to exist already in order to write the very system that you need to exist. Fun!

Check out my Game Development Blog -ImaginaryHuman-

OpenGL books and user interfaces(Posted 2008-05-16)
I just invested in a complete line of OpenGL reference books - the Red Book (main OpenGL 2.1 reference), the Blue Book/Superbible (tutorials, command reference and introduction to shaders), and the Orange Book (all about shaders and the GLSL shading language). These books are pretty hefty. I am looking forward to reading and using them and they're due to arrive any day now.

I consider this an investment into the future of this project, which will be using OpenGL for hardware-accelerated cross-platform graphics throughout. I have been thinking about shaders and other cutting-edge technologies for a while and it's time to take the dive and commit to it. Hardware-accelerated graphics is the future for software, which traditionally has made only minimal use of the hardware.

I look forward to fully exploiting features like shaders, floating-point pixel buffers, High Dynamic Range lighting, shadows, general-purpose GPU computing, customizable pipelines and a full range of hardware-accelerated image processing/vector graphics/rendering techniques. I am looking to achieve immediate or near-immediate visual feedback for everything.

I also aim to create a very cool fully customizable virtualized GUI interface system. Traditionally a GUI is designed based on what the author thinks users will find easy to use or how they want to work, and then within this fixed ideal they seek to give the user `some options` as to how to arrange windows, dock palettes, float toolbars, etc. But this is more of an afterthought, and contradictory to the whole foundation of it.

Instead of its structure being hard-coded like typical GUI's, my GUI system will be *entirely* flexible in the sense that the GUI structure will be a fully editable abstract layer separate from the functionality of the system. Its appearance/behavior will be scriptable and customizable in every way at all times, even live while `using it`. The user of the application will in effect also be the co-developer of it. Creator and created are One.

After all, the GUI's structure - the order in which windows pop up, or which buttons they contain, or what `purpose` they have - is really not something which has to be fixed in place. The user should be able to accept working with a preconceived GUI if they want to, but as an option within a wider freedom to adjust and modify to their heart's content. What constitutes an `application` should not really be defined by the structure of the GUI - its `function` should be deeper and abstract, and the presentation of or interaction with that function should be flexible. Functionality is then separated from form. The user then has the *option* to join functionality to form in artistic ways as they see fit.

Also instead of having one preconceived fixed interface representing `how you have to use the functionality`, which itself *becomes* the functionality, multiple `views` of interfaces will be possible, drawing upon the same underlying functionality and interacting with it in different ways depending on the task. What might seem to be a fully fledged image processing system could instantly morph into a more specialized game-graphics editor or tilemap editor. The user will be able to alter and change any GUI element at any time, create their own, and combine them. It will be the ultimate in flexibility and freedom. Classic operating-system-provided GUI's will seem archaic compared to this high-speed super-fast super-flexible interface.
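
As a purely illustrative sketch of that separation (every name here is invented): the `function` layer is just a registry of named commands, and any number of `views` - a button in one layout, a menu item or a script in another - can bind to the same command without owning it.

SuperStrict

' function layer: named commands, with no opinion about presentation
Type TCommand
	Field name:String
	Field action()   ' function pointer to the underlying functionality
End Type

Global commands:TMap = CreateMap()

Function RegisterCommand(name:String, action())
	Local c:TCommand = New TCommand
	c.name = name
	c.action = action
	MapInsert commands, name, c
End Function

' form layer: any widget, menu or script can invoke a command by name
Function Invoke(name:String)
	Local c:TCommand = TCommand(MapValueForKey(commands, name))
	If c Then c.action()
End Function

Function DoClear()
	Print "canvas cleared"
End Function

RegisterCommand("clear", DoClear)
Invoke("clear")   ' a button and a menu item could both end up here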

I will also set up some `standards` for guided, cooperative, unified ways of doing things. Rather than one application author greedily or selfishly interrupting the `total workflow experience` - such as a window suddenly popping up from a background application right in the middle of you editing some text - the system will monitor the user's overall bigger-picture activities and will manage/coordinate when and how apps vie for attention. The overall experience should be that all applications are working *together* as a unified cooperative team, not competing against each other or pursuing their own exclusive whims.

I am excited about the future of high-speed graphics and I have some very cool ideas for how to put the power into the user's hands.

Check out my Game Development Blog -ImaginaryHuman-

The Ultimate Software(Posted 2008-05-16)
This project that I'm working on is a pretty big long-term project and as such it is hard to keep a grasp on what it is, how it will turn out, or what it will do. That's mainly because I want it to do just about everything, all at once. It's kind of like a diamond in the rough, whereby it doesn't really have smooth facets or shiny gleaming surfaces, but as I keep chiseling away it starts to take form.

Sometimes I have to use existing ideas of types of software and `apply them` to the concept of this project, to see how that would fit into its framework. It takes me down a path for a while as I gather insights and continually reshape the vision in my mind. And then of course the path turns and I realize I need to think about something else, like shifting from a focus on games to a focus on applications, or from a focus on creativity to a focus on teaching. I want it to do so much that its `true nature` is elusive and as yet unseen.

What I envision is a software environment which embodies creativity. Creativity in its purest form extends its full potential to everything it creates, bestowing upon its `creation` all the abilities and properties that it was created with - the ability to create alike its creator. In its purest form, that which is created IS an extension of its creator, since there is no difference between them. Cause and effect have no separation, unlike the seeming evidence to the contrary in the physical world.

So what does this mean? It means this software environment must be able to create other software environments, which in turn have the ability to continue doing so. If there is any hindrance to this complete extension of creative potential then the software meets its death. A roadblock of un-creation, or an inability to do as the creator could do, is evidence of failure. But achieving this in software is difficult.

Perhaps the closest idea of what such software would be like, is an operating system. Such a system is able to `create itself`. In other words, you can use a system to write code which can be used to construct a similar system with the same capabilities of constructing new systems. This perfect loop of creative extension basically means that anything in the character or capabilities of the original creative system remains intact within everything created with it.

It's a somewhat abstract concept and a little difficult for me to get my head around. We are so used to thinking in more specific terms, using software which does not foster the creation of further software like itself. For example you use a word processor to create documents but you can't use it to create a word processor. You use a graphics application to create or process images but you can't use it to create graphics software. With a purely creative system we're talking about something very `primal` and fundamental - the very essence of creativity itself.

So let's say this system would allow the creation of games, but we should add that it would also allow the creation of `game-creation-systems` as well. Kind of a game-creator-creator. But of course it wouldn't tackle just games because a game or a word processor or a graphics app are all finite specific `forms` of creations. The type of software created within it would be ambiguous and non-specific. It must be able to facilitate creation in such a way that a practically unlimited variety of forms of software could spring from it. The system itself remains transcendent of the forms that it creates, being a consistent continuous abstract `kernel` of creativity from which everything else springs.

Now, specific applications all share something in common - they all must include a variety of `differences`, `separations` and `boundaries`, all of which form a sense of identity and make the application confined to a specific set of capabilities. A word processor is not a graphics app and a programming language IDE is not iTunes. In order to provide specific functionality or multiple different `purposes`, the software HAS to diminish the operating system's full potential of creativity down to an isolated range.

This means that all `forms` of software are a limitation on creativity, and creativity in its purest form really cannot remain pure if it is used to create limited systems. Pure creativity can only extend pure creativity, which commonplace software is not. So even if a software environment were based on the idea of pure creation, everything created with it must unfortunately be `less creative`, in most cases, than its origin. Only if the creative `kernel` is written in the same `language` that it allows others to create with can it hope to maintain its full range of capabilities.

That would mean `software which creates software`. The software environment or as much of it as possible must be written in some kind of language or system which is the same language or system that it exposes to the user. If this language is a compiled language then it must compile the user's software also. At this time I am not interested in trying to create software which compiles to machine code. But what I can do is create a virtual machine which runs a high-speed interpreted script language. And then almost the entirety of this script language would itself be used to create the user interface elements of a creative environment. And finally, ideally, such an environment can be used to create other environments. So what we're really talking about is a `software platform`, a virtual computer with a high-level operating system.

Rather than trying to create a game editor, trying to create a `game-editor-editor` takes things to a whole other level. Trying to envision software which is flexible enough to create itself in its own language is quite difficult. Trying to imagine a user interface which at all times allows every piece of `software` to be modified is quite difficult.

What this really leads to is that everything in the system is editable, is live, has immediate results and can be turned into anything imaginable. It doesn't even have to be that you create something `along guided lines` or `as the author envisioned`. I have no clue what users will use this for, and I can't pretend to do so. Who knows if someone wants to take gui button x and stretch it out to turn it into a branch on a tree, or wants to make a song play when they hover over a button.

`Intended use` cannot be communicated. So long as people have their own way of perceiving reality, ie interpreting it, there will be as many perceived uses of whatever you create as there are people. It is expecting the case to be otherwise that starts to turn software in on itself, building barriers against creative freedom, disrespecting the user, preconceiving processes and solutions which prevent some solutions from being possible, and generally adding an unconscious blindness that does not need to be there. If we can accept that your intention for your software has nothing to do with the user's intention, then you simply provide the user with as many creative options as you can possibly imagine, hoping ultimately to facilitate creative possibilities that you didn't even dream of.

That's what I want this software to do. I want people to do things with it that I could never have thought possible. I want them to be able to entertain whatever whim or fancy they consider important. What this really means, then, is that the software does not have any inherent `meaning`. The user gives it its meaning, projecting onto it their vision for what they want to experience. After all, we do not perceive a world that someone else placed outside of ourselves, we projected it first and then perceived our own projections as if a stranger to it.

So what this really boils down to is the creation of a `kernel`, somewhat like an operating system kernel - a collection of interfaces and capabilities arranged in as flexible and open a way as possible. The more open it is and the more it can make potentially possible the better. Everything extends from the original creator which means if there is something about the kernel which makes something impossible, it will be impossible for all of its creations. The full scope of what will eventually be the openness and creative freedom at a higher level, simply stems from the degree of creative freedom at the core of the system. Ultimately that means the kernel is a condensed form of everything the kernel can create, like the `essence` of software. Holographically speaking, the part of the system which is the `kernel` actually contains `the whole`. When the whole is in the kernel, the kernel is in the whole - everything is the kernel and everything is what the kernel creates. So there is no way to say one part of the system is the kernel and another part is not - it all works together. But let's think of the kernel in terms of a core of globally shared possibilities.

The core would comprise essential non-specific low-level functionality. For example, a script execution engine, the potential for multitasking, access to functionality libraries such as for graphics and other things, and generally a blank slate upon which everything else can be written. Upon this blank slate would then be phase 2 - a set of non-compulsory open `tools` which allow further software to be created and explored.

It's hard to say it would be an `editing environment` because you would also use software in addition to creating it. But it's also hard to say that creation and use are separate from each other. How about you can create and use at the same time, live? It is only tradition that says you have to do one or the other. Immediacy is a vital component of creativity.

Anyway, this is enough of a speech for now. Back to `The Ultimate Software (tm)`.

Check out my Game Development Blog -ImaginaryHuman-

Lately(Posted 2008-05-06)
For the past month or so I have not made a whole lot of progress in terms of writing sourcecode. I have not been feeling very driven to write stuff and have sort of been `reviewing` what I've done so far and what direction it's taken. That also means bringing into question whether I want to continue on this same path or to adjust where I am heading. This is actually a good thing because although I haven't made much progress in the actual writing of programs I have made a significant change in how I look at what I'm doing. As a result I am aiming to change my focus and purpose for this software.

To review briefly, I am mainly working on a `software environment`, something akin to a general operating system within which multiple `programs` can run simultaneously. These programs are scripts running very quickly inside a very simple script engine, albeit augmented with a fairly advanced multitasking script scheduler. The aim is to create a networked, fully open and shared environment for the use and development of applications. Those applications can of course be games or presentations or videos or tools - whatever. Since I am designing the `whole system` I get to make decisions about how things work, and more importantly how they work together.

This is still my focus, in part, but I am also now adding an emphasis on teaching. The software environment will facilitate the creation of advanced real-time interactive teaching environments, which I will use to teach various computer topics. Initially I will teach absolute beginners `how to program` using BlitzMax as the reference language. This will basically be like a full manual for BlitzMax plus the application of it to create some simple games and applications. This will be like `volume 1` of a series. Then there will be other volumes dedicated to further more specialized areas, like the creation of more advanced games, the creation of game-creation tools, the creation of game engines, possibly the creation of 3d games eventually, and whatever other areas I come up with.

It's a lot of work to do, and in practical terms my focus now has to be on continuing to write the initial software platform, within which I will then be able to put together the teaching aids. I find that personally I am more motivated when I am able to help someone else to learn something or do something - the element of selflessness allows creativity to flow more easily. So I think that by focussing less on `end products` and more on `empowering people to create` I will get more done and enjoy greater success. Wish me luck.

Check out my Game Development Blog -ImaginaryHuman-

Triple Boot iMac with BlitzMax(Posted 2008-04-08)
This past weekend I converted my iMac to a triple-boot setup :-)

My iMac is an Intel Core2 Duo from late 2006, with 2 x 2.0 GHz cores. I upgraded the memory first. I originally had 2 x 512MB cards so I added a 2GB card and kept one of the 512's, to give 2.5GB total. It seems like a good idea to have plenty of ram, at least for Vista, and especially if you're planning to run two or more os's at the same time in Parallels Desktop or the like. I am amazed how small and inexpensive ram cards are these days, given that my first 1/2 MB Amiga card was over 100 UK pounds and was many times larger than today's.

My Mac was originally running Tiger and has a 128MB video-ram ATI X1600 graphics card, which is shader model 3 compliant to support aero in Windows Vista. I had previously used DiskUtil from the Terminal to partition my harddrive into 3 partitions - the first one is the EFI partition, then OSX (80GB), then Linux (was FAT32, 22GB), and finally Windows (was FAT32, 40-something GB). There are plenty of tutorials online about triple booting which give you the commands to use to do the partitioning, but I did it under Tiger and I hear that now in Leopard you can dynamically resize the partition in the Disk Utility app non-destructively.

My wife has an older PPC iMac with Panther on, so I'd already backed it up, installed Leopard, and restored various files (I recommend a firewire cable, much faster than over ethernet!). So I could then back up my files to her machine pretty easily, plus backed up some files to my Windows partition to be on the safe side.

Then I performed a fresh install of OSX Leopard onto the OSX partition, which was pretty straightforward. I did a full format-and-install and did the whole registering thing within the installer. Of course then you have to install the extra XCode package - there is an option to install files for Panther compatibility - I chose that option and found that it installed an earlier version of the Gnu compiler. Upon reinstalling BlitzMax I had no problems synchronizing modules or compiling them - I didn't have to go and find an earlier copy of the compiler. Everything Blitz-wise worked fine, as usual. Downloaded the latest update, synced modules, rebuilt, everything is go. Of course then there are lots of updates to do for Leopard, and after also installing some of my older Tiger bundled apps there were several rounds of updates to do before all was said and done, including updating to Leopard 10.5.2. Leopard is very cool in many ways.

In Leopard you don't have to burn a Windows driver disk because all the drivers and stuff are on the Leopard CD ready for installation to Windows. I did not use the Boot Camp setup assistant at all and I did not even print the installation instructions. But perhaps that's because I'd researched how to do the whole triple-boot thing fairly thoroughly beforehand.

From this point I rebooted with the Windows Vista Ultimate 32-bit CD in the drive, holding down C to boot from it. It worked fine. Went through the normal installation process. It reboots several times and the first time I wasn't sure if it wanted to reboot from the CD or the harddisk. Holding down the Option key when the Mac boots you get to choose between os's right off the bat in a nice little menu. It supports Windows and OSX and any CD's that may be present (maybe external drives too?). So it was easy to get it to boot into Windows from the harddrive. I also put in my product key as part of the Windows installation - I figured I wasn't going to bother waiting up to 30 days to fully authenticate it so why wait.

Once Vista was finished installing for the first time, I authenticated it which was pretty easy. I then put the Leopard CD in the drive and it started up automatically to let you run Setup.exe - this very easily ran a little piece of automatic software which proceeded to install all the extra Apple drivers and software, to let you use things like the iSight camera, the wireless card, special keys on the keyboard, etc, plus some neat little additions in various places in Vista to let you do things, like changing brightness/volume and so on.

After a reboot I went back into Vista and ran Windows Update. There were lots of updates to do (still not done Service pack 1 yet), involving several reboots. Once all updates were finally done (for a while), I also ran the Apple software update tool, which Apple kindly installed already. It ran just like on the Mac and proceeded to let me download Safari, iTunes and Quicktime. I know Vista has its own software in these areas but hey.

After that there really wasn't much else to set up in Vista besides installing BlitzMax. That was fairly easy, although I bodged up the environment variables for MingW the first time - I made a new variable instead of adding it to the path variable. Seems you gotta add C:\MingW\Bin to the PATH variable, and I also added C:\MingW\ to a new `MingW` variable. Not sure if it's needed. I couldn't get BlitzMax to compile at first, partly due to getting used to the whole security thing in Vista. I had to set the MaxIDE app to run in `Administrator` mode before it would let me do a SyncMods from the IDE, or get compilation working. Editing the Properties for MaxIDE lets you set it permanently as an admin app, but you still gotta remind Vista that you actually did want to run it - all that user protection stuff. Eventually, after some reboots and more updates and syncmods and recompiles and so on, everything in BlitzMax is up and running just fine. The steps to install it on the forums are pretty much all you need.

Having Vista and OSX on the same computer now, and being able to boot into each with the Option key held down at reboot, proved to be pretty easy and satisfying. And I used it that way for a while until I had opportunity (time) to do the next part.

The Apple boot menu doesn't support Linux, so various other steps are necessary, including a replacement for/addition to the boot menu, which comes in the form of the ReFit application, easily downloadable and simple to install on OSX. After installation it simply required a reboot, and on getting into the boot screen again I could see that the icon for OSX was relabelled as ReFit. Going into what looks like a `ReFit disk` gives you a full menu where you can choose to boot into various os's including Windows and Linux, plus from a CD, and there are also some other options there should you need them. At this point though I just wanted to know it would work okay, and it did - booted just fine into OSX or Vista.

The next step is Linux, and this was arguably the most difficult step, which is a little disappointing. Initially I had downloaded Kubuntu 7.10, which was recommended to me by my cousin. I wanted to be sure it would work correctly on my mac, given that maybe the hardware wouldn't work or something, so I burnt the iso image to a CD. Since it's a live CD you can just put it in the drive, reboot into it via the boot menu (standard or ReFit) and it should then boot up. However, it did not work for me. I got various different visual problems and absolutely no sign of a desktop other than a temporary mouse pointer. Depending on which other options or resolutions I tried, I would get different disasters after it attempted to boot up. It just didn't work with my machine. My cousin has a newer iMac, purchased only weeks ago, and apparently Kubuntu 7.10 booted up fine for him. I don't know why.

So anyway, while looking around online I noticed some people suggesting to try the 7.04 `Feisty Fawn` installation and to upgrade later to the latest version. I also wasn't entirely convinced that the KDE desktop was going to work out, so I went with standard Ubuntu 7.04 (missing the K) and burnt the iso image to a CD. It booted perfectly first time! I have no idea what the issue was with 7.10. I could only get Linux to boot into a 1024x768 resolution, though.

The next thing I tried, and this is really a sidestep, was to get Vista running on Parallels Desktop for Mac. This was actually much easier to achieve than I expected and I was very impressed with the benefits. It is so easy to use the `Spaces` in Leopard to switch between full-screen-mode desktops of Vista or OSX. It's also nice running Vista apps in native OSX windows. The integration is very good and Vista ran at normal (surprisingly quick) speed. Parallels 3 can use the existing Vista partition, which btw had to be reformatted to NTFS before Vista would install. I highly recommend Parallels for productivity purposes and ease of switching between OS's. It almost makes it feel like Windows is just another part of the Mac environment.

Now, where Linux is concerned, I did try to install the Kubuntu 7.10 disk image as a virtual machine in Parallels. It still did not work. But I figured the 7.04 was going to work perfectly okay, so I had to decide - did I want to run Linux only as a virtual machine in Parallels, or as a native o/s with its own partition? At the moment Parallels doesn't support using a Linux partition directly, so you have to make a virtual harddisk - but that would of course mean you can't boot into Linux directly after a reboot. I decided I wanted the `full` Linux experience, so I was going to go the full install route and think about how to virtualize it later. See, if I install Linux fully on a partition able to boot by itself Parallels cannot see it, and if I install Linux as a virtual machine in Parallels that means two copies of Linux - ie a Linux virtual installation in addition to the Linux partition. I am not sure how I feel about that, or whether to make a minimal virtual Linux installation just to be able to access the Linux partition from OSX. Reason being - Linux did not want to install to FAT32, so I had to reformat it to ext3, which OSX will not even mount.

So anyway, back to Linux: I put the live CD of Ubuntu 7.04 in and booted it up. Someone had said you needed to download and install the Linux debian package for ReFit in order to be able to use the partition table sync tool. Since I did not have an internet connection in Linux (wireless wasn't working) I downloaded it on OSX, stored it on the flash card of my digital camera and then transferred it to the Linux desktop via USB before installing the os. I found instructions on the web someplace, to open up a Terminal window and type in the syncing command but to not execute it until later. You're not actually installing ReFit again, you're just syncing the two types of partition tables (or something) so that everything works.

Then I started the Linux installation process. It's pretty easy to do. There are two parts that you have to get right. When asked about partitioning and whether to use guided partitioning, you must choose to go the `Manual` route, which lets you decide where Linux will be installed. It listed my partitions. You have to choose the partition you want and then set the properties for it, which entails saying that "/" will be installed to the Linux partition. Make sure you choose the Linux partition, which in my case is the one inbetween OSX and Windows and is #3 in the list, described as disk0s3 or sda3. I then chose a format for the partition - I wanted to install to FAT32 so that OSX could see the disk, but it would not let me, so I chose ext3 format. This then entails that the drive will be reformatted.

As you continue there is then the second important option - the `Advanced` button. You must click it to go in and decide where grub (the boot-loader) will be installed. Initially it says "(hd0)", and that word is actually a button. You click on it and remove the (hd0) text, then press Ok. You go back to the previous screen and now the (hd0) button has been replaced with a "/dev/" button. You click the button again and add sda3 to the end of it, so it reads /dev/sda3 - this refers to the third partition on the harddrive. When you click okay it should now show a button with /dev/sda3 on it. You can then proceed - this is the partition to which the grub boot loader will be installed. You do NOT want to install it to the standard (hd0) because that will overwrite your EFI boot loader and may stop your computer booting altogether! I am not sure if that's true, but I did not want to risk it, so I was very careful to make sure it was going to use the /dev/sda3 path.

Finally you can carry on with the installation, and everything was fairly simple from there, but you have to watch for one thing - the percentage of progress. After the 22% mark, the installer has finished copying some files and you can now go back to the Terminal and press `Return` to execute the partition table resync tool which came with ReFit. This must run no sooner than the 22% install mark, otherwise the installer will overwrite the changes. I pressed return somewhere around the 25% mark just to be sure. You have to press it prior to the 92% mark, apparently, but I was well within that ;-) The installation finishes and hey presto. I rebooted, went into the ReFit menu, chose the Linux partition and it booted up. :-)

I still only got a 1024x768 screen, had no wireless network, had unaccelerated OpenGL and no BlitzMax. Avoiding the wireless temporarily I plugged in a normal Ethernet cable which connected to my Router. I didn't have to do anything other than go up to the little icon on the title bar and choose the `wired ethernet` connection. It connected. I didn't have to type in any DNS settings or anything. I was online! Firefox is included so online status is easy to test. The next desired step was to get the ATI hardware acceleration working, and clearly it was running in software mode as seen by testing some Screensavers.

For me I have an ATI Radeon X1600 card with 128MB Video Ram. It is Shader Model 3 compliant and has all the necessary gubbins for Vista aero to work - oh, and btw, Parallels currently cannot use Aero while running Vista, you have to use the basic interface, but it's ok. Getting the ATI card working fully was easily done by installing the 3 packages mentioned in a couple of threads on the forums here, specific to ATI cards. After installing and rebooting I not only had hardware accelerated OpenGL but also a full plethora of screen resolutions including my native display res of 1440x900. I downloaded all the packages needed for BlitzMax, also outlined clearly in the Ubuntu installer guides on these forums, and installed and upgraded BlitzMax. It had no problem syncing mods or rebuilding them or compiling apps, although it does show a few warnings from time to time. This doesn't appear to stop compilation or to prevent the compiled app working right. It was nice to see OpenGL hardware accelerated from within BlitzMax.

The last hurdle in Linux is getting the wireless networking to work. My card is not supported by default and I'm trying ndiswrapper, but I have not been able to get it to work yet - it seems quite a complex process and I did not see the results which were explained in the various places online. So for now my Linux is wired-ethernet only and I've given up on wireless. Other than that, and after doing numerous updates for Ubuntu, I decided not to upgrade to 7.10 yet. I want to see a full desktop running normally from a live CD before I will install or upgrade the OS version, and apparently Ubuntu 8 is coming out soon so I will wait for it.

All told, the OSX, Vista and Parallels parts of all this were much easier and nicer and cooler than I expected. The Linux part is cool too, but it's just a pity the whole thing wasn't quite as reliable or easy or well-supported as the other two major os's. But still, now I have a triple-booting iMac running BlitzMax on all three platforms, so I can begin to generate software products for all three from a single computer. And should I need to do a PPC Mac executable I can use my wife's machine and then do the whole universal binary thing. Saying that, though, while universal binaries are easy to use I am not entirely fond of having to download bigger executables than necessary - I'd rather have a single native executable for the CPU I'm using.

So anyway, there you have it. This is all quite fun to do and there were surprisingly few if any hair-raising moments. It is cool to bring in blitz code and compile it on each platform and see it do exactly the same thing, and at a similar speed. I haven't tested anything like frame-rates or whatever yet, but that will come. OpenGL is working on all three platforms with hardware acceleration which was one of my main concerns. I think it's beneficial to be able to see the speed/functional/operational differences of the same software on the same *hardware* under different OS's, because then you can really be sure of which platforms are doing better than others, or what their strong points are, knowing that the hardware is not a variable. I'm excited to start some cross-platform development now that I have the hardware and software to support it.

Of course I went back and downloaded many Blitz Showcase entries, having not been able to see many of the Windows-only apps before. There is some interesting stuff out there, and in greater number than on the Mac side. I have not seen anything for Linux yet, but maybe I didn't look hard enough.

The only thing I am not sure about yet is how to access the various partitions fully from each o/s. Windows seems able to read and write to OSX okay but not to Linux. OSX seems to be able to read the Windows disk but not write to it, and does not see Linux at all. And Linux can see both others but as read-only. So it's really a matter of deciding whether to copy stuff `to` a disk or to read it `from` a disk, depending on what o/s you're in. But I'm sure there's some utilities or drivers out there to make this more seamless, and Parallels has a tool to transfer files between os's. It's all good.

So the big question is, which of the three os's do I like the most? ;-) .... Mac OSX of course! Windows Aero is really quite nice but very copy-cat-not-done-as-well. Linux is of course the young newbie on the scene and I still haven't tried the new KDE4 desktop, but I am sure OSX will stay the best for the foreseeable future. Leopard rocks!!!

Now to get back to writing that next great game.

Check out my Game Development Blog -ImaginaryHuman-

Holographic Linked Lists revisited (and networking)(Posted 2008-02-09)
The natural state of a networked system is that there is no perceivable network at all. It is natural for the user to experience the `whole` system as One system, the same One system as everyone else is experiencing. This means the system is fully open, unified and immediately accessible from any part. The availability of shared data should be instant and direct as if all data is present everywhere. When the user can experience the system in this condition there are no divisions or separations causing an interruption or delay. Normal functioning should be the experience of being connected, not interrupted.

It is only when an interruption is introduced that you begin to form the idea of there being a network. Such interruptions are in the form of separation between addressable memory space, lost connectivity, the appearance of a delay, the seeming need for `transmission` from a separate source to a separate destination, data isolated to one part of the system and not available to all, users being disconnected, etc. The purpose of `networking` is then an attempt to correct or gloss-over these disruptions.

It is quite difficult to create a system where the users' experiences are completely equal and shared when the computer systems and conditions foster separation more than togetherness. How do you get separate pieces of hardware, where the memory spaces are completely cut off from each other, and which are joined only by broken threads of delayed communication, to act as though they share everything immediately? It's a tall order, if not impossible. Most things in this world take some amount of time, no matter how small, which unfortunately means an ideal unified platform of oneness is only `almost` achievable.

My vision for the future of computing comprises computers which are able to plug into each other at the hardware level. Just by connecting them, their full memory spaces and processing capabilities become 100% shared, completely transparent to all software. The CPU's/GPU's would be able to access each other's memory and peripherals directly, as if they were directly plugged into that machine. I thought perhaps the Cell processor was going to achieve such connectivity but it's not quite there. Shared GPU's like SLI and so on are a step in this direction, although still limited to the same computer at the moment.

If such a unified machine existed it would be a simple matter of writing an application for parallel processing with multiple threads and then letting the hardware or o/s deal with distribution among the super-hardware. And then perhaps there wouldn't be a need for moving the unification into a software implementation. But at the moment we still live mostly in a world of separations where hardware is quite isolated from other hardware and only barely unified through a very weak communication channel (internet).

Ideally communication channels would mean that every computer has full immediate access to every piece of memory on every connected computer, as if all memory is just one big equally-accessible pool. But instead what we deal with is a very closed-down narrow `connection` over which only a relatively tiny amount of bandwidth can pass and not in parallel. The difference between total openness and almost complete closure is vast.

Because of this, various strategies are needed to `pretend` that the hardware is unified, and that means understanding where the unification breaks down and how to overcome that. The fact is, complete unification is impossible, so at best we're left with trying to create an illusion of unification where it cannot be. Most people will observe that unification exists if there is only a little stretching of the truth required to believe it, so that's what we have to aim for - the *appearance of* unification.

To achieve a unified-appearing system while still working with separate systems means that we have to accept some limitations. For one, CPUs can't see each other's memory directly. Secondly, we can only work with the networking connections that we're provided, no matter how they perform. Thirdly, objects residing in memory on one computer can only be thought of as locatable globally by adding a layer of abstraction.

If, in the bigger picture, the `real location` of an object is `everywhere`, that basically means `somewhere within the total unified memory space`. Since the total memory space is fragmented by computer boundaries, it translates to mean objects are located on a remote machine in a remote memory space. So every object in the system has to be wrapped in some kind of layer of abstraction to allow it to be referenced regardless of whether it is within the confines of local memory space. You can't just say `do this with object A`, you have to say `do this with the object located at A`, and getting to that location may entail going over the computer's boundary (via the network).

This is where an object-wrapper comes in. In my case, I named it the Holographic Linked List, given that I wanted it to `appear` as though every object can exist everywhere, and that every object can be related to all other objects.

Going back to my design for this wrapper/abstraction layer, I have seen a number of aspects of it which can be improved. I used to think that each object must contain a link to another object-wrapper-list, but as it turns out all that's really required is for the linked list to allow parts of itself to point to other parts of itself in a circular fashion. In other words, all you really deal with is a single flat equal set of objects, and some of them happen to be connected to others. By connecting objects to a range of other objects, by remote references, I can build whatever structures I like, be they hierarchical trees or two-way dependency-webs or whatever.

The holographic `link` object is basically a link in a chain and contains an array of objects. I decided this is preferable to making every object a `link`, for purposes of both speed and memory consumption. Of course, when you use arrays you begin to lose the flexibility of linked lists for adding and removing objects. Also, if you define an array of a given size it is time consuming to resize it. I have now come up with a novel way to address this, as a side effect of the now flat relationship structures.

Basically since an object can point to a range of objects, you can now make some objects be `indirect` pointers to other ranges of objects. This means you can now have an object pointing to what is basically a list of ranges, and each range in that list can be portions of arrays within different links. So it's now possible through this abstraction to get a single object to relate to any number of other objects regardless of storage space, removing the need to ever resize an array. The list of ranges basically describes a mapping of chunks of object sequences scattered all over the place, as if they were contiguous. And the cool thing is each chunk could be stored locally or on a remote machine.

I realized also that I needed to add a global ID of some kind to the references to links themselves, so that you can refer to a link object on a remote computer. So now, not only does each object have a global ID (as a combination of an `interior` and an `exterior` ID), but the links can also be uniquely identified. This lays the foundation for the abstraction needed to represent objects in a unified system. Each object stores a BlitzMax custom type, so to access a type instance you now have to first consider where it is located and what its ID is. Naturally this isn't ideal but it is necessary in order to provide this abstraction. Also, types have to be converted to a single abstract object type in order to be stored in the structure, and thus cast back to a given type before being useable. But I think I will also allow direct references of assumed type where possible - ie for objects within the local memory space.
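
As a rough sketch of where this is heading, the wrapper might look something like the following in BlitzMax (all names here are my own placeholders, not final code):

    Type TRange
        Field linkID:Int      ' global ID of the link whose array holds this chunk (may be remote)
        Field first:Int       ' index of the first object within that link's array
        Field count:Int       ' how many consecutive objects the chunk covers
    End Type

    Type THoloLink
        Field exteriorID:Int      ' identifies the machine/memory space the link lives in
        Field interiorID:Int      ' unique ID within that memory space
        Field objects:Object[]    ' compact array of stored objects, cast back to concrete types on use
        Field ranges:TList        ' TRange entries mapping scattered chunks as one virtual sequence
    End Type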

So what this comes down to now is I have a way of referring to objects which locates them anywhere, on any computer. This is along the lines of `object sharing` as seen in BlitzMax's GNet system, where you basically try to think of an object as synchronized with all other instances of that object on other computers. In each application you just access the object within a `slot` and the system automatically synchronizes these slots across computers, using the network. However, where I differ is that I don't just offer a limited number of fixed slots of fixed object types, I allow *all* objects of all types to be shared.

I think I will probably use proxy objects or `ghost` shells of objects on each remote computer representing all objects everywhere. This will be a `view` of the objects so multiple views are quite possible. You will be able to create whatever groups or views of objects you like, subjectively. This means that no object is inherently a part of any fixed group. If these objects represent files then any number of custom views of groups of files is possible. This then begins to amount to a fully dynamic networked filesystem, abstractable and viewable in whatever arrangement you like.

Some o/s's have approached networked filesystems by saying that every object is a file. Effectively what they're saying is that every object is *equal*. An object can be a file or a piece of information or an interface or a list or whatever. You could choose to show this as a set of files, with folders for hierarchies, or you could show it in some other way. After all, what is a file view but a bunch of organized objects representing something you can't directly see? You could make all objects look like files, but you could also make them look like anything else.

Once we're thinking about a filesystem or, basically, a way of referencing objects, then we start to get into some of the deeper problems. How do you practically make every object available to every computer? How do you transfer objects which aren't currently everywhere? How do you deal with the seeming possibility that one user might make changes to an object locally which isn't being changed equally on all systems? How do you actually make all objects persistent? How do you get the system to avoid delays at all times?

One of the problems - a user editing an object locally - is really the result of how you look at what they're doing. By thinking that they are able to make local changes in isolation you are basically demanding that the objects are separated from all other users. This is not good because it implies a denial of access plus conflicting separate versions of objects. This whole issue would be solved if we just consider that no user can make isolated changes.

That must mean that while one user is changing any object, any other user must be able to change the same object at the same time, and any changes being made must be transmitted to all users. This is a real juggle to do in realtime even if we set up one machine as an authoritative server. And to make such sharing possible we have to build it in to every object right from the start. It can't just apply to the `filesystem`, it has to apply everywhere in the system, all the way from object management to how programs/scripts are run. After all, unified means `it applies everywhere`.

I envision, for example, a text file where multiple users are making changes to it at the same time and are seeing each others changes in realtime. Orchestrating all of these simultaneous changes, considering network outage and lag, is going to be an interesting challenge and food for future thought.

For now I have to focus on rewriting my holographic link module and integrating it with a low-level UDP-based networking layer. I've recently been learning and researching a lot about networking and I can see that an abstraction layer will be needed on top of UDP. I am just not sure yet whether I will need this layer to operate beneath the object system or alongside it. Most likely it will be completely integrated into each object, whereby each object has all the necessary data to manage its own synchronization and connection independent of all other objects.
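
To make that concrete, the kind of per-object state I have in mind would be something like this (a speculative sketch, not the final design):

    Type TObjectSync
        Field lastSentSeq:Int    ' sequence number of the last update broadcast
        Field lastAckedSeq:Int   ' highest update acknowledged by peers
        Field dirty:Int          ' set when the object changes locally
        Field peers:TList        ' addresses of machines holding replicas of this object
    End Type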

Eventually this will allow for a distributed per-object server, whereby each object individually can act like a network server and/or client. Building this definition into the object, rather than tying it to the boundary of the computer system, means that any part of a system can become a server or client. Any group of objects together can represent a server. Any user, any objects, any groups, any hierarchy or web, any collection of machines, and any individual machine. This `distributed` unified server means that any object can delegate operations to any other object anywhere, sharing the workload and working together. This means every object and every user on every machine can now do a little piece of the total server activities, removing the need for a dedicated server machine.

This is akin to the `host` system where each computer can potentially host a game or whatever, except that it is not isolated to one computer. Instead *everyone* hosts the game and everyone shares in sustaining consistency and persistence. How exactly we will go about creating persistency and consistency when there is no central authoritative server is going to be an interesting challenge and also food for future thought, especially with regard to deterministic physics simulations and multiplayer gaming. Perhaps a form of `balancing state` might work.

Also, if at the beginning of a game, lots of game state needs to be sent to a new player, it is not sent from a single source, but instead from multiple partners, each contributing a small amount of bandwidth. Equal teamwork. One person identifies that data needs to be sent and notifies others of their share of the workload via an automatic distribution system. Of course this will then adapt well to the given performance capabilities of each machine dynamically.

So for now, back to the excitement of coding this puppy.

Check out my Game Development Blog -ImaginaryHuman-

Compression, and networking(Posted 2008-01-21)
I recently became quite distracted trying to figure out some kind of new super-cool way of compressing data without losing any `quality`. Of course this problem has been tackled extensively by some very clever people and I really wasn't able to come up with anything that hadn't been done already. But it was fun for a while, and good research to learn from.

I understand how to do things like build a dictionary of popular `words` and the whole huffman-tree encoding using shorter bit patterns for more common words, etc, but beyond applying filters to the input data I couldn't come up with anything new or groundbreaking. I did find that converting images to separate bitplanes, and also splitting out the color components into separate channels, helps compression a little in *most* cases, but not all, and only by a few percent.

I only got onto the compression bandwagon after some good time spent designing a networked filesystem, and trying to consider how to make network traffic and file storage more efficient (this networked filesystem is the next step in my main project). But as some of you will know from personal experience, compression is quite a fascinating aspect of programming and much time can be spent with interesting ideas which achieve nothing at all ;-)

It seems that the holy grail of compressing data is being able to compress random data. I suppose an example would be if you had 256 bytes and every single byte was a different value from all the others, and they were all scattered `randomly` throughout. This would presumably present almost no apparent patterns or sequences or `runs` that could be exploited for compression. In fact if you try using BlitzMax's `compress` on a bank of data generated by Rand(0,255) it basically gives up and does not compress the data at all.
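
You can see why without even involving zlib: random bytes simply contain almost no exploitable runs. A throwaway test, nothing more than an illustration:

    SeedRnd MilliSecs()
    Local data:Int[256]
    For Local i:Int = 0 Until 256
        data[i] = Rand(0,255)
    Next
    Local repeats:Int = 0
    For Local i:Int = 1 Until 256
        If data[i] = data[i-1] Then repeats :+ 1
    Next
    Print "adjacent repeats: " + repeats + " of 255 (expect about 1)"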

Now, of course, if you knew the random number seed which BlitzMax's random number generator used to produce the sequence, you could represent the entire data as a single integer, yielding huge compression. But that kind of knowledge is usually unavailable. You see, the main problem with trying to compress something is that by the time you get hold of the data you have already lost all track of where it came from or what generated it. You're more or less working blindly with an entirely isolated finite amount of random data which has no apparent meaning whatsoever. It's been totally cut off from any bigger meaning or greater context and you might not even know what it's supposed to represent. This makes compressing it extremely difficult.

So before you even start out trying to compress something, you may already have been separated from the data's place in the larger picture and what it `means`, and be left with something which just seems meaningless and unknown. Trying to compress an unknown then entails either processing it in a way that makes its meaning irrelevant, ie working on the raw binary data, or trying to figure out what parts of the data mean something. It's something of a hit-and-miss affair.

Ideally, I philosophically deduced, the most compressible data is data which has the most pure consistency. A screen of all-black pixels, for example. Every pixel is like all the others and is easily compressed. The problem comes when you start to get fluctuations and differences. This ultimately leads to the worst case scenario where, for example, every single value in the data is wildly different from all other values and seems to have nothing in common with them, either in terms of its value or its placement within the data or its relationship to surrounding data. This is why compression of `truly random` data is considered the pinnacle of achievement, and thought impossible.
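
For illustration, here is about the simplest possible run-length encoder (a sketch only). Feed it a screen of identical values, like the all-black example above, and the output collapses to a single (count, value) pair; feed it wildly varying values and the output actually doubles in size:

    Function RleEncode:Int[](src:Int[])
        Local out:Int[] = New Int[src.length*2]   ' worst case: every value differs
        Local n:Int = 0
        Local i:Int = 0
        While i < src.length
            Local run:Int = 1
            While i+run < src.length And src[i+run] = src[i]
                run :+ 1
            Wend
            out[n] = run        ' store the pair (count, value)
            out[n+1] = src[i]
            n :+ 2
            i :+ run
        Wend
        Return out[..n]   ' slice away the unused tail
    End Function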

Here is my take on that supposed impossibility. There is *no such thing* as `pure randomness`. To me, for something to be purely random, every single piece of data must have absolutely nothing in common with any other piece of data. They wouldn't even be represented by the same number system. They wouldn't even exist in the same reality! I find this to be absolutely impossible. So what this means is that you can't compress truly random data because there is no such thing as truly random data.

All of the data in the world is not, actually, purely random. It is `highly` random but not absolutely random. There MUST be *some* meaning to it, however obscure. There must be a thread of thought running through everything in the universe which ties it however loosely to everything else, meaning that its randomness is only an appearance disguising a deeper unity. As such any data that the world seems to have produced MUST be compressible. It is compressible because it is not quite random. Even data which by most standards is considered random-as-hell is not quite so. There is still going to be something about it which, if discovered and exploited, can result in compression.

I know there are numerous arguments about how it is mathematically impossible to compress random data, and all that. But like I said there is no data we would ever consider which is truly random, so the math doesn't apply.

The only issue remaining then is either knowing or finding out what system of thought generated the source of this data, what it means, what is the key to it, what common thread runs through it and all other data, and thus how it can be compressed. But unfortunately that is a very, very advanced requirement and not something I think will ever likely be achieved. Being able to look at data and perceive the meaning and unity in it despite its appearance would be bordering on the spiritual.

Perception is everything. If you could find a way to `perceive correctly`, all data would be seen to have no variation at all. This masterful form of perception would let you look at a file of data and such mastery would completely `compress` the differences in the file down to nothing. What appears random would be seen as not random at all. You would be able to see it as pure and consistent, to such a degree that the differences in the data would disappear. The data itself would disappear! But this really is what we're trying to do when we try to compress something.

So I thought, perhaps pre-processing of the data is the area where the real compression could be achieved. If you could make the data `more compressible` with techniques such as reversible sorts or the filters that PNG uses, or whatever, then compression would be much more effective. You could even take it to the extreme and say that this `changing the data to appear consistent` is the ONLY thing you really need to do, because if you could find a way to do it perfectly you wouldn't need any kind of compression algorithm at all. But I don't think this is something you can achieve with a computer and especially not in binary, or even with any kind of machine that is of this world. It would require moving deeper into the hologram of reality, beyond the world, which is where the answer to everything always is.
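
On the practical side of that idea, here is a reversible delta filter along the lines of PNG's `Sub` filter (my own sketch of the technique): smooth data turns into long runs of near-zero values, which a conventional compressor then eats up:

    Function FilterSub:Int[](src:Int[])
        Local out:Int[] = New Int[src.length]
        Local prev:Int = 0
        For Local i:Int = 0 Until src.length
            out[i] = (src[i] - prev) & 255   ' store the difference from the previous byte
            prev = src[i]
        Next
        Return out   ' reversible: add each value back onto a running total to decode
    End Function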

So anyway. I took a trip down this road for a while, wondering what so many others have wondered, insisting to myself that there must be a way, but the ways are always limited. When the way is achieved the application of it becomes unnecessary. So I gave up :-D

I think one avenue which has some potential merit is where it is your own application which creates the data you're trying to compress. When you're not separated from the meaning and context of the data, you can take advantage of runtime-generated data to implement more appropriate compression schemes. For example you could generate procedural images, or structured vector graphics, or you might know something about what kind of values are going to show up in the data and in what kind of patterns, and be able to craft a compressor especially for that purpose. At least then you have a chance to compress using more specific algorithms. This is more towards the direction I am going to take.

Ultimately everything you *experience* is `live`, and ideally all software would be generating all data and images live. There would be no need to store any files or to consider compressing them. If you could truly - procedurally, computationally, mathematically, technologically - produce the same data immediately and in realtime within the computer, live, then you'd never need to even think about compression at all. Time spent trying to compress `old` data is really time spent in an old mindset. Realtime generation is the key to overcoming compression limitations.
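
The random-seed example from earlier shows this principle in miniature - if the generator is known, one integer stands in for the whole block (illustrative only):

    SeedRnd 12345   ' this single integer is, in effect, the entire `compressed file`
    Local block:Int[4096]
    For Local i:Int = 0 Until 4096
        block[i] = Rand(0,255)   ' anyone holding the seed regenerates identical data
    Next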

And now for something completely different.

What I'd ultimately like to do is create a system which allows any number of pieces of computer hardware to be connected together as a hardware/resource foundation, but to then tie it all together at a higher software level to make a single completely unified operating environment. The underlying hardware landscape would be virtualized into multiple `views` and all users would experience the same full environment completely transparently.

Basically this entails removing all of the apparent hardware barriers and creating a single workspace accessible at all computers. I suppose it would be like a networked operating system. It would start out being *completely* open, whereby anyone has full access to anything anywhere in the connected system. Everyone would see the full entire file system and would have access to all computers and all runtime functionality. The hardware separations would be totally removed, resulting in a seamless unified environment.

This environment would then provide the opportunity for teams to work on the same data at the same time across the world, to develop software/games etc. They would be able to share files, of course, but more importantly share in a realtime shared-computing environment where objects and data can be worked on simultaneously. Of course the same networking system will then be the foundation for the rest of the engine, and the engine would then be like a client and a server rolled into one.

The same networking system would be the foundation for other uses, such as to easily build networked games and applications, to support online high-score tables, runtime download of game levels/data, etc. Pretty exciting. But what will be even more exciting will be running software on the system (in my virtual machine) so that all of the computers can act like a single computer, users can do `remote procedure calls` to other machines, and it will generally act as though everyone's using the same single piece of hardware.

Of course, that said, now I have to finish the design and move toward coding it. I will focus on this for a while and then when it is in place I will move toward implementing a main-memory cpu-based bitmap system, on top of which will be the texture system, buffer system and graphics systems.

I also think that with this kind of open networked system it would be very easy to run multiple instances of the software on a computer which has multiple cores or CPU's. Each instance would be a normal separate process and the system would see each instance as just another view of the whole network, regardless of the fact that it's on the same computer.

It would then be possible to distribute processing to the other instances of the software on the same computer. This would mean that the o/s would start to make use of the other CPU's/cores. You wouldn't need to bother trying to figure out how to do threading or sharing of data between processes or interprocess communication because the whole system is the abstraction needed to make them work together.

You could then be running an application which is actually using the CPU power of a second, third or fourth core on another person's computer. Those of you with multi-core CPU's know well that usually you can't distribute the running of single-thread applications across the cores, effectively halving or quartering the overall CPU power accessible to the application. By allowing the environment to run multiple times on a computer as different processes, and using my own unified virtual-machine software system, I can make full use of all available processing power.

As always, there's more work to be done.

Check out my Game Development Blog -ImaginaryHuman-

Welcome, 2008!(Posted 2008-01-02)
Okay so it's New Year's Day 2008. Happy New Year everyone.

I had to make a worklog entry today, because the temporary unavailability of the forums is driving me nuts. Ok ok, so I'll be patient a little longer. ;-)

I am still working on a particle system. I've put together about half of a demo of the system I've designed, to the point where I think it will work well even though it does not execute yet, so I think it's time to jump to making it a proper module to accompany the others I've made. It's based on a bounding volume hierarchy containing sorted lists of boxes (loose grids) and per-box hierarchical state. The structure maps well to aspects of a game. I have made a quick demo with thousands of medium-sized objects flying around at high speed, but I'm sure there will be things to tweak as I go.

One thing I am surprised about and not sure about is why various other BlitzMax games experience so much slowdown with a relatively small number of objects. My collision detection system is very efficient, so I wonder if some games are doing a lot of brute-force work. I took a look at the source code of one game and it looked like it was testing every object against every other. My mid-range graphics card is quite capable of drawing thousands of objects smoothly, so maybe there are other issues yet to discover.

I went back and looked at some of my other modules as well and I am glad to see that an overall `engine` is starting to take shape now. I am redesigning the texture system to take advantage of hierarchical state and also to support all of the spooling/caching/tilemap/dynamic-texture-space features that I envision. It's a relatively small rewrite of that module. This and some of my other modules need some higher level functionality to be written.

I made an OpenGL-based 2D graphics module which replaces most of Max2D and adds significant functionality including use of stencils, lots of drawing functions, efficient grabbing of textures, textured polygons, setting of blend modes etc but it does need a few tweaks and also some further additions in order to make it truly useful. It is much easier to write a framework around OpenGL than to write the low-level per-pixel math operations of the old school days. I also at some point have to make this functionality available to the multitasking script system and then I can begin to build games with it.

That said, my texture system ties in with a custom main-memory-based bitmap system and I am experimenting with necessary CPU rendering, not so much to render and then display in realtime but to generate game graphics and to edit levels/graphics etc at runtime. It is my aim in future that a game could contain its own graphics editor so it'll need to be able to draw to bitmaps as well as to textures and the backbuffer, all in one unified system. Writing bitmap rendering code is time consuming and detailed but there definitely is no easy draw-to-a-pixmap functionality in BlitzMax so I have to write my own.
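
Everything there starts from per-pixel writes. Even a plain horizontal line has to be plotted point by point (a trivial sketch using BlitzMax's pixmap functions):

    Local pix:TPixmap = CreatePixmap(256, 256, PF_RGBA8888)
    For Local x:Int = 0 Until 256
        WritePixel pix, x, 128, $FFFFFFFF   ' one white pixel at a time
    Next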

I noticed that most of my modules so far are `mostly written`. They all need some extra high level stuff and they all have a few obscure features which are not quite working. They also are not quite all integrated with each other yet. That will come. The script system needs developing more to provide more of a `language` besides the functionality I've written so far.

Sometimes I am not sure if I am taking a shortcut towards creating a game rather than creating the very open freedom-oriented system that I visualize, but I think that even if I take shortcuts to begin with I can always keep expanding the system later to create a general software platform.

Anyway, I don't really have much news, but I just wanted to write something. I am making good progress. Some of this code is very difficult to write but I'm very glad when it works.

Check out my Game Development Blog -ImaginaryHuman-

Designing the ultimate particle system(Posted 2007-12-19)
Since I last wrote I've been working hard to continue developing the ultimate particle system. I made myself a demo of a bounding interval hierarchy to see what kind of processing is involved and how it would work. I have it sort of working, but a little buggy. That said, having created that demo I did not feel comfortable with how much processing is required to update the hierarchy, even with the use of clever efficient updates. It also didn't seem to map well to a relatively flat 2D world without getting into lots of levels of division. The idea of making a node for every pair of objects just seemed like too much overhead.

Since I know that the use to which it will be put has certain design-time known characteristics, I can make use of that foreknowledge to build a system which will take advantage of specialized situations, while at the same time being flexible enough not to hinder freedom.

So I've come up with a design which is based on a combination of bounding volume hierarchies and sorted box lists. Essentially it's a bounding volume tree, but it can have multiple branches at each node. Or to put it another way, each bounding volume is a hologram: many parts contained within one whole. Ok, so maybe calling it a hologram is a slight stretch, but the basic idea is that any whole (bounding box) can contain many parts (child boxes).

I've been polishing the design to get it into the most efficient form possible, and to also really thoroughly understand what needs to be done and how. I now have a totally array-based design which is both flexible and efficient. I've started to code a demo to test it for performance. I am expecting it to provide better performance than a grid-based approach, at the very least.

I decided NOT to make it `object-oriented` by the use of custom types, although doing so would be pretty simple. I did not like the idea of thousands of memory allocations/deallocations or of the inefficiency and overhead of jumping around in memory finding scattered object data. It is better to keep the same type of data in a compact sequence and to use a pool of preallocated memory to avoid lots of changes. The array-based approach is much more streamlined and tuned for efficiency. It has effectively most of the same flexibility as a linked list type-based system but without the custom types acting as the framework.
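
In outline the storage looks something like this (a much-reduced sketch; the real thing tracks many more attributes): parallel preallocated arrays, a swap-with-last delete, and no allocations at runtime:

    Const MAX_PARTICLES:Int = 100000
    Global px:Float[MAX_PARTICLES], py:Float[MAX_PARTICLES]
    Global vx:Float[MAX_PARTICLES], vy:Float[MAX_PARTICLES]
    Global alive:Int = 0

    Function Spawn:Int(x:Float, y:Float, dx:Float, dy:Float)
        If alive = MAX_PARTICLES Then Return -1
        px[alive] = x ; py[alive] = y ; vx[alive] = dx ; vy[alive] = dy
        alive :+ 1
        Return alive - 1
    End Function

    Function Kill(i:Int)
        alive :- 1   ' swap-with-last keeps the arrays compact
        px[i] = px[alive] ; py[i] = py[alive]
        vx[i] = vx[alive] ; vy[i] = vy[alive]
    End Function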

I do my own tracking of allocated space and have devised a two-dimensional space allocation which completely overcomes any wastage of space due to needing to allocate an area which is larger than an available area. The only fragmentation I have to deal with is when multi-section box groups are stored non-sequentially, requiring an occasional low-overhead re-optimization - which may not even prove to be necessary.

When I have my demo up and running (and I am not distracted with 256 or 512-byte coding competitions), I should be able to begin to create a sort of `module` as part of the overall engine. It would handle the definition and management of particle systems and would also provide coarse-grained collision detection.

Once this is all working I need to look into efficient per-pixel image-based collision detection, to really perfect the collision accuracy at a low level once the bounding box tree has determined potentially colliding pairs. I have a few ideas for how to represent an object's contour and find collisions. As you may know, an important thing about colliding is not only knowing that something collided, thus requiring a response, but also gleaning some kind of physics data - how much force is in the collision, how deeply it penetrates, etc - so that's something I'll have to consider, ie how to extract useful information from a collision scenario which can provide gameplay options.

Back to it, then.

Check out my Game Development Blog -ImaginaryHuman-

A persistent world, fast(Posted 2007-12-01)
This past month I have been conducting a great deal of research into a few related areas - occlusion culling, collision detection, storage of objects for efficient animation, and storage of objects for efficient rendering. Objects, whether they are a part of a game or anything else, all basically comprise a large `system` which you might refer to as a particle system. So a game is basically one big advanced particle system. Figuring out how to handle all those particles in realtime at a very high speed is quite a challenge.

My goal for a good particle system is that on a fairly average computer it will perform so well that it can handle tens of thousands of particles with a smooth framerate. The kind of games I am thinking to make will include large numbers of individual objects moving around *pervasively* in a fairly large play area many times the size of the screen. They all have to be animated and all logic performed in full, every frame, to provide a truly consistent experience. That means that we're not only dealing with getting one screen full of graphical wonderment, it has to be many screens worth at the same time. Clearly, finding or designing very efficient subsystems is a high priority.

I would ambitiously like to aim for being able to handle around 100,000 active particles. That's a lot! But with that many particles at a good speed I can create some very cool graphics.

Obviously just throwing that many objects at the computer and asking it to handle them all is probably going to push it way past its limits. In fact, as a kind of worst-case scenario, I created a self-collision test (every object can collide with every other object) and it can barely run at 60fps with 1000 objects. With 10,000 objects there are roughly 50 million object pairs to test every frame, which throws the framerate way down to below 1fps. Obviously that's not going to cut it for a large number of objects and lots of stuff going on at once.
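
For reference, the brute-force version boils down to a doubly nested loop like this (a simplified stand-in for my actual test, using box overlap as the check):

    Const N:Int = 10000
    Const SIZE:Float = 8.0
    Local bx:Float[N], by:Float[N]
    For Local i:Int = 0 Until N
        bx[i] = Rnd(0, 2000) ; by[i] = Rnd(0, 2000)
    Next
    Local hits:Int = 0
    For Local i:Int = 0 Until N-1
        For Local j:Int = i+1 Until N   ' ~N*(N-1)/2 = ~50 million tests
            If Abs(bx[i]-bx[j]) < SIZE And Abs(by[i]-by[j]) < SIZE Then hits :+ 1
        Next
    Next
    Print "overlapping pairs: " + hits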

So this is where I have to get clever and either leverage existing ideas or come up with something new, or a hybrid. Nobody in their right mind would compare every single object to every other object just to do large sets of collisions, and nobody would test every single object to see if it is on-screen in order to do occlusion culling. It is not enough to just draw every object and hope that the graphics card will speed things up for the objects which are off-screen. It is still very slow.

I've been looking into all kinds of interesting approaches - hash tables of object positions, various kinds of spatial partitioning, various kinds of trees, bounding volumes, bounding intervals, different kinds of bounding objects, grids within grids, voronoi diagrams, and also a quite funky dynamic web of local-object interconnections which I made up (but which is a little on the slow side and no good at some important tasks like culling). Having gathered up all of the best of the best - and ignoring the stuff that's too heavy on the advanced math - I have a pretty good idea now of how the system will work.

I like the bounding interval hierarchy but I haven't coded a test demo yet. I have coded a test demo of the grid method, which is way faster than the brute-force method but still too inefficient. While it can do collision detection for 10,000 objects at 280fps, plus fairly efficient render-culling, that would be down to 60fps at 40,000 objects, not to mention time for things like physics or animation or rendering. I will next be coding some kind of tree approach, like a bounding interval hierarchy of sorts.
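
The grid method in my demo amounts to bucketing objects by cell and only testing neighbours, along these lines (a simplified sketch with made-up sizes):

    Const CELL:Int = 64    ' cell size in pixels, tuned to typical object size
    Const COLS:Int = 32
    Const ROWS:Int = 32
    Global cells:TList[] = New TList[COLS*ROWS]
    For Local i:Int = 0 Until COLS*ROWS
        cells[i] = New TList
    Next

    Function CellFor:Int(x:Float, y:Float)
        Return (Int(x)/CELL) + (Int(y)/CELL)*COLS
    End Function

    ' Each object registers in the cell returned by CellFor; a collision query
    ' then only tests objects in the same cell and its eight neighbours.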

Researching all of this stuff, it is evident that the area of static scenery has been explored in great depth. This is not the direction I am heading. I desire fully dynamic environments, and that can be a problem. For example a binary tree is great for some kind of indoor building with fixed walls, but not very good if those walls were changing and moving their positions drastically. The entire tree would have to be recalculated, which is unacceptable. I am looking to adopt some techniques which make use of temporal and spatial coherence, ie doing as little as necessary in the current frame by reusing results from the previous frame, and taking advantage of the fact that from frame to frame not a whole lot changes.

I'm pulling out all the stops with this because it can mean the difference between having to put a cap on design possibilities and having the freedom to explore more impressive animation and effects.

I think I have the OpenGL side of things sorted out, and basically the data that GL needs to access for rendering (ie vertex, color, texture coords, normals, etc) will be in an optimal array format. I am still also trying to build in some of the advantages of a holographic type of approach.

I am still working on creating test demos, evaluating framerates in a wide variety of combinations of settings, to find the most optimal system to use. I know that probably one system may work better for one situation than another, so I may end up with a few different approaches working together, or at least with the flexibility to make it simple or complex.

One thing I have noticed as a general rule is that if you want to make something more efficient you have to introduce some kind of separation. In other words, you have to divide the problem into smaller units. But that creates a new problem - overhead. You can seemingly gain a lot for a little expense, but these little expenses add up. Unchecked, they become more of a problem than what they solve. Ideally the overhead will be as small as possible, will give the biggest yield of efficiency for its cost, and the method of achieving it will be as thin, small and simple as possible.

I noticed, for example, that if you introduce an occlusion culling system of some kind, the very structure which helps you to focus on the parts you are interested in also makes the system a little slower, so in a situation where all objects are on-screen and there is nothing off-screen it can actually be slower than a system which doesn't do occlusion at all. So it kind of depends on the scenario, what type of game, how persistent you want the world to be, for how long or far you want to keep track of an active object, etc.

In some games such as platform games you can use triggers to switch on and off certain groups of enemies which begin to animate when you are close enough to them and then are ignored when out of range. This does help with efficiency, with a slight overhead, but it also breaks down the consistency and persistency of the game world. It introduces separations in the experience and interrupts the sense of simultaneous existence, or being a part of a large active world. Going back to the same object and seeing it spawn at the same spot and realizing it isn't plausible based on its last known position, just doesn't work. Separation creates illusions, which creates a dream, which puts the player to sleep.

So ideally we want a game world where everything is always dynamic and active and operating and keeping track and nothing ever gets put on hold or frozen - this of course entails considerable effort computationally and finding algorithms that improve that without introducing perceived breaks in consistency is quite a challenge.

I want to be able to move and animate all objects, all the time, every frame. It could be so severely dynamic that objects could be switching places and moving toward and away from each other such that it is highly likely to interfere with any kind of temporal or spacial coherence. I am programming for the worst case scenario and then a real life game will hopefully perform better than that in most situations.

I think overall there will be some kind of very flexible structure which somehow updates automatically without lots of manual changes or traversals, even when objects move across spatial boundaries or outside of bounding boxes. This will act as a quick way to screen out unwanted stuff. Then there will probably be some kind of constantly updated sorted-list system underlying it, which has a number of benefits. And then a deeper level of per-pixel collision detection and physics handling.

I am not going to go with convex polygons because although they sort of seem attractive, their overhead for a small degree of additional accuracy is too much of a tradeoff. I would rather my initial pruning be fast and approximate and then dive into a much more accurate collision test when needed. I just don't think I could live with an approximated object boundary, and especially not with the limitations of convex polygons. I am considering a realtime surface-normal-calculating system based on small sampled areas of collisions, or pre-computed per-pixel object perimeter arrays. More stuff to test.

But now that I typed all this and avoided actually coding, I can at least say I have successfully procrastinated. So back to it....

Check out my Game Development Blog -ImaginaryHuman-

The Holographic Particle System(Posted 2007-11-04)
In Oneness there are no divisions and no separations. All is one and nothing else exists. This wholeness is where we start out before we begin to decide how to represent game objects. We don't just dive in and wonder how to manage lots of different objects, because by then we have already made a great number of assumptions about how and why things are divided up in the first place. There is no need to rush ahead into blind conclusions. Our train of thought has to proceed carefully and deliberately.

We begin with the whole. To this whole we add an aspect of separation - we try to divide it in half. By adding separation we now have `parts`. Oneness is everywhere, so when we apply a separation to it all we end up with are two everywheres. They might seem to be smaller or more confined, but everything which is in the whole is also entirely within both parts. Where did each part get its character or qualities from? The whole. Each part is a reflection of the whole.

If the whole were an apple and we cut the apple in half, each half still contains the same substance that whole apples are made of - apple stuff. You can keep cutting up the apple into smaller pieces but every piece still contains the essential content of an apple. Biologically this is represented in DNA and genes, whereby the blueprint for the entire organism is present in every little tiny part of it. You can chop someone's arms and legs off but the rest of the body still contains blueprints for what arms and legs are like.

This principle that the qualities of the whole are retained when there is a division is contrary to what appears to be the case physically in the external world. In the world it would seem that if I cut off my arms and legs my body is now absent arms and legs. It is as if they are gone and no longer a part of the body as a whole. But what remains of the body still has the same original blueprint of an entire body stored in every single cell. Removal of the arms and legs is merely a change in external appearance, while deep within nothing has changed.

This idea of divide and retain, as opposed to divide and lose, is the foundation of the hologram. Dividing Oneness leads to many parts, and yet every part retains a trace of everything that is in the whole. No matter how many times you divide the part, it is impossible to completely eradicate the trace of the original whole from which it came. Dividing something in half does not destroy it, it merely creates an appearance of the parts becoming smaller. The whole can never be completely removed.

Similarly, since the whole is in every part, every part is everywhere and is present in every other part. This again does not appear to be the case in the physical world because it seems that any given object can only be in one place and not another. This is an illusion. In very rare circumstances, for example, light particles can be seen to be in two places at once. They are the same single `object`. This again reflects the holographic principle.

A separation creates more parts and these parts are what we would refer to as objects. However you want to divide things up, the divisions appear to define the limits and boundaries within which we give things separate identities. These are false identities, because truly every part IS the whole, but in the world of computers we have to work at the level of reality as it appears.

So in our computer games, we decide there are boundaries and separations and that these define specific objects. To be true to the holographic nature of illusory reality, we *must* define an object as having holographic properties. Only by giving objects holographic properties can we possibly model every potential definition of an object and what it is capable of. If we don't do this we are making ourselves blind to the fabric of reality, and this means we later have to find other additional methods for implementing the freedoms and functionalities that we need. Why bother? Let's start out with the correct foundation and everything else will fall into place.

A holographic object, in a game, means an object which has ALL of the potentials and capabilities of the entire system. An object is not merely a specialized specific `type` of object. An object's apparent form or visual character is the end result of interpretation and has little to do with its inherent reality. Structuring an object system based on interpretations is out of touch.

Every object must basically have all of the traits and possibilities of an entire system of objects. Every single object must be able to be related to it. Every relationship must be available to it. It must be able to appear anywhere and everywhere regardless of the apparent count of its instances. It must be able to manage an entire universe of objects regardless of whether other objects are managing it. Most importantly every object must have a full range of freedoms and creative potentials.

A large collection of objects might be considered something along the lines of a particle system. An assessment of particle systems on the internet, for example, will usually show you hierarchically structured designs and specialized types of object. A particle manager will usually be assigned only the task of drawing objects and managing them. A particle subsystem will be assigned only the task of organizing a group of objects within its closed grasp. And individual particle objects will do little more than define locations, animation frames or movement information.

It is all very well separating out the functionality in this way, and jumping onto the object-oriented (object-separated) bandwagon, but a great deal of separation has been introduced by thinking this way. This is merely a form of allowing the programming language to define how the system will be structured, rather than recognizing that the language imposes limitations on the `ideal` system and that the challenge is to implement an ideal system despite the language.

In a holographic particle system there is no separate particle manager or particle group type or individual leaves-on-a-tree particles. Hierarchies are closed structures which isolate and confine and limit, simply resulting in making it impossible to achieve certain creative desires. Any object must be able to be associated with and to relate to any number of other objects on a per-object basis, not forced to be a part of a tree or limited to a preconceived structure. If the structure is not completely flexible then creativity and freedom are confined.

Every particle in a holographic particle system is, itself, the manager, the particle group, and the particle. It is everything. It can act like a particle, it can act like a group of particles, and it can act like a group manager. Every particle has full functionality, not specialized functionality. Every particle can have relationships with any number of other particles or none at all. Every particle can enjoy up to and including the most advanced logic systems, be they executed per-particle scripts or treat-them-all-the-same functions. Only by making it possible for each particle to have this full flexibility of functionality can you create a truly clean canvas of freedom on which to express creative potentials.

A holographic system also is ultimately very much simpler than all of the extraordinary complexities which come from trying to separate out and divide up aspects of capability. Why should a particle manager be able to manage groups of particles when a particle cannot? How does that allow any particle to influence or control any other group?

A step towards a particle managing other particles is to implement a parent-child system, which is hierarchical, but this introduces new problems and limits. Not only can a child only be controlled by a parent, but this is really a one-way relationship. Hierarchies are confining, exclusive and isolating. That is why they create a scenario of inheritance. Inheritance of properties from parent groups is *not* a `good feature`, it is a side effect of the hierarchical system.

A better system is simply a completely flexible web of interactions and relationships. Any object can relate to any others in any number of ways, so it is now possible for relationships to be two-way. Both parties are now respected and there is no slavery. Remember that what we're doing here is creating a way of looking at objects which is based on spiritual principles, and if those objects were people you wouldn't want them to relate in the kinds of ways that we typically make computer objects relate. Each object must be allowed full freedom, full potential, and full respect. There must be no inherent inescapable forcing of action or slavery to masters or victim/victimizer roles. Essentially this means hierarchies are limiting.

In a computer game scenario, objects then are usually defined as graphical shapes on the screen. They might be an avatar representing the player, such as a spaceship. This is the player's projection of identity and their belief in having created that object themselves. Then you have the objects which emit, surround or are associated with the avatar, such as ship bullets, shields, weapons pods or whatever. Then you have that which these things are aimed towards or directed against, such as aliens, enemies or scenery. ALL of these things can be represented with a single unified particle system.

If an entire game can be handled by one single particle system, then what possible benefit is there to the system being holographic? First of all there is the major benefit that whatever relationship or interaction or form you can dream up can be implemented. There are no limitations on capability, no limitations on relationships, and no limitations on functionality. Design is as free as the imagination.

Secondly the holographic system indicates that it is now possible to show single particles many times. If there are objects which appear similar or should be animated in the same way or use the same graphic, these can now be referred to only once and yet instanced many times. This is highly efficient compared with creating multiple *separate* objects in order to represent multiple visual appearances. An explosion of sparks where most sparks are the same small set of animations can now be animated and drawn using just a few actual particles represented in many different locations. The holographic system optimizes similarities, resulting in increased efficiency.
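
Structurally, instancing just means the heavy data exists once and each appearance is cheap (names here are mine):

    Type TSparkDef
        Field frames:Int[]   ' shared animation data, stored once
    End Type

    Type TSparkInstance
        Field def:TSparkDef  ' many instances reference the same definition
        Field x:Float, y:Float
        Field frame:Int      ' only per-appearance state is duplicated
    End Type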

Any particle can have all of its own qualities, distinct from any other, but also can *share* them with others. Due to the fact that every part of a hologram is in the whole, every other part of it also has access to it. It is a completely open system despite its appearance as many parts. Splitting objects into many objects doesn't have to mean that they are now all separate. Sharing of data and information means very high efficiency, compression, and optimum use of computer resources.

Should you desire that one alien shares its thinking with another, this is a simple and natural possibility. Should you desire that one alien bumps into another and affects its position, that is also a natural possibility. Should you desire that upon collision two aliens merge into one, that is entirely simple. All forms of joining, splitting, increasing, decreasing, sharing, excluding, separating and unifying, are all modelled by the hologram.

Whatever duality you can imagine can be represented, but not in a way that cannot be transcended. A traditional particle system might limit you to creating hierarchical subgroups, or dictate that once created a particle cannot join with another to create clumps. It might also tell you that every particle's data has to be kept separately from all other objects - there is no sharing, and no aspect of a particle can be shared with others. If these limitations are built into the system's structure there is nothing you can do to move beyond them.

In the holographic system every aspect of a particle is kept essentially `in the same place` (holographically, separated space is meaningless). For example, all vertices, colors, animation frames, texture coordinates, etc, can all be stored in the most condensed and highly efficient format possible, ie in arrays of data. While BlitzMax does offer object orientation it is not necessarily the most efficient system, especially when high efficiency is of utmost importance for achieving high framerates with thousands of objects.

I use a hybrid system of linked lists of arrays of arrays, to provide *occasional* extensibility and mostly constant high-speed access. Each particle additionally can be linked to any other number of particles. These links can take several different forms. They may simply arrange for dependent relationships or interactions, or may define physics interactions, spring systems, or magnetic forces. The unlimited relationships between particles allows for the modelling of highly varied systems with very complex interactions. This largely goes back to my holographic linked list system which I devised at the start of this project, where each `object` (particle) has the full potential of all system functionality, and can play all roles in the system in relation to all other objects.
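
The hybrid container, roughly: a TList of fixed-size pages, so access within a page is plain array indexing and growth only occasionally touches the list (a sketch reduced to two attributes):

    Const PAGE:Int = 1024

    Type TPage
        Field x:Float[PAGE], y:Float[PAGE]
        Field used:Int
    End Type

    Global pages:TList = New TList

    Function AddParticle(x:Float, y:Float)
        Local p:TPage = TPage(pages.Last())
        If p = Null Or p.used = PAGE
            p = New TPage           ' the only allocation that ever happens
            pages.AddLast(p)
        EndIf
        p.x[p.used] = x
        p.y[p.used] = y
        p.used :+ 1
    End Function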

I am currently still working on the nitty-gritty design of the data structures, to allow for as much freedom as possible. Once this has been designed and coded then I will have one single unified massive particle engine, tightly integrated with my script execution engine, and I will not have to get tied up trying to create many different types of objects in code. When separations really don't exist, why make them a fixed part of the system? All separations are *optional*, not inherent.

The appearance of a particle then is largely an option with the possibility of representing any particle or collection of particles using whatever rendering method is desired. It could be points, lines, polygons, textured meshes, 2d sprites, blobby objects, splines, animations, gui elements, visual effects, whatever.

Once you have a single unified simple-but-advanced particle system, it is so much easier to deal with than writing thousands of lines of specialized code. It is not only more efficient and flexible but it saves a lot of development time. I look forward to being able to create my first shootemup game with thousands of objects onscreen, wild `particle effects` and extraordinary interaction.

Check out my Game Development Blog -ImaginaryHuman-

Making progress(Posted 2007-10-15)
It's been a couple of weeks so here's a progress update.

I have been largely procrastinating because I knew that if I would actually finish the part I was working on it would mean moving on to the next part. It got me to pondering why that happens.

Your ego really really does not like you to finish anything. It likes you to be in a place where you are searching and questioning and figuring out problems and trying to come up with inventive solutions but it actually doesn't WANT you to really solve those problems. It wants to stay in a place of doubt and uncertainty and doesn't like the idea of you actually being done with something. As soon as a part of a project starts to turn from being `in progress` to seeming to be about finished, the ego suddenly flips its interests and creates immense resistance. To the ego, finishing something and being done with it is the antithesis of the ego's intentions, which are that you stay in a limbo of incompletion and searching. It will love to lead you down many complicated paths and distractions just to avoid getting into a `done` place. Being done is similar to the ego being `un-done`. When you're done with something you detach and let it go and you're no longer inside of it, kind of like waking up, which is against the ego's ambitions. It will happily lead you to exciting distractions and false idols but it won't lead you to removing the need for all of that, and thus it doesn't want you to finish your project.

Upon realizing this, and after spending quite a few days distracted down rabbit holes looking at complicated 3D algorithms and polygon systems, I decided to cut through the crap and finish what I was working on.

After tackling a few quite difficult bugs and adding some more new code, the timing system is now working! I have it running a test with four prioritized timed processes and some generic processes. Everything works correctly. The timed processes trigger on time and pre-empt each other if they're more important, and in between, control falls back to the generic processes. There are now two parts to the system: the generic processes and scheduler, and the timed prioritized processes and timing system. It all works together. So now if I want to have separate logic and graphics and input and networking etc, all running at a specific timed rate, I can do that, while also falling back to background tasks which themselves operate within an entirely prioritized scheduling system. I'm looking forward to seeing how this is beneficial in actual applications and games - being able to do multiple things at once and to adjust how much time is given to each part of the system dynamically.

Apart from a few more minor features to add, such as resetting a timer immediately after a flip so that I can trigger a process to run just before the end of the frame (letting me flip without wasting CPU time in a waiting loop), this `module` is pretty much there. I have to add a custom event queue and maybe some performance monitoring. Then I can actually get to thinking about all the stuff I got distracted with - a next-generation graphics engine.

I have been giving some thought as to where I am going with all this. Looking into a lot of 3D stuff I am still put off by how much of a fudge job we're all trying to do as a substitute for actually being able to do it right. The hardware power needed to do it right is substantial and not currently available. Also, working in 3D is so much more complicated. So I am starting to lean towards focussing mainly on a next-generation advanced 2D engine. It would still have 3D features but probably wouldn't go so far as to get into things like 3d terrain and 3d physics etc. I'm not sure yet. This just seems to be where things are headed.

I've also given some thought to whether this project will result in some kind of a game framework or a whole game development system or just individual products (or all of the above). I'm still wondering whether the code I'm writing is quite tidy enough for human consumption, and whether I really want to be committed to providing the full service and support which creating these products will require.

I have to say that the code I've been writing is the most complicated I've ever had to write, and most likely a lot of what I still have left to write will be even more advanced. But I'm glad to say that eventually it actually works and that's a very exciting thing.

Check out my Game Development Blog -ImaginaryHuman-

About time(Posted 2007-09-30)
Over the past few weeks I've been designing, tweaking and coding a timing system which is now mostly implemented. The timing system blends modern game timing techniques with script execution and multitasking/scheduling. It's all wrapped up into one coherent system.

The main heart of the system is the VirtualCPU which blindly executes script instructions over and over with no awareness of which script is being referred to - much like how a single CPU just executes instructions at the machine-code level. It has no awareness of concepts like processes or multitasking. It is efficient at simply churning through instructions and executing the corresponding functionality. It's basically a script interpreter. There is no compilation.
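
To illustrate the flavour of it, here's a minimal sketch of such a blind fetch-and-dispatch loop, with invented opcode numbers (my real instruction set differs):

SuperStrict

' Invented opcodes, purely for illustration.
Const OP_NOP:Int = 0
Const OP_LOOP:Int = 1	' jump back to the start of the program
Const OP_HALT:Int = 2

Global prog:Int[] = [OP_LOOP]	' a one-instruction program that loops forever
Global pc:Int					' program counter

Function ExecuteSome:Int(count:Int)
	' Blindly churn through a batch of instructions - no awareness of
	' processes, scripts or multitasking at this level.
	For Local i:Int = 0 Until count
		Local op:Int = prog[pc]
		pc :+ 1
		Select op
			Case OP_NOP
				' do nothing
			Case OP_LOOP
				pc = 0
			Case OP_HALT
				Return False
		End Select
	Next
	Return True
End Function

ExecuteSome(256)	' run one batch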

On top of this is a layer of processes combined with individual scripts (programs) which acts much like a typical operating system. It allows us to refer to multiple simultaneous scripts and allows a script to be run in multiple instances at the same time. This allows us to virtualize the CPU's attention and to eventually put different scripts under the CPU's nose as and when we see fit. The VirtualCPU doesn't know anything about this happening (shhhhh, don't tell!)

Then we have a generic scheduling system which manages the switching in and out of different processes to create the illusion of parallel execution. It divides up the VirtualCPU's time amongst various scripts so that each gets an appropriate amount of attention, based on its importance. The VirtualCPU actually executes the script instructions which decide to change the current process and thus which script to run, since the scheduler itself is just another script.

To implement a collection of processes efficiently, enabling fast lookup of the next most important script to run, I chose a double-buffered array of linked lists of processes, where each element in the array is a different priority level. New processes are given a timeslice and put into the active array in the appropriate linked list. The scheduler then basically decides which is the next highest priority process to give time to. Higher priorities pre-empt lower priorities, and processes with the same priority are scheduled on a round-robin, first-come-first-served basis.

Each process has a `timeslice` based on its priority, and when this expires it is moved to the dormant array and its timeslice is recalculated. This is modelled on the new Linux scheduler, and the scheduling operation takes about the same amount of time regardless of how many processes are active. When all timeslices have expired, the reference to which array is current is simply swapped and the new timeslices come into effect. The scheduler provides fair and efficient pre-emptive multitasking of scripts. All processes get to use their timeslice before the more important processes start over again with their bigger timeslices, so nothing is left unattended or frozen.
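
Roughly, the shape of it is something like this sketch - the type names and the priority-to-timeslice mapping are purely illustrative:

SuperStrict

Const NUM_PRIORITIES:Int = 32

Type TProc
	Field script:Int		' which script this process is an instance of
	Field timeslice:Int		' milliseconds remaining before pre-emption
	Field nextProc:TProc	' linked list of processes at the same priority
End Type

' Two banks of priority-indexed lists; swapping banks is just a reference swap.
Global active:TProc[NUM_PRIORITIES]
Global dormant:TProc[NUM_PRIORITIES]

Function Expire:Int(p:TProc, priority:Int)
	' Timeslice used up: recalculate it and move the process to the dormant bank.
	p.timeslice = (priority + 1) * 4	' illustrative mapping only
	p.nextProc = dormant[priority]
	dormant[priority] = p
End Function

Function SwapBanks:Int()
	' Every active timeslice has expired: the dormant bank becomes active.
	' Constant time, no matter how many processes are in the system.
	Local tmp:TProc[] = active
	active = dormant
	dormant = tmp
End Function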

Certain events can occur such as a script ending, a new script being scheduled to run, or a script deciding to perform a context switch by itself, all of which cause a pre-emption and a rescheduling of activities. The scheduler only runs at the time when a timeslice is done with, a higher priority process has been added to the system, or a script ended. Generally speaking the timeslice of the current process is checked about once per millisecond. These generic processes are great for multitasking several applications, running background tasks or generally tasks that don't have a big dependency on timing or framerate.

Then we come to the next part of the system, which is a pre-emptive stack of high-priority timed processes. This is a similar structure to the storage of generic processes except that the double-buffered array of processes does not swap - a process is merely copied from the dormant buffer to the active buffer when it is time to schedule it, and then removed from the active buffer. Since these processes need to run `on time` over and over again, it is not quite the same environment as a generic process which just keeps running indefinitely.

While the timeslice of the generic process is being checked, the system also checks *all* timed processes to see if it is time to run them. There is a maximum of 32 timed processes, each can have its own `Hertz rate`, and each is at a different priority level. There are no linked lists of processes, only one process per priority. It turns out that the situations where you need timed processes are usually related to fast framerate updates and high Hz rates, so there just isn't enough time between calls to warrant trying to implement a timeslice or multiple processes at that level. I decided it would be simpler and more efficient to just go with one process per priority level. When you're dealing with 16 milliseconds or less per frame, the Millisecs timer is not accurate enough to make good use of timeslicing, and it's easy enough to add calls to multiple scripts from within each script at each priority level. A script can be a scheduler, so each priority level basically can have its own scheduler.

The priority stack behaves differently to the general scheduler in terms of how things get scheduled. Higher priority processes pre-empt lower-priority ones when it's time for them to trigger. A higher priority process *must* run to completion before control is returned to a lower priority process. All processes in the priority stack pre-empt and freeze the state of the generic scheduler and all generic processes. Timing is more important than background processing.

If things get behind - it comes time to call a timed process and it hasn't finished its previous call yet, or was pre-empted by something more important - the system still scans for this and keeps a count of how many times that process must execute to catch up. Eventually I will add capping to the count to compensate for major delays, and certain types of process, like a user-input process, really don't need to do any catch-up at all.
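
Something along these lines (a sketch with invented names, and without the capping yet):

SuperStrict

Const MAX_TIMED:Int = 32

Type TTimedProc
	Field period:Double		' milliseconds between calls, e.g. 1000.0/60.0
	Field nextTime:Double	' absolute time of the next due call
	Field pending:Int		' how many runs it owes if it has fallen behind
End Type

Global timed:TTimedProc[MAX_TIMED]	' one slot per priority level, 0 = highest

Function CheckTimers:Int(now:Int)
	' Called roughly once per millisecond, alongside the timeslice check.
	For Local pri:Int = 0 Until MAX_TIMED
		Local t:TTimedProc = timed[pri]
		If Not t Then Continue
		While Double(now) >= t.nextTime
			t.pending :+ 1			' another catch-up run owed (to be capped later)
			t.nextTime :+ t.period	' advance the virtual clock by the exact period
		Wend
	Next
End Function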

The system will be set up whereby I have higher priority processes generally at higher hertz rates (but not always). A user input process is the highest priority of all, perhaps running at twice the display Hz rate, or even more. This pre-empts everything else. Then there is a logic process (or processes) running at lower hertz rates, with fixed rate logic, perhaps the same Hz as the display. Then there is a low-Hz networking process perhaps at 15Hz or so. Then there is a lowest priority rendering process, which actually is at a higher Hz than the networking, at the display Hz rate (ideally). There are also going to be other processes like a spooling file-loader process. It's possible to run different parts of logic at different Hz rates, so maybe you have a slow-to-update animation process, a per-frame animation process, and an occasional once-per-second process which updates things like elapsed time or framerate display.

To minimize moire effect from the fixed rate logic being at a different rate to the display, the logic will likely be a multiple of the display Hz. This partly then defeats the point of doing logic at a fixed rate because now you can't tell how fast to move and animate things at development time. However, this now calls for a tweening system where the `design framerate` is tweened to the logic framerate, allowing logic to run at a multiple of the display Hz while also allowing fixed-rate design at development time.

Additionally, once the fixed rate logic has run, current state is calculated by the rendering system using tweening to the `current time`, or perhaps to the `time when the frame will next be flipped`. This tweening allows the rendering framerate to be way higher than the display Hz, which means the possibility of rendering (and wasting) frames which get superseded by sudden logic updates in the middle of a frame cycle. So when it comes time to flip, whatever was the most recently finished frame will be displayed. This takes some additional jiggery pokery in the graphics engine but it will be worth it.

I should also give worthy mention to how a frame Flip() will be handled. It is not tied to when the rendering of a frame is complete, because that would mean that as soon as you finish rendering a frame you sit idly in a waiting loop until it's time for the vertical blank to flip. As you all know, Flip(0) is usually way faster than Flip(1) but cannot be used due to tearing, and thus the end of rendering usually marks the end of processing within the current frame time. It doesn't have to be that way. By decoupling frame flips from rendering update I can start rendering the next frame and processing the next logic while there is still time left in the current frame. A separate `timed flip process` is scheduled to run based on the start time of the current frame and the estimated time at which the frame should end and be flipped. When it comes to about 1 millisecond or so before the frame should flip, the flip process is scheduled at maximum priority, pre-empting everything else, and simply flips the frame to show the current backbuffer content. Meanwhile the system could still be in the middle of rendering a new frame - because I will use a separate double-buffered custom backbuffer, using `frame buffer objects` under OpenGL. It can keep rendering the next frame(s) and need only copy them to the backbuffer (with fancy effects) just prior to the Flip().
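
In sketch form, assuming a 60Hz display (names invented):

SuperStrict

Const FRAME_TIME:Double = 1000.0 / 60.0	' assuming a 60Hz display

Global frameStart:Double	' when the current frame began

Function TimeToFlip:Int(now:Int)
	' True when we're within ~1ms of the end of the current frame. At that
	' point the flip process pre-empts everything at maximum priority,
	' copies the finished custom backbuffer across, and calls Flip().
	If Double(now) >= frameStart + FRAME_TIME - 1.0
		frameStart :+ FRAME_TIME
		Return True
	End If
	Return False
End Function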

Once all timed processes are done to completion and it's still not yet time for another to start, control returns to the generic scheduler and its processes, whereby it carries on where it left off with background tasks. The framerate of the rendering system will probably have to be capped to allow it to stop at some point to allow fall-through to the generic processes. But it's all optional.

I'm slightly deviating away from my high ideal of a system based on total freedom, and more toward what will be needed to implement a game. I am starting to make some compromises but I am still trying to keep as much flexibility as possible. Maybe this is just a phase - after all, I do have to think about how this will be used for high-speed timed gaming and also for less prioritized general computing.

I still have a few bits to implement, but at this time it does compile and run with generic processes still being scheduled properly. All I have to do is code the `add a new timed process at a given hertz rate` routine and then I'll be able to test it to see that it actually does pre-empt and run at the given time intervals. Overall I think this is going to be an efficient system and hopefully it will run timed scripts efficiently, allow creation of smooth games, and also support multitasking of general applications.

I still have to think about how to handle jitter or interruption from other applications consuming CPU time. After all, this system does multitasking for general apps in the o/s. When processes resume after a jitter it will be necessary to automatically adjust the timeslices so that they still get as much actual time as they're supposed to. And then it comes down to how to interpret the interruption so that its impact on `apparent smoothness` or `apparent correctness` is minimized.

One approach implemented so far is that timed processes don't keep a milliseconds timer. Instead, they keep floating point time. This means that if you really need your timed event to occur exactly 60 times per second - one trigger every 16.6666667 milliseconds - the system actually adds 16.6666667 to the previous time so that the new time position is precisely accurate. Of course I can still only test whether a given millisecond has ticked over, but by tracking the virtual timers using floating point math it is much closer to correct. This should account for at least some of the error introduced by Millisecs(), but obviously there is still some work to do. Tweening will help to keep accurate time, but a progressive adjustment to interruptions will be an additional feature added in the near future.
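
A tiny example of why the floating point accumulation matters, over one simulated second of 60Hz ticks:

SuperStrict

' Accumulate the exact fractional period instead of a rounded millisecond count.
Local intTime:Int = 0
Local floatTime:Double = 0
For Local i:Int = 1 To 60
	intTime :+ 16				' 16.6666667 truncated to whole milliseconds
	floatTime :+ 1000.0 / 60.0	' the exact period, kept as floating point
Next
Print "integer timer: " + intTime + "ms"	' 960 - already a full 40ms early
Print "float timer:   " + floatTime + "ms"	' ~1000 - on time, bar a sliver of float rounding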

Now that I'm getting sort of close to making the bridge from script timing code to graphics and logic timing code, I'm also getting ready to make a leap to other aspects of this engine. I think I still have some work to do with scripts and to actually define more of the `language`, but beyond that the next step might be to start on the graphics engine, texture handling and all that fun stuff.

Always more to do.

Check out my Game Development Blog -ImaginaryHuman-

A new approach?(Posted 2007-09-20)
Wow, that was a long worklog entry last time - must be time for another! ;-)

Following on from thoughts about timing, which really mainly pertains to realtime graphics update and logic/physics processing, in a way that looks smooth on as many computers as possible, I started playing with the idea of a multitasking timing system.

In my system, all runtime functionality is activated by way of scripts which are run `interpretively`. I've designed the script system and VirtualCPU to be as efficient as possible, and although there are surely some optimizations that could be made I think it's running pretty well so far. I got multitasking of scripts working with a scheduler inspired by the new Linux `completely fair` scheduler - where all processes get the chance to use their timeslice even in the presence of CPU hogs. I got all that working nicely.

Then I figured that since everything that `runs` in the system is scripted, even things like rendering scenes, processing physics, moving and animating objects, handling user input etc will all need to be written in script form. Sure there will be certain intensive instructions which do quite a bit of processing for a single instruction. High-level instructions mean less interpretation overhead for the amount of processing that gets done. But basically to make things adjustable, all those subsystems will have to be scripts which are almost constantly executing. And since each will be a separate script or even a collection of scripts, there's going to have to be some scheduling and multitasking involved.

So then I figured, since we are going to have to schedule and multitask and run the interpreter full-time, the issue of *timing* should really be something that is handled by the scheduler.

Usually when you only have one CPU to work with and one process/thread for your application, you are pretty much stuck with having to think in terms of "I've got to do job x and finish it and then do job y". It's quite procedural (or shall we say linear?) that way. Even if you're using lots of `objects` and are thinking that you're coding in an `object oriented` way, you're still stuck with having to lay out your code to run in a single sequence. There is only one thread of execution which hops around visiting all the objects one at a time - in sequence. It just happens to be a more adjustable sequence than it used to be.

So when it comes to designing a timing system, which is an effort to update logic and graphics in a way which seems to be smooth and which scales to different hardware, you still have to lay out separate pieces of code which `take turns` to run physics OR graphics OR input OR networking etc. Even if it would ideally be time to run the logic again, or perhaps take new user input at a fairly high number of times per second, those things have to wait if the graphics code has taken its turn.

So what happens? Additional logic calls get pushed to the next frame. The first frame that ever gets processed is really the only frame which is actually `on time`. From there onwards you're either playing catchup or you're getting ahead of yourself. Because you have to do whole logic updates all at once, as soon as you start doing logic nothing else can happen, even if doing so prevents the rendering from occurring and deprives us of a graphics frame for display. Glitch! Do we really want our level of CPU activity to fluctuate wildly like this? No... because then you have to target the maximum CPU use that is needed in a worst case scenario, instead of a more efficient constant level.

Along comes multitasking. Multitasking is either the illusion that you have more than one thread of execution, or is actually hardware supported multi-processing running programs concurrently. In my case, it's the illusion of multitasking, since I only have one application running as one process on one real CPU (even though my CPU has 2 cores, only 1 gets used). In other words, to multitask is to fragment a program into pieces and to treat each of them like a separate entity. The nice thing about multitasking isn't so much the splitting up into parts, but the ability to then decide, proactively (pre-emptively), which parts you want to run when. The WHEN is where it ties into our need for a better timing system.

As it stood, my scheduler had no real support for timed events. It was really more of a constant-processing system, sharing CPU time among several general processes with no particular time constraints other than that some tasks got bigger slices of the pie. So I needed to add support for timed events.

I decided to do this by setting up a sort of `stack` of several processes. The process at the top of the stack is the highest priority and the priority levels for lower items are progressively lower. Processes aren't necessarily put into the stack in terms of how often they need to be called, but more in terms of what is more important. The priority also has nothing to do with how long that process gets to run. Timed events are treated as one-off rapid-fire triggers for entire scripts to be run. The process must begin at the beginning and process its entire program to the end before it is considered `done`.

This is where the interesting part comes in. A higher priority (higher up in the stack) process can pre-empt a lower process when its appointed time for executing occurs. Each process has a Hz rate which is how many times per second it is called - or perhaps more accurately, how many milliseconds there are between calls. The script execution engine, which tracks the current time and detects when a process's timeslice has run out (so as to pre-empt the process and schedule something else), works the same way - as soon as it is time, something gets switched.

The way that the stack of processes relates to the regular arrays of processes is that the stack totally pre-empts whatever the generic scheduler is trying to run. If it's time for a timed process to run, everything with a lower priority is paused and put on hold including all generic processes and even the scheduler process that manages them. So then it's just a matter of setting up the scripts, assigning them a process, putting them in the stack and setting the timer.

I decided that the most important thing to occur is user input. As soon as there is user input there should be visual output, with as little delay between them as possible. This is an absolutely super high priority. The detection of new input has to occur quickly and often. So what's at the top of the priority stack? It's the user-input process. This one process is dedicated to just handling user inputs and putting stuff into a custom queue system. The user input process can run at, say, 120Hz, maybe even 240Hz.

Second most important is the logic process, which runs discrete timesteps (ala fixed rate logic) of logic calculation. This means proper reliable support for physics when I get around to it. The logic process will probably run at the same rate as the display Hz - between 60 and 120 Hz. When I design a game I will design it at a specific Hz rate and then all movements and animations will be scaled to map to the display Hz.

Third most important is probably network processing which will likely run at a low Hz rate - 13-30Hz. User input and network processing feed into the logic system. Somewhere in there I will also have a file-spooling process which will allow me to spool some file data from disk in realtime during gameplay, like frames of animation or new level data/textures. Then the lowest priority is the display process. This of course tries to run at the Hz rate of the display.
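
Put together, the setup might look something like this - AddTimedProcess() is a hypothetical call for illustration, not my actual API:

SuperStrict

Const INPUT_SCRIPT:Int = 1, LOGIC_SCRIPT:Int = 2, NETWORK_SCRIPT:Int = 3, RENDER_SCRIPT:Int = 4

' Hypothetical registration call - a sketch, not the real routine.
Function AddTimedProcess:Int(script:Int, priority:Int, hz:Int)
	' ...insert into the timed-process stack at this priority level,
	' with a period of 1000.0/hz milliseconds between triggers...
End Function

' Priority 0 pre-empts everything below it:
AddTimedProcess(INPUT_SCRIPT,   0, 120)	' user input at twice the display rate
AddTimedProcess(LOGIC_SCRIPT,   1,  60)	' fixed rate logic at the display Hz
AddTimedProcess(NETWORK_SCRIPT, 2,  15)	' low-rate networking
AddTimedProcess(RENDER_SCRIPT,  3,  60)	' rendering, ideally at the display Hz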

While my display process will be vertical-blank aligned, so as to sync with the display Hz rate, I do not intend to implement that by just Flip()'ing whenever graphics have been drawn. Instead I am decoupling the graphics rendering from the frame flipping. Graphics will render to a separate double-buffer and then be copied into the backbuffer (with special warping effects etc) when complete. What takes care of flipping the display? The scheduler - when it detects that it is `almost the end of this frame's time`, for example 14 or 15 milliseconds have passed since the start of a 60Hz frame, it will trigger a `Flip process` which will do the actual Flip().

The reason is that Flip *waits* for a vertical blank and totally wastes CPU time doing so. That CPU time can be used `towards` the partial logic and graphics processing for the next frame of logic-time or rendering-time. Just like when you Flip(0) to gain extra frames per second, this system will just keep on processing and rendering even if it's not yet time to flip the display. On a fast machine it could render several frames before it even needs to flip, and the most recent frame will be *the most up to date*, based on the most recent user input. So if the user presses a key a few milliseconds before a flip, maybe there is still time to do new logic and graphics so that you see the result much sooner than you otherwise would.

Also because logic processing and graphics rendering are handled in an ongoing multitasking way, I think the system will actually achieve higher overall processing than if the CPU was spinning its wheels waiting for a flip to occur or waiting for the next frame before being able to start the next logic update. An additional benefit *might be* that I can have OpenGL graphics calls being performed by the rendering process in some way `parallel` to the CPU running logic processing. Maybe this will be a performance gain.

I plan to use tweening where necessary, with future prediction to approximate where objects will be *at the time of the display flipping*, rather than where they were at the time of the graphics rendering starting. This should further add to the feeling of responsiveness and immediacy.

So the timed events will happen really when they are supposed to happen, rather than waiting and then trying to compensate for the wait. This will produce more accurate predictability of future state and more immediate updates based on user input. The only thing stopping something happening on time is then that something more important has happened on time, and the task in question will immediately resume as soon as that more important task is done. I suppose I could start to allocate timeslices to each of these prioritized timed tasks so that they look even more parallel than just being based on the start or stop time of a process. But we'll see whether that really provides any benefit versus the extra context-switch overhead.

I have a few other ideas as well that I will try once I get this up and running. For example, perhaps when the rendering speed outperforms the logic speed the priorities of each can switch around, or maybe we start splitting the logic time into slices to spread it out evenly rather than creating a sudden interruption in an otherwise high framerate. It could detect the ratio between logic time and frame time and adjust scheduling dynamically.

I think that these measures will improve the overall reliability, accuracy, promptness and smoothness of the user's experience. I have one other area yet to ponder, which is what to do about interruptions from other applications or the o/s - jitter correction, interrupt smoothing, etc.

I have designed the process stack and the way that its functioning will integrate with the current scheduler and VirtualCPU. So I just have the coding to do. I'll let you know how it goes.

Check out my Game Development Blog -ImaginaryHuman-

It's about time!(Posted 2007-09-16)
It used to be relatively easy to predict how fast a game would run, in what resolution, on what speed of CPU, with how much memory, with what graphics processor, and at what refresh rate. The hardware was predictable and all you had to think about was hardcoding some values for how fast things would move. Whatever way it behaved on your development machine, it would always perform exactly the same for everyone!

You only had to think about one specific hardware scenario - you knew what the hardware was capable of and you could specifically code your game to push the hardware to its maximum limit. You knew how much time and processing power you had and could always make full use of it. You knew you could adapt your game design to use the full throughput of the hardware and not a drop more. It would never do anything unexpected or behave differently for different users. And what's more, the operating system and all multitasking could be put to sleep so there wouldn't be even a single clock cycle stolen from your nice predictable game code.

Almost all Amiga games ran on a 320x256 screen at 50Hz refresh (at least in Europe) - or perhaps a predictable portion of that such as 25Hz. You didn't even have to think about things not working the same on some other computer, or that the timing would be different. You'd open your display and start drawing stuff - all you needed was a vertical-blank sync. You didn't have to think about floating point coordinates since everything moved at integer increments, pixels were always drawn in full with no filtering or smoothing or fractional placement, and the game loop was one single sequence of logic and rendering all tied together. Easy!

These days, all of this has been thrown out the window and turned on its head. CPUs run at different speeds, graphics processors run at different speeds (and with different feature support), amounts of video and main memory are unpredictable (and in some ways undetectable), screen resolutions vary greatly (and detecting aspect ratios etc is not always precise or possible), video refresh rates vary greatly (and are not always detectable), you have to deal with floating point coordinates and filtering and antialiasing, and now it's impossible to switch off the constant interruptions and demands of the operating system or other applications. We might as well be completely blind to what hardware we're running our game on - it's totally unpredictable!

The amount of variation in hardware and software support is now so great, and the hardware world so fragmented, that our view of what we're dealing with has become a blur. About the only thing that you can be sure of is that your game is running - at some unknown time and at some unknown speed. Even then its execution speed is not stable - how strange that our code doesn't even execute smoothly anymore. What's more, the rate and scale of updating the game world is now entirely out of sync with the display.

If we're lucky maybe we can force the game to only open at a specific resolution at a specific refresh rate and then guess at a minimum spec requirement for the game to run `within limits`, but this not only means you can't push the spec to its limit (because you don't know what the limit really is) but you also can't get the user to accurately stick to the minimum requirements. Inevitably, just because there is the possibility to do so, someone is going to try running the game on a slower-than-expected computer, or with x number of applications running in the background, or maybe even on a much faster computer. Then all hell breaks loose.

So what do we do? Well, it's not so much a case of what we want to do but what we're forced to do. Because of the varied hardware landscape and the unstable operating environment, we don't even have the option of really knowing what we're dealing with. It's just not feasible to try writing for a specific minimum spec - maybe that will work for some people but not for others.

We don't HAVE a choice - we have to treat the hardware as though it is in a constant state of flux. All of the many `advances of computing` have completely disrupted our communication with the hardware. We no longer have a direct view of it and we cannot tell what it is really capable of. Because the hardware is in constant flux this forces our game to have to be in a constant state of adaptation - the instability of the environment now directly correlates to our application having to behave in an unstable way.

We can't just say `do x amount of work in x amount of time`. Our game now has to be a constantly changing, constantly adjusting, constantly adding-and-removing-features as a flexible reactor to conditions. We have no choice but to now be at the mercy of, and a victim of, external circumstances. To make full use of the hardware now requires altering the nature of our game, in realtime.

The very design of our game is no longer a set of predictable commodities - its structure too has now been undermined by the varied landscape on which it runs. We are being pushed to create adaptive games, games that `do more` or `do less` depending on who knows what.

Our graphical approach is forced toward realtime generation since we are no longer working with x number of frames of animation in x amount of time. Anything precomputed or pre-drawn is now inflexible and rigid. Traditional hand-drawn animation is now adrift in a sea of alien progress. It is no longer even easy or viable to create graphics for such adaptive circumstances, forcing many developers to seek out new dynamic graphical styles. Bitmap graphics do not scale well and trying to support several resolutions at once is no easy task without some kind of realtime scaling, filtering and butchering.

Throw into the mix varying and mostly undetectable aspect ratios and you have a recipe for a whole new set of potential problems. The attempt to solve hardware issues by fragmenting the hardware landscape has resulted in more questions than answers and more issues than solutions. We have stepped backwards in the name of moving forwards. This is the nightmare that we now have to live with.

Our best approach, then, is to now take this scenario, which has become our unchosen fate, and make the best use of it. How can we turn this to our advantage? How can we bring stability back to the unstable? How can we create an illusion of predictability and consistency where there is none? It is the weaving of this new illusion that we seek to undertake when we try to address the situation we've created. We want to somehow gloss over this terrible underlying nightmare and make it act pretty. This too is typical ego madness, trying to cover up a flawed idea with more flawed ideas, but it is our only option.

To provide a solution to the situation we need to know what the most ideal perfect situation would be. In an ideal world, the very instant that the user provides new input, the effects of that input are displayed *instantaneously* with absolutely no delay. Or to put it another way, the user's pressing of a key on the keyboard or a button on a joystick *is the same thing as* state changing in the game world and on the screen.

Let me clarify that by `instantly` I literally mean with absolutely *zero* time delay - which implies one thing - that the state of the user is directly reflected in the state on-screen as though they are ONE. The pressing of a button *is the same thing as* the visual alterations seen occuring in the game world. Since this reality we are in is holographic, this makes sense, because there really is no cause and effect - there is only resonance. What is in the user is in the world. The world IS the user - they change together as one.

In the world of computers, and through what appears to be `time passing`, our ideal is already broken down and interrupted by delays and separations. Ideally the computer would be producing an infinite number of frames of game-state at all times and the display would be displaying at infinite Hz to show that state, so that at the instant of user input there is game output. This is obviously not achievable.

Computers take time, they have to process things, they have to think and delay and manoeuvre. They cannot just do everything all at once; they have to separate it all out into little tiny pieces and do them one after the other. There is no sharing of time or space. Nothing is happening in parallel or instantly. This is a delay and an interruption to the directness of communication between the user and the computer. So just by virtue of the fact that the computer `takes time` (away from the user), we already have a disruption of the synchronization between the user's state and the game world's state.

Then we throw in even more delays and disruptions. The display only shows the game world's state once per refresh - let's say every 16.6 milliseconds, which amounts to about 60 times per second. But hey, what's 16.6 milliseconds between friends? Let us not overlook that this IS a delay and it does affect both the user and the game engine. How is the user to see an immediacy of response if the response comes only after a delay? Granted the delay is short, and it is hard to tell the difference when you're updating the screen at 120Hz, for example, but the delay is still there. So now instead of a constant flow of game state - analog game state - it has been compartmentalized into discrete digital chunks of time.

This by itself perhaps wouldn't be too bad of a thing. The problem is, there are several other aspects of a computer game which also have been digitalized and sampled at different rates.

For example, if you draw a game sprite at 64x64 pixels which looks to be the right size on a 640x480 screen, as soon as you move to a different resolution (and you want to maintain the proportional size) there is now a sampling difference between the sprite and the display resolution.

As another example, you want to move the sprite 50 whole pixels, one pixel per frame, at 640x480. As soon as you move to a different resolution, and you want to maintain proportional speed, your movement is now sampled at some unusual number of pixels per frame.

And another example - you want your game logic to run at 100Hz but your display refresh to run at 60Hz. Now the sampling rate of the logic and the display are not synchronized so occasionally there will be 2 logic updates per graphic update.

Breaking down the game into several parts, each with its own Hz rate, sounds like a good idea for coping with an unstable hardware environment, but it introduces yet another new problem. Now that the game is fragmented further, there are aspects of it which are out of accord with each other. The logic is no longer talking to the display at the same time and they're starting to talk over each other.

To compensate, we now try to introduce schemes like tweening - basing the current state on the last known calculated state. However, this is a shot in the dark. We are *guessing* what the state would be, so now we've introduced yet another problem - the display now shows a state which is slightly offset from what it should be showing. It is a distortion.
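
For what it's worth, the tween itself is just a linear blend between the last two known logic states - something like:

SuperStrict

Function Tween:Float(previous:Float, current:Float, alpha:Float)
	' alpha = fraction of a logic step that has elapsed at render time (0..1)
	Return previous + (current - previous) * alpha
End Function

' e.g. logic at 60Hz, rendering 10ms after the last logic update:
' Tween(oldX, newX, 10.0 / 16.6666667) draws the object 60% of the way
' between its last two known positions - a guess, but usually a close one.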

Then, even if we can make it seem as though the frame being displayed is `fairly close to` what should be being displayed, we still have increasingly more fragmented issues to deal with. Now we're up against the operating system, scheduling time to other tasks and taking it away from our game. Peaks and spikes and dips and stutters all take their toll as the game battles it out against the hordes of marauding applications.

So what do we do? We try to get an understanding of the distortion and we try to minimize it - we add jitter correction, we try to predict the jittering, maybe we even get into some heavy math to map waveforms onto interruptions so as to minimize their impact on the displayed game state.

Maybe after all this `correction` (which actually is `creation of error`), we have glossed over as much of these distortions and interruptions as possible, so that the user probably won't notice the very fine-grained fluctuation which is still occurring. But what it all boils down to is: you cannot get rid of it!

All we can do is try to slice it up and smooth it out and diminish its contrast to create an illusion that it is not there - but we can NEVER reach the point where it is not there at all. Why? Because a) it was our own fault that we introduced variation and fluctuation when we decided to have more than one hardware specification - separation between hardware conditions creates problems, b) no matter how many times you divide up a problem it always results in at least 1 new problem, c) problems are designed by the ego to never be completely solvable, and d) every time you add 1 CPU instruction as an attempt to solve a problem you create a delay which adds to the problem.

It is a pointless exercise to seek after a perfect timing solution that totally handles all scenarios all of the time. It is *impossible*.

We are in a conundrum: we have a problem which we absolutely need to solve, and it is absolutely impossible to fully solve it. We can partially seem to solve part of the problem, but we cannot fully solve all of it. If we ACCEPT THIS, then the sanity of that acceptance should suggest to us that maybe we need to go back and reconsider just how much of the problem we really need to try solving at all. After all, if we accept that perfection is an impossible goal of the ego, and there is no point in striving for a perfect solution, then doesn't that mean that it doesn't really matter how well our solution works, or even whether we try to solve the problem at all?

Naturally there is no sane reason to pursue the ego's impossible goal, trying to solve an impossible problem which really does not want to be solved. But just because we're spending some time here in this illusory world doing illusory things, we might as well spend it doing something!

If we are trying to please the user, and are trying to create the illusion for the user that a game exhibits consistency, reliability, smooth near-immediate responsiveness, fluid movement and animation, predictable performance and a pure continuum, then we should admit to ourselves that we are basically trying to create the impression that the total mess of `varied hardware` has not occurred.

Correcting the real problem would mean targeting a single specific hardware combination and taking over the whole computer. Although this is the ideal solution to all the problems we've created for ourselves, it is no longer even desired by users. Even the users themselves are now just as insane as the rest of us, actively wanting to be able to choose from a whole variety of different hardware combinations and yet at the same time expecting consistent, equal results. They don't even really want the problem of this fluctuation to be solved.

What they want is the *illusion* of it being solved, on a whim, as and when it suits them, and it's entirely the developer's fault if this need is not met. At least we can use software to compensate for hardware. The main point of all software is to add an ever-changing adaptable face to the fixed hardware. Users want to see a face they are happy with, even if that face is totally at odds with the real state of affairs beneath it.

Ideal software really has to fly in the face of the very hardware that it runs on. It has to try to compensate and adjust and cover up and reinterpret so that what the user experiences, after all, really isn't the hardware at all, but instead a carefully crafted illusion. And this is what they want. They don't care what's really going on, or how efficient it is, or what problems it is creating, they just want to be served.

They want the game to meet their needs and to meet them now! They want to sit passively and be entertained. The insane hoops that the developer goes through to make this possible are none of their business, and they especially don't want to be reminded that they are playing a part in making the situation worse. They do not want to wake up or be made aware of the real problem.

So what are we doing, creating games? Are we trying to give people what they want even if it makes them even more unconscious and insane, or are we to try waking them up gently but actively to become more aware of what is really going on? I'll leave that question for you to answer.

With all this in mind, and clearly given that I'm still intending to create software including games, and that I want the ideal impression that none of these problems exist, so that I can focus on conveying a message through the medium rather than the medium itself becoming the story, I am still left with trying to figure out how best (ie what is `good enough`) to create an even playing field.

For now the best solution I've come up with or read about is to use fixed rate timing at a fairly high rate for user input, to use fixed rate logic at a lower rate, to use whatever rate is possible for the graphics update, and to apply tweening and some kind of interruption-compensation. This assumes the logic will always take less time than the graphics. If it turns out that the graphics can be churned out at a higher rate than needed then maybe I will let it generate multiple frames and only display the frame which was most recently finished when it is time to flip. That will partly depend on the availability of buffer object support in OpenGL. A system which is able to generate a high volume of frames, regardless of whether they are displayed, is at least close to the ideal of an infinite analog-time update. But then, a huge volume of processing would be wasted which perhaps could have been used for much more intense output at just 60Hz. Do you trade responsiveness for processing power?

I thought about a fixed rate graphics update with a variable rate logic, but delta time logic is unappealing. Knowing a consistent rate of animation and movement makes things hugely simpler for me as a developer - it makes the physics simpler and more stable, and it gets me back to where I can `hard-code` movement rates and positions that will work on all systems.

At the end of the day, that's what we're aiming for - we're trying to get back to what it was like when this all began: one specification, one reliable performance, one rate of time, one level of graphics processing, one level of cpu processing, one simple piece of code, one consistent user experience, immediate responsiveness and accurate visual representation of real state. How we achieve that when the situation we're dealing with is totally the opposite is the whole challenge. Creating the illusion of that is the ultimate goal.

I wondered perhaps if, since we know that the display framerate is a fixed Hz, and we know what the time is just after a Flip, then we know that we have approximately X milliseconds after that before the next flip needs to occur. It is then within that timeframe that we need to make the best use of what we have. If we *predict* that the next flip will occur at a given millisecond, ahead of what the current time is, based on the display Hz, then we should be generating tweened graphics based on how they will look *at that time*, NOT on how they would look at the time that the graphics code started to run. If the graphics code takes 50% of the frame time then you'd already be 8 milliseconds behind when it comes time to display the frame - not to mention that the frame will be on-screen for a further 16 milliseconds before it is updated. So the user is going to experience 24 milliseconds of delay since their last user input was processed? This is unacceptable.

If you're going to use elapsed time to tween the current state to where it should be, and you're only doing it for display purposes (logic state is not affected), then you should be perhaps showing how things would look at least at the time of the next flip, if not how it would look halfway through the next frame. Obviously you still won't get new input to show on the output for about 8 milliseconds, but then when it does show the state won't be as old as normal. Obviously I'll need to test this to see if it really does provide a better sense of responsiveness.
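
In sketch form, the interpolation factor would then be computed against the predicted flip time rather than the render start time (the names here are mine, for illustration):

SuperStrict

Function AlphaAtFlip:Double(lastLogicTime:Double, logicPeriod:Double, flipTime:Double)
	' Interpolate to where things *will* be when the frame actually flips,
	' not where they were when rendering started. Alpha can exceed 1.0,
	' which means extrapolating slightly beyond the last logic state.
	Return (flipTime - lastLogicTime) / logicPeriod
End Function

' e.g. feed AlphaAtFlip(lastLogic, 16.6666667, predictedFlip) into the
' kind of tween shown earlier, instead of an alpha based on render start.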

Another option is to run logic in a separate thread or with a multitasking script system, so that the high-Hz script process interrupts the lower-Hz graphics process as soon as is necessary. The graphics process then puts a lock on the last completed logic state which it then uses to predict future positions with tweening. My idea is to prioritize a user-input script over a logic script over a rendering script.

The user input script gets scheduled by my scheduler at a consistent Hz rate - say 120Hz. It pre-empts all other scripts including graphics and logic - so if graphics or logic are in the middle of doing their thing they will get interrupted pre-emptively. Then the logic updates at, say, 60Hz, which is able to pre-empt the rendering script but not the user-input script. Then finally the rendering script uses whatever time remains, ideally at the Hz rate of the display. The logic script should still have enough time to finish its current iteration, in addition to the time taken by the user input script, so there is always time left to render graphics and the user input is not causing the logic to get behind.

Since the graphics script is sending instructions to the GPU `all the remaining time`, the GPU can start to render in parallel to the CPU doing the logic and user input, especially if we use OpenGL features like display lists and arrays. I am hoping that the ability to run CPU logic at the same time as the GPU doing some work will provide even higher overall performance than doing them sequentially. Ideally I'd have the logic and input in a real o/s-level thread running on a separate core to the GL script, but at the moment that isn't really supported - but maybe in future.

My task now is to add a system of virtual timers to my VirtualCPU so that it can schedule specific scripts at specific times. Those timed processes will have higher priority than all other normal processes, so normal processes will then use whatever time remains for regular processing. To make that happen I may have to cap the maximum time spent by the rendering script, so there is some time left for these background tasks, but we'll see. I also need to think of the best way to handle gross interruptions to the application's CPU time - o/s events, other apps stealing time, etc. Perhaps a spline-based smoothing tied into the tweening interpolation.

Another idea I had is to log how long the logic and render phases take, then use that information to decide whether it is worthwhile scrapping the current frame that's being rendered and starting over due to new user input. If you're halfway through a frame cycle and the user input script runs and gets a new keypress and there is enough predicted time left before the flip in which to draw a whole new logic state and graphics state, then it should try to do that. We want to get the `most recent` possible state to be showing visually when it comes time to flip, so if we still have the option to do so we should really try to squeeze as much delay as possible before we start to render the final scene for the frame. That means, of course, predicting how long the render will take, which is tricky. And what happens when the o/s jumps in with a delay, preventing your delayed frame rendering from completing before the flip needs to happen? Do you detect a recent earthquake of jitter interruption and ease up on the delay prior to rendering? Things to try and test.

My ultimate goal is to achieve close-to-perfect smooth motion and animation, close-to-immediate responsiveness, and close-to-precise timing.

What would really be great is if BlitzMax's timers would actually automatically trigger a function to be called as soon as a given time is reached, without any need to wait for events or cause a poll of the system. If we could provide a real hook function into the operating system, so that when the operating system detects a timer interrupt at the appropriate time it calls that function for you, that would be the ideal system. Then as soon as the user provides input it immediately changes the game state, logic is immediately updated at the right time, and everything is very efficient. But alas, what are the chances, given today's complex architectures? ;-)

Things to do.

Check out my Game Development Blog -ImaginaryHuman-

Execute!(Posted 2007-09-10)
Hi folks.

I have my VirtualCPU system up and running. I have to say there is something satisfying about seeing a program run versus just looking at the code. :-)

The multitasking and execution are working mostly correctly - there is a small issue with the way I increment my `program counters`, whereby they get incremented a little too often in a few special cases, but that will be easily fixed. I made a brief workaround while I fix it. Just to recap, I am not using any real o/s level threads or processes; it's all still a single BlitzMax application running as a single process, but it creates virtual threads of execution and manages its own virtual processes. Maybe in future when Max gets reliable thread support I will migrate the engine to a multi-core/multi-cpu system with real parallel processing.

I can add processes which are basically instances of execution of a script `program` and allocate a timeslice to each and it will do all the scheduling and pre-empting correctly. I decided it would be more efficient to allow a process to use all of its timeslice before trying to pre-empt, since the only reason there would be a pre-emption is if the current process scheduled another process to run with a higher priority. So I have the priority-changing code manage the pre-emption so it only needs to be called when it really applies. I also removed the brute-force polling for system events several thousand times per second as I realized this is major overkill and Driver.Poll() is remarkably slow compared to a millisecs() based timer system. I also got rid of my `real` scheduler timer, therefore, and resorted to a simpler and more efficient millisecs()-based check which is way more efficient overall. Switching from an event-driven pre-emption to a millisecs()-driven pre-emption means a jump from 2 million instructions/second to over 200 million.

I wrote a very simple program with one instruction which executes a `loop back to the start of this program` instruction. This basically resets the program counter for the current process back to 0, and also counts (temporarily) how many times that instruction has been called. I let this run and timed how long it is taking to loop and count.

My system is a 2.0GHz Intel Core 2 Duo iMac, so this single-process program is only really using one of the cores at 2.0GHz. At most, at the lowest level in machine code, you'd get about 2 billion instructions per second. With the simple looping and counting test program I get execution of about 230 million script instructions per second. To me this sounds impressive compared to the 18 MIPS (million instructions per second) that my old 25MHz 68040 Amiga was capable of.

It works out to be about 9 cpu clock cycles per script instruction. Considering it's a script language and it's also multitasking with a pre-emptive scheduler, I think this is pretty good performance. Obviously this is not a real-world test with a variety of instructions each with different requirements, and it could be even less real world if it were just a `No Operation` instruction which doesn't even increment the program counter, but it's a promising start.

I think it also looks promising in light of the fact that this is a totally flexible script program which can be loaded/modified/saved at runtime and does not require any `compilation`. The question of compilation is actually kind of debatable because really to compile something means to translate its meaning from one level to another. Since my scripts are written at a very low level and can be edited at the level of executability, there really isn't any compiling required. It will only be when I start adding on higher level interfaces that there will be some time needed for translation.

The VirtualCPU currently executes 256 script instructions before checking the milliseconds timer, and then when the timer is checked it only takes an action when either it's time to poll for events or the process's timeslice ran out. So currently events are polled every 15 millisecs and there is only a pre-emption (a jump to the scheduler process) after every timeslice expiration. The test program's timeslice is set to maximum priority which is 32 milliseconds. This results in about 31 pre-emptions, 62 context switches, and 66 polls for events per second.
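
In rough sketch form, the execution loop looks something like this - ExecuteBatch() and PreEmptToScheduler() are stand-in names for the real routines, and the 15ms event polling is omitted:

SuperStrict

Const BATCH:Int = 256	' instructions executed between MilliSecs() checks

Global sliceLeft:Int	' milliseconds left in the current timeslice
Global lastCheck:Int = MilliSecs()

Function ExecuteBatch:Int(count:Int)
	' ...blind instruction dispatch, as described above...
End Function

Function PreEmptToScheduler:Int()
	' ...context switch to the scheduler script...
End Function

Function RunVirtualCPU:Int()
	Repeat
		ExecuteBatch(BATCH)
		Local now:Int = MilliSecs()
		If now <> lastCheck
			sliceLeft :- (now - lastCheck)
			lastCheck = now
			If sliceLeft <= 0 Then PreEmptToScheduler()
		End If
	Forever
End Function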

Granted, this program test is probably somewhat faster than future results will be with real-world program situations, and it's probably making huge use of the cpu cache given that the program is so small and is revisited a lot, but I am hopeful that this will make for a very high performance script engine as time goes on.

I compared this small program to the same functionality written purely in the BlitzMax language - ie calling a function which increments a global counter and sets a program counter to 0. The BlitzMax equivalent isn't dealing with processes or doing any multitasking or timing - it ran at about 270 million functions per second. Since my script program ran at 230 million, my script is running only about 15% slower than native BlitzMax code.

I also in earlier tests tried directly mapping OpenGL functions and it appeared that my script was almost 100% the same speed as if written in native Blitz code. This possibly disguises the fact that the cpu is kept waiting for the GPU and the GPU time has not changed, but I am glad that the script is at least not any slower.

The task now, after some additional testing and features (such as adding some other fake timers and correcting a few issues), is to start/continue writing script instructions so as to develop it into a more fully useable `language`. Then in future I will be able to make more of a jump from writing BlitzMax code to writing script code - the system will start to be written in its own language.

Much to do.

Meanwhile in more creative moments I've been putting together a number of ideas for a good shootemup game, doing some research into the genre and testing out a few things. I think I will be able to bring some very original technical features to such a game, along with some very fun and exciting gameplay.

Check out my Game Development Blog -ImaginaryHuman-

Still scheduling(Posted 2007-08-26)
I've been working recently on coding the scheduling system as described earlier and most of the parts are now in place. I found that I was procrastinating quite a lot when trying to implement this, partly because it is quite complicated and is something I have never written before, and partly because making progress with it would mean having to move on to the next part and all that this entails - psychologically or otherwise.

The system is based on a loop which just constantly executes script instructions, and after x number of instructions polls the o/s for events. One of those events is a timer which triggers every millisecond to check if the timeslice for the current `process` has expired. A process is an instance of a running script - it's not at an o/s level so this is still a single cpu system at this time. Maybe if threads are properly implemented in future versions of BlitzMax then I can make the easy transition to a multi-core or multi-cpu arrangement. My ideal is that every program/object that exists gets to execute in parallel with no delay whatsoever, but that is far from being supported in hardware or in the low-level of BlitzMax right now - so instead I am playing with creating the illusion of multiple processors - ie multitasking, pre-emption, and scheduling.

The main scheduler is in place and I have designed it so I only have to call the scheduler program when a timeslice expires, and the countdown of the current process's timeslice is a very quick little subtract-and-test operation. The scheduler gets invoked once the timeslice runs out, or when changing the process priority, or when adding a process, or when a process gets to the end of its program. This way there only needs to be a context switch to and from the scheduler script when it actually needs to think about scheduling something. As mentioned, the scheduler is a script like any other and although it calls my first pre-built `ReSchedule()` function it doesn't have to use that to make a scheduling decision. So far I have the following functions, which cover most of the functionality needed (a rough usage sketch follows the list):

Init: Initializes the scheduler and virtual CPU system ready to start running scripts

EventHook: Captures events including timer events and user inputs, depletes timeslices and decides whether the scheduler needs to be invoked

ScaleTime: Scales the relationship between process priority and timeslice size

Prioritize: Sets the priority for a process and recalculates its timeslice and if necessary schedules the process (if more important) or schedules another process (if less important)

PreEmpt: Interrupts the currently executing script and switches context to the scheduler process - the scheduler's program then begins to be executed by the virtualCPU, which will likely include a call to ReSchedule()

Finished: This gets called at the exit point of a script, to say the script is done and can be unloaded or whatever

Schedule: Sets up an instance of executing a script and then schedules it to be run

ReSchedule: Examines which is the next process to receive virtualCPU time and makes the context switch to it, while also maintaining a double-buffered linked list system of processes

Switch: Switches context to an already initialized process and continues its execution

Execute: This calls the virtualCPU to begin executing script instructions, and to occasionally check for events from the o/s including the pre-emptive timer tick for the scheduler

Stop: Stops the virtualCPU from executing, at the next available check-point
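
To give a feel for how these fit together, here's a rough usage sketch. The real signatures aren't shown in this log, so the parameters and stub bodies below are invented purely for illustration:

    SuperStrict

    ' Invented stand-ins for the real functions listed above.
    Function Init()
    End Function

    Function Schedule:Int(script:String)
        Return 1                          ' would return a process handle
    End Function

    Function Prioritize(process:Int, priority:Int)
    End Function

    Function Execute()
    End Function

    Init()                                ' bring up scheduler and virtualCPU
    Local p:Int = Schedule("test.script") ' hypothetical script name
    Prioritize(p, 32)                     ' maximum priority = 32ms timeslice
    Execute()                             ' start executing script instructions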

I've implemented process priorities and a system which always makes sure the highest priority process is currently executing, doing whatever pre-emption or switching is needed to make that happen right away. There is also a double-buffered pair of arrays of linked lists of processes - each process priority has its own linked list of processes, which are scheduled round-robin before switching to the next lowest priority. When the timeslices of all processes in the current buffer have expired, it switches to the second buffer, to which all expired processes were moved with recalculated realtime timeslices.
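
In rough BlitzMax terms the structure looks something like this (illustrative names, not the engine's actual types):

    SuperStrict

    Const MAXPRIORITY:Int = 32

    Type TProcess
        Field nextProc:TProcess   ' circular list of same-priority processes
        Field priority:Int
        Field timeslice:Int       ' remaining milliseconds
    End Type

    ' One circular list head per priority level, double-buffered:
    Global active:TProcess[MAXPRIORITY + 1]    ' processes with time left
    Global expired:TProcess[MAXPRIORITY + 1]   ' timeslices ran out

    Function SwapBuffers()
        ' Once every active timeslice has expired, the buffers are swapped
        ' by a simple reference change - no processes are copied.
        Local tmp:TProcess[] = active
        active = expired
        expired = tmp
    End Function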

Any script can call any of the above functions, meaning any program can schedule another program, can pre-empt itself, can put itself to sleep, can change its priority, etc, but ultimately the scheduler has the decision-making power to make sure that higher priority processes get to run.

I haven't yet implemented the special-case events based on the FlipHook, so that at the start of each `frame` a specific process is executed (similar to calling the scheduler process) to perform frame-synchronized tasks on time. I also haven't implemented general purpose timers for the user to set up.

Also for every function I write I have to write a second version of it which can be used in a script, based on the special way that I deal with function parameters and how I handle the script.

I think there are still going to be a few other functions I have to write, to implement some special behavior. For example, when there are processes at the same priority level they will execute round-robin (processes contain circular linked list fields), but I want to really give each of those processes a percentage of their total timeslice, or an equal timeslice for all. I also might add something to evaluate performance and adjust priorities dynamically on the fly. I also want to make sure I can dynamically prioritize tasks that the user decides are important, which includes giving plenty of time to the application or window that the user is actually interacting with.

It's coming along :-)

Meanwhile I have been playing and looking at various versions of Defender games, including Datastorm - an old-time favorite - and am planning at the moment to make this my first game project that I will create with this engine.

Check out my Game Development Blog -ImaginaryHuman-

This and that(Posted 2007-08-15)
I have been slowly working away at figuring out how to do scheduling and how to put together the code that handles events, pre-emptive multitasking and running of scripts. I did some research into the various kinds of scheduling and found the Linux scheduler to be interesting. It is efficient and fair and I like the fairly simple design. I won't be copying all of it, as I really don't think I will need an operating-system-grade scheduler; I just need to be able to have several scripts running at the same time like threads, sending messages around and doing stuff. I have it working on a round-robin system just to begin with, but will be moving up to a basic model similar to the new Linux scheduler.

Basically what that comprises is a double-buffered array of linked lists, one list for each process priority. The array is filled with active tasks and timeslices are calculated. The highest priority array element gets first dibs on cpu time and then it goes through the list within that array element to execute the processes of the same priority in a round-robin fashion. Then it moves on to the next list, working down to the lowest priority. When a process runs out of time its new timeslice is calculated and it's then put into the second buffer ready for a buffer swap. When all tasks run out of timeslice the buffers are swapped with a simple pointer change. Longer timeslices are given to higher priority tasks. Tasks can set a base priority and then the scheduler can relatively modify it based on performance requirements. Highly interactive tasks or those that need to run `on time` get to have extra timeslice allocated when needed. This is the basics of how it works, although Linux gets somewhat more advanced in terms of monitoring io-bound or cpu-bound tasks and so on. Since my system is very open it will be quite normal for any script to be able to act as a scheduler, to pre-empt itself or other scripts, to invoke other scripts, to share its time with other scripts, to end itself, and basically to allow a whole new system to be re-implemented.
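
The exact priority-to-timeslice formula isn't pinned down yet, so as an assumption here is the simplest possible linear mapping, where higher priority earns a longer slice:

    ' Assumed linear mapping - the real formula isn't settled in this worklog.
    Function CalcTimeslice:Int(priority:Int, scale:Int = 1)
        Return priority * scale   ' e.g. priority 32 at scale 1 -> 32ms
    End Function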

Since the system needs to be good for creating graphically demanding games with constant or frequent graphics updates, I will also be building in a special way to trigger certain processes to be scheduled at a given time interval, e.g. when the frame Flips or when a timer event fires. The pre-emption integrates with o/s user input events at the same time as executing script instructions. A virtual cpu runs scripts in a brute-force way with no regard for what process or program is running, and the scheduler is also a script, calling special scheduling instructions. Events from a GUI or user input can be processed in a timely manner at the same time as executing script code. Also, since scripts will be calling OpenGL commands, it will be possible to multitask GL calls with regular script programs - something that is not possible in standard BlitzMax code without structuring the program flow in a very procedural way - which I hope will allow the graphics card to be processing graphics while other things are going on.

Processes allow a program to be run any number of times concurrently and in parallel. Of course all the headaches of thread management come into play, since all local memory space is shared. In addition, since network boundaries are invisible, there also has to be management of concurrent accesses across multiple computers as if it is all one shared memory space. This of course will all be transparent. In my opinion all scripts should by default share all resources and memory with all others, and then the narrowing down of access should be optional (to create what amounts to a traditional process running in a protected memory space). I am not very interested in protecting memory, since I am not anticipating or encouraging malicious behaviors, but the freedom to choose it will be there. After all, this is not some kind of elitist exclusive software model.

There is no centralized operating system core and any script can become a new kernel upon which a whole system can be built. If the user does not like how something is done they are free to change it or add something new. I believe in not only providing this freedom, but also actively and deliberately providing tools that make it easy to do this. We are not in the business of leaving the user out in the cold, and no matter what they desire to do, even if it is to scrap the whole system and start over, the system should support them and encourage them in their endeavors. Otherwise we would be showing a bias and a desire to have the user conform to our vision of how we think things should be done. That is against our philosophy.

Traditional software will aim to anticipate the user's needs through an attempt at understanding their needs, but really this is an attempt to misunderstand their needs. We cannot understand each other while perceiving each other as different. The real intention of a preconceived model is to keep the user having to revolve around the software to meet ITS needs. Who cares about the stupid software?! It is nothing. Instead we wish to be completely uninvested in any specific form of the system and to be just as enthusiastic about supporting the user in whatever they want to do, even if it means having absolutely no desire to support our own models, preconceptions, ideals or ways of working.

It is because we, as the creators of the software, have no `needs` that we are trying to make the software a symbol of - needs which the user would then have to fulfill - that we remain detached from the system, creating a space for the user to express their own visions. What is software but a reflection and symbolization of the needs of the developer, trying to be met in a backwards and unconscious manner? And what developer can see beyond the preconception of their half-awake mind? If software is just a symbol of egotistical neediness, forcing the user to comply or be denied, then it is of utmost importance for us to create software symbols that are as absent of egocentricity as possible. Preconceived rigid software only seems to be transparent and resonant with the user so long as it is correct in its anticipation of the user's mental activities, but as soon as those mental activities run against the grain the system has failed them, and instead its seeming invisibility becomes suddenly visible, revealing its ugly intentions to limit and disable. We neither desire to have an ugly intention nor to disable the user, and this means having to be willing to accept that the user may not want what we want for them. All is choice. Good software has as little central identity of its own as possible. Its centrality forces the user to revolve around it, while decentralized software lets the user put their focus where they want it.

We recently moved to a new apartment and are going through lots of changes at home and at work, so I haven't been doing much programming in the recent few weeks, but I will get back into it. I have been playing some games a bit more than usual, to relax and unwind, which in itself has been mildly a research opportunity (yeah right!) ;-) I apologize for the irregular choppiness of this worklog, I'm just writing it as I think of it.

Back soon.

Check out my Game Development Blog -ImaginaryHuman-

Compilation and Execution(Posted 2007-07-07)
What is a script system without a way to compile and execute? How do we do that?

Traditionally, in the world of computing there is tremendous focus on separation and dualism. We like to do things separately. This really stemmed largely from the fact that at the lowest level a computer is digital. The stark contrast between an `on` signal and an `off` signal separates the two and makes them distinguishable as separate states. Slightly above that you have the dualism of separate binary digits - 1's and 0's. Since the days of early software, a lot of the paradigms of thinking of how to create software have been highly influenced by this `dualistic binary thinking`. Even today we have things like texture sizes being powers of 2, or tilemaps with 16x16 or 32x32 tile sizes, or color components in 8-bit, 16-bit, 32-bit etc. The influence of separation from the deepest roots of the computer hardware through to the software has been immense.

What I wanted to do is to get away from this baggage. All of the ways of thinking about software in terms of thing A being separate from thing B. Is dualism really able to comprehend anything unified? No. As software has progressed we have seen more and more the attempted shift towards escaping from the mindset of duality, and from the mindset of the computer, which dictates limitations and divisions.

The idea of `user centric` is basically the idea of everything revolving around the user's needs rather than around the computer's needs. But even this idea is limited, because what if the user has dualistic needs that are not in his best interests? Ultimately this gets down to deciding what is really important and what most purely opposes the dualistic mindset - and that is pure Oneness.

Software is travelling towards and inspired by a mindset of Oneness where the legacy of `past thinking` and approaches based on separation are completely ignored. This is like starting over from scratch, since nothing that came before has the degree of unity that is here right now. So I started over.

I started looking at various aspects of computing and in particular the software models and the kind of thinking that is behind them. But it is important not only to look at these things but to look at them through the eyes of Oneness - to be willing to see every small trace of separation ideals present within it. To do so is very eye-opening. And since we realize the extent to which `separation` as an abstract concept has influenced so much of what is happening in software, we realize that more or less all of it needs to be rethought. When the wheel limits unity, the wheel needs to be reinvented.

Software cannot, unfortunately, escape from its ultimate roots, since it always has been and always will be a binary program running on a digital computer. But what we can strive for is to exercise the application of Oneness at as deep a level as possible. Somewhere along the lines there has to be an interface between the mindset of Oneness and the mindset of separation - or in other words, where the unification principles meet the separation ideals of the hardware. It is this meeting place that acts as the translation layer between ideals of wholeness and the insistence on separation by the hardware. It is here that higher ideas must be translated into computer programs, and it is this boundary which must be pushed back as far as it can go.

So first we recognize that the hardware very stubbornly and resistantly insists that everything be based on separation. Everything has to eventually end up being a program of binary digits that the CPU/GPU digests and acts upon. But that is more or less where the insistence stops. There is no reason why anything above that layer of operation needs to be influenced by or affected by the lifestyle of the hardware. And in fact, the hardware's little world of separating things out should be completely ignored as much as possible from there upwards, since it was designed to separate and separation is not our goal.

Traditionally in computing there are a lot of areas where separation ideals have bled from the computer hardware into the software. Or to put it another way, where the hardware has been allowed to dictate what the software can do and how it does it. As much as possible this must be avoided. Finding ways around all of the hardware issues to the point where they appear not to exist is a primary goal.

What we want is consistency. Consistency means honesty, which also means that wherever you go and whatever platform you are running on, nothing changes. Consistency denies separation and completely ignores any kind of specifics of form or changes in hardware. If you don't have consistency you begin to have fragmentation which HAS to lead to confusion for the user. The more differences you introduce into the software and the more ways you split it up and introduce levels, the more the user will be confused and lost. The more trees you introduce the less they can see the woods. It is of utmost importance that the user sees the big picture. They can't do that if the picture is separated out into lots of unrelated parts. One approach to achieving consistency is simplicity. Usually this is interpreted to mean limit the number of features. What it really should mean is to limit the number of separations between things at any given time - and that means thinking abstractly.

Where do those separations come from? The developer. Why does the developer put separations there? Because he thinks they are useful. Why does the developer think so? Because he thinks his idea of what the user wants is correct. It is not. It never is. We are almost all subjective in our perception of others. Not only do we not perceive them for how they really are but we do not understand their separate needs - and all separate needs are different. The more the developer tries to determine for the user what is best for them, the further away from their best interests things become. Because then it really is not about the user's best interests, it is about the developer's attempt to anticipate the user, which is not only impossible but is also very limiting. What happens when the user wants or needs or does something that the developer did not anticipate? What happens when the user wants to step outside of the developer's mindset? What happens when the user wants a feature which is not implemented or wants to work in a way that it was not designed to do? They meet a road block. That road block is a separation.

Oneness between the developer and the user can only be achieved by allowing the user to do whatever they want to do even if it is way outside of the realm of what the developer thinks should happen. Giving the user the freedom to choose, for themselves, and providing a `world` or reality in which to express that choice endlessly, is the ultimate gift. This reflects the way that The Creator (The One) provides the perfect freedom for its Creation and creates the perfect reality (Heaven/Nirvana) in which to express it. The world of possibilities created by software determines the scope of what the user can do, and clearly this should be as limitless as possible.

So a major step is not to try to understand the user's specific needs. There are as many specific needs as there are users, if not more. Trying to meet them all is people-pleasing, which is a form of paralysis that actually meets the needs of nobody. Oneness can only be shared between the developer and the user when the developer gives the user total freedom to go beyond the vision of the developer. This is the only way in which there can be an understanding between the developer and the user. Attempts to prethink the user and to predesign the interface and to hardwire the code and to shut out the user's choices are bound to attack the very communication and meeting of needs that is being attempted. The only needs that can be met are needs that everyone shares, and those are simply the need to be themselves - to choose freely to do whatever they want. Providing that need is the only need that CAN be provided. All the rest is hit and miss. Obviously this is not attractive to most developers since they want their ego to acquire attention. When the developer can put their ego aside and `be there for the user`, in my opinion that is the achievement of the only thing that is meaningful. This unfortunately has not happened in traditional software.

I call software traditional because I am implying there is another way to look at it - a new way. Traditionally, developing software has been separated out from executing software. Interacting with software has been separated from changing it. Software has to be `on pause` in a freeze-frame state of suspended animation, while you do some surgery upon it, later to revive it as it springs back to life. This mindset is one of separation.

The very separation of development from execution inevitably leads to certain consequences. The user now has to wait for compilation of changes. The user has to think in different terms when they are developing to how they think when they are using. The user has to learn two languages. The user has to stop their use and testing of changes in order to implement the changes. You have to have separate sourcecodes and executables. You more or less have to recompile whole units of sourcecode in order for them to execute again. Developing the software is quite far removed from using it. All of these contrasts make it difficult for the user. But most of all, this software model has been leveraged by the egos of this world to deny access to the mechanism of the software. In other words, the sourcecode is hidden and the user cannot change it.

Now, I am every bit as much in favor of free software, as in freedom of speech, where the user is given the same abilities that the developer had. But I think that the way of going about this, in the traditional software model, is really not addressing the full issue. Making the sourcecode available in order to give the user the same freedom that the developer had still does not address the lack of freedom caused by separating development from execution. It is the whole way of dealing with software and splitting it up into states of executing and editing that is just as much an attack on freedom as is retaining one of those two parts in order to limit the user. Sharing the sourcecode still keeps this separatist software model in place and does not really actually change anything.

So long as you are sharing the *freedom* that the developer has, with the user, on all levels, then that is the true spirit of free software. This may or may not mean sharing the sourcecode, depending on whether doing so really gives them the same freedom. The first thing we must do is bring together the separated areas of software, so that development and use are one and the same. There should not be separate states of development and execution any more than there should be separate states of operation. In the mindset of Oneness, everything is always live and separation is the exception to it. So in my system software is always live. Changes are always performed live, and these changes directly map to the executable condition of the software. In other words, there is no noticeable compilation time. Code can be self modifying, all data is open and shared, there are no conservative barriers trying to create preventions, the user needs only operate the software in a seamless flow of change, and full freedom is conveyed to them at all times.

Since the system will give the user as much freedom as I have as the developer, that means they can do everything with it that I can do. That to me means `free software` as in freedom of speech. If the system itself has no preconceived interface and no preconceived way of working, then it's a clean slate both for the developer and the user. Sharing the `platform`, ie the virtual machine, which creates an even consistent playing field on all hardware, provides the same degree of freedom as if you had access to the sourcecode. Only a system that lets the user decide what they want to do, in the most broad and general sense - be it to create or play games, or applications, or make art, or listen to music - whatever, is a system that really shares freedom and allows Oneness to be present.

So in my system there is no separate sourcecode isolated from the executable. There is only the executable. Changes are made directly to it, live. It is flexible enough that any other `languages` can be built on top of it, and there is a direct mapping between the instructions and the executable functionality - similar to assembly language. The virtual machine creates a level platform on all systems and its virtual CPU executes the program code. Stopping the program's execution to make a change that is not immediately live is then a *choice*, freely given to the user, rather than an automatically imposed limitation that they have no say in. This allows them to choose to work in a traditional software model with separate source codes if they want to, but it is not a requirement. Creating a system where I am not assuming anything about the user gives them the most freedom to be themselves.

It will be possible to create languages on a seemingly higher level with structures designed for ease of use, and I will develop one of those myself also, but then it is merely a matter of translation between those and the native code system. It will also be possible for a program to execute any instruction within any other program in any order at any time, and converting to this other program's view of the first program constitutes `compilation`. What is compilation other than a translation between different ways of perceiving? As such, whether a program is interpreting another program, or whether a program is executing directly itself, has no bearing on the code needed to provide both forms of execution. Interpretation and execution are entirely the same process. This is how it should be - a single execution engine. Traditionally an interpreter is quite far removed from direct execution but in my system they are both the same. Programs can be interpreted or executed directly and the virtual CPU has no awareness of there being a difference between them.

When changes are made to a program, there should be no delay. There is only delay when there is a different way of looking at something. Contradictory ideals create separation which creates delay. When program instructions directly map to executable routines with no translation needed this is the optimum expression of Oneness. Efficiency comes from directness which extends from honesty which provides consistency. This also allows per-instruction compilation - when you change an instruction in a program the instruction you select IS the new routine it will call and this is immediately inserted into the executable. Compilation is completely distributed which makes it almost invisible - as fast as you can edit, it can execute. The removal of this delay between development and use is an important way to not limit the user's freedom and to maintain a sense of Oneness. Essentially the compilation engine is invisible, because there is no compiler needed. There is simply a seamless flow of execution because, after all, what is a compiler but just another program executing? Changing the program IS compiling it.
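
As a concrete illustration of `changing the program IS compiling it` - a sketch, not the actual engine source - if a program is an array of function pointers then editing an instruction is a single assignment, and the change is live immediately:

    SuperStrict

    Global counter:Long

    Function OpInc:Int()      ' invented opcode: increment the counter
        counter :+ 1
        Return 0
    End Function

    Function OpDec:Int()      ' invented opcode: decrement the counter
        counter :- 1
        Return 0
    End Function

    ' The program IS the executable - an array of directly callable routines.
    Global program:Int()[3]
    program[0] = OpInc
    program[1] = OpInc
    program[2] = OpInc

    ' `Compile` an edit: replace one instruction with another, live.
    program[1] = OpDec

    ' Execute the (already-compiled) program.
    For Local pc:Int = 0 Until program.Length
        program[pc]()
    Next
    Print counter   ' prints 1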

This engine therefore is remarkably simple and very efficient. It is vitally important that, from the very roots upward, even in the design of the Blitz sourcecode, freedom is limited as little as possible. Why? Because every limitation you introduce on any level projects onto and imposes limitations at a higher level. It may not seem important, but if you design your Blitz source in a way that will limit you later, then you also are limiting what the user can do. The mindset of Oneness has to be present all the way through the process. Intentions to limit and to separate start in your own mind.

There are ways that BlitzMax works which actually also impose limitations, since it has aspects of separation in its mindset and design. These have to be worked around and smoothed-over in my sourcecode, to reverse the impact of these limitations. If I did not do this these limitations would bleed all the way through to the user and their creations, limiting their freedom. Part of my task is to, in a way, write a `BlitzMax driver`, to translate between how Blitz wants to do things and how things need to be done to cultivate freedom. Similar to how the hardware has its mindset of separation and needs a translation between Oneness and separation, so too does Blitz have a mindset which doesn't entirely foster freedom or Oneness. So while writing the Virtual CPU platform I also have to write sourcecode which `re-interprets` Blitz's functionality, within the sourcecode level, to create a source-level API for the rest of my own sourcecode to use. It's kind of like having two levels of sourcecode - one that abstractly dissolves the separations forced by Blitz and another that uses that abstraction to create functionality. This is all fun stuff.

Back to some coding...

Check out my Game Development Blog -ImaginaryHuman-

Scripting(Posted 2007-06-04)
Somehow it seems to be about a month between worklog entries, which is a long time for me not to be writing something ;-)

My first news is that I am now the proud owner of an Intel iMac, featuring a 2.0 GHz Core 2 Duo processor, 17" widescreen display, 1 Gig of RAM, a 160GB harddrive, built in iSight camera, and OSX Tiger. This is now my new BlitzMax development platform, representing an intent to pursue software development and sales as a future source of income.

I have partitioned the harddrive into 3, with the intent of eventually making this a triple-booting system. At the moment I am working on OSX only and this will be my main platform of choice. But eventually I will install some form of Windows and also some form of Linux such as Kubuntu.

Along with our existing PowerPC iMac (and perhaps under the Rosetta PPC emulation), this will then round out development platform support for all BlitzMax platforms, allowing me to create software for all four systems. While it is nice to develop for OSX mainly, I do recognize that the user base for Windows is much larger and also that supporting Linux is a good thing.

Installing BlitzMax was a simple process - download the full install for Intel Macs, install it, download the update for the most recent version, install it, do a syncmods, then rebuild all modules and I am ready to go. So far I've had no problem compiling any existing sourcecode.

The machine is at least 2 times faster than our older PPC iMac, graphically and computationally. It's nice to see things like my blobby objects running that much faster. Visually it's hard to tell that you have the extra oomph because it might only mean the difference between 25 and 50fps, which visually is not a big difference, but everything is more responsive. Overall I'm pleased with it and I think it will be a pretty nice development platform for some time to come.

With that in mind, and after faffing around installing all kinds of software and trying things out, I set about getting back to some actual programming and continuing development of this project.

For some while I've been thinking about scripting and programs and multitasking and all the issues to do with `running programs`. After some philosophical thought and designs, I have come up with most of a solution and am starting to implement it now.

To elaborate on the philosophy a little, it is largely based on holograms and their implications. It is also based on the concept that there is no time or space and everything happens in a single instant totally simultaneously. This means that ideally every single instruction has its own CPU (the main cpu is present in all the parts as in a hologram), and every instruction is executed in parallel in a single moment. In a way it is somewhat similar to distributed computing or a decentralized operating system.

Symbolically, every program therefore has its own CPU as if you had a dedicated CPU for every program that runs, with no hardware limitations. Massive parallelism! It is the aim of the system to approach instantaneous completion of infinitely parallel processing, and that is the goal of all computing, but obviously that is not entirely achievable (though very desirable). So this is where the real hardware CPU time has to be distributed, shared and scheduled.

A scheduler is just another program, when you get down to it. Running a sequence of functions from a program is no different than calling on various programs in a sequence, which is most of what a scheduler does. Thus there can be as many schedulers and ways of scheduling as is desired, whether it be just a sequence of function calls which call programs in sequence, or some other fancier dynamic technique based on timing or priorities. Effectively every program has its own scheduler.

Every program also has its own CPU which is pre-emptive, since it is able to decide how much time or how many instructions to give to executing the program, and can stop at any time (in between instructions) to put the program on hold and return control back to a scheduler. Every program is also re-entrant and manages its own processes, so you can run the same thing as many times as you like in parallel.

One thing to bear in mind is networking. This system considers there to be a network connection between any two objects no matter where they are, and messages are sent back and forth in a non-dependent non-hierarchical way (which can operate in parallel). So the network isn't just the interface between one computer and another, it's the interface between one object and another, anywhere. Messages are routed via `post offices` (not sure what to call them yet) and the post offices manage sending it to the appropriate location, be it on the same computer or another. So the network is not just outside the computer it's also inside it. The network is *everywhere*.

I bring up the network because the network and message sending has to become a part of the way that CPU time is managed and attention is given to objects so that they can send and receive their messages. It also means that any program or instruction on any computer can execute any program or instruction on any other. So the scheduler could be on my computer and be executing programs on some other system. Since messages can be sent and not waited for, allowing local execution to continue, there is no situation of being locked into waiting for stuff, even if it takes time to transmit. And so the scheduling and the pre-emptive CPU and the passing of messages and the handling of the network and the execution of programs and the calling of subroutines and so on are all a single unified architecture.
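
As a sketch of the `post office` idea (all names here are invented - this is just the shape of it, not settled code), delivery looks the same to the sender whether the target object is local or remote, and the sender doesn't wait:

    SuperStrict

    Type TMessage
        Field targetID:Long       ' universal ID of the receiving object
        Field payload:String
    End Type

    Type TPostOffice
        Method Deliver(msg:TMessage)
            If IsLocal(msg.targetID)
                DeliverLocal(msg)        ' hand straight to the object
            Else
                SendOverNetwork(msg)     ' forward to a remote post office
            EndIf
            ' Either way, the sender has already moved on - no waiting.
        End Method

        Method IsLocal:Int(id:Long)
            Return True                  ' stub - the real test isn't designed yet
        End Method

        Method DeliverLocal(msg:TMessage)
        End Method

        Method SendOverNetwork(msg:TMessage)
        End Method
    End Type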

Internally, execution of scripts requires no interpretation as such, since it runs a sequence of directly mapped functions via an array of function pointers executed in a given sequence (which is decoupled from the order they are stored in). This allows for pretty fast execution, as if these were bytecodes that are translated into chunks of executable program. To the user, they deal only with the opcodes or tokens in the program.
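
Here's a minimal sketch of that dispatch model (invented names, not the actual source): the stored instructions are directly callable functions, and a separate sequence array decides the order they execute in:

    SuperStrict

    Global counter:Long

    Function OpInc:Int()
        counter :+ 1
        Return 0
    End Function

    Function OpReset:Int()
        counter = 0
        Return 0
    End Function

    ' Storage order is independent of execution order.
    Global instructions:Int()[2]
    instructions[0] = OpInc
    instructions[1] = OpReset

    Global sequence:Int[] = [0, 0, 0, 1, 0]   ' inc, inc, inc, reset, inc

    For Local pc:Int = 0 Until sequence.Length
        instructions[sequence[pc]]()   ' direct call - no interpretation layer
    Next
    Print counter   ' prints 1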

A number of interesting features are going to be possible since I like to work at an abstract level and apply the idea to the whole system. One such idea is that you can make anything indirect, which will make it possible to write `in-line` custom types, to do self-modifying code, to execute instructions in various orders based on input, and some other pretty interesting things. It naturally will lend itself to object orientation on all levels, not just in the form of custom types but in the form of the general *idea* of having different `levels` or `areas` of program execution. Whereas most code within a BlitzMax method is procedural, in my system the code itself is object oriented at the level of the instruction. Since I am implementing my own CPUs I have full control over how they operate.

Eventually there will be a low-level language which will be similar to assembly languages, where you have simple instructions with one or two parameters each. Larger constructs such as loops and ifs and types etc will be implementable within it, but it will be basically to perform simple low-level actions. There will also then be a broad spectrum of higher level features which can be accessed from programs and built from them. Some of the more useful features will become official functions, like very high level commands.

Ultimately programs will execute as part of a vast web of interconnected mini-computers with messages flying all over the place and all kinds of forms of scheduling and execution going on. Also it is my philosophy that everything is always live unless deliberately being frozen, so changes made to programs initially create immediate effects. The user will experience zero compilation-time.

I am sure more language features will develop as I go along. For now I am working on implementing functions to be accessible by the programs, eventually leading to being able to write simple test programs that run as scripts. I am debating whether to let the sourcecode remain as modules or whether to just have program-runnable versions of all functionality, so that user programs are the way to access the modules rather than from BlitzMax source. Or maybe I will do both.

There are still a few design issues to work out but it's coming along.

Check out my Game Development Blog -ImaginaryHuman-

Cameras(Posted 2007-05-19)
I've been working to transfer and rework some existing camera/viewport/display code over to my new system and have the basics now working and ready to work with my object system. The screenshot below shows two 3D views within the same screen, ie two Cameras. As you can tell by the perspective of each object they aren't just two objects within the same camera view, and I promise I did not cut-and-paste ;-) Doesn't look like anything major but I am pleased that the Camera system is working. This means support for at least some of the simpler split-screen modes based on rectangular areas.

I have reworked the display opening/closing system and it is now more intuitive and capable. You can set up fullscreen displays and windowed displays, then once created you can hide and show them, or even re-create while switching between the modes. It's tested to be working which is good. It also does my usual OpenGL tests to make sure the context is working. You can have as many desktop windows as you like but probably only one fullscreen for now. Internally the desktop support uses MaxGUI. In future the GUI element of the system will be an option, whereby you can choose whether to use an OpenGL-based GUI or to `pop out` GUI elements into MaxGUI native-look gui elements/windows. Personally I am mostly in favor of OpenGL GUI's for the graphical speed and am planning for development/editor tools to have stuff flying all over the place.

I am not actually quite sure what comes next. I have to take a break and think about it. I know there is more to do with regard to object management, and to finish off the `pivot` support (named `points` for now). Since objects can be related in all sorts of ways not just hierarchically, I have to figure out how to deal with using pivots for location in 3D space, such as to move the camera or to move objects around. I have to eventually get to working on the multitasking script system, to get programs working, and also other basic image stuff like texture handling etc. Then of course at some point there is the offline software graphics rendering (probably 2D only but also image processing). Networking is yet to be done although some parts are in place, to support creation and manipulation of remote objects as if local. I also have to work on the messaging system, and start to build up more functions to take the place of mytype.mymethod() calls. Still, it's getting there, slowly.

I thought it would be good to be able to easily combine 2D elements with 3D, such as if you have part of a game or gui and you want all the rest of it to stay in perfect Orthographic-projection 2D while other parts turn into 3D objects right where they are and then move around three-dimensionally. I figured out where to place objects and how to set up the camera etc so that in this `simulated orthographic` mode the object looks to be exactly where it would be in real orthographic. So then switching between the two is easy. You'd start off drawing the 2D stuff in perfect Ortho projection and then switch on 3D and fly a bunch of buttons or a window out into the screen. It's also set up to support changing the field of view of the camera, as if it had a zoom lens able to move from wide-angle to narrow-telephoto.
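
For the `simulated orthographic` placement, the standard derivation (this is the textbook relationship, not necessarily my exact code) is to put the 2D plane at the distance where one world unit projects to one pixel. For a perspective camera with vertical field of view fovY and a viewport h pixels tall:

    SuperStrict

    ' Distance at which a perspective view lines up with a 2D ortho view.
    ' Note that BlitzMax's Tan() works in degrees.
    Local h:Double = 768.0        ' viewport height in pixels
    Local fovY:Double = 45.0      ' vertical field of view in degrees
    Local dist:Double = (h / 2.0) / Tan(fovY / 2.0)
    Print "camera distance: " + dist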

Anyway, here's a basic screenshot showing two simultaneous and separate cameras. The future of this basic camera system will be for cameras to be non-rectangular/irregular, for their shape to be generated in realtime with geometry or images, and to be able to do things like soft edges, cross-faded edges, spline shapes, blobby objects, image-based shapes and so on. I've already designed some very unique and cool dynamic split-screen modes, which will add greatly to the gameplay in some games. Remember also this is not just a game-creation system, it's also for general purpose graphics processing, art creation, and generally having fun. So the terminology it will have won't be entirely game-oriented. But it will be cool for making games too :-)


So far I have written the following functions/methods:

HolographicLink: CountLinks, PrevLink, NextLink, SwapLink, SortLinks, CopyLinks, MergeLinks, SplitLinks, LinkBefore, LinkAfter, UnLink, GenerateID, IsEmpty, StoreObject, FreeObjects, CountObjects, AvailObjects, UsedObjects, SwapObjects, SortObjects, CopyObjects, MergeObjects, SplitObjects, Fragments, Defrag, Optimize, PrintSelf, PrintFull, PrintAll (this module still needs some work)

Camera: CameraAutoActivate, CameraActivation, CreateCamera2D, CreateCamera3D, CreateCamera2D3D, CameraProjection, CameraViewport, CameraPosition, CameraSize, CameraLock, CameraRange, CameraZoom, CameraXFlip, CameraYFlip, CameraFog, CameraFogMode, CameraFogRange, CameraFogStyle, CameraActivate (this is a finished module I think)

Display: DisplayAutoFull, DisplayAutoDefault, DisplayAutoTest, DisplayAutoUnknownHz, DisplayForceAspectX, DisplayForceAspectY, CreateDisplay, CreateDisplayWindow, CreateDisplayFull, DisplayWindow, DisplayFull, DisplayError, DisplayStatus, DisplayAspect, DisplayDefault, DisplayTest, DisplayHide, DisplayShow, DisplayClose (this is more or less a finished module)

Pivot: Position, Rotate, Scale, Move, Turn, ReScale, Translate, AlignToVector, PointAt, MoveToward, Orbit (this is a work-in-progress module and all objects must have at least one pivot in order to be placed in the world, plus objects can have any number of pivots associated).

About 3500 lines of code so far.

Check out my Game Development Blog -ImaginaryHuman-

3D!(Posted 2007-04-28)
Hi folks. I'd been holding off making a worklog entry for a while hoping to find a metaphysical angle to take, but as things progress more toward actual code it's probably better to talk about what's really been happening.

I finished most of the multidimensional linked list, at least as far as I can tell, but for a few functions. I added some test code and managed to get it to store an object, which just happens to also be a multidimensional linked list. So it's nice to know that the list system can store itself, and that the groundwork is now laid to implement and store other kinds of objects. There are still things to do regarding networking and message sending and programs but I'll add those as I go.

I started looking at what objects to add first and typically in the past this has meant opening a display. So I started looking into that, and came to considering whether or not to go 3D. I am not exactly the biggest supporter of 3D graphics and never have been, ever since the glory days of the Amiga with its almost exclusively 2D games, 2D graphics apps like Deluxe Paint, and generally big focus on doing everything `flat`. I have never really intended to do anything 3D and largely the issue with it was that it seemed so mathematical and complicated. Trying to do 3D on an Amiga with no `libraries` to help you pretty much meant you had to write an entire engine from the ground up, understand all the 3d math and transformations, matrices, etc, which I did not find appealing. But that was before BlitzMax and OpenGL.

It was to my delight that I discovered how easy it really was to get a 3D display set up and to rotate an image around all 3 axes. It took a bit of learning but it was up and running quite quickly, and I'm really glad it doesn't require me to write a single bit of 3D math myself. I don't have to know about matrix math or perspective transforms or all that low-level stuff. Seeing that simple image rotating in 3D, texture mapped, was quite inspiring. So I've decided to make the move toward supporting 3D.

So initially I'm designing the camera system which will support proper orthographic as well as simulated orthographic, and perspective projections. I am looking toward using an `observer` system whereby you can have as many cameras as you like, each with a lens that can go from wideangle to telephoto, and then observers who take the camera to a given position in the scene and point it at the scene. I don't want to think of a camera as being the thing that you position somewhere, because really it stays stationary and the scene moves in front of it. And by itself a camera can't move, it has to have an observer move it and look through it to give it purpose and meaning.

I'm still planning to support rotated displays, panning, zooming, stretching, flipping and whatever else. Maybe focussing as well, eventually. Probably will also make it do stereoscopic. This system isn't just for games it's also for general graphics and other software, so I think it will be a great addition to have good 3D features.

This of course means that, since I know almost nothing about 3D (and I'm still presently not that interested in 3D games), there is plenty to learn along the way as things develop. It's not going to be `just` 3D, either, or with an afterthought of 2D support. Most of my future game ideas are 2D. At the moment the 3D will be the afterthought but I'm trying to lay a good foundation so it won't be hindered down the road.

Multiple dimensions here we come!

Check out my Game Development Blog -ImaginaryHuman-

The Holographic Multidimensional Linked List(Posted 2007-03-18)
Wow. It's been almost a month since my last worklog entry. I guess I had to wait until I had something to talk about. :-)

I've been pretty busy and have made some good progress. I spent quite some time working on a `custom linked list` module, which is my first module. It was fun figuring out how to make a module and to see it show up in the documentation and as real tokens in the IDE. Cool. My only complaint is how manual the process is, but I'm getting used to it.

I decided I needed a custom linked list for many many reasons. BlitzMax's lists generally are slow compared to arrays, they store data that I don't need, they don't store data that I need per object, they only allow one object per link, are not multidimensional - you can only expand the list in one direction unless you use a linked list of linked lists which entails a lot of unneeded overhead, and they just aren't flexible enough for what I need to do. They are based on a `technical programming paradigm` which tries to provide a generic solution of functionality. In my system I need it to do more than that - to be based on a spiritual philosophy and to be on the same integrated wavelength as the rest of the system.

My new linked list system is based on holographic principles. Obviously it is just code and not an `actual functioning hologram`, but the concepts are very solidly behind it. The way that holograms work is that the whole is always present in every part. You can take away some parts and it doesn't matter at all because every remaining part still contains the whole. And no matter how many times you divide the parts they still all contain the whole. It's along the lines of the idea of there being a divine spark which is equal with God that is retained within all people even though they consider themselves separate parts of the whole, making it possible for anyone to be One with the whole.

The really important principle underlying holograms is that originally there was only `One` whole. This whole extended its very own nature, and in that extension bestowed upon its creation *ALL* of the exact same traits and qualities and capabilities as itself. "Made in the image of". Thus its creations all fully share, in complete unlimited openness, all of the nature of its creator. They are just as creative, just as able to extend themselves, just as powerful, just as whole and just as complete. Nothing about the whole is denied to the part and every part contains all of the whole. It's something of a `creation story`, really.

With this in mind, and given that the universe and everything in it is holographic in this way, having been created as an extension of creators who themselves were extensions of the whole, the task is to devise a way of representing objects (parts) and their relationships which captures the essence of this sharing of wholeness. This leads us to the holographic multidimensional linked list.

Since every part contains the whole, and since there really is only One whole, this means that one single part of the linked list must be able to contain all others. This is clearly not feasible with a traditional linked list because each link only contains one object. This is a complete denial of the nature of reality. In a holographic linked list one single link in the list must be able to contain ALL other parts, all objects, and all relationships between them. It must be able to relate to all of it in its entirety. Just as the original creator IS everything, so too must each part be able to be everything.

This leads to an important conclusion: One single link in the list must BE the entire system. It must all be contained within one link!

This one link must have infinite properties, able to comprise and relate to any number of parts, and also to the whole. It must be able to stand alone and yet have full complete access to all functionality from within itself. This doesn't simply mean that you have a link in a list containing an object. It means you have a link able to potentially contain ALL objects, to define any number of relationships between them, to create grouping behavior, to provide a sense of identity, to handle the use of time via undo/history storage, prediction and planning, to be able to interconnect with any number of other computers in any number of ways with built-in networking, to access files in a pervasive file store, to deal with programs and their execution, to deal with object and memory allocation, and basically perform all functionality as a single object. All of the basic functionality of the entire system must be contained in a single place. That single place is the `Link` object.

A single Link is all that is needed to implement a full system comprising any number of other `linked lists`, to store any number of objects and to process programs, all within a single data structure. You could say that then no other Link is needed, and that is true, but you then can create any number of other links and connect to them in any number of ways. A link in a linked list basically becomes a whole system, with infinite expandability. The link IS the linked list.

With this in mind I decided I do not need or want a `List` type to encapsulate Links. The Link itself IS the linked list, so all operations you might want to perform on the list are done from within the link. Additionally, corresponding to the idea of infinity, the `list`, which is some multidimensional flexible web of links, is circular. There is no need for the list to have a head or a tail, a first or a last, a beginning or an end. Any link can serve that function as desired. Thus the holographic linked list is always circular. This provides a natural foundation for infinite looping - a feature that has use and should not be censored.

So basically I'm working to build this `Link` object, which I decided will be a kind of encapsulating object for arrays of objects. This way, if objects are okay being stored sequentially, they can be accessed much faster than having to indirectly jump around in memory using extra pointers. So each link contains any number of objects and each object contains a Link pointer that potentially can connect to every other object (to the whole). There is also some other data thrown in there. There is some info about what kind of objects are stored, and basically an object is a BlitzMax object, which will be a bunch of custom types. While custom types narrow down and confine freedom in a hierarchical closed mindset, they are necessary to integrate between our holographic system and the programming language.
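
In BlitzMax terms the rough shape is something like this (field names are illustrative - the real type isn't shown in this log):

    SuperStrict

    Type TLink
        Field objects:Object[]    ' objects stored contiguously in this link
        Field used:Int            ' how many slots are occupied
        Field kind:Int            ' what kind of objects are stored here
        Field nextLink:TLink      ' circular - a lone link points to itself
        Field prevLink:TLink

        Method New()
            nextLink = Self
            prevLink = Self
        End Method
    End Type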

Within a Link there is also an ID which I've been working to design. I pondered philosophically the foundation for the ideas behind having Global/Universal Unique IDentifiers and the theme of identity in general. It basically occurred to me that by trying to have a unique identifier you are seeking after absolute separation and anonymity which is completely impossible. The best you can do is to use some degree of predictability, often based on some pre-existing data, to come up with an ID. That means there is always a way to trace where the ID came from, the anonymity is incomplete and the ID is not entirely unique.

I had to ask myself if this striving for so-called `security` is something I want in my project and I decided that it is not. It is an impossible goal and I don't want to focus on trying to pick up the pieces after people have deliberately decided to undermine the system. In a way that would be an effort to remove their freedom to act as such, which is contrary to the aim of this project. So I am happy to settle with a very easily and quickly generated counter-based ID which will be in two parts. It will comprise one 64-bit value which is simply a counter of how many objects have been created since you began using the software, and will be stored in a file. The second part is another 64-bit value which is a counter of how many separate computers have been issued values, from a central server. The nice thing about this is that you only need to talk to the server once ever to get a unique ID for your computer and then this stays fixed. You don't then even need to be online and your computer can use its `local` counter to generate new ID's. The two values are joined together as a 128-bit value which is then the Universal Unique ID. I decided I did not want to use anything to do with the hardware as part of the ID, including the time. Handling a counter is very easy and fast and transmission of a set of ID's over a network entails only sending the first and last ID numbers.
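
A sketch of that two-part ID in code (the file persistence and the server call are omitted, and the values here are invented):

    SuperStrict

    Global machineID:Long = 12345   ' issued once, ever, by the central server
    Global localCounter:Long        ' persisted to a file between sessions

    ' The 128-bit ID is simply the pair [machineID, localCounter].
    Function NextID:Long[]()
        localCounter :+ 1           ' making a new ID is just `add 1`
        Return [machineID, localCounter]
    End Function

    Local id:Long[] = NextID()
    Print "ID: " + id[0] + "-" + id[1]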

I like this system of identity because it reflects upon the meaning of what an identity is. When you decide that some part of the world is separate from other parts, you enclose it in some kind of boundary, even if only psychologically. The identity then has two parts - what is inside the boundary, and your perception of it from the outside. What is inside is free to change regardless of how you see it, and yet how you see it adds to and changes part of what it is (as in the principles of Quantum Physics where your perception changes what is there.)

So perception is the `external ID` issued by the all-seeing server, and the `internal ID` is appended to it and freely altered `from within`. This combination of who you are from without and who you are from within makes up the whole idea of identity, distinguished by a boundary. In the case of computers the boundary is the addressable memory space that the CPU has access to. Anything outside of this is the realm of the external perceiver, supplying half of the ID, and anything inside is the realm of the concentrated self supplying the other half. I think I am happy with this solution as a way to generate ID's for each object, and to be sure that each object is unique across the board. I think it also reflects the mindset of cooperation to have everyone `agree to` a portion of the identity of each object, by way of agreeing to a single server issuing half of the ID. It's also very fast to make new ID's - just add 1. :-)

I have also considered that I might add a third 64-bit value, which is a combination of the actual memory address of the Link object that an object is stored within and its offset within that Link's array of objects. When a message is then sent over the network it would be able to directly call upon a specific object to receive the message, going straight to it rather than searching for the ID. This of course is data that would have to be re-synchronized whenever the software exits and restarts, since the addresses will have changed, but it does provide the added feature of `addressing` remote objects directly.
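
If I do add it, the packing itself would be trivial - something along these lines, assuming a 32-bit address and a 32-bit offset (the names are placeholders):

Strict

' Sketch of the optional third value: a 32-bit Link address and a
' 32-bit array offset packed into a single 64-bit `direct address`.
' The address parameter stands in for however the Link's memory
' location is actually obtained.
Function PackDirectAddress:Long(address:Int, offset:Int)
    Local mask:Long = (Long(1) Shl 32) - 1    ' low 32 bits
    Return ((Long(address) & mask) Shl 32) | (Long(offset) & mask)
End Function

Function UnpackAddress:Int(direct:Long)
    Return Int(direct Shr 32)
End Function

Function UnpackOffset:Int(direct:Long)
    Return Int(direct & ((Long(1) Shl 32) - 1))
End Function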

So I am working to put this ID system into the Link object, for each object stored there. I think there will still be more aspects of the system that need to go into that object, as they occur to me. In particular the networking is not done yet, but I have spent some time figuring out how messaging will work between objects in such a way that it can be pervasive and transparent across a network. Basically each object will have its own multidimensional linked list of network managers - elected objects that act as postal-delivery agents, like little post offices. An object will be able to be part of any number of networks simultaneously, and even to network with the same computers in multiple different ways. Because the Link IS the whole system, all parameters can be defined on a per-object basis, which gives the ultimate resolution of freedom.
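
As a first sketch, the postal-agent idea might look something like this. All names are illustrative and the real transport is left out:

Strict

' Sketch of per-object message routing. Each object keeps its own
' list of elected `post office` managers, and sending simply hands
' the message to every one of them. The actual socket transport is
' omitted.
Type TNetworkManager
    Method Deliver(fromId:Long, toId:Long, data:String)
        ' A real manager would serialize and forward over the network;
        ' this one just reports the delivery.
        Print "deliver '" + data + "' from " + fromId + " to " + toId
    End Method
End Type

Type TNetworkedObject
    Field id:Long
    Field managers:TNetworkManager[]    ' one object, many networks at once

    Method Send(toId:Long, data:String)
        For Local m:TNetworkManager = EachIn managers
            m.Deliver(id, toId, data)
        Next
    End Method
End Type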

Check out my Game Development Blog -ImaginaryHuman-

Laying a Foundation(Posted 2007-02-18)
I have recently been working hard to design the basic system structure and how the various parts will fit together. After all, the philosophy has to be practically applied as well. Here is an overview of where things are heading.

Everything in the system is considered to exist simultaneously in a single instant, outside of time and space. This implies that all programs and all processing and all behavior occur in infinite simultaneous parallel with immediate completion. Obviously this cannot fully be achieved in software running on hardware that takes time, but we can at least come as close to it as possible. With this in mind, all tasks proceed as fast as possible (as if to be close to instant) and immediate responsiveness is the highest priority. Less-than-instant tasks must be specifically defined as such, meaning that limitation in time is optional and not a rule.

Based on this theme of immediacy, and since all objects exist for but a simultaneous instant, equal attention is given to all parts at once. Objects *coexist* and are alive at the same time. In computing terms this means infinite parallel processing. As such each `object` in the system effectively has its own virtual CPU, its own multitasking program execution/scheduling, and is a separate `thread` as if having its own dedicated parallel CPU. Seeing each object as a computer opens the door for future implementations of real multi-processing on parallel computers.

The system recognizes that hierarchical program execution (procedural flow) is a closed-minded limiting system and so views this as an option, not a rule. The default functionality is that all objects are live, they all process in parallel, and there is no waiting. Every message sent to an object triggers that object to begin acting immediately and the sender does not wait for the recipient to be done before continuing. Hierarchical program flow can be optionally implemented with intentional `wait` conditions, ie send a message to an object to begin doing something and then wait for a message that the object is done. This system allows for a fully object oriented environment at the same time as supporting procedural/hierarchical programming and any other kind of flow control/communication.
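
A bare-bones sketch of that default behavior, under the assumption that each object keeps a simple inbox (the names are mine, for illustration only):

Strict

' Sketch of the default no-waiting message flow, assuming each object
' simply keeps an inbox. Send() returns immediately; WaitFor() shows
' how hierarchical flow becomes an explicit, optional choice.
Type TMessage
    Field text:String
    Field sender:TNode
End Type

Type TNode
    Field inbox:TMessage[]
    Field done:Int = False

    Method Send(target:TNode, text:String)
        Local m:TMessage = New TMessage
        m.text = text
        m.sender = Self
        target.inbox = target.inbox[..target.inbox.length + 1]
        target.inbox[target.inbox.length - 1] = m
        ' No waiting: the sender carries straight on from here.
    End Method

    ' Optional hierarchical flow: explicitly wait until the target is done
    Method WaitFor(target:TNode)
        While Not target.done
            ' the real system would cooperatively yield here, not spin
        Wend
    End Method
End Type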

Objects in this system are not rigidly confined. Typically in `object orientation` an object is a separate unit, a chunk of data with associated functionality. This represents the identification of ego with what it perceives and locks separations in place. It is supposedly good for modelling physical reality but only from an egocentric viewpoint. It is no good for modelling Oneness, sharing or unity. In my system all objects are holographic, meaning that every object can contain an entire universe of objects and every object has the same potential/abilities. All objects are unified and all share the same pool of data. It is then an *option* to have an object perceive the data pool in a special way.

A BlitzMax custom type is viewed as a specialized and exclusive way of perceiving data, that adds finite meaning by subjectively reducing its otherwise infinite scope. Since types are hierarchical and limiting, nesting behavior is avoided as much as possible. A type is an object, like an ego, that can only perceive and function in a specific way, and cannot perceive outside of its own self-concept. Coupled with an infinitely flexible linked-list-like holographic structuring system, types provide the system's premeditated definitions and functionality - modular building blocks from which higher structures can evolve.

Data exists as a single meaningless pool and each object's view of it gives it its meaning. This makes the containment of data optional and thus facilitates a very important principle - sharing. Sharing of all data by default is the open foundation upon which perceived confinement and identification is simply another option. Thus the system supports non-object-orientation, traditional object-orientation, and object transcendence. Only by making object boundaries optional can there be support for all object models. Sharing (indirect reference) is very important, representing the spiritual principle of Oneness, and it is in many cases more efficient to share than to duplicate.

All objects begin by being connected to all others and able to perceive all others. Each object is an all-seeing symbol of Oneness. It is then possible to define for that object any number of limitations, which again are optional, which narrow down the scope of that object's relationships. This then allows any object to have an infinite number of relationships with an infinite number of objects. This leads to all forms of grouping and to election of certain objects as taking on agreed-to roles.

Agreement between objects, and peaceful coexistence, are underlying principles from which cooperative multitasking springs. Since objects are not selfish they naturally surrender to the greater will. Recognizing that actual processing time is limited by the hardware, objects cooperatively give up control in order to create the impression of non-ownership (no hogging). Ownership is not a spiritual concept and the taking over of control denies the user. In this system we seek for all objects to be aware of the user at all times, achieved practically by taking attention (cpu time) away for as little time as possible before returning to cooperation. In an ideal system there would be no delay at all, being timeless, and the user would experience the system as permanently, immediately responsive. Then, if desired, non-cooperation is an option, not a rule.
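
In code terms, one cooperative pass might be as simple as this sketch, assuming each object exposes some small, bounded unit of work (TTask and Update are illustrative names):

Strict

' Sketch of the cooperative round-robin idea: every live object gets
' one short slice per pass, so attention is spread equally and nothing
' hogs the CPU.
Type TTask
    Method Update()
        ' do a small, bounded piece of work, then return (ie yield)
    End Method
End Type

Global tasks:TTask[]

' One pass of equal attention over all live objects
Function RunFrame()
    For Local t:TTask = EachIn tasks
        t.Update()    ' each task promises to return quickly
    Next
End Function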

Networking will be implemented in such a way that what is local to one computer can be entirely shared with another. Boundaries between computers (and between users, locally or remotely) are seen as non-existent and optional. By default, systems will synchronize to share a common existence, to automatically distribute computing and share resources. It will be as if there is only one system fully accessible to all users. This opens the way to unlimited collaboration and to all the more limited forms of networking (separation based) typical of modern games. In this system the network is invisible since there really IS only One system.

Security in this system is optional and not at all like the common idea of what security is. Typical attempts at security merely shut out and separate users that would otherwise be trusting - an unnecessary imposition. Real security is recognized to mean that protection is not needed *at all* because there is no `other` and thus no enemy. Within a system of Oneness there is only one whole and thus nothing else beyond it to act as attacker. Only in Oneness can there be real trust and real peace. This is the system's default state and is infinitely more sane than any concept of protection. Absolute openness is true innocence and true innocence needs no defence.

It is of course possible to implement what would seem to be a wall or a separation or a way of proceeding that must follow strict rules, to clamp down and make it difficult for access or sharing to occur. But we see these attempts as ultimately futile, paranoid, entirely fear driven and fundamentally flawed. Anyone wishing to gain access to any part of the system will find a way to do so. Protection does not work. We try to protect only what we think we own and identify with, but protection can *always* be overcome simply because protection=weakness. In this system we allow attempts at protection to occur, per the user's free choice, but we prefer to focus on a requirement of real trust and openness.

By this, `trusted computing` means non-malicious users sharing a peaceful coexistence. The burden of trust lies in the user, not the computer. It is not the computer's function to protect the users from themselves, for that would seek to make them unconscious which increases attack. What is so often touted as `trusted computing` really means computing where there is no trust, and then those supplying the protection cannot be trusted either. In this system we do not believe in insecurity, secure in our acceptance of what is real. Protection is simply an egocentric band-aid over fears about things that are not real and seeks only to limit the user, removing their freedom. Protection is an attack and invites more of the same. This cannot be the foundation of software based on spiritual principles and is not our focus.

This system, being founded on spiritual principles, and applying them in every way conceivable, removing limitations and illusions, leaving no stone unturned, fosters and actively encourages `spiritual thought` on the part of the user. This is in the best interests of all concerned since the system only remains secure so long as the user does also. To understand the system and its symbolism is to adopt and reinforce a thought system unlike that of the ego-mind. By thinking about computing in terms of Oneness, the user begins to move beyond their own mental boundaries and limitations. It is not that this system causes them to do so, but that it refrains from causing them not to do so. It is my hope that through use of this system, users will be spiritually inspired and that peace will be the result.

Check out my Game Development Blog -ImaginaryHuman-

Spiritual Software(Posted 2007-02-07)
Freedom in software is a very important concept. Richard Stallman talks about some aspects of it and I am inclined to agree with most of his views. I share a prioritizing of spirituality over its seeming reduction (but in Reality there is ONLY Spirituality). By `freedom` we're not talking about software that doesn't cost anything or software that you can't make a million bucks from. We're talking about the spiritual principles - a liberty that describes the conditions for peaceful coexistence and sharing. Freedom is about openness and freedom from limits. And ultimately, freedom is experienced in the mind.

If we take a look at computer software we find that so much is influenced by ideas of separation and limitation. Even the very `bits` that the computer understands as `language` are separate and different from each other, isolated and apart. In a `software reality` where binary is the underlying physics, *everything* inherits this nature of separation.

This binary foundation, a foundation of separation and conflict, gives rise to all software. Separation has become the organizing (and disorganizing) principle, clearly evident in the confusing fragmented mess that we see today.

Taking a look at the earlier days of computing we see games and software exhibiting very strong characteristics of this focus on separation. The hardware was itself limited, restraining creativity, requiring great sacrifice and effort from the user, and presenting very limited, frustrating interfaces. Games had that `blocky` look popular in tile engines, and applications were almost more defined by what you couldn't do with them than what you could. People were required to `think like a computer`, and it was questionable as to who had to act like a machine - the computer or the user!

What has become the common remedy for this? An expansion that pushes boundaries but does not remove them. First of all we look at what the software can't do and then we seek to make it do it. We don't ask why it can't do it, or if we do we don't go very far down that path. As soon as we see familiar signs of `wrong programming` or `bad design`, that's evidence enough for a 180 and a continuation of the separatist movement. Adding separations upon separations does NOT lead to unification and does NOT solve problems. It but creates more problems masquerading as solutions. If you're starting out with separation as your philosophy, to any degree, then you're asking for an end result rife with fixed separations. These separations are a big problem. Attempted solutions and `fixes` create new bugs, and efforts to `protect` integrity (which was never there) merely invite and create attack.

What's needed is to realize that computer software by its very nature can never truly escape its separation-based underpinnings. Even if you can make software `look as though` it allows some level of unification, openness or sharing, there is only so far it can go. To transcend all of the separation inherent in it would mean for it to cease to exist as a binary creation. So let's at least accept that by creating software on binary computers we *are* limited and *cannot* build a perfect unified solution. Perfect Software, Complete Software, Whole Software, Secure Software, Flexible Software and Creative Software are all oxymorons. Software can never be perfect as it can never be whole. Software can never create anything; all it can do is place limitations on creation - hopefully to a minimal degree, so that the user thinks there is a space for their creativity to flow into. Software has *no meaning* at all beyond the user's projected perceptions.

With this in mind, what we at least CAN do is exercise the possibilities that we DO have to at least create the *illusion of* a high degree of freedom. It is towards this aim that most software strives. But what we are *really* striving for is the antithesis to separation - Oneness. We are seeking to turn the tables on the binary world, to overcome the separation that it demands and to create an illusion of its opposite. Let's remember that software will *always* be separation based, even when it appears to convey a level of freedom, and that it is but an illusion - which we buy into - of a substitute for Real Freedom and Real Oneness. Software is but a dream of what could be in Reality.

Although Oneness cannot be achieved in any binary form, and is outside even the realm of physical reality, we can at least be inspired by what it tells us. Since separation in software gives rise to all forms of conflict, fragmentation, attack, competition, negativity, frustration, communication breakdown, and war, it stands to reason that perhaps Oneness would inspire software to propagate peace, light, openness, joy and even healing. Let's not begin with separation as our philosophy, aiming only to pretend to escape from it towards a measure of unity. Let's instead begin with Oneness and look at how this can solve all problems gracefully and efficiently.

Oneness means there are no limits, no boundaries. Things *start out* without limits, so that limits are an *option*. If software starts out with limits built in, those limits cannot be put aside to open the way for true expansion or sharing. The user cannot truly contribute anything `new` to the system and is forcibly spoon-fed the limited preconceptions of the designers. Just look at most proprietary software, rife with walls that have been erected in the name of having something that others shall not share. That kind of software inevitably creates real disruption in every aspect of life. Why would you want software to imply or depend upon what basically is a philosophy of attack and destruction? Does such an approach really make for lasting corrections of problems, or does it simply rearrange the furniture, giving rise to new problems? If your ambition is to separate, then you are directly contributing to the creation of problems, regardless of `innocently` trying to overcome them. It is an insane battle.

We should therefore examine carefully all of the foundations of all of the concepts in computing, under a microscope of Oneness, to clearly point out the presence of these deeply ingrained anti-solutions. In a separatist software model, development is thought to be exclusive of execution. This alone gives rise to the awkwardness and non-intuitiveness of having to write sourcecode, compile it, and then run it, all as separated steps. In a separatist software model, files and data in memory are separate from files and data on disk. This gives rise to confusion, the user wondering whether their changes will persist or be lost. A separatist software model is filled with examples of keeping things separate and apart - open vs close, develop vs execute, edit vs view, cut vs paste - many, many things that all create new problems.

`Traditionally` designers look at a situation and come up with a dualistic solution. The solution doesn't really solve the problem; it just changes it. If I want text in one file to be repeated in another, woe betide that the two might *share* the same data - instead, user intervention is required to isolate something from one place and vicariously deliver it to another. This is not the kind of activity the user should be concerned with. This is *noise*, created by a faulty philosophical model.

Yes there are solutions out there that are inspired by unity, for example a pervasive file system is a step towards removing the boundary between `open` and `closed` files. But there aren't many systems that really focus very intently on taking unity to an extreme. That requires pushing back the boundaries, quite literally, rather than simply moving them further afield. The boundaries have to be removed, permanently, for there to be a True solution. Otherwise we are just spinning our wheels.

Software is not spiritual, but software can be approached with a spiritual outlook, an outlook that I think makes a lot of sense. This project is an experiment in `spiritual software`, where no stone is left unturned or unquestioned, ruthlessly reconsidered in a new light. I believe this will lead to an `operating environment` where the level of freedom is very high, giving natural rise to an effortless and highly intuitive user experience.

Let the computer do what the computer is good at, and let the user simply *be*. Once you are intent on *not adding to* the user's limitations, and instead on providing room for real creative expression, the application will inevitably `get out of the way` of the creative process. In `keeping it simple` and `less is more` we imply that less quantity means greater Oneness, but this still falls short of a complete philosophy of Oneness. Complicated separatistic applications can pretend towards simplicity, but doing so is much more difficult than doing away with the separatism that plagues them in the first place.

Software, when inspired by spiritual principles, MUST lead to ease, efficiency, directness, clarity, purity, connectivity, sharing, cooperation, joy, relaxation, creativity, abundance, intuitive flow, awareness, understanding, flexibility, usefulness, purpose, inspiration, and a whole host of wonderful benefits. Such software more closely and fully conveys Truth, Honesty, Purity, Consistency, Wholeness, Unity, Openness, and Peace. It does not create these things, it merely stops disrupting their natural presence. When you take away the separation, what you are left with is Oneness.

Software is not spiritual, and to spiritualize it is a mistake. But if we are going to create software at all, we might at least give it the *illusion* of spirituality, for that is truly the highest and most sensible foundation on which to build. Since spirituality is `the way` by which things best work, it is natural for `software that really works` to be based on spiritual principles. That is the aim of this project.

Check out my Game Development Blog -ImaginaryHuman-

Think again(Posted 2007-01-27)
During development of various projects in the past couple of years I have noticed a general swinging back and forth regarding what seems to be appealing at the time. For a while I find it interesting and exciting to pursue a `new dream` or some new project that comes to mind, or maybe I will revisit some older project that I put down a little while ago. It seems that each project lasts maybe a month or two in my interest, has some progress made with it, and then there comes a time of change. Always change. I find myself stepping back to look at what I'm doing, wondering where it's going and whether it is limiting me. An urge overcomes me to break free of the isolation of the current dream. I enter into a `between` place of re-questioning and re-thinking, not quite having let go of the current project nor quite started another. All these shifts and changes are like tides in the uncertain soup of life, looking special in their temporary appeal, only to later be seen as the exclusive limitation that they are. Commitment to `that special project` is simultaneously anti-commitment to `other special projects`, which later will lure me with their appeals. So torn am I, strung between the pearls of temporary visions.

I'm having one of those times. One of those in-betweeny moments, breaking the shell and stepping up to a higher perspective. And I step back. And what do I see before me? I see the many projects and ideas and aspects of computer life that each have their appeal. There is the mighty shining game, with all of its excitement and immediacy, themes and stories. Then there is the graphics application, appealing to the artist, conjuring thoughts of service and creativity. Then perhaps the screensaver, a short-lived stop-gap for instant gratification and quick results. This is all followed quickly by the game editor, the creative environment, the game programming system and the humble application. All of these are important. They all merit attention and they all follow trains of thought emphasized by many individuals at different times. I appear to be a multifaceted individual, unstable in my perspective, eventually toppled by the next right thing. I have learned not to throw away the old favorites or the projects of incompletion. I pick them back up, the next time around. Add a little more. Tweak them a little. Think of them from a new vantage.

So I am wondering to myself, what is next. Will it be one or the other, will it be an exclusive and eventually limiting concept, should I once again be caught focussing on the part and forgetting the whole? What would come of this but another shift and change, an eventual uproar from the other parts of me that are going unnoticed. Perhaps as I step back, I ought to consider staying stepped back. Any one project is not going to satisfy. Any appeal for special interests is not going to last. I come back to this, time and again. I find it is the in-between times that hold the most promise. Perhaps this is the vantage point I should maintain, letting go of trying to be just one thing of many. Why limit myself?

Limitations are but for the questioning. They are not real. They are made up in the mind. I find it valid to question every preconceived notion, to identify the true foundation of each individual lie, to understand the motivation that created it and the undoing of that motivation, to replace it with a clearer, cleaner, more unifying solution. Now I'm starting to sound a bit `out there`. That's ok, perhaps I am. Maybe out there is a good place to be, beyond the limits and the isolation. It's good to be free. Free to choose again.

My thoughts are turned to the grand vision, the Ultimate Solution, the software that overturns every stone and adds not new stones to limit or impose. So often in software the philosophy behind a design is unquestioned, following from unconsidered foundations, traditions, inherited beliefs and accepted norms. It is good to question all of this. What is it all for? What function does it have? Why is it there? What does it symbolise? Who invented it? Does it really solve the problem it is a solution for? Does the problem really exist? Where does the problem come from? What is it based on?

Getting back to the most abstract, to the very foundation of philosophical thought, considering the purpose and the meaning of everything, is what I find to be a good starting point. You'd be surprised at the things you may discover upon looking a little more closely at why you do what you do, or indeed why you code what you code, or design what you design. Radical re-thinking is necessary and essential if something more effective is to come of it, else it is all for nothing, a regurgitation of someone else's mistaken thought. Why do it the way it's always been done? What if that way is based on a thought system built on principles that by their very essence do not embrace the whole situation nor offer a complete and lasting result? What if it was not done right the first time, and now we build with bricks of confusion, slapped together with meaningless glue to fill in the gaps?

Let us rethink what we are doing. Where are we going? How are we getting there? What came before, and WHAT WERE THEY THINKING? Let the wheel be reinvented when the wheel clearly falls short. Go back to the start. Look at where it all came from. What is the mindset? What is the thought? What is the perspective? Where do egocentric ideas play a part?

After all this consideration you might actually discover that the finest solution to all problems - is a spiritual solution. A solution based on wholeness, upon truth, upon clarity, upon consistency, upon inclusion. It would not be a solution of the ego, for those are clearly rampant in their ideals of separation, fragmentation, confusion, distraction, unawareness, selfishness and mindlessness. Look to the roots of your own mind to find the spiritual principles upon which a firm and solid foundation may be laid. Why build upon principles established from fear? Why use ideas people created out of an intent to isolate? Why use a lack of freedom as the basis for a vision? Why choose the ego to be the informing originator of a design? Doing so leads only to darkness, to suffering, and to a communication breakdown.

Openness is a spiritual solution. Freedom is a spiritual solution. A lack of limitation is a spiritual solution. Consistency is a spiritual solution. Honesty is a spiritual solution. Fearlessness is a spiritual solution. Unity and Oneness are a spiritual solution. Why would software not be extended from these foundations, creating solutions that clearly work, escaping self-contradiction and complexity, being not a victim of its own self-interest? Why would it not follow that your state of mind is the originator of all your solutions, regardless of their form or topic? Why would you not seek the highest, wholest point of view from which to create a vision for how things can work harmoniously? Software can be informed by spiritual principles, and let us say it certainly would be about time, considering the sheer volume of noise and interference that ego-based software has generated. Purity can create a better software model than fragmentation.

So here I am, wondering what comes next, for my own life, and considering how software might be inspired by a spiritual outlook. This is the next step. It is clearly not what you do, but how you do it. It is clearly not where you think you are going, but where you are coming from. What comes next is what is now. With this in my consideration, I aim to set about creating software that embodies these higher principles, whose design is founded on their solid ground and whose whole form is inspired by the light. To label such software, to narrow it down or make it some smaller thing than it is, or to find labels or names for it, is counter-productive. It is what it is. Abstract perhaps, with a core that questions the very medium through which the vision is expressed, but a fine solution at least to so many problems.

Perhaps this is vague, perhaps it sounds mad, perhaps it is many things. But at least I know where I'm coming from now. This is a spiritual journey, not a journey of software.

Check out my Game Development Blog -ImaginaryHuman-