Particle Size

BlitzMax Forums/BlitzMax Beginners Area/Particle Size

tafty(Posted 2006) [#1]
I've been testing and optimising my game Fluffoidz (see this thread: http://www.blitzbasic.com/Community/posts.php?topic=62442 ) and have pinpointed the bottleneck as being in the rendering. I've carried out some optimisation (basically reducing the number of Get/Set calls for alpha, blend, color, rotation and scale). This has helped but there are still occasional bouts of slowdown.

What I'm wondering is: I've counted upwards of 1500 particles, but this only really occurs at Game Over when everything explodes. Does the size of the particle make any difference? The majority of the particles are 2,933-byte, 104x58px PNGs. Is this too large to be chucking loads of them around the screen?


(tu) ENAY(Posted 2006) [#2]
I find it comes down to texture size and particle size.

If you had 3 particles bouncing around that were as big as or bigger than the screen, it would cause slowdown, as the program has to render almost the entire screen's worth of pixels three times over.
I'd use texture sizes of 64x64 or 128x128 - the smaller the better, to be honest. I wouldn't use any image file larger than 64x64, especially if it's going to be drawn big or drawn thousands of times.


tonyg(Posted 2006) [#3]
If there are lots of separate images it'll affect the performance. 104*58 is a bit wasteful, as Bmax will create 128*64 images to hold them.
If you're killing/creating lots of particles then look at using a pool of particles and move them to an 'in-use' list from a 'limbo' list and vice versa (unless you're already doing it).
Try 'clumping' particles together.
e.g. If you have a smoke particle, try creating an image holding 3 smoke particles and rely on rotate to make them look different.
Finally, have a search for TAnim in these forums, which is a single-surface anim function created by Tim Fisher. It can probably be used for particles and will cut down the number of images/surfaces used.
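To make the power-of-two padding concrete, here's a small sketch (in Python, purely for illustration - the function names are hypothetical, not BlitzMax API) of how a 104x58 image pads out to the next power-of-two texture and how much of it goes to waste:

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def texture_waste(w, h):
    """Padded texture dims and the fraction left unused by a w x h image."""
    tw, th = next_pow2(w), next_pow2(h)
    return (tw, th), 1.0 - (w * h) / (tw * th)

# 104x58 pads to 128x64, leaving roughly a quarter of the texture empty
dims, waste = texture_waste(104, 58)
```

So trimming an image from 104x58 down to 64x64 or smaller halves the texture footprint, not just the file size.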


tafty(Posted 2006) [#4]
Thanks very much, the texture size information is most enlightening! If I'd known that at the start I guess I could have planned a bit better...doh!

WRT what I'm doing now:

I have a TResource type that pre-loads all images so that each instance of an object/particle references the single loaded image.

Following a previous optimisation binge I am now using object pooling.

These particles don't actually animate, they just move quickly from a central source whilst having their alpha reduced until they reach a given alpha threshold/edge of the screen. I will look out TAnim for future use though.

I do have a couple of other questions...

Is 64x64 the smallest texture size or is it any power of 2? I've got a vague recollection now of reading elsewhere that it's the "power of 2"...

Does scaling or rotation of an image have any effect on the size of the texture that is drawn?


tafty(Posted 2006) [#5]
Just thought of another question:

In general what's going to benefit performance best: reducing my image sizes? Or reducing the number of particles being drawn?

Or does that depend on too many variables such as the card/system the game's running on?


H&K(Posted 2006) [#6]
Power of 2.

Scale and rotate (as in SetScale/SetRotation): no.


Dreamora(Posted 2006) [#7]
Both, tafty, although the number of particles nowadays has the larger impact, as memory isn't the problem any more on many systems.

The reason is the low bandwidth that ultra-cheap cards have to offer. Each particle has to be sent to the GPU every frame... so I think it's quite easy to understand why 5000 particles have an impact.
On B3D we solved that by using a single surface, as the surface is sent in one go (unlike the 2000 separate particles, for example). At some point this will perhaps/hopefully be possible with BM as well.


tafty(Posted 2006) [#8]
Thanks very much everyone.

I've just sifted through my images and a lot of them are only just over a power of 2 in size and can easily be reduced to fit. I also think that the main particle used for explosions can be reduced to fit on a 64x64 texture without ruining the effect. Luckily, the rest that are a bit too late in the day to change aren't used in great numbers.

After that I can look at reducing the number of particles too if necessary.


Grey Alien(Posted 2006) [#9]
What's object pooling? I'm probably doing it already... but just in case.

Yeah power of 2 is vital info for textures.

Also, I didn't think that lots of calls to SetAlpha, SetRotation etc. would have a big performance effect. I was wondering: if you've already called SetAlpha 1 and then call it again later without having changed it in between, does the compiler or graphics card optimise it out in some way?


tafty(Posted 2006) [#10]
Object pooling is recycling of objects from a pre-created pool as opposed to explicitly "new"ing them every time you need one. I copied my Object Pooling implementation from this thread:

http://www.blitzmax.com/Community/posts.php?topic=62215

The tests that I did showed that repeated calls to SetAlpha, even with the same alpha value, did have some impact. Whilst I'm certain the compiler will not be able to optimise away calls to SetAlpha (since the order in which these calls are made can only be determined at runtime), I too wonder whether more advanced graphics cards might optimise such calls?

I haven't been able to get to my code for a couple of days but I do have another couple of quick questions that might help me when I finally do:

If I really, really want to keep one of my images that is 98x98 is it going to be more efficient to have it as a 128x128 image but with loads of extra transparent space?

I've read elsewhere that DrawText can cause slow down issues - is this still the case?


tonyg(Posted 2006) [#11]
Object pooling, in this case, is where you pre-initialise a 'pool' of particles and re-use them rather than removing and recreating them.
If a particle's life is finished you reset its x/y and lifetime, then use it again. Saves a lot of memory allocation/deallocation and GC.
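The pool described above can be sketched like this (in Python, for illustration only - the class and method names are hypothetical, not from the thread's code). Dead particles sit in a free list and are reset on re-use, so no allocation happens after start-up:

```python
class Particle:
    def __init__(self):
        self.x = self.y = 0.0
        self.life = 0

class ParticlePool:
    def __init__(self, size):
        # pre-initialise the whole pool once, up front
        self.free = [Particle() for _ in range(size)]
        self.active = []

    def spawn(self, x, y, life):
        if not self.free:
            return None                    # pool exhausted: no new allocation
        p = self.free.pop()
        p.x, p.y, p.life = x, y, life      # reset instead of recreating
        self.active.append(p)
        return p

    def update(self):
        for p in self.active[:]:
            p.life -= 1
            if p.life <= 0:                # expired: back to the free list
                self.active.remove(p)
                self.free.append(p)
```

The key property is that a respawned particle is the *same object* as an earlier dead one, so the GC never sees churn.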


Grey Alien(Posted 2006) [#12]
ah yeah got it thanks.

A 98x98 and a 128x128 should be the same speed, as the 98x98 will get made into a 128x128 anyway.


Dreamora(Posted 2006) [#13]
I wouldn't even consider pooling for a particle emitter.
There's no use for it. Normally you have a set maximum number of particles the emitter can have alive.
So use that maximum to initialise an array and just loop over it.
Takes less RAM and is faster than a linked list.
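The fixed-array approach above might look like this (a Python sketch for illustration; the names and the max value are hypothetical). One array sized to the emitter's maximum, a live flag per slot, and a single linear pass per frame - no list insertion or removal at all:

```python
MAX_PARTICLES = 4  # emitter's design-time maximum (hypothetical value)

class Emitter:
    def __init__(self):
        # one allocation up front, no per-frame alloc/dealloc
        self.particles = [{"live": False, "x": 0.0, "life": 0}
                          for _ in range(MAX_PARTICLES)]

    def emit(self, x, life):
        for p in self.particles:
            if not p["live"]:              # find a dead slot and recycle it
                p.update(live=True, x=x, life=life)
                return True
        return False                       # at max capacity: drop the request

    def step(self):
        for p in self.particles:           # single pass over contiguous slots
            if p["live"]:
                p["life"] -= 1
                if p["life"] <= 0:
                    p["live"] = False
```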


tonyg(Posted 2006) [#14]
I might be using the term incorrectly, but I would consider that Object Pooling as well, just using an array to hold the objects rather than a list.


(tu) ENAY(Posted 2006) [#15]
Yes, just use arrays, I'd never ever consider using a list for storing particles because of that alloc dealloc biz.


tonyg(Posted 2006) [#16]
If I move something from one list to another (using listaddlast and ListRemove or whatever) does it dealloc/realloc or just switch the _nextlink/_prevlink pointers?


Dreamora(Posted 2006) [#17]
DeAlloc, then Realloc.

You would need to use TLink.Remove and TList.InsertBeforeLink / InsertAfterLink to reuse the TLink structure your value is assigned to.
But even then: it's slower and more memory-intensive, as it has to store an object-based structure for your values, and you have to iterate through that structure instead of a single contiguous block of memory.


Grey Alien(Posted 2006) [#18]
but a list is flexible, and there can be several dynamic instances (they don't have to be declared as global at design time like arrays), or does array slicing make an array as flexible as a list nowadays?


Dreamora(Posted 2006) [#19]
1. A particle system is not dynamic. You have a fixed number of particles, defined by the maximum the emitter can have alive (which is normally one of the properties of an emitter / particle type). If you set a max of 2000 per emitter but only ever have 120 alive, that's an emitter effect design error and not a "reason" to use lists over arrays :)

2. Yes, arrays are as flexible if you know how to use slicing (ie NOT slicing for each single instance, that's an extremely bad idea :) )
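The "don't slice per instance" warning can be shown with a quick sketch (Python, for illustration - Python lists resize internally, so a list stands in for a BlitzMax array here; the function names are made up). Growing by one slot per element copies the array every time, while doubling the capacity only copies O(log n) times:

```python
def grow_per_instance(n):
    """Resize by one slot per element: n resize operations (bad)."""
    arr, resizes = [], 0
    for i in range(n):
        arr = arr + [None]                 # like slicing the array one bigger
        resizes += 1
        arr[i] = i
    return arr, resizes

def grow_doubling(n):
    """Double the capacity only when full: O(log n) resizes (good)."""
    arr, used, resizes = [None], 0, 0
    for i in range(n):
        if used == len(arr):
            arr = arr + [None] * len(arr)  # one slice, twice the room
            resizes += 1
        arr[used] = i
        used += 1
    return arr[:used], resizes
```

For 100 elements that's 100 resizes versus 7, which is why slicing once in a while is fine but slicing per particle is not.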


tafty(Posted 2006) [#20]
Thanks to everyone for their help and all the other interesting info.

I thought I'd follow this up with my findings, as I hate it when I'm searching for info on a problem and find someone whose problem matches mine but there's no conclusion from the original poster.

Here's what I did during this phase of optimisation (ie on top of previous optimisations):

1) Where possible I downsized all my images to fit within the next lower power of 2
2) I changed two out of three of my particle systems (emitters) to use arrays rather than TLists (with the third it just makes more sense for it to be a list)
3) I changed all my TLists to use RemoveLink

And the result: a minor performance improvement, but still plenty of slowdown when the screen became busy. Cue a lot of despondency, followed by some head scratching, followed by the thought: when did all this slowdown start, anyway?

The answer: after the sound was added. One quick investigation later revealed that my TActiveChannel objects weren't pooled. When there's lots of bouncing going on, this can result in many new TActiveChannels being instantiated at once: there's no limit on how many bounce sounds can be created, so a new "channel" is always requested. One object pooling exercise later and it's running perfectly!
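The channel fix described above amounts to capping simultaneous sounds with a small fixed pool. A sketch (Python, for illustration - the class is a hypothetical stand-in for the TActiveChannel pooling, not the actual game code): when every channel is busy, the request is simply dropped instead of allocating a new channel.

```python
class ChannelPool:
    def __init__(self, max_channels):
        self.free = list(range(max_channels))  # channel ids, allocated once
        self.playing = {}                      # id -> frames remaining

    def play(self, frames):
        if not self.free:
            return None            # all channels busy: drop the sound request
        ch = self.free.pop()
        self.playing[ch] = frames
        return ch

    def update(self):
        for ch in list(self.playing):
            self.playing[ch] -= 1
            if self.playing[ch] <= 0:          # finished: recycle the channel
                del self.playing[ch]
                self.free.append(ch)
```

Dropping (or stealing the oldest) channel is usually inaudible during a burst of identical bounce sounds, and it puts a hard ceiling on per-frame allocations.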


Dreamora(Posted 2006) [#21]
Great to hear :-)


Grey Alien(Posted 2006) [#22]
Thanks for the report. Yeah, I'm using an array of channels to prevent any slowdown from channel creation AND noise AND the fact that you run out of channels at 4096.