Fixed rate logic

BlitzMax Forums/BlitzMax Beginners Area/Fixed rate logic

QuickSilva(Posted 2009) [#1]
There have been many threads on these forums about various timing methods in Blitz, and I now have a pretty good grasp of delta timing, but what I am really looking for is a nice, easy to understand fixed rate logic explanation/example, as the ones I have looked at seem to make the whole thing sound overly complex.

Please would someone be kind enough to explain it in a noob-friendly manner for someone who has never used it before? I'm sure that there are many other people new to such concepts who would appreciate a helping hand as well.

Thanks for any help,

Jason.


H&K(Posted 2009) [#2]
Make sure your calculations and frame drawing take fewer CPU cycles than the computer has (i.e. make the program do less than the computer is capable of), so that you can draw/calculate a 1/25th of a second step (say) in less than 1/25th of a second, then wait until that 1/25th of a second has finished.

Probably the easiest way would be to use the message stack: every 1/25th of a second (say), post a message to draw the screen, and post a message to do a logic cycle (again, let's say every 1/25th of a second).

As long as the logic cycle takes less time than the gap before the next screen post, and the screen draw less time than the gap before the next logic post, your game now updates rock solid at 25 frames a second (except you're bound to get stupid Windows messages messing it up if either the logic or the drawing takes nearly 1/50th of a second).


Gabriel(Posted 2009) [#3]
I'm doing fixed rate tweening, and have been for years, and I have to be honest: I didn't understand anything H&K said. So if anything I'm about to say repeats what he just said, it's not that I'm ignoring him or suggesting it's wrong, I just couldn't follow it.

Fixed rate logic is really pretty simple. What you do is decouple your logic (logic is the collective term used for everything except rendering really, so stuff like moving your game objects around, collision, physics, all that stuff) from your rendering. So instead of calling update and render once each per loop, you may call each once, more than once or not at all, depending on how the game is running.

As the name suggests, logic is run at a fixed rate. So you pick a desired frame rate. This frame rate is for logic only and has nothing to do with your rendering frame rate, so I'll call it UPS - Updates Per Second - to help differentiate. Essentially, you're just deciding how many times per second you want to run all your game logic. If you have a lot of complex physics, you might want this as high as 60UPS, but most games can drop down as low as 30UPS and still be very responsive.

So each time you run your loop, you see if enough time has passed that another update is due. If your UPS rate is 30 then you'll be running an update as soon as (1000/30) 33.3 milliseconds have passed. If your game is able to run much faster, you can see that you will often run the main loop and that much time won't have passed. On these occasions, you simply won't call your logic, you'll go straight to rendering. If your game is running too slow and can't keep up, it may be necessary to run your logic twice. IE: If more than 66.6 milliseconds have passed since the last update, you'll need to update twice.
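To make that concrete, here is a minimal sketch of that decision (a console-only illustration with made-up names, not anyone's framework code):

' minimal sketch: decide how many logic updates are due before each render
Const UPS = 30
Global stepTime# = 1000.0 / UPS        ' 33.3 ms of game time per logic update
Global lastTime = MilliSecs()
Global startTime = MilliSecs()
Global accumulator# = 0

Repeat
	Local now = MilliSecs()
	accumulator :+ (now - lastTime)    ' bank the real time that has passed
	lastTime = now

	Local steps = 0
	While accumulator >= stepTime      ' run zero, one or several updates
		' UpdateLogic() would go here
		accumulator :- stepTime
		steps :+ 1
	Wend

	' RenderFrame() would go here; accumulator/stepTime is the leftover fraction
	Print "logic updates this frame: " + steps
	Delay 5                            ' stand-in for the time rendering takes
Until MilliSecs() - startTime > 1000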

Now the problem that might have occurred to you by now is that things are going to be really jerky updating 30 times a second, and even more so when you render the same frame multiple times without calling your logic at all. And it would, if we stopped there, but we don't. What we then do is "tween" between updates. So if the time which has passed is only half way to the next update, what we then do is interpolate the position, rotation, scale, etc of everything in the scene so that it is half way between where you determined it would be at the end of the last logic update, and where it was at the end of the PREVIOUS logic update.

It's very important to understand that you're tweening backwards. I know it won't make sense at this point, but it works. You can't tween forwards because you don't know where things will be in a future update. Objects will intersect with other objects because you have not yet calculated collisions. So we always tween backwards.

And that's really the essence of the theory. For the practice, I highly recommend Gaffer on Games, whose article is excellent. Personally, I had a few problems getting my head around his theory, which is why I've given you my version as well.

http://gafferongames.com/game-physics/fix-your-timestep/

Incidentally, I'm not his crazy internet stalker. His code is not flawed. Gaffer worked for Dynamix on Tribes 2 and I think he's worked for Pandemic, Sony, etc since then. He knows his stuff. If you can get your head around his stuff, it deals with complex physics, multiplayer, whatever you'll ever have to throw at it. Makes stuff like bullet time spectacularly easy too.


QuickSilva(Posted 2009) [#4]
Thanks guys, most helpful. Sorry for the late response.

Gabriel, that was a really good explanation. Any chance of a very simple piece of code (bare minimum) showing it in action?

I think a conversion of the Gaffer code with some comments would be a great help if you could spare a few minutes. I would be most grateful :)

Also how would you do things like bullet time as you mention?

Jason.


Grey Alien(Posted 2009) [#5]
The Gaffer code is classic but it took me quite a few readthroughs until I finally understood the very last bit. Remember fixed rate logic means more complex drawing functions but easier logic code. Delta time is the reverse i.e. more complex logic but easier drawing. But the complications for both are really not that bad and frankly one of the two methods is essential if you don't want to rely on a timer or something.


Foppy(Posted 2009) [#6]
In my games I use the method posted by Swerdnik years ago. It fixes the logic frame rate at a certain value, but it doesn't do anything complicated with drawing. I guess this means you could see stuttering in the graphics, though that has never been a problem for me. But it could explain the cobwebs in my online guestbook.

Here's a post by Grey Alien ;) containing Swerdnik's code:

http://www.blitzbasic.com/Community/posts.php?topic=42173#472662

The code is also posted by Mike Boeh himself near the bottom of this thread:

http://forums.indiegamer.com/archive/index.php/t-3099.html


Warpy(Posted 2009) [#7]
This follows the Gaffer method, as far as I can tell:
Global boxes:TList=New TList
'this is a box. It will accelerate under gravity, spin at a constant rate,
'and fade away as it gets lower
Type box
	Field ox#,oy#,x#,y#
	Field oan#,an#
	Field vx#,vy#,van#
	
	Function Create:box(x#,y#)
		b:box=New box
		b.x=x
		b.y=y
		b.vx=Rnd(-3,3)
		b.van=Rnd(-20,20)
		boxes.addlast b
		Return b
	End Function
	
	Method update()
		'remember previous state
		ox=x
		oy=y
		oan=an
		
		If oy>600
			'if was off screen *last* frame, delete this box
			'shouldn't use current frame to delete box, because on render it might be interpolated back onto the screen
			boxes.remove Self
		EndIf
		
		'move to new state
		vy:+1
		x:+vx
		y:+vy
		an:+van
	End Method
	
	Method draw(alpha#)
		'work out position and rotation at *apparent* time
		cx#=ox+(x-ox)*alpha
		cy#=oy+(y-oy)*alpha
		can#=oan+andiff(an,oan)*alpha	'can't just do (an-oan)*alpha because what if oan=170 and an=-170?
		
		'fade depends on y-position so can use cy instead of keeping track of old fade and new fade
		fade#=1-(cy/600.0)
		
		SetAlpha fade
		SetRotation can
		DrawRect cx,cy,20,20
	End Method
End Type

'this just gives the difference between two angles
Function andiff#(an1#,an2#)
	an1=(an1-an2) Mod 360
	If an1<-180 an1:+360
	If an1>180 an1:-360
	Return an1
End Function



'init graphics
Graphics 800,600,0
SetBlend ALPHABLEND


'set up timing
Global dt#=.01
Global t#=0
Global ctime#=MilliSecs() 'must set ctime to current time because otherwise you do millions of logic steps in the first frame!
Global accumulator#=0

While 1

	'work out time elapsed since last frame started
	newtime#=MilliSecs()
	deltaTime#=(newtime-ctime)/1000.0
	ctime=newtime
	accumulator:+deltatime
	
	If MouseX()>0
		dt=MouseX()/2400.0
	EndIf
	
	steps=0	'keep track of how many steps done this frame, just for curiosity's sake
	
	While accumulator>dt
		If Rand(10)=1	'create a box every 10 logic steps, on average
			box.Create Rand(800),0
		EndIf
		
		
		For b:box=EachIn boxes
			b.update
		Next
		
		t:+dt
		accumulator:-dt
		steps:+1
	Wend
	
	'render stage
	Local alpha#=accumulator/dt
	
	'show some information about the speed of the simulation
	SetRotation 0
	SetAlpha 1
	DrawText "dt: "+dt,0,0
	DrawText "logic FPS: "+1/dt,0,15
	DrawText "steps: this frame: "+steps,0,30
	DrawText "display FPS: "+1/deltatime,0,45
	
	'draw the boxes
	For b:box=EachIn boxes
		b.draw alpha
	Next

	Flip
	Cls
	
	If KeyHit(KEY_ESCAPE) Or AppTerminate()
		End
	EndIf
Wend


QuickSilva, I suggest you have a go at working things out yourself for a while instead of coming here and asking for code - you'll get a much better understanding of how everything works.


QuickSilva(Posted 2009) [#8]
OK, thanks for all of the info and links. I'm still a little unsure, but like Warpy says it's probably best to give it a go myself. I'll give it a go and post my results so you guys can tell me if I'm understanding things correctly.

One other thing I would like to ask before I start though: is tweening really needed, or is it just the icing on the cake? Can the tweening be added later if need be without changing my game too much?

Finally, can tweening be added to delta timing and is it needed in the same way as with fixed rate logic?

Thanks again for everyones time with regards to this topic.

Jason.


Gabriel(Posted 2009) [#9]
I don't have an example I could share as I've really never used Max2D, and an example using TV3D would probably confuse you (quaternions and matrices - it's all 3D).

Is tweening really needed? Not necessarily, but you will need to increase your logic rate if you don't use it. With tweening, your logic can go at 30 UPS or even 20 or 15 and still be very smooth and responsive. Without tweening, you would need to run your logic at least 60 UPS or possibly much higher. I've seen people running their logic at 100 or even 200 UPS. If your logic is very simple, a casual game for example, then this might be fine. With a physics engine, AI, pathfinding, etc, 200 UPS would kill you.

Can tweening be added later? Yes, but I'd advise against it. In theory it shouldn't be too hard, but you might program things in a way which makes it difficult to add tweening later. If you do everything with tweening in mind, you'll avoid that.

No, tweening can't be added to delta time, because tweening is a visual update without a logic update. Delta time doesn't let you update visuals without updating logic, so there would be nothing to tween between.

No, it's not necessary, or even advantageous to tween with delta timing, for the same reason. Delta timing is all about going as fast as you can, so there's never anything to tween.

Tweening is purely about keeping the screen updates as smooth as possible when you're running at a low logic rate. That's not the problem with delta timing. The problem with delta timing is that you can't use it with a physics engine or any kind of numerical integration or any kind of calculation where the result must be consistent, because your simulation can and will "explode". Also you can't lower the rate of your logic right down with delta timing, which can be vital if you're CPU bound, as most games are these days. These are the problems that are solved by using fixed rate logic. Tweening is just something you add to fixed rate logic to ensure that you're not sacrificing any visual smoothness in order to fix all those other problems.


GaryV(Posted 2009) [#10]
Thank you for your insight, Gabriel


QuickSilva(Posted 2009) [#11]
Thanks Gabriel, that has helped me understand things a great deal better. This thread has really helped in general.

On a final note before I go and put this into action: with delta timing I can simply set the delta value to delta*.1 to get slow motion (a bullet time effect), but I cannot see how this is done with fixed rate logic. Is it simple to do?

Jason.


Gabriel(Posted 2009) [#12]
Same thing with render tweening. Just pass a smaller timestep to your logic code and let the tweening carry on as before. Obviously your timestep won't be fixed if you pass in a smaller timestep, but so long as the value is constant during bullet time and constant out of bullet time (i.e. you're only ever passing in two different values) it shouldn't cause any problems.

You can also pause everything in your game with this too. Just pass 0 to your logic and if your logic code is correct, everything should stop.
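As a rough sketch of that (illustrative names only), the logic still runs at the fixed rate, but each step advances the simulation by a scaled amount of time:

' bullet time sketch: scale the timestep that gets passed into the logic
Global timeScale# = 1.0            ' 1 = normal speed, 0.1 = bullet time, 0 = paused
Global x# = 0
Global speed# = 100                ' pixels per second of *simulated* time

Function UpdateLogic(step#)        ' step is the timestep in seconds
	x :+ speed * step
End Function

UpdateLogic(1.0 / 30)              ' one fixed update at normal speed
Print "after a normal step, x = " + x

timeScale = 0.1                    ' enter bullet time
UpdateLogic((1.0 / 30) * timeScale)
Print "after a bullet-time step, x = " + x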


QuickSilva(Posted 2009) [#13]
Cool, thanks :)

Jason.


Grey Alien(Posted 2009) [#14]
@Foppy: Wow, that's ancient, and also in BlitzPlus. I adapted that for my framework when I started using BlitzMax and added in a delta element for ultra smoothness, but it means it's no longer "proper" Fixed Rate because varying sizes of delta can get passed into the logic functions.

@QuickSilva: Yeah like Gabriel says choose a method, test it, make sure you understand it (like how it applies to movement, acceleration/deceleration, gravity, timers, drawing etc.) and then stick to it. Changing timing methods mid game would probably be a code refactoring nightmare.


MGE(Posted 2009) [#15]
Fixed rate with higher logic cycles like GA's framework is ideal for today's modern machines. For older computers I still like delta where each loop does 1 logic/render update. As long as you cap the delta min/max it's a breeze to code.


QuickSilva(Posted 2009) [#16]
There's an example in Krylar's book called 'The Rolling Timer'. After reading through this thread I assume that it is basically doing the same thing. Am I correct, or is this a different method altogether?



@Grey Alien :

Your framework uses one of the smoothest timing methods around. Do you use tweening, or is the high logic rate that you use (200, I recall) enough to avoid needing it? You also say that you use a delta element for extra smoothness. How do things look without this added?

Also how is the slow motion achieved in your framework if you do not mind me asking? Can you give me a small example of what needs to be added to make this work?

Thanks,

Jason.


Grey Alien(Posted 2009) [#17]
@QuickSilva: Yeah I use 200 logic updates per second. This is normally 3.something logic updates per frame. For the first 3 whole logic updates Delta is 1 and for the last update Delta is <1. This makes it very smooth. Without the final fractional delta it is less smooth. Slow motion was achieved by applying a multiplier to delta (the multiplier is less than one). Have you seen this in action yet with the bullet time on the Mega Pill explosion in Unwell Mel (level 7)? It's rather spectacular :-)


QuickSilva(Posted 2009) [#18]
Yes I did. It does indeed look very cool in action. You did a great job on that game. Thanks for answering my question too.

Jason.


QuickSilva(Posted 2009) [#19]
@ MGE or anyone else who knows the answer,

When you say cap the delta's min and max values, could you please explain how this would be done?

Jason.


Grey Alien(Posted 2009) [#20]
Let's say the game experiences a huge delay and, instead of Delta being in a "normal" range, you get a silly value like 200 - then everything would move WAY too far, so you are better off capping it by saying If Delta>2 Then Delta=2. 2 is just a max number I've picked here; it could be whatever you feel is suitable. I'm not quite sure why you'd cap the lower value though ("cap" is also probably the wrong word for a lower bound unless it goes negative, but Delta shouldn't do that unless the timer rolls round on the PC, so you could clamp it at 0).
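In code that could look something like this (2 and 0 are just example limits):

' cap delta so one long stall doesn't teleport everything across the screen
Local delta# = 200                ' pretend the OS just froze the game for a while
If delta > 2 Then delta = 2       ' upper cap: never step more than 2 frames' worth
If delta < 0 Then delta = 0       ' lower cap: guard against the timer wrapping round
Print "capped delta = " + delta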


QuickSilva(Posted 2009) [#21]
Ah I see, so simple :) Thanks for the explanation.

Jason.


QuickSilva(Posted 2009) [#22]
After taking in all of the great info provided in this thread I'm finally starting to understand things. I just wanted to say a big thanks once more to everyone who helped. I'm sure that this thread will help many others too in the future.

One thing I wanted to ask is that delta timing never seems to be quite as smooth as fixed rate logic. Is this common? Can both methods be made to produce the same smooth look? I've even tried a steady delta timing method where the delta only gets updated if it changes by a certain amount. This is still not as smooth as fixed rate timing. Here's the link to that code:

http://www.blitzbasic.com/codearcs/codearcs.php?code=431

After seeing the benefits I am starting to like fixed rate logic more and more, but I would still like to know if the same smoothness can be achieved with delta timing.

Jason.


*(Posted 2009) [#23]
This is how I do it
Global OldMillisecs:Long=MilliSecs()
Global NewMillisecs:Long=MilliSecs()
Global DeltaString:String = CurrentTime$()
Global DeltaTimePassed:Long = 0
Global DesiredFPS:Long = 60

Global UserUpdate:Long = NewMillisecs

Function CheckDeltaTime()
	'called every cycle but updated every second
	If CurrentTime$()<>DeltaString
		OldMillisecs = NewMillisecs
		NewMillisecs = MilliSecs()
		DeltaString = CurrentTime$()
		
		DeltaTimePassed = NewMillisecs - OldMillisecs
	EndIf
End Function


then call CheckDeltaTime at the end of the main loop, and to do a check, do this (DesiredFPS is something like 60 for 60 fps):
	If MilliSecs() > UserUpdate + (DeltaTimePassed / DesiredFPS)
		'update goes here (and presumably UserUpdate gets set back to MilliSecs())
	EndIf


Hope it helps


Gabriel(Posted 2009) [#24]
One thing I wanted to ask is that delta timing never seems to be quite as smooth as fixed rate logic. Is this common? Can both methods be made to produce the same smooth look?

Yes, it's common that fixed rate logic with tweening will be smoother on machines which can cope well with your game, simply because you spend more time rendering and less time needlessly updating logic. Unless you run your logic at a very high rate, in which case, it will be more or less the same. On slower machines, where the computer is struggling to keep up, I would expect delta time and fixed rate logic to produce similar results.


QuickSilva(Posted 2009) [#25]
@ Grey Alien :

You mention that you use a delta value also in your timing code. Is this a replacement for tweening, or do you just choose not to tween? How does your timing code look when you set a low logic rate of, say, 10? Does your delta value have the same effect as tweening?

@ All

The tweening part seems to be the most difficult to implement. Is it hard to do? This is what I am struggling with the most I think. I could just leave that part out but I want to try to understand it.

Also what sort of machines (spec-wise) would be capable of running at a logic rate of 200?

Jason.


Foppy(Posted 2009) [#26]
Also what sort of machines (spec-wise) would be capable of running at a logic rate of 200?
It depends on the code that is executed. But with my games I have no problem running them at 200 fps on my six-year-old 1.7 GHz PC. If I were to write an RTS game with lots of pathfinding computations, I would program it in such a way that one computation is divided over multiple logic frames.
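One common way to do that, sketched here with made-up names (not Foppy's actual code), is to give the expensive job a fixed budget of work per logic update and let it carry on in the next one:

' sketch: spread a long computation over several logic updates
Global workQueue:TList = CreateList()
For Local i = 1 To 1000
	workQueue.AddLast(String(i))            ' pretend each entry is one pathfinding node
Next

Const BUDGET_PER_UPDATE = 50                ' nodes processed per logic update

Function UpdateLogic()
	Local done = 0
	While done < BUDGET_PER_UPDATE And Not workQueue.IsEmpty()
		workQueue.RemoveFirst()             ' "process" one node
		done :+ 1
	Wend
End Function

Local updates = 0
While Not workQueue.IsEmpty()
	UpdateLogic()
	updates :+ 1
Wend
Print "computation finished after " + updates + " logic updates"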


Gabriel(Posted 2009) [#27]
The tweening part seems to be the most difficult to implement. Is it hard to do?

No, it's a piece of cake in 2D. You're just doing linear interpolation between two positions, two scales, two colors, two rotations, etc. I suppose the rotations offer the biggest challenge since you would need to ensure that they tween the correct way. IE: Interpolating between angles of 1 and 359 would need to be done the short way and not the long way. But even that is just a case of a simple condition to check for it.

In 3D, it's slightly more difficult because you can't tween Euler angles and you have to use quaternions, which can be easily interpolated with a Spherical Linear intERPolation. In 2D, it's dead simple. All you're really doing is the exact same equation you were doing with delta time, except that instead of moving something by 2*Delta pixels you're positioning it at Start + (End-Start)*Tween pixels.
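A small sketch of both of those in 2D terms (made-up values):

' linear interpolation between the previous and current state, plus the
' "short way round" fix for angles
Function Lerp#(oldVal#, newVal#, tween#)
	Return oldVal + (newVal - oldVal) * tween
End Function

Function LerpAngle#(oldAn#, newAn#, tween#)
	Local diff# = (newAn - oldAn) Mod 360
	If diff > 180 Then diff :- 360        ' 1 -> 359 should rotate -2 degrees, not +358
	If diff < -180 Then diff :+ 360
	Return oldAn + diff * tween
End Function

Print "position halfway: " + Lerp(50, 80, 0.5)        ' halfway between the two positions: 65
Print "angle halfway: " + LerpAngle(1, 359, 0.5)      ' tweened the short way round: 0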


QuickSilva(Posted 2009) [#28]
Excellent. I finally think that I understand it now.

Cheers guys!

Jason.


QuickSilva(Posted 2009) [#29]
Is it possible to get the true running speed of my program using fixed rate logic?

With delta timing you can set flip to false to get a true reading, but if you do this with fixed rate logic it peaks at a certain value. I'm assuming that this is the correct behaviour - or am I missing something?

Jason.


Grey Alien(Posted 2009) [#30]
@QuickSilva: If the logic rate was reduced below about 60Hz with my method it would start to look crappy (because there is no tweening, that's for rendering only). The delta is not used for tweening, it just smooths out the final logic iteration per frame.


QuickSilva(Posted 2009) [#31]
Cheers Grey for clearing that up.

With tweening, size, rotation, scale etc. obviously need to be tweened, but what about animation frames? Do we need to do anything fancy with those in case a frame may not have been the same half a step back in time? Is this being too picky, or does it need to be looked into?

I'm talking purely 2D games here. Would there be any visible benefit from doing this? I'm guessing not, but I just wanted to ask those that may have tried to do it in their own work.

Also, the way I understand it is that if you are running at a logic rate of 10 you are saving more CPU time than if you are doing a 200 logic rate. When calling up the task manager to see if this is true, both values are giving me about the same CPU usage. Why is this? Adding a Delay 10 into my main loop lowers this value in both cases. Is adding this good practice or not?

Finally, logic updates can vary between frames, rendering only ever occurs once per frame. Is this correct? The rendering part is now confusing me a little, as when I try to calculate my true FPS it seems to reach a limit and then stop. I'm not sure why this is; I'm obviously misunderstanding something with regards to when the rendering takes place. Maybe this is the correct behaviour?

Should I be using flip false or flip true? Are there benefits to each method? Flip true seems to keep the CPU usage at a much lower reading, but should I be locking to the refresh rate or not?

Any further advice would be most appreciated.

Jason.


Grey Alien(Posted 2009) [#32]
Yes you would need to tween anim frames too or it would look bad.

I used Flip 1 because I don't like vertical tearing. It looks much nicer imho for 2D games.


Gabriel(Posted 2009) [#33]
With tweening, size, rotation, scale etc. obviously need to be tweened, but what about animation frames? Do we need to do anything fancy with those in case a frame may not have been the same half a step back in time? Is this being too picky, or does it need to be looked into?

Yes, you'll need to tween animation speed too.

Also, the way I understand it is that if you are running at a logic rate of 10 you are saving more CPU time than if you are doing a 200 logic rate. When calling up the task manager to see if this is true, both values are giving me about the same CPU usage. Why is this?

No, you're not saving the CPU time, you're using it more efficiently. Instead of running the logic 200 times just to keep the screen updates smooth, you're spending all your time rendering instead, as rendering is what determines the smoothness of your visuals. Now there may be a game whose logic needs to run 200 times per second, but I don't think I've ever seen one.

Finally, logic updates can vary between frames, rendering only ever occurs once per frame. Is this correct?

With fixed rate logic? No. Either can happen multiple times per frame. If rendering only ever happened once, you wouldn't need to tween. If you're on a fast machine, you might have time to render 5 or 10 frames for every logic update.

Should I be using flip false or flip true? Are there benefits to each method?

I'm a big advocate of Flip True. I can't see any sense in using anything else. If you were making a 3D first person shooter, I suppose I'd recommend having it optional because there are FPS fans who insist on having every frame rendered and don't care about tearing. Personally I find tearing very distracting and quite unprofessional. Having said that, Resident Evil 5 tears like a myopic origami expert, so I guess some people don't mind it.


HrdNutz(Posted 2009) [#34]
Sorry, I haven't read everything in this thread, other than that someone was looking for a simple example of fixed steps with interpolation.

http://www.blitzbasic.com/codearcs/codearcs.php?code=2039

This is something I posted a while ago in the code archives; IMO it's as simple as it gets for an example.

P.S. I would like to add that tweening is the smoothest possible solution other than pure delta timing. With higher FPS you have more precise motion, which only gets smoother with better performance. Fixed timing without interpolation has a capped FPS, so if you run your updates at 200Hz and your machine is capable of rendering the game at 1500 FPS, you will max out at 200 FPS; anything higher than that and you're drawing the same frame more than once.


TaskMaster(Posted 2009) [#35]
I would not say any one method is the "smoothest possible". As long as you update every vertical refresh, it will be as smooth as it can be.


Gabriel(Posted 2009) [#36]
As long as you update every vertical refresh, it will be as smooth as it can be.

Of course it won't. If it moves in uneven amounts (which it does in anything other than tweening or pure delta timing) then it won't be smooth.


QuickSilva(Posted 2009) [#37]
Thanks HrdNutz for the link; it confirms that I am doing things correctly, which is good :)

I now have the movement, rotation and scale of my objects all working properly with tweening, thanks to the advice of all who have helped me, for which I am most grateful. I'm quite proud that it is working so well.

The final component that I still cannot grasp though is how to tween the animation frames. I'm working on a 2D game, so how can I tween the frames when they are integer values such as frame 1, 2, 3 etc.? Should I be tweening when the frames get updated instead? I'm a little confused by the theory.

Another question that this brings up is with counters. Should I simply be using frame_counter:+1 each logic update, or should I be using a version like 'if MilliSecs() is 500 milliseconds greater than last time then update the frame_counter'? Will both versions work, or should I make sure I only update my counters by using the MilliSecs() method?

Hopefully this will be the final question and then I will have a good grasp on what I need to do.

Overall I have really started to appreciate the benefits of this timing method and the smoothness it brings.

Jason.


HrdNutz(Posted 2009) [#38]
Sorry TaskMaster, what I meant was "theoretically" the smoothest possible, measuring FPS with no vertical sync. In reality, rendering can't really show frames faster than the monitor can refresh itself, regardless of whether the app is waiting for the vertical refresh or not.

Edit: tweening also deals with any time remainder inside the render loop, so you don't get odd and uneven movement due to the remainder.


HrdNutz(Posted 2009) [#39]
@quicksilva

I'm not sure what you mean by tweening frames, because you cannot interpolate between two textures unless you do some crazy stuff like morphing. If the sprite frame rate isn't tied directly to logic - i.e. you have a sprite and you just want it to keep playing its animation at the same speed regardless of updates - then the animation code should just go into your render routines and not be part of the logic updates. If you require that your animation is tied directly to logic - i.e. the animation is exactly 2 seconds long, and at the end you want to update something in logic - then you should probably figure out a way to update the frames in your logic loops and not in rendering.

Basically, you should be setting which animation to play inside logic updates, but run the timing and frame-updating code inside the render routine; it really doesn't have much to do with logic at all, unless your animation frame is dependent on some game logic state. If any of your animations exceed your update rate - for example you run your logic at 10Hz and an animation plays at 24 FPS - you cannot update your frames inside logic; there just aren't enough UPS. Watch out if you have an animation that you want to play at 100 FPS (many, many frames) but the app FPS is running at a lower speed - then you have to skip some frames to make sure the animation plays at the proper interval.

As for frame counting, you want to use frame:+1 inside the logic loop, because you are guaranteed updates at a precise interval. This is how you can reproduce behaviours and bugs exactly, every time. If you want to have variable update frequencies - like all of a sudden you want to change the frequency from, say, 30 to 60Hz - you should keep track of application time, where you accumulate the UPDATE-TIME (don't use the real delta here, use the fixed one). The 'fixed' delta is your update time, which you use to scale everything just like in delta timing, except this delta never ever changes, so you can depend on it.

The whole idea of fixed intervals is that your app goes through EXACTLY the SAME STATES every time you run it, and it will produce the EXACT SAME ANSWERS every time - something that's not possible with pure delta timing.


HrdNutz(Posted 2009) [#40]
Hmm, I can barely understand what I wrote above myself O.O

TL;DR: sprite frame updates mostly belong in the rendering loops; the update loop can control the animation based on game state changes, and the rendering loop takes care of figuring out which frame to display next in the current animation. You do not interpolate frames, you only skip frames if you're somehow behind.

Keep track of application time in milliseconds or in frames; both work. Just make sure that when using milliseconds you accumulate the fixed delta time, which never changes. If logic is running at 10 UPS, your fixed delta = 100ms at all times.


Gabriel(Posted 2009) [#41]
The final component that I still cannot grasp though is how to tween the animation frames. I'm working on a 2D game, so how can I tween the frames when they are integer values such as frame 1, 2, 3 etc.? Should I be tweening when the frames get updated instead? I'm a little confused by the theory.


Yes, tween the time. I generally have a timeelapsed field in objects for this sort of thing. Then you can tween how much time has elapsed and use that elapsed time and the total time of the animation to calculate which frame to show.
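A small sketch of that, with hypothetical numbers (an 8-frame animation at 12 frames per second):

' tween the elapsed time, then work out which animation frame that time falls on
Const FRAME_COUNT = 8
Global frameTime# = 1000.0 / 12              ' ms per animation frame

Global oldElapsed# = 400                     ' elapsed time at the previous logic update
Global newElapsed# = 500                     ' elapsed time at the latest logic update

Local tween# = 0.5                           ' how far we are between the two updates
Local shownElapsed# = oldElapsed + (newElapsed - oldElapsed) * tween
Local frame = Int(shownElapsed / frameTime) Mod FRAME_COUNT

Print "draw animation frame " + frame        ' frame 5 for these values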

Another question that this brings up is with counters. Should I simply be using frame_counter:+1 each logic update, or should I be using a version like 'if MilliSecs() is 500 milliseconds greater than last time then update the frame_counter'? Will both versions work, or should I make sure I only update my counters by using the MilliSecs() method?

No idea. What are your counters doing? What do you use them for? I don't recall ever wanting or needing to count frames, I always deal in time.


HrdNutz(Posted 2009) [#42]
What Gab said above about interpolating time works too; I just personally use the render loops to update my animations using REAL delta time (not the fixed one - I've got both, lol).


QuickSilva(Posted 2009) [#43]
@ HrdNutz :

In your code example, in the render phase, there is a part of the code that I am having trouble understanding:

' interpolate between old and actual positions
Local tx# = X * tween + OldX * (1.0 - tween)

Please could you explain it in a little more detail for me? I know it tweens between the old and new values, but it's the way it works that I cannot quite grasp.

Thanks,
Jason.


Warpy(Posted 2009) [#44]
When tween is zero, you get
tx# = X*0 + OldX*(1 - 0)
      = OldX


And when tween is 1, you get
tx# = X*1 + OldX*(1-1)
      = X + OldX*0
      = X


And when, for example, tween is 0.5:
tx# = X*0.5 + OldX*(1 - 0.5)
      = X*0.5 + OldX*0.5


so halfway between X and OldX.

Fairly simple! Again, QuickSilva, if you're having trouble with this, you really need to work on your maths.


QuickSilva(Posted 2009) [#45]
You're completely correct, my maths does let me down :(

Thanks for explaining it though, you made it seem a lot clearer.

Jason.


QuickSilva(Posted 2009) [#46]
Just wanted to quickly ask: some people type float values like 1.0, whereas others simply type 1. Is there a reason for this - good coding practice perhaps - or does it actually change the result that we get back? Should I also be typing 0.5, or is .5 just as good?

I suppose what I'm asking is, is it more for the programmer, to make things easier to read?

Jason.


Warpy(Posted 2009) [#47]
0.5 and .5 are interpreted exactly the same by the compiler.

1.0 and 1 are different however - 1.0 is interpreted as a float, while 1 is interpreted to be an integer - i.e. a whole number. This can become important if you're using another integer, e.g.

1.0/2 gets performed as a floating point operation, so the result is 0.5

but

1/2 gets performed as an integer operation, so the decimal part of the result is dropped and the result is just 0.

Sometimes it is useful to do this automatic rounding though, which is why the compiler works the way it does.

If you're working with two integers but want to get a floating point result, just change one or both of the numbers to a float before the operation:

Float(1) / 2 = 0.5

note that Float(1/2) = 0 still, because it works out 1/2 = 0, then turns the result of that into a float.

(by the way, a 'floating point number' is the computer's way of representing numbers with decimal or fractional parts. It's called a 'floating' point because the digits are stored in a fixed number of bits in memory, and then the computer keeps track of where the decimal point is - it can move about. This is in contrast to fixed point numbers, where, for example, the first four bits will represent the whole number part, and the last four bits represent the fractional part - the decimal point is fixed in one place)


Grey Alien(Posted 2009) [#48]
Yeah, it's definitely worth knowing how the compiler converts numbers in the source code. I often stick a .0 on the end of an Int because I want the result to be floating point.


QuickSilva(Posted 2009) [#49]
Thank you for explaining things so clearly Warpy, I wish I had a Maths teacher like you back in the day.

When I test my code with a really low logic rate and also use tweening, any objects that leave the screen on the right and reappear on the left seem to slide back across the screen from right to left, almost in reverse. Am I doing something wrong, or is this the correct behaviour? Is there a way around it if so? Without tweening all is fine, so that seems to be the problem. Is it just a case of not tweening in these situations?

Are there any other situations like this that I should look out for?

Jason.


Warpy(Posted 2009) [#50]
Say your screen's 800 pixels wide.

In the previous frame, the object was on the right hand side of the screen, say at x=780.

In this frame, it's at the left hand side of the screen, say x=20.

When you're tweening, you're interpolating between these two numbers, so you obviously get something that's in the range 20..780, i.e. to the right of the current position.

What you need to do is, when you wrap the object's new position round to the left of the screen, also do the same to the old x value, so it ends up being a negative number.

Something like
x:+xvelocity 'move right
if x>800
   x=x-800  'wrap object to left of screen
   ox=ox-800  'correct old position
endif


You also need to watch out for this sort of thing when doing collisions, but I expect that's a bit beyond you at the moment.


Grey Alien(Posted 2009) [#51]
@QuickSilva: hoho, that situation is like one of the issues I brought up with tweening a while back (the same issue can occur with objects bouncing off another object, i.e. changing path), and thus I prefer not to tween it because I don't want to code special tween cases. That's just me though. Warpy's code will work.


QuickSilva(Posted 2009) [#52]
I can now see why you use a higher logic rate in your framework. Still, the good thing is I finally understand the theory :)

With collisions I'm assuming you mean that with a very low logic rate your objects may enter other objects? How do we safeguard against that then? It would be handy to know for future reference, even if it is beyond me at present.

Cheers for the continued help. I bet you are all tired of me by now. I've just always found this subject very confusing.

Jason.


Warpy(Posted 2009) [#53]
"Collisions" just means the fact that moving objects may bump into each other, or into the scenery, and how to deal with that.

For example, if a ball bounces off a wall, its velocity is reflected off the wall. In verlet motion, because the velocity is defined implicitly as the difference between the current position and the previous position, you need to correct the old position by moving it "into" the wall, so it's the correct distance and direction behind the current position.
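A tiny sketch of that correction, with a wall at x=0 and made-up numbers:

' verlet-style bounce: reflect the current position AND the old position,
' so the implicit velocity (x - oldX) gets reflected too
Global oldX# = 4
Global x# = -2                   ' this step carried the ball through the wall at x=0

If x < 0
	x = -x                       ' mirror the current position back out of the wall
	oldX = -oldX                 ' mirror the old position "into" the wall
EndIf

Print "x = " + x + "  oldX = " + oldX + "  implicit velocity = " + (x - oldX)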


QuickSilva(Posted 2009) [#54]
When deciding my objects' speeds, should I be doing anything special to keep them in sync with my timing, or can I use any values I choose? In other words, will using certain values result in cleaner movement of my sprites?

Jason.


Jur(Posted 2009) [#55]
Choose speeds which fit your game (gameplay-wise). If your game is fast-paced, you will probably need a higher logic rate to keep movements smooth. And of course, use float coordinates.


QuickSilva(Posted 2009) [#56]
So there is no benefit from choosing certain values in advance then.

Just wanted to check.

Jason.


Grey Alien(Posted 2009) [#57]
Yeah no benefit to certain speeds. There used to be when you had a fixed refresh rate (you'd typically move 1 or 2 pixels per frame so it looked really smooth), but as you don't know the framerate on PC and are doing tweening, you can move at any speed and it can still look smooth.


ImaginaryHuman(Posted 2009) [#58]
This is an interesting thread and good review of the subject of timing in general.

Earlier on as I was reading through I thought about a few things. One of them I think may have been addressed in the posts leading up to this one, regarding animation. My concern is this: if you use fixed rate logic with tweening, and your logic rate is very low - let's say 10fps - and you're using tweening to update your graphics, that's all very well for the update, but now you are stuck with only being able to do logic on your game objects 10 times per second.

For example, let's say you have an enemy character which you want to be able to change its direction of movement quite quickly and perhaps randomly. Let's say you want it to be so erratic that it should change direction possibly every frame - at 60fps. If your logic is only 10fps, your game character cannot go through all of its animation steps. It would appear to change direction less often. So the problem is, with a low fixed rate for logic you are required to use a `lower resolution` of animation timing. Animation becomes `blockier` or jerkier, in a sense. Just to push the point, if your logic rate was 5fps, you would start to see all objects in the game changing directions only 5 times per second and it would be very noticeable like a stuttering of animation, plus animation frames would only change 5 times per second - not very smooth.

I think this is one of the drawbacks of the idea of a low logic rate using fixed rate logic. It's all very well that you save lots of CPU time, but it does impose limitations on how fine-grained your animation can be. The only way to get around that at a low logic rate is to actually do some kind of morphing or tweening of the graphic images themselves - like perhaps if each logic frame was a `keyframe` and the rendering system was using vector graphics with tweening between keyframes. That would work for smoothing out animation, but you'd still only really be able to have a new key a few times per second.

Still, most animation is probably not noticeably different above, say, 25fps logic, so you could probably still pull off a fairly low logic rate (30fps say) and use tweening.

Another thing I wanted to mention was with regard to what Grey Alien has said about his framework's timing. You mentioned a logic rate of 200fps, and presumably targeting a framerate of about 60fps, resulting in 3-point-something logic updates per graphics frame. You talked about needing to use a delta value after the last whole logic step in order to make up the exact logic position of, e.g. 3.2 logic steps, which would be exactly the position objects should be in when the next graphics frame is to be rendered. This is good, and I can see why, when you render the logic results visually, you are drawing a state which is much more precisely timed.

However, the whole reason you are having to do that extra delta step to simulate a fractional logic framerate is because your logic rate is not an exact multiple of your graphics rate. If your graphics rate was 60hz and your logic rate was 60*3=180hz, your delta would always be 0 and you would only have to do exactly 3 logic steps per frame. Since 60hz is so popular, I wonder why you chose to go with 200 as a default, when 180 would have been automatically smoother?

What I'm getting at, though, isn't whether you chose the logic rate well, but that in order to have perfectly precise timing you must have a fixed logic rate which ultimately is an exact multiple of your graphics rate (unless you have added delta or tweening, which will be almost as accurate). If you have fixed rate logic at 200hz without tweening or delta, objects will understandably be a little bit off from where they should be and a moire effect will result, wiggling the object position and decreasing smoothness. So why not just lock the logic rate in at a multiple of the graphics rate?

I know I know, what you're thinking is that you don't know what hz rate the user's computer is going to use for the display, and that you need to adapt to multiple hz rates. Well this is where my next idea comes in. One of the major benefits of having a sense of the predictable passage of time, is that when you design your game and set up your animation you will know for sure how fast the animation `normally` plays. So if your logic is fixed at 200hz then you know that you can create animations which will always take the same amount of time to complete no matter what the graphics framerate is. And the same with delta timing, it lets you lock-in a predictable `rate-of-time` so that you can design for that rate and let the timing system take care of how it displays.

Well, my idea is NOT to make the logic rate fixed, but to instead make it an exact multiple of the display hz rate. So for 60hz you might have logic at 120 or 180, and for 85hz you might have logic at 170 or 255. You could perhaps have an `ideal` logic rate, which could be 200hz, and then pick the nearest multiple to that rate - so in the case of 60hz you'd choose 180. Doing this will totally remove all sense of predictable time and your animations will now be all over the place. BUT, you now introduce something I'll call a `design-time` framerate. This is simply a fixed framerate which you design your animations for - like you would have done for fixed rate or delta logic, or even for when you were sure a 60hz refresh with Flip 1 would never be compromised.

This `design time` hz rate is now easy to target - it could be, say, 50hz, 60hz, maybe even 25hz. The only thing you have to do then is transition between design time and logic time. How would we do that? Simple linear tweening. So you run your graphics at a flexible rate, you run your logic at a multiple of the display hz, and then when you do your logic you tween between logic timing and design timing. This lets you a) design for a fixed framerate, b) choose a logic rate which exactly fits into the display hz rate, c) lets the graphics update still occur as often as possible.

Another advantage is you could have multiple `design time` logic rates, depending on the object you are animating. If you have some object whose animation really looks fine changing no more than 5 times per second, where each animation change must occur on the beat of 1/5th of a second intervals, then you could use a design-time logic rate of 5hz which is then tweened to your semi-fixed logic rate. Then you might have the player character which you want to update the animation of at 25hz and the movement at 120hz - you can have two pieces of logic running at different hertz rates which are then tweened to the main logic rate.

Something else I was thinking about, when someone mentioned 3D and physics, is a way to overcome the explosiveness of a variable logic hz. What if you implement a double-buffered logic system? In one buffer you do normal logic fixed at a specific logic rate, say 50hz. Then in another buffer at 200hz, you copy the logic from the slow-reliable buffer into the fast-interpolation buffer every 4 cycles, and in between the four cycles you implement delta timing. Not enough would happen to explode the physics because every 1/50th of a second the main logic refreshes the temporary logic. Meanwhile the temporary logic can be delta timed, you can slow it way down, you can scale it, you can do `bullet time`, and then when you reach the point where you need to refresh proper positions you update the fixed rate buffer. This way you keep updating at a fixed logic rate in one logic system and in between updates you temporarily do delta timing.

It would be somewhat like predicting the positions of objects like you do when you implement multiplayer networked games - you don't necessarily know exactly where an object will be in the next true game state, so you predict it, you interpolate to it, and then you perhaps implement spline smoothing between the `actual` logic (the fixed rate) and the `predicted` logic (with delta time). I think it would work, would allow you to slow logic down or speed it up, while still keeping the physics simulation stable.

One more thing I wanted to mention is that there is a delay between when the logic starts to be processed and when the frame flips the display. Let's say that just before the next logic cycle begins, the player spaceship is supposed to change animation frames. So you start doing your next logic cycle and notice the animation frame should have changed. So you change the frame, and then you draw the new frame. However, let's say that just prior to when you flip the new frame, the logic would have told you that another animation frame needs to be shown. Now it will again have to wait, arriving one frame late. What if you predict the future? What if you look at what the game time *will be* at the time when the frame actually flips next, rather than what the time *was* at the start of the frame?

Since, by the time you Flip a new frame into view, time has passed, you should really be rendering `the future` so that when it flips into view it is `on time`. This will in most cases remove the one-frame lag. In some cases it won't have any effect - e.g. user input. If the user pressed a key during the previous frame and it didn't get processed yet, its effects are not going to be considered until 1 frame later. Unfortunately you can't just suddenly do a whole new logic update right at the last minute in the current frame, which might've caught the user input and produced a different graphic upon display. But aha - that's where a high logic rate comes in. A high fixed or semi-fixed logic rate increases the responsiveness of user input. And even if `rendering the future` doesn't improve the user responsiveness, it can at least show you a more precise world state at the time of the flip. The only thing is I'm not sure if this would throw the world state out of sync with the player - if the world is 1 step ahead of the player, the player could be at a disadvantage.


Jur(Posted 2009) [#59]
The delta value does not only cover the fractional logic iteration but also variations in loop time durations, so it changes constantly. On my system there is no difference in smoothness between 200 and 180 Hz.
Your idea about having different logic rates for different logic operations is interesting, but I think one would need a very good reason to complicate his code with such stuff :) My "design" time is "real" time. For animations, timers etc. I always use milliseconds. It is logical to use a "real" time scale for time-dependent operations. These values are then converted to "logic rate" values. That way you can change the logic rate, and the speed will remain the same. Of course if you make the logic rate too low and use short timed intervals in your code then things will not be OK - the logic rate "granularity" must be "fine" enough to cover all the time intervals.


HrdNutz(Posted 2009) [#60]
The idea behind fixed interval updates is predictability: your app goes through the same exact states, every time, with no discrepancies. Any variation in frequency may result in unreproducible behaviour - something pure delta timing suffers from. Tweening is used to deal with any time remainder outside the logic updates, but otherwise logic is exact, on a dime.

Regarding user input: you can record it outside the update loop and use the last recorded values inside the logic iteration. This removes the sensitivity problems even with updates as low as 10Hz.

60Hz is a sweet spot for most games; some may go as low as 10, but higher than 60 is mostly wasteful with tweening.


QuickSilva(Posted 2009) [#61]
Is it better to record user input this way too, say for a demo mode or replay? I was actually wondering how this would be done.

If I wanted to make my game more friendly to lower-spec computers, what would be the most beneficial way? Would reducing the hertz (in the Graphics command) from 60 to 30 help for a slower computer, or would reducing the logic rate be more beneficial? Would either of these methods make a big difference to people with older hardware? Should I do both? Maybe a delay in the main loop could help free up some time too.

Jason.


HrdNutz(Posted 2009) [#62]
Reducing the logic hertz (updates per second) benefits performance; how much depends on the complexity of your logic. Rendering should generally be allowed to run at full speed, which most of the time is capped by the monitor refresh rate anyway.

Replays are cake when using fixed interval updates: every time you process input like key_down and key_up, record the action into a list or whatever, along with the time or update iteration. For example: at 1000 milliseconds of app time the UP key was pressed, at 4000 millisecs the UP key was released; that's holding the UP key pressed for 3 seconds, and it is represented by 2 recorded values - when the key was pressed and when it was released. Substitute the real input with the recorded values to replay the actions, and since you know the specific times for each input, you can make your game go through the same exact actions; and because with fixed interval updates you are guaranteed an exact game state at a specific game time, the actions will replay without discrepancy.

Recorded values are roughly:
TIME, ACTION

To replay:
If game time = recorded time, execute the recorded action
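A concrete sketch of that idea (made-up type and names), counting time in logic updates:

' record actions against the logic update they happened on, then feed them back
Type TReplayEvent
	Field tick          ' which fixed logic update the action happened on
	Field action$       ' e.g. "UP_PRESSED", "UP_RELEASED"
End Type

Global events:TList = CreateList()

Function Record(tick, action$)
	Local e:TReplayEvent = New TReplayEvent
	e.tick = tick
	e.action = action
	events.AddLast(e)
End Function

' recording: the UP key held from update 10 to update 40
Record(10, "UP_PRESSED")
Record(40, "UP_RELEASED")

' playback: at each fixed update, apply any actions recorded for that tick
For Local tick = 0 To 50
	For Local e:TReplayEvent = EachIn events
		If e.tick = tick Then Print "update " + tick + ": " + e.action
	Next
Next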


QuickSilva(Posted 2009) [#63]
So simple :)

So reducing the hertz parameter of the Graphics command, say to Graphics 640,480,0,30 (from Graphics 640,480,0,60), would be of no benefit?

Jason.


Grey Alien(Posted 2009) [#64]
Yes, using fixed rate logic (and tweening) should yield a perfectly accurate replay whereas delta may not. I say "may" because it depends on what sort of game objects you have and how critical having the same set of data is for a replay - it may not matter for many game types.


ImaginaryHuman(Posted 2009) [#65]
I would say go with a frame number to reference your replay data, rather than a time, because there are all sorts of reasons why your game could be running slower or faster than it did when you recorded - such as the OS interrupting your application and creating jitters (right, Grey?). If you store a frame number then even if there is a sudden lag during playback it will pick up where it left off, otherwise if it is time-based it could skip a bunch of moves.

As to hz in the Graphics command, this is asking for what hz refresh rate you want the screen to be when you open it; it's not really to do with the logic rate. If you ask for a 30hz screen and if by chance the hardware supports 30hz, you'll get a screen with that refresh rate and it'll be really uncomfortable to look at. That said, if 30hz is not available, the hz will be promoted to a default like 60hz. If you then use Flip -1, it will flip at 30hz even if the hardware display is at 60hz. That said, what are the chances that any hardware supports a display hz below about 50hz? So maybe it's safe to ask for 30hz, so long as you use Flip -1, 'cos if you use Flip 1 having asked for 30hz it'll flip at 60 'cos that's what the hardware will v-sync at.


HrdNutz(Posted 2009) [#66]
@Imaginary

When using true fixed-step updates, either the time or the frame number can be used for reference, because there is absolutely no chance that app time will be off, for any reason: you keep track of app time using the 'fixed delta', so frame and time will always be in sync.


Chroma(Posted 2009) [#67]
Yeah I use 200 logic updates per second. This is normally 3.something logic updates per frame. For the first 3 whole logic updates Delta is 1 and for the last update Delta is <1.


GA has been boasting about his timing code for years. Can anyone, from the above description, post some pseudocode that does what he's talking about? 3 passes at Delta 1 followed by a fourth with a fractional remainder. I mean, does anyone else get what he means and can put this in code form?


ImaginaryHuman(Posted 2009) [#68]
HrdNutz, if you are going to use milliseconds you either a) have to stop your application's timer when there is *ANY* time taken up by the OS or other apps, or b) skip some of the recording when time is lagging behind where it should be. It's just that you didn't mention either of those, and without them your timing is going to be off. What is your app going to do when another app is launched and gobbles up a few seconds of CPU time? Playback is going to freeze for a moment, or it's going to have to skip a section.

Chroma, if your logic rate is fixed at 200hz, and your display is refreshing at a rate of 60hz, then to find out how many logic steps ideally need to be performed every display refresh you do 200/60=3.333333.

So every time you update your graphics and do a Flip 1, you'd have to do 3.3333333 steps of logic.

Basic fixed-rate logic without any additional tweening or delta would simply say that you need to do 3 whole logic steps before the next graphics refresh, and then the .333333 would be added to the `accumulator` - the amount of accumulated time which needs to be used up - ready for the next frame. In the next frame the accumulator would start off at .33333 and then you'd be adding 3.33333 to it, producing 3.666666. You would do another 3 whole steps of logic then render the frame, putting the .66666 into the accumulator. Then the next frame it would be at 3.999999. You'd do 3 steps of logic then render. Then in the next frame you'd get 0.99999+3.33333=4.33332 - you'd now have to do 4 logic steps and then put the .33332 in the accumulator for next time. Then you render again.

This works okay, but it's not entirely accurate. Each time you go to render a frame, you really *should* have done 3.33333 logic steps, but you only did 3, or 4. So it's a little bit off. You could be off by a third of a logic step. So your objects might be slightly ahead or behind where they're supposed to be and this could cause some visible wiggle. Your animation frames might also be off, plus anything else you animate.

To get around this, there are a few solutions. One is to set your logic hz rate really high, to make it more `fine grained`. The higher the rate, the less noticeable the `error` will be. So let's say you did logic at 1000hz, then 1000/60=16.66667. So you'd be doing 16 logic steps per frame, and percentage-wise the difference between 16 steps and 17 steps is much less noticeable than the difference between 3 and 4. The steps are smaller. But this requires a really high logic rate which means you have to do all your calculations many more times, using more CPU time. It can be very intensive and prohibitive.

So then you have the solution of dealing with the 0.33333 in some way. You can either deal with it in the logic or in the graphics rendering. If you deal with it in the graphics rendering, with tweening, you still do either 3 or 4 logic steps and then you look at the decimal remainder (the .33333) and you calculate where the graphics would be/were at 0.33333 of a frame previously. So if in the previous frame an object was at an x coordinate of 50 and now it's going to be at 80, you find the difference (80-50=30). So there are 30 pixels between the two frames. You then do 30*0.33333, to get 9.99999. So then you draw the objects at the place/state they'd be at 9.99999 pixels from their previous frame, rather than at 80, you might draw at 50+9.99999=59.99999. That's tweening, but it does require that your graphics rendering is a bit more complicated. Everything that moves or changes has to be scaled in this way. But it has the up-side that you don't have to change your logic code. You can just do a whole 3 or 4 steps of logic each frame. That's also good for physics stability which needs whole increments not fractions.

The other solution is what GrayAlien has done. He's moved the fractional part of the logic steps from the graphics side into the logic side. So now he does 3.33333 steps of logic each frame. This ensures that objects are exactly where they are supposed to be, instead of potentially being up to a whole logic step behind or ahead. He does this by doing 3 normal iterations of logic code. Then, to make up the .33333, he scales everything in the logic by multiplying by 0.33333 for one final logic pass. Objects move about one extra third of their movement distance, animation frames advance by a third of a frame (it adds up), etc. This puts all the logic in one place rather than trying to scale during the rendering pass, which makes it easier. One thing it does do, however, is make physics calculations unstable, because he's no longer doing fixed, whole-sized logic steps. That fluctuation (between whole steps and a third of a step) is not good for physics stability, but it works fine when you don't have a realistic physics simulation - like most 2D games, which fake the physics.
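This isn't Grey's actual framework code, just a sketch of the idea as described: whole steps at a delta of 1, then one extra pass scaled by the fractional remainder (MoveStuff() is a hypothetical update routine).

' Whole fixed steps plus one final fractional pass, so the logic lands exactly
' where it 'should' be each frame.
SuperStrict

Const STEPS_PER_FRAME:Float = 200.0 / 60.0

Function MoveStuff:Float(x:Float, delta:Float)
	x :+ 3 * delta                           ' everything that moves gets scaled by delta
	If x > 800 Then x = 0
	Return x
End Function

Graphics 800, 600

Local x:Float = 0

While Not KeyHit(KEY_ESCAPE)
	Local steps:Float = STEPS_PER_FRAME

	While steps >= 1
		x = MoveStuff(x, 1.0)                ' whole, fixed-size logic steps
		steps :- 1
	Wend
	x = MoveStuff(x, steps)                  ' one last pass scaled by the ~0.3333 remainder

	Cls
	DrawRect x, 300, 16, 16
	Flip 1
Wend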

So you might ask yourself why GrayAlien isn't just using a fixed logic rate of, say, 100Hz, since he's taking care of the remainder with a delta value. And there's no major reason why you couldn't do that. However, there are other advantages to a higher fixed rate. The more often you can process logic, and in particular user-input, the more immediately responsive you can be to the user's actions. Processing user-input at 200hz is going to give you `3 or 4 chances` to process user-input per frame. That means that if the user pressed a key sort of towards the end of the frame, there might still be enough logic time to process the keypress and thus influence what graphics will be drawn, rather than wait until the next frame to process it. It can make the game more responsive. However, to do that, ideally you'd have to space out the logic steps and not just do them immediately after each other. If you're trying to do 3.33333 logic steps per frame, then ideally you'd do one step at 0 milliseconds offset, 1 at 5, 1 at 10, and the .3333 at 15 - and this would be offset based on the accumulator. But I don't think GrayAlien is doing this.

Another benefit of a higher logic rate is that, in most cases, it solves collision-detection problems with fast-moving objects. An object could jump from one side of an obstacle to the other without seeming to touch it, but if the logic rate is fine-grained enough you are more likely to register the collision. 200hz is therefore better for collisions than 60hz, supposing you are doing pretty basic rectangle or circle collisions and not swept polygons.


HrdNutz(Posted 2009) [#69]
@Imaginary

App time is tracked within the update loop; let's say your frequency is 10Hz, which makes the fixed delta 100ms. You can either track each update, or scale it by the fixed delta. In other words, 3 update ticks will always track as a constant 300ms - there is just no way those two numbers can get out of sync - so you could use either the update iteration 3 or the time of 300ms, because at 3 updates your app time will always be 300ms; it's essentially the same thing. Any hiccups are dealt with outside the update loop, but inside it app time is guaranteed to advance by 100ms per update tick (at 10Hz). You can just think of app time as Frame Number * Fixed Delta.
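In code terms that's just (a trivial sketch of the 10Hz example):

' App time derived from the update count rather than the clock (10Hz example)
Const FIXED_DELTA:Int = 100                  ' 1000ms / 10Hz

Local tick:Int = 0
' ...each time the fixed update runs:
tick :+ 1
Local appTime:Int = tick * FIXED_DELTA       ' after 3 ticks this is always exactly 300ms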

also, nothing against GA's framework (mad respect), but it does not use true fixed-interval updates, because he throws the remainder into another update tick with a smaller delta. This essentially removes the benefit of fixed-step updates: being able to reproduce exact application states.


ImaginaryHuman(Posted 2009) [#70]
Okay.

I agree that Gray's framework is not stable for real physics math, which I said above also, but for most 2D games and the kind of audience his framework targets it doesn't matter. Most people aren't going to be doing real physics calculations. I agree that a fixed step with graphics tweening is the only real solution for steady physics and exactly-accurate updates. But without physics the delta pass works just as well.

It might be interesting to also review here the effects of external applications on the smoothness of the game - perhaps you can give us a summary of jitter correction, Gray?


Grey Alien(Posted 2009) [#71]
@ImaginaryHuman: Great analysis, it's correct as far as my framework goes. I did consider trying the logic at 180Hz or 120Hz so that most of the time it would be in sync with the display (at 3x or 2x), but of course there's no guarantee it would actually stay in sync, and I thought that being almost in sync but occasionally slipping out might look pretty bad if I always used a Delta of 1 and didn't bother with the fractional Delta.

Re: Jitter Correction, I'm pretty sure there's a detailed thread about it round here somewhere where I posted a test app.

Basically, instead of blindly reacting to a big change in Delta caused by external apps interfering, it looks smoother if you only alter Delta a little bit each frame until the time loss/gain is caught up. You do this by keeping a history of the time taken for each frame in an array and averaging it, so when the delta changes a lot you can interpolate between the new Delta and the average Delta and pick a suitable value. This seems to smooth out some bad jitters, but you'll still see them, just not as badly. Also, certain situations will always look bad with or without jitter correction, like if the game is being severely interrupted by something external.

Another factor with jitter correction is that as the delta changes you may notice your game speed up and slow down as it compensates for time loss/gain on a particularly interrupted system (so that 10 seconds of game time still equals 10 seconds of real time). This can be reduced by clamping the amount of delta change you allow, BUT you will then lose/gain actual game time, so if you had a 10 second counter it wouldn't match your watch - it might be 11 seconds or 9 (that's a big exaggeration to make the point).
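Something along these lines, perhaps - only a sketch of the idea as described, and the history length, the 50/50 blend and the 10% clamp are arbitrary numbers, not the framework's:

' Jitter-correction sketch: average recent frame times and limit how far the
' working delta may move in one frame. All the constants here are arbitrary.
Const HISTORY:Int = 20
Global frameTimes:Float[HISTORY]
Global frameIndex:Int

Function SmoothDelta:Float(rawDelta:Float, currentDelta:Float)
	frameTimes[frameIndex Mod HISTORY] = rawDelta
	frameIndex :+ 1

	Local avg:Float = 0
	For Local i:Int = 0 Until HISTORY
		avg :+ frameTimes[i]
	Next
	avg :/ HISTORY

	' head part-way towards the average instead of jumping straight to the raw value
	Local target:Float = (rawDelta + avg) * 0.5

	' and clamp how much the delta is allowed to change in a single frame
	Local change:Float = target - currentDelta
	Local maxChange:Float = currentDelta * 0.1
	If change > maxChange Then change = maxChange
	If change < -maxChange Then change = -maxChange
	Return currentDelta + change
End Function

You'd call it once per frame with the newly measured frame time and the delta you actually used last frame (seeded with something sensible like 1000.0/60, since the history starts out full of zeros).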

Basically, with all this timing lark, it's not an exact science. We simply cannot sync to the display like in days of old, so we have to come up with other solutions. Some are more suitable for certain games, e.g. physics-based games; others suit heavy-logic games (low-frequency logic updates with tweening); and some suit fairly light-logic arcadey games (high-frequency updates with a delta component). Jitter correction works well on some PCs and not others, depending on how bad the jitter is. It's kinda all down to personal preference (even VSync is down to personal preference - I prefer it on, but you cannot rely on it being on as it may be forced off in the drivers). The main thing is YOU MUST use some kind of timing instead of none, so pick your method, stick with it, and then just get the game done! :-)

To be clear, I'm not saying my method is better than anyone else's, it's just another method that works for me that I find easy to understand and implement, and it's tried and tested on countless systems. Horses for Courses as they say.


Chroma(Posted 2009) [#72]
Excellent analysis there ImaginaryHuman.

And basically what you're doing, GA, is sampling the delta time over a specific number of cycles and averaging it into one delta time, thereby 'theoretically' eliminating huge jumps in delta caused by background apps eating up CPU cycles. You're storing the most recent 10 or 20 delta times, adding them together and dividing by the number of samples. I did the same thing about 7 years ago. I think sometimes you try to be too mysterious in your methods hehe... I can assure you what you're doing is nothing new. :)

I still think the Gaffer way is the best, as you can run the logic at a fixed rate and it will also tween the graphics if the render fps dips below the logic fps. Also, you say tweening can cause collisions to fail. All tweening does is interpolate the rendering, NOT the logic. So if you're running your logic at 200 cycles then you most likely won't miss a collision, and with the rendering running at full speed you most likely won't inadvertently draw an object inside another object. So you're avoiding tweening for no reason, and it would probably help smooth out your graphics even more. Give it a shot. The best that could happen is your graphics get even smoother, and the worst is... you stick with your current timing code.

Btw, I despise vertical tearing too and also use Flip 1.


ImaginaryHuman(Posted 2009) [#73]
Thanks. And GreyAlien, sorry I spelt your nickname as Gray with an A - it's one of those international differences.

I agree that there are different ideal solutions depending on what type of game you're doing and what kinds of calculations it needs to do.

The downside of tweening is that it complicates drawing operations, because you now have to do a lot of `logic` calculations right in the middle of doing rendering. Keeping the logic and rendering separate can make things a bit simpler to understand and implement. Rendering code can get a bit complex sometimes, so maybe it's easier to keep the logic all in one place, and maybe that's also a bit cleaner in terms of being object oriented.

The upside is that if you can put in the time to make it work, fixed rate plus tweening is about as good as it gets. But let's not forget that if you can make an efficient game which under-uses the available cpu/gpu power then you can just use Flip 1 as your timing and be done with it, so long as the vertical sync is working of course.

Opting for `tearing` to be a feature of the display in a game is, to me, a bit pointless considering all the effort the system has to go to in order to draw to a hidden backbuffer and flip it to the front. You might as well use a single-buffered display, save on the flip, save on needing to do a clearscreen, and be able to use dirty rects and preserve the background. It would still tear much like a Flip 0 screen but it'd be even faster. The only drawback is the player can see `the drawing process` mid-flow.

I think for quality purposes you've gotta go with Flip 1. The only thing I don't like about Flip 1 is that if your framerate drops below the hz rate of the display, it *halves* the framerate. Then the game is sitting in a loop for almost a whole frame doing nothing, waiting for the next vblank. This is also where triple buffering could help - you could start drawing the next frame while the current one is still waiting to flip. It would be nice to have a command that tells you how close the system is to doing a vsync in terms of milliseconds, so you could decide whether to try to call Flip 1 or whether it's too late and to do a Flip 0 then start on the next frame.


Grey Alien(Posted 2009) [#74]
@ImaginaryHuman: No worries about Gray, I saw it but declined to comment :-) I often wondered if I should have registered GrayAlienGames and redirected.


HrdNutz(Posted 2009) [#75]
Regarding jitters and stuff: they are caused not only by background processes, but by the application itself as well. Different frames may take different times to render, not to mention that the frame time measurement is based on the previous frame, so it's an estimate to begin with. Just try rendering a random number of things from 0 to 10000 - the jitters will be ridiculous. To help smooth out the estimated frame timing, average your deltas, and look into equalization: basically reject or clamp any deltas that are way out of the ordinary, so you don't end up feeding huge spikes to your averaging algorithm.
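One way to read that 'equalization' suggestion, as a sketch (the 3x threshold is an arbitrary choice):

' Reject obvious spikes before they ever reach the averaging buffer
Function AcceptDelta:Float(rawDelta:Float, avgDelta:Float)
	If rawDelta > avgDelta * 3 Then Return avgDelta   ' a huge spike: ignore it this frame
	Return rawDelta                                   ' otherwise keep the real measurement
End Function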


ImaginaryHuman(Posted 2009) [#76]
That's where triple buffering would help - you can use some of the spare time at the end of a frame to start working on the next frame. But that's not so easy to do in BlitzMax. Also, in most games you're not very likely to go from 0 objects to thousands of objects; more likely you'll have a sustained number of objects plus some increase. That's one of the things to think about when you design the game - if there's going to be a lot of slowdown when there are too many objects then maybe you have to up the system requirements or cut back on the eye candy.


Jur(Posted 2009) [#77]
I can say that jitters are really problematic on some computers, and mine is one of them. Grey Alien's jitter correction makes a big difference in smoothness.
Recently I made a few changes to his delta timing and added an accumulator for tweening, to see how it would work in my project. I had to make a few changes to my code, but not much really. After a few solved bugs, I got my game running with tweening instead of delta timing... and it wasn't really smooth. The cause - jitters! So I used Grey's jitter correction, which had kept movements smooth with delta timing, and found that it does just as fine a job keeping things smooth with tweening (I am now filling the accumulator with averaged times).
Bottom line... tweening or delta timing, neither will give smooth movement (at least on some computers) if you don't have a solution for jitters.


Grey Alien(Posted 2009) [#78]
Very interesting Jur.


ImaginaryHuman(Posted 2009) [#79]
That's good to know. Perhaps some form of jitter correction/time smoothing needs to be a standard part of our timing systems rather than an optional extra.

It was so much easier on the Amiga, when you could disable all multitasking and have full control of a very predictable hardware specification; you could push things to the limit of the hardware and be sure it would never fluctuate into screen lag. See what we gave up in the name of multitasking? :-)

One issue I see with jitter correction is that, in a way, it stretches and contracts logic time, which, like Grey said, has the effect of making objects speed up and slow down - not entirely desirable. Is there a better way?


Grey Alien(Posted 2009) [#80]
Yep, it was even easier in DOS, that was totally smooth too.

So the speed-up/slow-down will only occur after a severe jitter, like a major time-out. Of course you can clamp those delta values like I said, but then the game time is no longer mapped to real time.


ImaginaryHuman(Posted 2009) [#81]
Do you think it is better, following a major interruption, to just let the game time pick up where it left off, or should you be trying to accelerate it to where it should be?


Chroma(Posted 2009) [#82]
Well... Diablo and Diablo II accelerated to where the game should be, so I'd say use that method since it's been used in a hit game.


TaskMaster(Posted 2009) [#83]
If your game is going to network, then it must catch up.

And networking is another thing that Grey's method would have a problem with. If you are using a fixed logic rate, then all players' games will be able to run at the same rate, and sprites (or other objects) will run the same number of steps and be in the same spot. But the minute you do a partial LOGIC update with a partial Delta, you lose that. So I think the high logic rate works, but you should not be doing that extra little partial update in your logic if you are going to do any networking.


Grey Alien(Posted 2009) [#84]
@ImaginaryHuman: yeah, after a big interruption I ignore the massive delta to avoid a huge amount of logic processing to catch up, but after lots of little interruptions it may be worth catching up. You just have to figure out what counts as too long - 10 frames, 50 frames, or whatever.


ImaginaryHuman(Posted 2009) [#85]
Or you could have a separate section of logic to handle network traffic which runs at a lower fixed rate like 16/25/30fps and decouple it from the rest of the logic system so that you're not tied down. ??

It's a good point though, about networking. It's not an area that many people have experience with. I can see that you'd need some kind of `game time` to synchronize the events from other computers. Maybe the server machine should run with fixed rate logic then the clients can be whatever delta/tweened speed they want?


Grey Alien(Posted 2009) [#86]
Yeah not many of us have made networked games so it's a good point and a tricky issue. I expect that syncing multiplayer games is a whole big specialist area with many pitfalls and possible solutions - it's a full-time job for people in AAA game companies. Maybe there are some decent books and free stuff on the web about it.


QuickSilva(Posted 2009) [#87]
Sorry to bring this up again but I`m still struggling to see what is happening with the tweening part, despite Warpy's explanation.

Local tx#=x*tween+oldx*(1.0-tween)


Let`s say that my current x position is 200 and my old x position was 100, and my tween value is 0.5.

In my mind the following should work,

TweenPosition=OldXPosition+(CurrentXPosition-OldXPosition*TweenValue)


So, TweenPosition=100+(200-100*0.5)=150

Am I missing something? Maybe someone could post an alternative to the original code (tx#=x*tween+oldx*(1.0-tween)) to show what is happening in a different manner?

Jason.


Warpy(Posted 2009) [#88]
You got your brackets wrong, but otherwise your equation and mine are the same. The *TweenValue bit needs to be outside the brackets:

TweenPosition = OldXPosition + (CurrentXPosition - OldXPosition) * TweenValue



ImaginaryHuman(Posted 2009) [#89]
So multiplication comes before any of the addition or subtraction, right?

Cus I thought it should be:

TweenPosition = OldPosition + ((CurrentXPosition-OldXPosition) * TweenValue)

otherwise aren't you multiplying the whole result of oldx+(currx-oldx) by the tween value?


Warpy(Posted 2009) [#90]
You're taught the order of precedence rule in school: BIDMAS

Brackets
Indices
Division
Multiplication
Addition
Subtraction


So in QuickSilva's equation, you work out the brackets first:
(CurrentXPosition - OldXPosition) = (200 - 100) = 100

Then do the multiplication:
(CurrentXPosition - OldXPosition)*TweenValue = 100 * 0.5 = 50

And finally do the addition:
OldPosition + ((CurrentXPosition-OldXPosition) * TweenValue) = 100 + 50 = 150



QuickSilva(Posted 2009) [#91]
So in theory my interpretation (apart from the brackets being in the wrong place) should work then?

Jason.


Warpy(Posted 2009) [#92]
Yep


Grey Alien(Posted 2009) [#93]
For some reason it was BEDMAS for me where E = Exponential

Maybe it's a UK thing.


Jesse(Posted 2009) [#94]
USA too, as far as I remember. And so there won't be any confusion, it should be in this order:

()
^
*,/ same priority in any order
+,- same priority in any order

ImaginaryHuman, the outer parentheses work but are not necessary.


QuickSilva(Posted 2009) [#95]
Eureka, I`ve got it! Seriously though, I finally understand everything that has been written here and it feels great! My maths is also getting better by the day and I am trying to learn all of the good stuff that I was never really taught at school. I`m actually really enjoying it now despite my frustrating experiences with it in the past. Maths is fun! BMax is also proving to be a joy to program in. I`m glad that I finally took the plunge and started to learn how to use it.

You guys are great, thanks a lot to everyone that helped :)

Onward to making my first game complete with proper timing...

Jason.


Grey Alien(Posted 2009) [#96]
Nice one dude! It feels great to understand this stuff. I found I used some of my old school maths for a few of my recent games and it was pretty cool to finally use it in "real life" haha. I enjoyed maths at school though, learnt trig to rotate 3D cubes before we were taught it ... was a pretty geeky guy ;-)


Warpy(Posted 2009) [#97]
Maybe it's a UK thing.


err, Jake, I'm from Newcastle...

Actually, I was taught BODMAS, with O for Ordinals, but the way it's on the GCSE syllabus now is BIDMAS, so I went with that.

This thread just keeps on going, doesn't it?


Grey Alien(Posted 2009) [#98]
err, Jake, I'm from Newcastle...
Wasn't sure if you were UK or not; for some reason I thought you might be in the US, don't know why. Anyway, if you live there, no wonder you are into maths, as you probably don't want to go outside ;-p I flew there once from Bristol and saw several famous snooker players on the aeroplane and in the airport, but that was about all.


ImaginaryHuman(Posted 2009) [#99]
Re the precedence of operations, yeah I can see that now. It doesn't read well, in terms of English. I think I prefer explicitly using extra brackets everywhere to make exactly sure things are calculated in the order I want them, ie I'd write it as:

TweenPosition = OldPosition + ((CurrentXPosition-OldXPosition) * TweenValue)

which allows me to read `left-to-right` more or less, ie you start out with OldPosition, then you add to it everything inside the brackets. Within the brackets you take current, you subtract old from it, then you multiply. That's much easier to understand than `skip the old position, go into the brackets, do the bracket part, then jump ahead to multiply, then go back and add`. Maybe it's just my preference, because otherwise it doesn't look intuitive that the multiply would happen earlier.


TaskMaster(Posted 2009) [#100]
I do the same thing ImaginaryHuman. I put parens around it so it works out the way I like.

The biggest gotcha to the whole thing is that the order in which subtraction and addition can be done is arbitrary, so it can sneak up and bite you in the butt.

3 + 4 - 2 - 3 = ?

The order in which this statement is evaluated will change its outcome, and you could end up with different answers.


Jesse(Posted 2009) [#101]
TaskMaster
I don't think so. How do you figure that?


Warpy(Posted 2009) [#102]
TaskMaster - you read from left to right. So, you do 3 + 4, then subtract 2, then subtract 3. Even if you do subtraction first, you do 4 - 2, then subtract 3, then add 3, getting the same answer.


Grey Alien(Posted 2009) [#103]
Yeah I tend to add in "unneeded" brackets to aid readability so there can be no ambiguity about what the processing order is.


Jesse(Posted 2009) [#104]
It might be reasonable when it is a big equation, but for simple ones I find it unnecessary. Using the BMax IDE I find it difficult to work with brackets when there are many, so I try not to add unnecessary ones, but I can go crazy with BLIde.


Grey Alien(Posted 2009) [#105]
Agreed. Too many brackets can also be hard to work out. Sometimes to simplify I make a local variable and work out some of the formula in that before plugging it into the main one. There's no real need to but it's easier to read/maintain. Obviously I wouldn't do this for some critical piece of code that needed to be optimised.


TaskMaster(Posted 2009) [#106]
3+4-2-3

Some compilers work right to left. Others work left to right.

Let's go left to right:
(3+4)-2-3
(7-2)-3
5-3
2

Now, right to left:
3+4-(2-3)
3+(4- -1)
3+5
8


ImaginaryHuman(Posted 2009) [#107]
Ok so we just need to know how BlitzMax is doing it.

I agree about too many brackets getting way confusing sometimes, to even figure out how many brackets to put in and where. But I do like the simplicity of reading left-to-right. And I do a similar thing, GreyAlien, where I'll break it down into parts if it gets too complicated. But I've seen some advanced coders here writing many many parts of a formula all on one line. I find that hard to read, especially with no comments either.
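For what it's worth, it's easy enough to check from BlitzMax itself - the usual precedence applies, with * binding tighter than + and -:

Print 3 + 4 - 2 - 3            ' prints 2
Print 100 + (200 - 100) * 2    ' prints 300 - the multiply happens before the add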


Jesse(Posted 2009) [#108]
TaskMaster, the bottom one works like this, keeping the signs attached to the numbers:
3 + 4 + (-2 - 3)
3 + 4 + (-5)
3 + (4 - 5)
3 + (-1)
2

To express a negative number you have to keep its sign with it, so when you work right to left the signs travel with the numbers - and that is why it still works out the same.


TaskMaster(Posted 2009) [#109]
That is where all of the confusion comes from. And why it is just so much cleaner to include the parens. :)


Warpy(Posted 2009) [#110]
This is why some people think Reverse Polish Notation is better than normal notation. I disagree, but it certainly makes these kinds of things less complicated.


Grey Alien(Posted 2009) [#111]
@Warpy: Did you post in the "What's the Most Nerdish thing you've done" thread? As that may well qualify ;-p Actually I'd classify it as Geekish, because Geek implies skill and Nerd implies uh dumbassness.


Warpy(Posted 2009) [#112]
That is nowhere near the geekiest thing I've done.

(The algorithms section of the code archives belongs to me now, by the way.)


Grey Alien(Posted 2009) [#113]
Agreed that you have exceeded your previous geek limitations. Congrats.

"All your algorithm are belong to us"


QuickSilva(Posted 2009) [#114]
I have my tweening up and running very nicely now, but I still get the slightest little hiccup every now and then in windowed mode. I have seen many posts about this before - is there a solution or is it just a Windows thing? Full screen works great, but the windowed thing bugs me a little, so if I can solve it somehow I would like to. I have tried Flip 0 and Flip 1 with the same results. I was hoping that by adding the tweening this wouldn`t happen.

Jason.


MGE(Posted 2009) [#115]
"Full screen works great but the windowed thing bugs me a little so if I can solve it somehow I would like too."

You'll solve it...and then release your game and 1000's of peeps will go "man...it stutters a bit windowed mode". LOL.. Welcome to PC game coding.


QuickSilva(Posted 2009) [#116]
Thing is, I do not remember Blitz3D doing this in windowed mode, which is why I wanted to ask. If it was purely a PC quirk then I would have the same problem in Blitz3D, would I not?

Another thing is that even Grey Alien's Framework demos show the stuttering in windowed mode now when I compile with BMax 1.32. They were perfectly smooth when compiled with version 1.30. I`m wondering if something may have changed in the latest version.

[Edit] Just downloaded the AOTMG demo from Jake's website and this is silky smooth in windowed mode, so the problem definitely seems to be with the latest BMax. Can anyone else confirm this?

Jason.


ImaginaryHuman(Posted 2009) [#117]
Is it that you're defaulting to threaded builds and using the new garbage collector?

Alternatively, are you giving some time to the operating system to complete its tasks, rather than hogging the CPU and shutting it out, causing it to have to force your app to pause? You should at least either do a Delay 1, or poll the system for events, so that the o/s can get some time to do its thing more spread out rather than in a burst.
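Something like this at the bottom of the main loop is usually enough (just a sketch):

' Give the OS a little breathing room every frame
While PollEvent()                ' drain any pending system/window events
Wend
Delay 1                          ' hand at least one millisecond back to the o/s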


Grey Alien(Posted 2009) [#118]
I would be keen to find out what is causing that stuttering too, hopefully it's the threading and can be turned off.


QuickSilva(Posted 2009) [#119]
@ImaginaryHuman :

I`m not using threads at all so the new garbage collector isn`t being used either.

The code being used is the same code I was using in version 1.30, which didn`t stutter at all using Jake's Framework. I have tried adding a delay to see if it made any difference, just to check, but it doesn`t.

I know that windows are now centered automatically so maybe something else has been tinkered with too?

Jason.


Grey Alien(Posted 2009) [#120]
I know that windows are now centered automatically
Great! Anyone know if that's true of the Mac too? So DX and OpenGL?

Perhaps the VSync or something has been tweaked. This is a bummer and may stop me using V1.32, although V1.32 has some cool fixes by the sounds of things.


ImaginaryHuman(Posted 2009) [#121]
I just meant that if you have `Threaded` selected under the build menu it will use the new garbage collector, regardless of whether you actually use threads for anything.


Grey Alien(Posted 2009) [#122]
I'll test out V1.32 towards the end of next week (going to Seattle for a few days meanwhile). It could be machine related too, like it might be OK before AND after on mine, or bad with both versions on mine. Won't know until I test it. So if someone else can do more tests meanwhile, then great.


QuickSilva(Posted 2009) [#123]
@ImaginaryHuman :

Sorry, I meant to point out that I didn`t have the threaded option on either.

@GreyAlien :

It would be handy if you could check when you get back and let us know your results. In the meantime I will try out a few different things, like going back to version 1.30 and seeing if that changes anything. With regards to the window being centered on the Mac I cannot say (but I`m sure that Mark wouldn`t have added it if it didn`t work there), and yes, it works for both DX and OGL.

A great addition! All we need now is native icon support.

Jason.


QuickSilva(Posted 2009) [#124]
OK, figured out that the problem was not caused by BMax but by me setting the AutoImageFlags to 0, as I was going for a retro/pixel look, i.e. no blurring of images when scaled up.

Can anyone explain why this results in the odd jittering of my images when they move? Also is there a way to avoid this whilst retaining the pixel sharpness of my images?

Jason.


QuickSilva(Posted 2009) [#125]
Back on subject now. I`m adding a few bells and whistles to my timing code now that it`s working well.

First question: should I be adding any jitter correction to the delta value? Will it have any impact when using tweening, or is it redundant in this case? I`m thinking of adding the delta values to an array and using the average from this. Would this work, and if so, how many values would be good to store for an accurate result?

@HrdNutz:

You once quoted this in another thread regarding timing,


Even fixed step loops use some type of Delta Timing as a base, to ensure the right amount of steps/second is executed. And the Delta Time is where jitters occur. The simple way to average Deltas is to keep adding new deltas to a buffer (removing the oldest ones) and using the average of all those as Delta Time. The buffer can scale dynamically depending on execution rate - a .5 second to 1 second buffer works well. There are other ways to do this, but this is the simplest and effective.



What do you mean by a .5 to 1 second buffer, an array with 500 or 1000 places? (1 second = 1000 millisecs) Also how would you scale this to match the execution rate?

Second question, in the Retro 64 method (link above from Foppy) the programmer checks for alt-tab cases like this,

;account for when the user alt-tabs :)
If NumTicks > 20 Then NumTicks = LastNumTicks
LastNumTicks = NumTicks	


I can see the point to this but just wanted to ask how the number 20 is chosen in this case?

Thanks for any further help, it`s all coming together now :)

Jason.


Pete Carter(Posted 2009) [#126]
One of you should write up tutorials on the different timing methods and post them in the BlitzMax tutorials section, because this topic is something most new people don't know about. They end up asking questions because there's almost too much stuff about delta time, frame limiting and tweening to know what's good and what isn't.

Pete


Grey Alien(Posted 2009) [#127]
Few glad the problem was the flag. It's simply that AutoImageFlags 0 loads the image in without FILTEREDIMAGE, and thus the image will not draw at sub-pixel coords, which is required for totally smooth movement.
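For anyone following along, the difference is just in the flags the image ends up loaded with - a quick sketch (the filename is hypothetical):

' Default flags: filtered, so the image can sit at fractional coordinates (smooth but slightly soft)
Local smoothImage:TImage = LoadImage("sprite.png", MASKEDIMAGE | FILTEREDIMAGE)

' No filtering: crisp pixels, but positions effectively snap to whole pixels when drawn
AutoImageFlags 0
Local crispImage:TImage = LoadImage("sprite.png")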


QuickSilva(Posted 2009) [#128]
You can rest easy Grey Alien :) BTW, you mean phew! I think ;)

Jason.


Grey Alien(Posted 2009) [#129]
yep I did mean phew, lol, weird typo...


QuickSilva(Posted 2009) [#130]
We all get those days Grey.

Back on topic, can anyone answer my latest questions in post 125?

Jason.


ImaginaryHuman(Posted 2009) [#131]
Firstly, when you're using tweening you are probably not using delta timing at the same time. So why would you add anything to the delta when you're tweening? Tweening is really to compensate for the difference between `whole logic frame updates` and the precise fractional number of logic frames which should have been done at a given time. I'm not sure how tweening would deal with jittering - maybe it would `just work`? Anyone?

As to the 20-frame averaging, or 20-frame capping, it's just a number picked out of the air which presumably allows normal operation when there is up to that much lag but not more. You could pick another number. You'd probably need to experiment - force a jitter lag and see how it performs.

I'm only replying to this because nobody else did for a while, so sorry if this is not helpful.


QuickSilva(Posted 2009) [#132]
@ImaginaryHuman :

To my understanding, and please correct me if I am wrong here, even fixed rate logic uses some type of delta timing so that it knows how many steps it needs to execute. This is the part that I thought might benefit from a bit of jitter correction, but again I may be wrong. If tweening is involved then maybe it is not needed at all?

Any further input on this would be most helpful,

Jason.


ImaginaryHuman(Posted 2009) [#133]
No, fixed rate logic does not have anything to do with delta. It runs at a fixed rate of n logic cycles per second - where n is an integer, ie whole cycles only. It never does `part of a cycle` like delta timing does.

However, it's when you add tweening - either as an extra interpolation in the *graphics code*, or by copying the current whole-frame logic state into a temporary logic buffer and tweening that, so the graphics code doesn't have to mess with the tweening - that you get into this variable display stuff.

The difference between tweening and delta is that delta is usually applied to the logic code, whereas tweening is applied to the graphics code (unless doing the double-buffered logic I just mentioned).

If you get a jitter when you're doing delta timing you can just adjust the delta by adding a `speedup` or `slowdown` value to it each frame until it's back to `current time`.

If you get a jitter when you're doing fixed rate logic, it's a different ball of wax. By the time you are supposed to be rendering graphics - let's say you were supposed to do 3 logic steps before rendering but only got to do 1 due to a small jitter - your logic is now 2 steps behind. Then along comes your graphics code with its tweening, which is supposed to be emulating a logic rate of 3.2 cycles per frame. Maybe it would then try to use logic data 2 steps old and tween the equivalent of a predicted movement based on old data? Would you then have to smooth the object movements using splines? It could get quite tricky. Delta timing doesn't suffer quite so much because you just multiply everything by the delta value, but especially where you have a high logic hz rate you may have to account for multiple logic steps each time you come to do your tweening.

So basically your tween value gets adjusted by adding the delta jitter value to it somehow, but this might not give you quite such smooth results as delta timing's jitter handling. Anyone else care to share some experience with this?
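To tie that together with what Jur described, here's a sketch (the commented-out function names are hypothetical placeholders) of feeding the accumulator with a smoothed delta and taking the tween value from whatever fraction of a step is left over:

' Fixed steps fed by a (preferably jitter-smoothed) delta; the leftover
' fraction of a step becomes the tween value used when drawing.
Const STEP_MS:Float = 1000.0 / 60            ' 60Hz logic, for example

Local accumulator:Float = 0
Local lastTime:Int = MilliSecs()

' ...then once per rendered frame:
Local now:Int = MilliSecs()
Local rawDelta:Float = now - lastTime
lastTime = now

accumulator :+ rawDelta                      ' or better, the averaged/clamped delta discussed above

While accumulator >= STEP_MS
	' UpdateGame()                           ' one whole fixed logic step
	accumulator :- STEP_MS
Wend

Local tween:Float = accumulator / STEP_MS    ' 0..1, plugs straight into x*tween + oldx*(1-tween)
' RenderGame(tween)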


Chroma(Posted 2009) [#134]
loads the image in without FILTEREDIMAGE, and thus the image will not draw at sub-pixel coords, which is required for totally smooth movement.


What?! If you don't load an image with the FILTEREDIMAGE flag then it won't draw it at fractional coordinates? It'll only draw it at Ints?? Can anyone corroborate this? I'm not anywhere near my computer atm.