MilliSecs : How do I count seconds


DannyD(Posted 2005) [#1]
Hi,
I want a function to run for x seconds. I believe that MilliSecs() counts from the start of the application. I want to be able to specify that a function runs for x seconds at any point in the program. I could pass this parameter to the function, but the code eludes me. Any suggestions?


Warren(Posted 2005) [#2]
There are 1000 milliseconds in a second, so do something like:

StartTime = MilliSecs()


And then wait until MilliSecs() returns >= StartTime+1000 ...
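
A minimal sketch of that in BlitzMax (the names are just for illustration); note that the next post points out a safer way to write the comparison:

Local StartTime:Int = MilliSecs()
While MilliSecs() < StartTime + 1000 ' wait until a full second has passed
	Delay(1) ' yield a little CPU while waiting
Wend
Print "One second elapsed"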


Gabriel(Posted 2005) [#3]
I believe that MilliSecs() counts from the start of the application.


Just to clear something up (because Warren's answer is really all you need): MilliSecs() does not begin at zero when you start the application. You always have to monitor the changes in MilliSecs() (as Warren demonstrates), not the actual value.


skidracer(Posted 2005) [#4]
But you can't compare the time as Warren suggests. "If MilliSecs()>StartTime+30000" will fail due to MilliSecs() rollover, whereas "If MilliSecs()-StartTime>30000" doesn't.

This is especially true on Linux, where MilliSecs() returns milliseconds since January 1, 1970 or something, so it is more often than not a negative value in BlitzMax's signed world.
Global StartTime=MilliSecs()

' test timeout in code
Local runningtime=(MilliSecs()-StartTime)/1000
If runningtime>30 Then End ' time out after 30 seconds
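
Building on that pattern, a function that runs for a given number of seconds (which is what the original question asks for) might look like this; a rough sketch, with illustrative names:

Function RunForSeconds(seconds:Int)
	Local StartTime:Int = MilliSecs()
	While (MilliSecs() - StartTime) / 1000 < seconds
		' ... do the work that should run during this period ...
		Delay(1)
	Wend
End Function

RunForSeconds(5) ' runs for roughly five seconds, rollover-safe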




Warren(Posted 2005) [#5]
Hmm. Might be cool for Blitz Research to change MilliSecs() then so it returns something consistently useful on all platforms - if possible, I mean.

But I think for the vast majority of cases, what I suggested works fine. Stressing over MilliSecs() rollover is like arguing against using the built-in Rand functions because they aren't, in the strictest scientific sense, absolutely perfectly random. What-ev. ;)


HopeDagger(Posted 2005) [#6]
I'm pretty sure MilliSecs() is as consistent as it can get, since it's just calling the underlying OS function of the same purpose.


Dreamora(Posted 2005) [#7]
WarrenM: Yours does not work when the Windows PC has been up for more than 24 days, as MilliSecs() will be negative by then.

The most consistent version is

Abs(MilliSecs() - start_time) >= wished_time_difference

as this eliminates the overflow problem.

It only breaks when MilliSecs() switches from positive to negative and vice versa, which happens roughly every 24.xxx days ...


Tom Darby(Posted 2005) [#8]
...you should only really use Millisecs() for relative time measurements, anyhow; this sort of eliminates the need for a discrete 'starting' value for Millisecs().

Millisecs() already returns something consistently useful on all three platforms--an accurate value for the number of milliseconds that have elapsed since some (platform-dependent) discrete moment in time. That's all it needs to do; the actual starting point for the count is irrelevant so long as it counts the milliseconds elapsed accurately.


Warren(Posted 2005) [#9]
Yours does not work when the Windows PC has been up for more than 24 days

So my method will work on every Windows machine? Cool. :)


Dreamora(Posted 2005) [#10]
Nope.
Hibernation is not counted as a shutdown, so MilliSecs() will run as if the system had been running all the time :P


FlameDuck(Posted 2005) [#11]
That's all it needs to do; the actual starting point for the count is irrelevant so long as it counts the milliseconds elapsed accurately.
That depends on what you're using it for. You could argue that if MilliSecs() consistently returned a Unix timestamp on all platforms, it could be used for more purposes than simply measuring delta time.

Of course, the really correct way to measure this is by using an accumulator and incremental, relative MilliSecs() readings. That way you can accurately measure time on as large a scale as you want, indefinitely. Well, until your accumulator overflows, anyway.
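
For illustration, such an accumulator might look like this in BlitzMax, using a Long total so it effectively never overflows; the names and the idea of calling an update routine once per frame are assumptions, not an existing API:

Global LastTick:Int = MilliSecs()
Global ElapsedMs:Long = 0 ' 64-bit running total

Function UpdateClock()
	Local now:Int = MilliSecs()
	ElapsedMs :+ (now - LastTick) ' the delta survives 32-bit rollover
	LastTick = now
End Function

Calling UpdateClock() regularly (say once per frame, and certainly far more often than the roughly 24-day rollover period) keeps ElapsedMs an accurate total of elapsed time.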


Michael Reitzenstein(Posted 2005) [#12]
Of course, the really correct way to measure this is by using an accumulator and incremental, relative MilliSecs() readings. That way you can accurately measure time on as large a scale as you want, indefinitely. Well, until your accumulator overflows, anyway.


Exactly. Why can't we have a ProgramMillisecs() command that always starts at 0 from program execution? Not to mention a 64-bit MilliSecs(). It's conceivable that someone could write a resident app in BlitzMax that would overflow a 32-bit counter even if MilliSecs() started from 0.


HopeDagger(Posted 2005) [#13]
FlameDuck: Like what? I can't think of any practical uses such a 'feature' would have.


Hotcakes(Posted 2005) [#14]
One solution is to implement threading and start your own counter, like FlameDuck's accumulator. Oh, wait, threading isn't in. Damn.

Anyway, skid's suggestion is the best I know of, that is:
StartTime=MilliSecs()
TimeSinceStart=MilliSecs()-StartTime
Warren's method would completely break if MilliSecs() returned a negative value (which Linux probably does every time), and Dreamora's version would hiccup at the rollover. skidracer's solution doesn't suffer from those problems, afaik.
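
To see why the subtraction survives the wrap, here is a small illustration with made-up values straddling the 32-bit rollover point:

Local before:Int = 2147483000 ' pretend MilliSecs() shortly before the wrap
Local after:Int = before + 5000 ' overflows past 2^31 and comes out negative
Print after ' a large negative number
Print after - before ' still 5000: the wraparound cancels out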


FlameDuck(Posted 2005) [#15]
FlameDuck: Like what? I can't think of any practical uses such a 'feature' would have.
Synchronizing network traffic within a distributed system, for example.


*(Posted 2005) [#16]
Why not use CurrentTime$() and store that, then check it, and if it has changed, a second has passed?
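
For completeness, a sketch of that idea; CurrentTime$() returns the time as an hh:mm:ss string, so this has only one-second resolution, and the first tick can arrive early if you start partway through a second:

Local last:String = CurrentTime$()
Local seconds:Int = 0
While seconds < 10 ' count ten seconds, then stop
	Local now:String = CurrentTime$()
	If now <> last ' the hh:mm:ss string changes once per second
		last = now
		seconds :+ 1
		Print "Seconds counted: " + String(seconds)
	EndIf
	Delay(10) ' poll gently
Wend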