Millisecs() is slow

BlitzMax Forums/BlitzMax Programming/Millisecs() is slow

Zeke(Posted 2007) [#1]
here is speedtest:

[code listing lost from the archive]

deps(Posted 2007) [#2]
Did you take into account that Print isn't really the fastest function on this side of the galaxy?


Chroma(Posted 2007) [#3]
What's this GetTickCount() thing?! And why hasn't someone posted about it before...

Is GetTickCount() better? Can you make a new test without using Print?


Dreamora(Posted 2007) [#4]
Millisecs() is fast, only a few CPU ticks.

But Print is a serious problem if you call it more than a few times per frame, which works out to roughly 250-300 lines per second. Which you quite surely do not do :)


Floyd(Posted 2007) [#5]
MilliSecs() does seem unreasonably slow. I get about 7400 CPU cycles per call.

Here is a simplified version.

StartTime = MilliSecs()

For n = 1 To 1000000
	nothing = MilliSecs()
Next

EndTime = MilliSecs()


Print
Print "Time for one million calls to MilliSecs(): " + (EndTime - StartTime) + "ms"

If your CPU runs at 1GHz then the number of milliseconds for a million calls is the same as the number of cycles per call, because 1GHz is exactly one million cycles per millisecond.

If your CPU runs at 2GHz then multiply by 2.


Jake L.(Posted 2007) [#6]
Is there any reason to call millisecs() more than once or twice a frame? Just wondering why you speedtested millisecs()...

If you're after accuracy, use GetTickCount under Windows, there should be an example in the code archives.


ImaginaryHuman(Posted 2007) [#7]
In most cases you only need to call it once or a few times per frame, since you can store the result in a variable and reuse it.

I mean, I've written a multitasking script-execution virtual machine that uses MilliSecs() for timing, and even though much of its task switching is MilliSecs()-based, it generally doesn't need to call it more than a few times per frame.


Kibo(Posted 2007) [#8]
Given how simple the C source code for the BlitzMax Millisecs() function is (see mod/brl.mod/blitz.mod/blitz_app.c) I would expect any slowness to be either a result of the operating system's timing function itself being slow, or some overhead issue somewhere else. For instance, under Windows, BlitzMax's Millisecs() implementation is one line of code:

int bbMilliSecs(){
	return timeGetTime();
}

Under Mac OS X or Linux, it's a different call followed by a division by 1000. Those routines are all so simple that if it's going too slowly for you, the slowness would have to be coming from something either above or below the level of the Millisecs() function. In other words, if you find any slowness there, it could be the operating system's fault, it could be overhead in storing the values in BlitzMax variables, or it could be the way you've written your testing code (such as the use of Print).

There are some reports that timeGetTime() is imprecise (at least on older computers: http://www.geisswerks.com/ryan/FAQS/timing.html from 2002) and of course there is the famous way the number rolls over every 49 days ( http://msdn2.microsoft.com/en-us/library/aa912626.aspx ).

Floyd, here's a data point: your test code gave me about 160 milliseconds for 1,000,000 calls (on an Intel Mac).


Orca(Posted 2007) [#9]
I would have assumed Max uses QueryPerformanceCounter on Win32. AFAIK it's the highest-resolution, least noisy/latent of the Windows timers.


FlameDuck(Posted 2007) [#10]
I would have assumed Windows only had one timer. I should have guessed that the SPOT rule isn't too popular at Microsoft.


xlsior(Posted 2007) [#11]
I would have assumed Windows only had one timer. I should have guessed that the SPOT rule isn't too popular at Microsoft.


(SPOT = Single Point Of Truth)

I guess that one kind of goes out the window when you come up with a different 'better' API every few years, with backwards-compatibility layers going back to the Jurassic era...