Speed question regarding Banks


Kryzon(Posted 2009) [#1]
Is it faster to store and retrieve values from Blitz banks rather than...

1) Blitz variables?

2) Type fields?

3) Arrays?


_PJ_(Posted 2009) [#2]
I'm not an expert, and this is really speculation, but I feel confident :)

Aside from handles, strings and so on, the actual process is equivalent, providing you are peeking/poking the right format. I would guess that on compilation the effect is the same: values are pushed and popped off the stack. What IS crucial with respect to overheads here is the size of the stack - large numbers of local variables etc. would increase the stack considerably, so if you can Peek or Poke the exact locations efficiently and in an ordered manner, you can increase the speed.


Yasha(Posted 2009) [#3]
....Unfortunately for Malice, he's wrong.

All three methods are faster than banks. Variables and []-arrays are both located on the stack and are lightning fast (this is why it won't let you re-size or return []-arrays). ()-arrays are ever so slightly slower, but the difference is pretty much negligible and small enough to be cancelled out by the extra multiplication and addition operation you need if you intend to simulate multiple dimensions in a []-array. Type objects aren't stack allocated, and obviously there's an extra operation to dereference the field pointer, so they're slightly slower than plain variables as well, but not by very much - not enough that most people notice.

Banks, however, are significantly slower. Their memory isn't on the stack either - in fact the entire point of the CreateBank() command is to give you a fresh block of contiguous memory to do with as you like. The main thing to note here is that banks can't be accessed with pointers, the way fields can, so there is some extra overhead in that every peek or poke is a function call. Since they're plain memory they also have no native support for strings, but generally nobody using strings cares much about performance anyway. So the answer to the question is no - banks are the slowest of the main ways to handle data. However, that's still a pretty tiny speed difference and I have yet to run into many situations myself where the flexibility of using banks didn't far outweigh their barely-noticeable speed hit.
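
A rough way to see this for yourself is a small timing loop. The sketch below is illustrative only - the loop count and names (LOOPS, TThing, parens, square) are arbitrary, the numbers will vary by machine, and Debug mode exaggerates everything:

; Rough timing sketch (illustrative only - names and loop count are arbitrary).
; Compares a []-array, a ()-array, a type field and a bank PeekInt doing the
; same amount of work. Run in Release mode for meaningful numbers.

Const LOOPS = 1000000

Type TThing
	Field value%
End Type

Dim parens(0)                      ; ()-array (global, resizable)

Local square[0]                    ; []-array (stack-allocated, fixed size)
Local thing.TThing = New TThing    ; type object (dynamically allocated)
Local bank = CreateBank(4)         ; bank (dynamically allocated, peek/poke only)
Local i, total, t

t = MilliSecs()
For i = 1 To LOOPS
	total = total + square[0]
Next
Print "[]-array:   " + (MilliSecs() - t) + " ms"

t = MilliSecs()
For i = 1 To LOOPS
	total = total + parens(0)
Next
Print "()-array:   " + (MilliSecs() - t) + " ms"

t = MilliSecs()
For i = 1 To LOOPS
	total = total + thing\value
Next
Print "Type field: " + (MilliSecs() - t) + " ms"

t = MilliSecs()
For i = 1 To LOOPS
	total = total + PeekInt(bank, 0)
Next
Print "Bank peek:  " + (MilliSecs() - t) + " ms"

WaitKey
End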


RifRaf(Posted 2009) [#4]
I added a code archive entry some time ago that lets you act on a bank just like it was any other stream, such as an open file. I haven't noticed any real speed hit using it, though - Yasha, I'm sure, is correct that the speed difference is very small.

http://www.blitzbasic.com/codearcs/codearcs.php?code=2412
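
The actual archive entry is at the link above; a very rough sketch of the general idea (hypothetical names, not RifRaf's code) is just a bank paired with a cursor that advances like a file position:

; Very rough sketch of the bank-as-stream idea (hypothetical names, not the
; actual code archive entry): a bank plus a cursor that moves forward on
; every read/write, like a file position.

Type TBankStream
	Field bank%    ; the underlying bank
	Field pos%     ; current offset into the bank, in bytes
End Type

Function CreateBankStream.TBankStream(size%)
	Local bs.TBankStream = New TBankStream
	bs\bank = CreateBank(size)
	bs\pos = 0
	Return bs
End Function

Function BankStreamWriteInt(bs.TBankStream, value%)
	PokeInt bs\bank, bs\pos, value
	bs\pos = bs\pos + 4            ; an Int is 4 bytes
End Function

Function BankStreamReadInt%(bs.TBankStream)
	Local value% = PeekInt(bs\bank, bs\pos)
	bs\pos = bs\pos + 4
	Return value
End Function

; Usage
Local bs.TBankStream = CreateBankStream(8)
BankStreamWriteInt bs, 123
BankStreamWriteInt bs, 456
bs\pos = 0                         ; "seek" back to the start
Print BankStreamReadInt(bs)
Print BankStreamReadInt(bs)
WaitKey
End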


Charrua(Posted 2009) [#5]
Sorry for my ignorance, but what's the difference between being in the stack and being in plain memory? Isn't the stack in memory?
As I see it, both are indexed memory accesses in the end, with the calculation of the index in the middle (from an assembler point of view), aren't they?

edit:
OK, Var1 = Array[index] isn't the same as Var1 = ReturnValueFromFunc(bank, index); from that point of view, accessing a bank is slower.

Still, my question: why is the stack faster than other memory?

Juan


Yasha(Posted 2009) [#6]
There's no particular reason to suggest that the stack necessarily is faster than any other part of memory. This will vary depending on the implementation, anyway. What matters though is that because they're not in the stack, the data stored in banks and type objects can't be accessed directly, and it's this indirection that produces the overhead - a variable in the stack (or not, if the handle is Global) pointing to that data in some way has to be dereferenced before the data itself can be accessed.


Charrua(Posted 2009) [#7]
I don't want to disturb the purpose of this post - sorry, Kryzon.

If I understood correctly, types and banks are dynamically allocated memory and we have to go through some OS function call to get access to them, while Global or stack variables are as if they were in "local memory", and for that reason the former are slower?

Thanks

Juan


Mahan(Posted 2009) [#8]
There might, as suggested, be a small speed hit when using banks, but I'm fairly sure that this is in most cases negligible.


To take a similar example: I once had a colleague at a previous workplace who wrote a very long function/method (about 2.5k lines). It was filled with cases and ifs to manage all the logic and was therefore mostly unreadable. When I asked why this logic wasn't broken down into small pieces/functions, he said it was because all function calls have overhead. And this was a function with both file access and DB lookups etc. After this behemoth was rewritten into ~20 smaller functions, the speed loss was at best theoretical and of course not even measurable.

Moral of the story: never optimize at first. Just write clear and easily understandable code - write it clear as water! Afterwards, if you need to adjust something for speed, the code will be much easier to adapt to that new need. Trying to adapt heavily optimized code is often not that easy.


Yasha(Posted 2009) [#9]
Mahan makes a very good point here. The speed differences I mentioned above are on the order of one or two nanoseconds; unless you're writing a software renderer or physics engine you really aren't ever going to notice the difference between any of the methods suggested at the top.

Just to clarify the theoretical perspective for Charrua though: actual variable names are bound directly to memory locations, either an absolute position in the case of a Global variable, or a position relative to the start of the current stack frame in the case of a local variable. These are the fastest kind of access because they're completely direct. Dynamically allocated memory (banks and types) is a step behind; the value held under the variable name is just the location where the actual data can be found, because dynamically allocated objects are going to be in some randomly assigned space. This means that getting something out of a type is a two-step process: you get the memory location from the variable itself, then go there to access the data. For banks it's a three-step process as there's also a function call involved.
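
As a purely illustrative annotation of those steps (names like TPoint, g, l, p and v are arbitrary, and the comments describe the conceptual access steps rather than the actual code the compiler generates):

; Illustrative only - the comments describe the conceptual access steps,
; not the actual code the compiler generates.

Type TPoint
	Field x%
End Type

Global g% = 1                 ; bound to a fixed address: one direct read
Local  l% = 2                 ; bound to an offset in the stack frame: also one direct read

Local p.TPoint = New TPoint
p\x = 3                       ; two steps: read the handle p, then the field at that address

Local bank = CreateBank(4)
PokeInt bank, 0, 4
Local v% = PeekInt(bank, 0)   ; three steps: read the handle, make the function call,
                              ; then read the memory inside PeekInt

Print g + l + p\x + v         ; prints 10
WaitKey
End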


Kryzon(Posted 2009) [#10]
Interesting info you got in your first post, Yasha.

Moral of the story: never optimize at first. Just write clear and easily understandable code - write it clear as water! Afterwards, if you need to adjust something for speed, the code will be much easier to adapt to that new need. Trying to adapt heavily optimized code is often not that easy.

That's a nice guideline.

Thank you all for the input!


_PJ_(Posted 2009) [#11]
Thanks for the clarification, Yasha. As I said, it was only conjecture, and I appreciate your response - now I know better :D

As for optimisation AFTER the code has been 'completed' - yeah, this is something I try to do. Trying to optimise as you go leads to the code looking confusing, and it can possibly end up worse if you 'lose track' of where you are.
A very good tip indeed.


Charrua(Posted 2009) [#12]
clear as water!

Thanks

Juan