Threads: Too many heap sections
Archives Forums/BlitzMax Bug Reports/Threads: Too many heap sections
---
EDIT: This should be in bug reports! (please move)

I get a "fatal GC error" with the text "too many heap sections" when building and running the following code in threaded mode (always at iteration #121):

SuperStrict
Framework BRL.StandardIO
Import BRL.Math

Local i%
Repeat
	i :+ 1
	GCSuspend
	Local d:Double[] = [0!]
	For Local i% = 1 To 100000
		d = [Sqr( d[0] + i )]
	Next
	GCResume
	Print "Iteration #"+i
Forever

A non-threaded build runs fine.
---
The above was with an SVN build. The current version, 1.32, works until iteration #217, where memory consumption is around 700MB. The same also happens in other apps once memory use is around that level. E.g. this fails at iteration 138:

Framework BRL.StandardIO
Import BRL.LinkedList

Local data:TList = New TList
For Local i% = 1 To 200
	Print i
	For Local j% = 1 To 10000
		data.AddLast New Int[100]
	Next
Next

A bit annoying, given that non-threaded builds can keep going even when I run out of RAM (3GB).
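For scale, here is a rough back-of-the-envelope estimate (my own arithmetic, not from the thread) of the raw array payload the list-building loop above holds at the iteration where it reportedly dies:

```java
public class HeapEstimate {
    public static void main(String[] args) {
        // The loop allocates iterations * 10000 arrays of 100 ints each.
        // An int is 4 bytes, so the payload alone (ignoring headers) is:
        long iterations = 138;                      // where it reportedly fails
        long bytes = iterations * 10000 * 100 * 4;  // 552,000,000 bytes
        System.out.println(bytes / (1024 * 1024) + " MB");  // prints "526 MB"
        // Per-array object headers and GC bookkeeping push the real
        // footprint toward the ~700 MB figure reported above.
    }
}
```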
---
I can confirm this also happens with 1.33, at iteration 138.
---
I noticed this in blitz_array.c (brl.blitz) today, which is probably of some significance:

//***** Note: Not called in THREADED mode.
static void bbArrayFree( BBObject *o ){
	...

Perhaps it's something to do with that?
---
"I noticed this in blitz_array.c (brl.blitz) today, which is probably of some significance:"

This shouldn't affect anything, since the memory for those objects is still allocated through the garbage collector. It therefore gets collected by the GC rather than leaking once references to it are lost, so no finalizer is needed.
---
It seems the GC doesn't get enough time to work between the GCResume and the next GCSuspend on each cycle. This works:

SuperStrict
Framework BRL.StandardIO
Import BRL.Math

Local i%
Repeat
	i :+ 1
	GCSuspend
	Local d:Double[] = [0!]
	For Local i% = 1 To 100000
		d = [Sqr( d[0] + i )]
	Next
	GCResume
	GCCollect()
	Print "Iteration #"+i
Forever

So it does not seem like a bug to me.
---
"So it does not seem like a bug to me."

I'm not sure about the first example, but the second shouldn't crash. And both work in non-threaded mode.
---
How come you keep ending up with ' ' in front of some of your posts? O_o

The following is 100% vague speculation and should be taken with a fraction of a grain of salt.

I think the way the Hans Boehm garbage collector works is that it doesn't just collect junked memory once a buffer is full of pointers to junk memory (which seems, basically, to be how the regular ref-counting GC works). Instead, it traverses pointers and references to pointers (I can't think of a better term) and finds out which objects are separated from any root ('global', in a sense) objects.

Anyhow, if the GC can't keep up with the amount of memory you're allocating there, probably because you're creating tons and tons of tiny little objects it has to follow and check, then it seems likely that you will inevitably have a problem. Thus, the solution is to force the GC to collect junked memory rather than let it do things lazily, as it would by default (how I see it, anyway).

I could be entirely wrong about how it works; I haven't spent a great deal of time looking at the threaded GC. So far it just seems incapable of keeping up with the amount of data it has to deal with and the rate at which you're throwing that data at it.
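To make the "traverse references from the roots" idea concrete, here is a toy sketch of a mark phase over an explicit object graph. All names here are hypothetical illustration; the real Boehm collector scans raw memory conservatively rather than walking typed reference lists like this:

```java
import java.util.*;

public class MarkSketch {
    static class Obj {
        final String name;
        final List<Obj> refs = new ArrayList<>();  // outgoing references
        boolean marked;                            // set by the mark phase
        Obj(String name) { this.name = name; }
    }

    // Mark everything reachable from the given root objects.
    static void mark(List<Obj> roots) {
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (o.marked) continue;
            o.marked = true;
            stack.addAll(o.refs);
        }
    }

    public static void main(String[] args) {
        Obj root = new Obj("root");
        Obj a = new Obj("a"), b = new Obj("b"), junk = new Obj("junk");
        root.refs.add(a);
        a.refs.add(b);
        junk.refs.add(b);  // junk points INTO the live graph...
        mark(Collections.singletonList(root));
        // ...but nothing reachable from a root points at junk,
        // so a sweep phase would reclaim it.
        System.out.println("a live: " + a.marked);        // prints "a live: true"
        System.out.println("junk live: " + junk.marked);  // prints "junk live: false"
    }
}
```

The cost of this traversal grows with the number of live objects the collector has to visit, which is why millions of tiny short-lived allocations are a worst case.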
---
Reading the documentation on the HBGC will probably show you why there are issues. My guess is it might be more efficient to force GCCollect much more often - this might also alleviate some of the long waits some people are having.

One problem comes down to the large number of small objects requiring GC'ing. Ideally you would allocate one block of memory for these and free that block in one go when they are no longer used. But that obviously requires the programmer to think more about what he's doing ;-)
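The "allocate one block instead of many small objects" suggestion can be sketched like this (my own illustration of the idea, using Java to match the other snippet in this thread):

```java
public class PooledInts {
    public static void main(String[] args) {
        int count = 10000, size = 100;

        // Many small objects: 10,000 separate arrays the GC must track.
        int[][] small = new int[count][];
        for (int i = 0; i < count; i++) {
            small[i] = new int[size];
            small[i][0] = i;
        }

        // Pooled: one contiguous block; "array i" is the slice at i*size.
        // The GC sees a single object, and the whole pool dies in one go.
        int[] pool = new int[count * size];
        for (int i = 0; i < count; i++) {
            pool[i * size] = i;  // element 0 of logical array i
        }

        // Both layouts hold the same data.
        long sumSmall = 0, sumPool = 0;
        for (int i = 0; i < count; i++) {
            sumSmall += small[i][0];
            sumPool  += pool[i * size];
        }
        System.out.println(sumSmall == sumPool);  // prints "true"
    }
}
```

The trade-off is exactly the one noted above: indexing into a pool is manual bookkeeping the programmer has to get right, in exchange for giving the collector one object to track instead of thousands.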
---
Calling GCCollect more often, or even in a background thread, does not really help; in a multithreaded particle system approach I ran into massive GC slowdowns with (roughly) more than 60k particles to update... this one reads funny - http://users.notam02.no/~kjetism/rollendurchmesserzeitsammler/README

Edit: But to put it in perspective... I tried this in Java:

public static void main(String[] args) {
	List<int[]> data = new LinkedList<int[]>();
	for (int i = 0; i < 200; i++) {
		System.out.println("Iteration: " + i);
		for (int j = 0; j < 10000; j++) {
			data.add(new int[100]);
		}
	}
}

but...

Iteration: 13
Iteration: 14
Iteration: 15
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at tests.Main.main(Main.java:25)
Java Result: 1
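A quick calculation (my own, not from the post) suggests the Java failure above is simply the JVM hitting its configured maximum heap rather than a collector problem, assuming the run used an older JVM's common 64 MB default for -Xmx:

```java
public class JavaHeapMath {
    public static void main(String[] args) {
        // Payload held after the iteration where the posted run died:
        long arrays  = 15L * 10000;       // 150,000 int[100] arrays
        long payload = arrays * 100 * 4;  // 60,000,000 bytes of ints
        System.out.println(payload / (1024 * 1024) + " MB");  // prints "57 MB"
        // Add ~16 bytes of object header per array plus the LinkedList
        // nodes, and that is roughly a 64 MB heap limit -- so the
        // OutOfMemoryError at iteration 15 is -Xmx, not the collector.
    }
}
```

Raising the limit (e.g. java -Xmx1g) would let the same snippet run much further before failing.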
---
The HBGC does have an option for incremental collection though, which might even out the load. But the last I read, it was still experimental...
---
Rollendurchmesserzeitsammler looks interesting... It says it is a drop-in replacement for HBGC. I wonder if it is much better? |
---
I also get this error (Threads: Too many heap sections) under Windows Vista Home. On Windows XP the app seems to run fine. Weird! :( |
---
"Rollendurchmesserzeitsammler looks interesting... It says it is a drop-in replacement for HBGC. I wonder if it is much better?"

"RollThroughKnifeTimeCollector"? I'm curious how it compares to the current one, given that it can have a pretty massive performance degradation versus the original garbage collector in certain situations...
---
Ehm... it's more like "roll-diameter-time-collector"! The German word "Durchmesser" (diameter) has nothing to do with a knife; "durch Messer" would translate as "through knives".