On Wednesday, 24 September 2014 at 11:59:52 UTC, Kagamin wrote:
On Tuesday, 23 September 2014 at 16:47:09 UTC, David Nadlinger wrote:
I was briefly discussing this with Andrei at (I think) DConf 2013. I suggested moving data to a separate global GC heap on casting stuff to shared.

Yes, that sounds expensive. A real example from my work: the client receives a big dataset (~1GB) from the server in a background thread, builds and checks constraints and indexes (which is fairly expensive too; RBTree), and hands it over to the main thread. The client machine is not powerful enough for frequent marshaling of such a big dataset; handling it at all is enough of a problem. If you copy it twice, you have a 3GB working set, and the GC needs roughly a 2x reserve, raising the memory requirement to 6GB; without the dup, the requirement is 1-2GB. Also, when a collection is triggered while copying into the shared GC's heap, what happens, does it stop the world again?

Yes, that's the problem I see with a shared GC. But I think cases like this should be solvable "easily" with a mechanism for transferring responsibility for a memory block between thread-local GCs: the background thread builds the dataset in its own heap, then hands the whole allocation over to the main thread's GC without copying it. The truly problematic cases are shared objects with roots in multiple threads.
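For reference, here is a minimal sketch of how such a handoff looks with today's std.concurrency, where the data is made immutable so the reference may legally cross threads (the array stands in for the dataset; a responsibility-transfer mechanism as proposed above would additionally move the underlying memory's GC ownership rather than leaving it in the spawning thread's heap):

```d
import std.concurrency;
import std.exception : assumeUnique;

// Background thread builds the dataset and hands it off without copying.
void worker(Tid owner)
{
    auto data = new int[](1_000_000); // stand-in for the ~1GB dataset
    foreach (i, ref x; data)          // build/check constraints here
        x = cast(int) i;
    // assumeUnique casts the sole reference to immutable, so sending
    // it transfers the reference instead of marshaling the contents.
    owner.send(assumeUnique(data));
}

void main()
{
    spawn(&worker, thisTid);
    immutable(int)[] data = receiveOnly!(immutable(int)[])();
    // The main thread now holds the data; no copy was made, but the
    // memory still lives wherever the worker's allocator put it.
}
```

The point of the proposed mechanism is precisely the last comment: the reference moves cheaply, but the GC bookkeeping for the block does not, which is what a transfer of responsibility between thread GCs would fix.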
