On May 19, 2009, at 10:30 PM, Mridul Muralidharan wrote:


I am still not very convinced about the value of this implementation - particularly considering the advances made since 1.3 in memory allocators and garbage collection.

My fundamental concern is not with the slowness of garbage collection. I am asserting (along with the paper) that garbage collection is not an optimal choice for a large data processing system. I don't want to improve the garbage collector; I want to manage a subset of the memory without it.



The side effects of this proposal are many, and sometimes non-obvious.
For example: implicitly moving young-generation data into the older generation, causing much more memory pressure for gc; fragmentation of memory blocks, causing quite a bit of memory pressure; replicating quite a bit of the garbage collector's functionality; the possibility of bugs with ref counting; etc.

I don't understand your concerns regarding the load on the gc and memory fragmentation. Let's say I have 10,000 tuples, each with 10 fields. Let's also assume that these tuples live long enough to make it into the "old" memory pool, since this is the interesting case where objects live long enough to cause a problem. In the current implementation there will be 110,000 objects that the gc has to manage moving into the old pool, and check every time it cleans the old pool. In the proposed implementation there would be 10,001 objects (assuming all the data fit into one buffer) to manage. And rather than allocating 100,000 small pieces of memory, we would have allocated one large segment. My belief is that this would lighten the load on the gc.
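
To make the object-count comparison concrete, here is a rough sketch of what a buffer-backed tuple might look like (the names are made up for illustration, not the actual proposed classes):

    import java.nio.ByteBuffer;

    // One large shared buffer holds the serialized field data for a whole
    // batch of tuples; each tuple object only records offsets into it.  For
    // the example above, the old generation then sees 10,000 tuple objects
    // plus 1 buffer instead of 110,000 separate objects.
    class BufferBackedTuple {
        private final ByteBuffer data;   // shared among many tuples
        private final int[] offsets;     // start of each field in the buffer
        private final int[] lengths;     // length of each field

        BufferBackedTuple(ByteBuffer data, int[] offsets, int[] lengths) {
            this.data = data;
            this.offsets = offsets;
            this.lengths = lengths;
        }

        // A field is only materialized as its own object when asked for.
        byte[] getField(int i) {
            byte[] copy = new byte[lengths[i]];
            ByteBuffer view = data.duplicate();
            view.position(offsets[i]);
            view.get(copy);
            return copy;
        }
    }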

This does replicate some of the functionality of the garbage collector. Complex systems frequently need to re-implement foundational functionality in order to optimize it for their needs. Hence many RDBMS engines have their own implementations of memory management, file I/O, thread scheduling, etc.

As for bugs in ref counting, I agree that forgetting to deallocate is one of the most pernicious problems of allowing programmers to do memory management. But in this case all that will happen is that an unneeded buffer will get left around. If the system needs more memory, that buffer will eventually get selected for flushing to disk, and it will stay there, since no one will call it back into memory. So the cost of forgetting to deallocate is minor.
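
As a rough sketch of what I mean (again, hypothetical names, not a spec), the managed buffer only needs something like:

    import java.util.concurrent.atomic.AtomicInteger;

    // A buffer managed by reference counting.  If an operator forgets to
    // release() it, nothing breaks: the buffer just stops being touched,
    // the memory manager eventually picks it to spill when memory runs low,
    // and it sits on disk because nobody asks for it again.
    class ManagedBuffer {
        private final AtomicInteger refCount = new AtomicInteger(0);
        private volatile long lastTouched = System.nanoTime();
        private volatile boolean onDisk = false;

        void retain()  { refCount.incrementAndGet(); lastTouched = System.nanoTime(); }
        void release() { refCount.decrementAndGet(); }

        // Used by the manager to pick spill victims (e.g. least recently touched).
        long lastTouchedNanos() { return lastTouched; }

        void spill() {
            // write the bytes to a spill file and drop the in-memory copy
            onDisk = true;
        }
    }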



If the assumption is that the current working set of bags/tuples does not need to be spilled, and anything else can be, then this will pretty much deteriorate to the current implementation in the worst case.

That is not the assumption. There are two issues: 1) trying to spill bags only when we determine we need to is highly error-prone, because we can't accurately determine when we need to and because we sometimes can't dump fast enough to survive; 2) current memory usage is far too high and needs to be reduced.






A much simpler method to gain benefits would be to handle primitives as ... primitives and not through the Java wrapper classes for them. It should be possible to write schema-aware tuples which make use of the primitives specified to take a fraction of the memory required (4 bytes + a null_check boolean for an int + offset mapping, instead of the 24/32 bytes it currently is, etc.).

In my observation, at least 50% of the data in Pig is untyped, which means it's a byte array. Of the 50% whose type people declare or the program determines, probably 50-80% is chararrays and maps. So somewhere under 25% of the data is numeric. Shrinking that 25% by 75% would be nice, but not adequate. And it does nothing to help with the issue of being able to spill in a controlled way instead of only in emergency situations.
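
For completeness, this is roughly the kind of schema-aware tuple I understand the suggestion above to describe (a hypothetical sketch, integer fields only):

    // A tuple that knows all its fields are ints: 4 bytes per value plus a
    // null flag, instead of a 24/32-byte boxed Integer per field.
    class IntOnlyTuple {
        private final int[] values;
        private final boolean[] isNull;

        IntOnlyTuple(int fieldCount) {
            values = new int[fieldCount];
            isNull = new boolean[fieldCount];
            java.util.Arrays.fill(isNull, true);   // fields start out null
        }

        void setInt(int i, int v) { values[i] = v; isNull[i] = false; }

        Integer getInt(int i) { return isNull[i] ? null : values[i]; }
    }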

Alan.
