On 10/31/06, Geir Magnusson Jr. <[EMAIL PROTECTED]> wrote:
Are IBM's and Sun's known to work well for production loads?
Don't know. Probably yes. It's claimed that SUN's work is the first parallel compaction design [1] and IBM's work is an improvement [2], while Compressor is a further improvement [3]. Anyway, choosing a high-level design is actually not a big deal. I think the real issues come from details in the implementation (load balance and scalability). So probably I can write two parallel compactors based on [2] and [3], and hopefully find a solution suitable for Harmony with the targeted workloads.

Thanks,
xiaofeng

[1] Christine Flood, Dave Detlefs, Nir Shavit, and Catherine Zhang. Parallel garbage collection for shared memory multiprocessors. In USENIX Java Virtual Machine Research and Technology Symposium (JVM'01), Monterey, CA, April 2001.
[2] Diab Abuaiadh, Yoav Ossia, Erez Petrank, and Uri Silbershtein. An efficient parallel heap compaction algorithm. In ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'04), ACM SIGPLAN Notices, Vancouver, October 2004. ACM Press.
[3] Haim Kermany and Erez Petrank. The Compressor: concurrent, incremental, and parallel compaction. In Proceedings of the 2006 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI'06), pages 354-363, 2006.
Xiao-Feng Li wrote:
> Weldon, the problem is, there are no well-established parallel
> compaction algorithms. So far the best-known works are 1. SUN's work;
> 2. IBM's work; and 3. Compressor. No one knows which one is the best
> for different workloads. We have to identify one algorithm for
> implementation, and at the moment Compressor looks to be the right
> choice, or we write more than one compactor.
>
> Thanks,
> xiaofeng
>
> On 10/30/06, Weldon Washburn <[EMAIL PROTECTED]> wrote:
>> Since the Compressor algorithm was published only this year, perhaps it
>> makes sense to consider it experimental. Perhaps make it a compile-time
>> switch so that folks focused on a production-quality VM don't trip on
>> it. Even assuming the implementation of the Compressor algorithm is bug
>> free, there may be unforeseen performance problems that surface with
>> different workloads.
>>
>> On 10/29/06, Xiao-Feng Li <[EMAIL PROTECTED]> wrote:
>> >
>> > On 10/29/06, Rana Dasgupta <[EMAIL PROTECTED]> wrote:
>> > > Xiao Feng,
>> > > I will read the reference to understand what the Compressor's
>> > > advantages are, and how the algorithm is implemented, thanks.
>> > >
>> > > Even when you have 1GB of physical memory, is there not an
>> > > overhead of page faults?
>> >
>> > Yes, I agree that page faults definitely will be an overhead. I guess
>> > the page-mapping overhead in Compressor is lower than the benefits it
>> > achieves. But yes, we need to evaluate it independently.
>> >
>> > > Is it an option to compact the heap in parts and/or to increase the
>> > > number of passes to reduce the space overhead?
>> >
>> > The key idea of Compressor is to keep the object order during parallel
>> > compaction. There are other algorithms like "mark-copy" which require
>> > less additional copy space, but can't maintain the object order.
>> > In order to enable the parallel compaction of multiple blocks, we
>> > have to assume that the to-space in the worst case is the same size
>> > as the from-space. We can use a to-space with 30% of the size of the
>> > from-space in most compaction collections without problem, but we
>> > need to be prepared for the worst case. A possible solution is to
>> > have a fall-back algorithm for when the to-space is smaller than
>> > required. This is not a new idea; e.g., GCv4.1 employs something
>> > similar and there are some papers available. [1] in ISMM'06 is an
>> > example.
>> >
>> > [1] http://www.cs.purdue.edu/homes/pmcgache/ismm06.pdf
>> >
>> > > Is this significantly better than doing semi-space copying at each
>> > > GC cycle, since one major advantage of compaction (other than
>> > > preserving allocation order) over copying was probably less space
>> > > overhead?
>> >
>> > Yes. The major advantage in my opinion is less physical space
>> > overhead, though it introduces virtual space overhead. If we assume
>> > the same physical space overhead as a semi-space collector, we need
>> > to evaluate the real benefit of object locality to trade off against
>> > the collection pause time.
>> >
>> > > Are we looking for a parallel compaction algorithm for all
>> > > situations, or can we think of choosing at JVM startup based on
>> > > user input, client/server, or OS feedback on the execution
>> > > environment?
>> >
>> > I think some adaptive choice is better. It means we need to provide
>> > the choices first. :-) I guess it's not a big overhead to have two
>> > parallel compactors.
>> >
>> > Thanks,
>> > xiaofeng
>> >
>> > > Sorry for all these questions before reading the book :-)
>> > >
>> > > Rana
>> > >
>> > > > On 10/27/06, Xiao-Feng Li <[EMAIL PROTECTED]> wrote:
>> > > > >
>> > > > > Hi, all, the plan for GCv5 parallel compaction is to apply the
>> > > > > idea of Compressor [1].
>> > > > > But it has an issue I want to discuss with you.
>> > > > > Compressor needs to reserve an unmapped virtual space for
>> > > > > compaction. The size of the reserved part is the same as that
>> > > > > of the copy reserve space in a semi-space collector. This means
>> > > > > that part of the virtual space is unusable for the JVM. In a
>> > > > > typical setting, the wasted part is half the size of the total
>> > > > > compaction space. If we have 1GB of physical memory, the JVM is
>> > > > > OK for Compressor because the virtual space is large enough to
>> > > > > waste half; but if the physical memory is >2GB, Compressor may
>> > > > > have a problem on a 32-bit machine: some of the physically
>> > > > > mapped space might be wasted.
>> > > > >
>> > > > > Any opinion on this?
>> > > > >
>> > > > > Thanks,
>> > > > > xiaofeng
>> > > > >
>> > > > > [1] http://www.cs.technion.ac.il/~erez/Papers/compressor-pldi.pdf
>>
>> --
>> Weldon Washburn
>> Intel Enterprise Solutions Software Division