Hi JC,

Sorry for the delay in replying. I'm leaving for a 4-week vacation, but I'll try to help occasionally when I have access to the Internet.

On 6/23/17 09:58, JC Beyler wrote:
Thanks. The following seems to work:

  % cd hotspot; patch -p1 < ../hotspot.patch

Hopefully, I'll be able to build and play with it in order to understand it better.
Yes. I normally do this:

  cd <repo>/hotspot
  hg import ../hotspot.patch

It returns the following errors:

  applying ../hotspot.patch
  patching file src/cpu/x86/vm/macroAssembler_x86.cpp
  Hunk #1 succeeded at 5662 with fuzz 2 (offset 60 lines).
  patching file src/share/vm/gc/g1/g1CollectedHeap.cpp
  Hunk #1 FAILED at 74
  Hunk #2 succeeded at 4316 with fuzz 1 (offset 33 lines).
  Hunk #3 FAILED at 4506
  2 out of 3 hunks FAILED -- saving rejects to file src/share/vm/gc/g1/g1CollectedHeap.cpp.rej
  patching file src/share/vm/gc/g1/g1CollectedHeap.hpp
  Hunk #1 FAILED at 302
  1 out of 1 hunks FAILED -- saving rejects to file src/share/vm/gc/g1/g1CollectedHeap.hpp.rej
  patching file src/share/vm/gc/g1/g1MarkSweep.cpp
  Hunk #1 FAILED at 47
  Hunk #2 FAILED at 249
  2 out of 2 hunks FAILED -- saving rejects to file src/share/vm/gc/g1/g1MarkSweep.cpp.rej
  patching file src/share/vm/gc/parallel/psMarkSweep.cpp
  Hunk #1 FAILED at 50
  Hunk #2 FAILED at 609
  2 out of 2 hunks FAILED -- saving rejects to file src/share/vm/gc/parallel/psMarkSweep.cpp.rej
  patching file src/share/vm/gc/parallel/psParallelCompact.cpp
  Hunk #1 FAILED at 59
  Hunk #2 FAILED at 2167
  2 out of 2 hunks FAILED -- saving rejects to file src/share/vm/gc/parallel/psParallelCompact.cpp.rej
  patching file src/share/vm/gc/shared/collectedHeap.cpp
  Hunk #1 FAILED at 37
  Hunk #2 FAILED at 294
  Hunk #3 FAILED at 314
  Hunk #4 FAILED at 335
  4 out of 4 hunks FAILED -- saving rejects to file src/share/vm/gc/shared/collectedHeap.cpp.rej
  patching file src/share/vm/gc/shared/collectedHeap.hpp
  Hunk #1 succeeded at 146 with fuzz 2 (offset 3 lines).
  patching file src/share/vm/gc/shared/collectedHeap.inline.hpp
  Hunk #1 FAILED at 156
  1 out of 1 hunks FAILED -- saving rejects to file src/share/vm/gc/shared/collectedHeap.inline.hpp.rej
  patching file src/share/vm/gc/shared/genCollectedHeap.cpp
  Hunk #1 FAILED at 48
  Hunk #2 FAILED at 721
  2 out of 2 hunks FAILED -- saving rejects to file src/share/vm/gc/shared/genCollectedHeap.cpp.rej
  patching file src/share/vm/gc/shared/referenceProcessor.cpp
  Hunk #1 FAILED at 34
  Hunk #2 FAILED at 256
  2 out of 2 hunks FAILED -- saving rejects to file src/share/vm/gc/shared/referenceProcessor.cpp.rej
  patching file src/share/vm/gc/shared/threadLocalAllocBuffer.cpp
  Hunk #1 FAILED at 28
  Hunk #2 FAILED at 120
  Hunk #3 FAILED at 182
  Hunk #4 FAILED at 305
  4 out of 4 hunks FAILED -- saving rejects to file src/share/vm/gc/shared/threadLocalAllocBuffer.cpp.rej
  patching file src/share/vm/gc/shared/threadLocalAllocBuffer.hpp
  Hunk #1 FAILED at 43
  Hunk #2 FAILED at 51
  Hunk #3 FAILED at 65
  Hunk #4 FAILED at 114
  Hunk #5 FAILED at 158
  5 out of 5 hunks FAILED -- saving rejects to file src/share/vm/gc/shared/threadLocalAllocBuffer.hpp.rej
  patching file src/share/vm/prims/jvmti.xml
  Hunk #1 succeeded at 11719 with fuzz 1 (offset 190 lines).
  patching file src/share/vm/prims/jvmtiEnv.cpp
  Hunk #1 FAILED at 45
  Hunk #2 FAILED at 54
  Hunk #3 succeeded at 1932 with fuzz 2 (offset -17 lines).
  2 out of 3 hunks FAILED -- saving rejects to file src/share/vm/prims/jvmtiEnv.cpp.rej
  patching file src/share/vm/runtime/thread.hpp
  Hunk #1 FAILED at 614
  1 out of 1 hunks FAILED -- saving rejects to file src/share/vm/runtime/thread.hpp.rej
  file src/share/vm/prims/jvmtiHeapTransition.hpp already exists
  1 out of 1 hunks FAILED -- saving rejects to file src/share/vm/prims/jvmtiHeapTransition.hpp.rej
  file src/share/vm/runtime/heapMonitoring.cpp already exists
  1 out of 1 hunks FAILED -- saving rejects to file src/share/vm/runtime/heapMonitoring.cpp.rej
  file src/share/vm/runtime/heapMonitoring.hpp already exists
  1 out of 1 hunks FAILED -- saving rejects to file src/share/vm/runtime/heapMonitoring.hpp.rej
  file test/serviceability/jvmti/HeapMonitor/MyPackage/Frame.java already exists
  1 out of 1 hunks FAILED -- saving rejects to file test/serviceability/jvmti/HeapMonitor/MyPackage/Frame.java.rej
  file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorFrequentTest.java already exists
  1 out of 1 hunks FAILED -- saving rejects to file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorFrequentTest.java.rej
  file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorOnOffTest.java already exists
  1 out of 1 hunks FAILED -- saving rejects to file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorOnOffTest.java.rej
  file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorRecentTest.java already exists
  1 out of 1 hunks FAILED -- saving rejects to file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorRecentTest.java.rej
  file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorTest.java already exists
  1 out of 1 hunks FAILED -- saving rejects to file test/serviceability/jvmti/HeapMonitor/MyPackage/HeapMonitorTest.java.rej
  file test/serviceability/jvmti/HeapMonitor/libHeapMonitor.c already exists
  1 out of 1 hunks FAILED -- saving rejects to file test/serviceability/jvmti/HeapMonitor/libHeapMonitor.c.rej
  abort: patch failed to apply
It seems we've found the reason (please see above).
I see a couple more emails from you. I'll try to help when you have problems or questions.
To be expected. :)
I agree; it would be better to at least document this behavior.
I need to think more about this. The overall style is different from the one used in JVMTI. Normally, each agent does steps like the following:
- add the capabilities required by the agent (normally in the ONLOAD phase)
- set the agent event callbacks
- enable/disable the event notification mode
- relinquish the capabilities that are no longer needed

In your approach there is no tight association with any agent yet. Also, it seems the heap monitoring initialization/finalization is not separated from starting/stopping the sampling. The heap allocation events are processed by the VM itself (in the GC or JVMTI). It is not yet clear whether this could be adjusted to the JVMTI style, or whether that is really necessary.
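For reference, the usual agent life cycle described above looks roughly like this. This is only a minimal sketch using standard JVMTI calls (it uses the existing GC events as a stand-in, since the proposed heap sampling events have no settled names yet); a real agent needs error handling and would be built against the JDK's jvmti.h:

  /* Minimal JVMTI agent skeleton illustrating the usual life cycle. */
  #include <string.h>
  #include <jvmti.h>

  static void JNICALL
  gc_start(jvmtiEnv *jvmti_env) {
    /* event callback: invoked at the start of every GC */
  }

  JNIEXPORT jint JNICALL
  Agent_OnLoad(JavaVM *vm, char *options, void *reserved) {
    jvmtiEnv *jvmti = NULL;
    if ((*vm)->GetEnv(vm, (void **)&jvmti, JVMTI_VERSION_1_2) != JNI_OK)
      return JNI_ERR;

    /* 1) add the capabilities required by the agent (ONLOAD phase) */
    jvmtiCapabilities caps;
    memset(&caps, 0, sizeof(caps));
    caps.can_generate_garbage_collection_events = 1;
    if ((*jvmti)->AddCapabilities(jvmti, &caps) != JVMTI_ERROR_NONE)
      return JNI_ERR;

    /* 2) set the agent event callbacks */
    jvmtiEventCallbacks callbacks;
    memset(&callbacks, 0, sizeof(callbacks));
    callbacks.GarbageCollectionStart = &gc_start;
    (*jvmti)->SetEventCallbacks(jvmti, &callbacks, sizeof(callbacks));

    /* 3) enable the event notification mode */
    (*jvmti)->SetEventNotificationMode(jvmti, JVMTI_ENABLE,
                                       JVMTI_EVENT_GARBAGE_COLLECTION_START,
                                       NULL);
    return JNI_OK;
  }

  JNIEXPORT void JNICALL
  Agent_OnUnload(JavaVM *vm) {
    /* 4) capabilities no longer needed can be dropped earlier
     *    with RelinquishCapabilities */
  }

The open question is where the proposed sampling start/stop calls would fit into this pattern.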
I do not think it is really important to support multiple agents here. However, it is important to make a choice sooner rather than later, as it impacts the design.

Thanks,
Serguei