Hi David,

On 2/9/2012 11:36 AM, David Holmes wrote:
On 9/02/2012 2:41 PM, Poonam Bajaj wrote:
Could I have code reviews for the following fix:

7009098: SA cannot open core file larger than 2GB
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7009098
Webrev: http://cr.openjdk.java.net/~poonam/7009098/webrev.00/

The Serviceability Agent on Linux is not able to open core files larger
than 2GB. The fix is to enable 'Large File Support' by compiling
libsaproc.so with -D_FILE_OFFSET_BITS=64. The flag is added to both
agent/src/os/linux/Makefile (used when building SA from agent/make/) and
make/linux/makefiles/saproc.make (used when building SA as part of the
HotSpot build).

This seems ok.

There is one more change with which SA should first load libraries
from the path specified with SA_ALTROOT rather than loading from
the host system.

Where does this come from? It is not related to this CR. Is there another CR indicating the current behaviour is incorrect? Or a spec for it that indicates it is incorrect?

This change became necessary to open the same customer-provided core that required the first change. There is no separate bug filed for it.

There is a document in the hotspot repository, hotspot/agent/doc/transported_core.html, which describes transported cores and the use of SA_ALTROOT.

" The best way to debug a transported core dump is to match the debugger machine to the core dump machine, i.e., have the same kernel and libthread patch level on both machines. mdb (the Solaris modular debugger) may be used to find the kernel patch level of the core dump machine, and the debugger machine may then be brought to the same level.

If the matching machine is "far off" in your network, then

 * consider using rlogin and CLHSDB - SA command line HSDB interface
   <http://cafebabe.uk.oracle.com/lxr/source/hotspot/agent/doc/clhsdb.html>
   or
 * use SA remote debugging and debug the core on the core machine remotely.

But it may not be feasible to find a matching machine to debug on. If so, you can copy all application shared objects (and libthread_db.so, if needed) from the core dump machine into a directory on your debugger machine, say, /export/applibs. Then set the *SA_ALTROOT* environment variable to point to the /export/applibs directory. Note that /export/applibs should either mirror the matching 'full path' of the libraries, i.e., /usr/lib/libthread_db.so from the core machine should be under /export/applibs/usr/lib and /usr/java/jre/lib/sparc/client/libjvm.so from the core machine should be under /export/applibs/usr/java/jre/lib/sparc/client, and so on, or /export/applibs should just contain libthread_db.so, libjvm.so, etc. directly. "

" On Linux, SA parses core and shared library ELF files. SA *does not* use libthread_db.so or rtld_db.so for core dump debugging (although libthread_db.so is used for live process debugging). But you may still face problems with transported core dumps, because matching shared objects may not be in the path(s) specified in the core dump file. To work around this, you can define the environment variable *SA_ALTROOT* to be the directory where the shared libraries are kept. The semantics of this environment variable are the same as for Solaris (please refer above)."

So, if SA_ALTROOT is specified, libraries should be loaded from that path rather than from the paths embedded in the core file.


Thanks,
Poonam


David
-----

