Ryan Brush had the right answer. If I add the following VM parameter

  -agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=prof.output

a profiling file called prof.output gets created in my working directory. Because I'm running locally, both the mapred.task.profile* options and mapred.child.java.opts are ignored. This makes sense now. Thanks.

On Wed, May 18, 2011 at 6:34 AM, Brush,Ryan <[email protected]> wrote:
> Not familiar with this setup, but I assume this is using the "local"
> runner, which simply launches the job in the same process as your program.
> Therefore no new JVMs are spun up, so the hprof settings in the
> configuration never apply.
>
> The simplest way to fix this is probably to just set the -agentlib:...
> directly on the JVM of your local program, which will include the
> Map/Reduce processing in that process.
>
> On 5/17/11 6:51 PM, "Mark question" <[email protected]> wrote:
>
> >or conf.setBoolean("mapred.task.profile", true);
> >
> >Mark
> >
> >On Tue, May 17, 2011 at 4:49 PM, Mark question <[email protected]> wrote:
> >
> >> I usually do this setting inside my java program (in the run function) as
> >> follows:
> >>
> >> JobConf conf = new JobConf(this.getConf(), My.class);
> >> conf.set("mapred.task.profile", "true");
> >>
> >> then I'll see some output files in that same working directory.
> >>
> >> Hope that helps,
> >> Mark
> >>
> >> On Tue, May 17, 2011 at 4:07 PM, W.P. McNeill <[email protected]> wrote:
> >>
> >>> I am running a Hadoop Java program in local single-JVM mode via an IDE
> >>> (IntelliJ). I want to do performance profiling of it. Following the
> >>> instructions in chapter 5 of *Hadoop: The Definitive Guide*, I added the
> >>> following properties to my job configuration file.
> >>>
> >>> <property>
> >>>   <name>mapred.task.profile</name>
> >>>   <value>true</value>
> >>> </property>
> >>>
> >>> <property>
> >>>   <name>mapred.task.profile.params</name>
> >>>   <value>-agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s</value>
> >>> </property>
> >>>
> >>> <property>
> >>>   <name>mapred.task.profile.maps</name>
> >>>   <value>0-</value>
> >>> </property>
> >>>
> >>> <property>
> >>>   <name>mapred.task.profile.reduces</name>
> >>>   <value>0-</value>
> >>> </property>
> >>>
> >>> With these properties, the job runs as before, but I don't see any
> >>> profiler output.
> >>>
> >>> I also tried simply setting
> >>>
> >>> <property>
> >>>   <name>mapred.child.java.opts</name>
> >>>   <value>-agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s</value>
> >>> </property>
> >>>
> >>> Again, no profiler output.
> >>>
> >>> I know I have HPROF installed because running "java -agentlib:hprof=help"
> >>> at the command prompt produces a result.
> >>>
> >>> Is it possible to run HPROF on a local Hadoop job? Am I doing something
> >>> wrong?
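For anyone landing on this thread later: because the local runner executes the map and reduce code inside the launching JVM, the fix is to attach HPROF to that JVM directly rather than through the job configuration. A minimal launch sketch, where MyJob and myjob.jar are placeholders for your own driver class and jar:

```shell
# Attach HPROF to the launching JVM; with the local runner, the map/reduce
# code runs in this same process, so the agent profiles it too.
# MyJob, myjob.jar, input/, and output/ are placeholders.
java \
  -agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=prof.output \
  -cp myjob.jar:$(hadoop classpath) \
  MyJob input/ output/
```

In an IDE such as IntelliJ, the equivalent is to paste the -agentlib:hprof=... string into the run configuration's VM options field; prof.output then appears in the working directory when the job finishes.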
