Hi,

I am trying to set up something to automatically profile my Crunch jobs on a Hadoop cluster.

I have been a long-time user of hprof and "mapred.task.profile" because it is so easy to use on Hadoop (see the snippet below). However, I am now moving away from it:

 - it will be removed in Java 9
 - it suffers from safepoint bias
 - it cannot profile native code
 - gathering metrics other than stack trace samples can be useful, and hprof cannot do that
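For reference, this is what makes hprof so convenient on Hadoop: profiling a few tasks is just a matter of setting job properties, no code and no deployment (shown with the current mapreduce.* names; the older mapred.* aliases work too):

    mapreduce.task.profile=true
    mapreduce.task.profile.params=-agentlib:hprof=cpu=samples,heap=sites,depth=6,force=n,thread=y,verbose=n,file=%s
    mapreduce.task.profile.maps=0-2
    mapreduce.task.profile.reduces=0-2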

I would like to replace hprof with Flight Recorder and/or perf. Unlike hprof, both need to be started and stopped programmatically since there is no glue for them in Hadoop. I can see three options:


1. Hack the app

It can be done using DoFn.initialize/cleanup: either all DoFns invoke the same idempotent code, or dedicated DoFns are inserted at specific points in the pipeline. Both seem horrific and disgusting :) A sketch of the idempotent variant follows.
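To make the idea concrete, here is a minimal sketch. It starts a Flight Recorder recording once per task JVM through the DiagnosticCommand MBean (the programmatic equivalent of "jcmd <pid> JFR.start", available on JDK 7u40+ when the task JVMs run with -XX:+UnlockCommercialFeatures -XX:+FlightRecorder); the class name and recording parameters are made up:

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicBoolean;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import org.apache.crunch.DoFn;

    public abstract class ProfiledDoFn<S, T> extends DoFn<S, T> {

      // One recording per task JVM, however many DoFns run in it.
      private static final AtomicBoolean STARTED = new AtomicBoolean();

      @Override
      public void initialize() {
        super.initialize();
        if (STARTED.compareAndSet(false, true)) {
          try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName dcmd = new ObjectName("com.sun.management:type=DiagnosticCommand");
            // Same as: jcmd <pid> JFR.start name=task filename=/tmp/task.jfr
            server.invoke(dcmd, "jfrStart",
                new Object[] { new String[] { "name=task", "filename=/tmp/task.jfr" } },
                new String[] { String[].class.getName() });
          } catch (Exception e) {
            // Profiling is best effort: never fail the task because of it.
          }
        }
      }
    }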


2. Java agent

Profiling is not tied to Crunch, so any tool can be profiled. The main drawbacks are that the agent must be deployed on all the nodes and that it does not have easy access to metadata like user, job name, stage, etc.

A good example of such an agent is statsd-jvm-profiler, see https://github.com/etsy/statsd-jvm-profiler. They even have a small bridge to push Cascading metadata to the agent, see https://github.com/etsy/statsd-jvm-profiler/blob/master/example/StatsDProfilerFlowListener.scala.
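The shape of such an agent is simple. Here is a minimal sketch (class name and paths are mine) that wraps the task JVM with perf; it would be enabled by adding -javaagent:/opt/profiler/agent.jar to mapreduce.map.java.opts / mapreduce.reduce.java.opts, which is exactly why the jar has to be present on every node:

    import java.lang.instrument.Instrumentation;
    import java.lang.management.ManagementFactory;

    // Requires "Premain-Class: PerfAgent" in the jar manifest.
    public final class PerfAgent {

      public static void premain(String args, Instrumentation inst) throws Exception {
        // Pid of the current JVM; the "pid@host" trick works on JDK 7/8.
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        Process perf = new ProcessBuilder(
            "perf", "record", "-g", "-p", pid, "-o", "/tmp/perf-" + pid + ".data")
            .inheritIO()
            .start();
        // The SIGTERM sent by destroy() makes perf flush its data file
        // when the task JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(perf::destroy));
      }
    }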


3. Dedicated Crunch API

Some code needs to be executed at JVM startup / shutdown. AFAIK it is not currently possible, but it could be added (I am not sure how to implement it on Spark, though). Unlike a javaagent, it does not require deploying anything on the nodes, metadata can be pushed to the services (i.e. ctx), and it is more flexible.
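To illustrate, one possible shape for such an API (nothing like this exists in Crunch today, every name below is an assumption):

    import java.io.Serializable;

    // Hypothetical: the listener would be serialized with the job and
    // invoked once in each task JVM, with Crunch pushing metadata into it.
    public interface JvmLifecycleListener extends Serializable {

      void jvmStarted(JobInfo info);        // start JFR/perf, tag with metadata
      void jvmShuttingDown(JobInfo info);   // stop the recording, ship the file

      // Minimal metadata a profiler would want; names are made up too.
      interface JobInfo {
        String userName();
        String jobName();
        String stageName();
      }
    }

A single pipeline.addJvmLifecycleListener(...) call (again hypothetical) would then be all a user needs.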


I believe that allowing users to easily run code at JVM startup / shutdown would be a useful improvement. Any opinions?

Clément MATHIEU



