My goal is to use hprof to profile where the bottleneck is.
Is there any way to do this without modifying and rebuilding the Spark
source code?

I've tried to add "
-Xrunhprof:cpu=samples,depth=100,interval=20,lineno=y,thread=y,file=/home/ubuntu/out.hprof"
to spark-class script, but it can only profile the CPU usage of the
org.apache.spark.deploy.SparkSubmit
class, and can not provide insights for other classes like BlockManager,
and user classes.
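
For context, this is roughly the change I made (a minimal sketch; the
variable name and the exact place in spark-class differ between Spark
versions, so treat JAVA_OPTS here as illustrative):

    # Append the hprof agent to the JVM options assembled by bin/spark-class.
    # JAVA_OPTS is a stand-in for whatever options variable your version uses.
    JAVA_OPTS="$JAVA_OPTS -Xrunhprof:cpu=samples,depth=100,interval=20,lineno=y,thread=y,file=/home/ubuntu/out.hprof"

Since spark-class only launches the SparkSubmit JVM on the driver machine,
this seems to explain why nothing else shows up in the profile.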

Any suggestions? Thanks a lot!

Best Regards,
Jia
