pan3793 commented on code in PR #41709:
URL: https://github.com/apache/spark/pull/41709#discussion_r1239521669
##########
core/src/main/scala/org/apache/spark/util/Utils.scala:
##########
@@ -2287,6 +2287,22 @@ private[spark] object Utils extends Logging with
SparkClassUtils {
}.map(threadInfoToThreadStackTrace)
}
+ /** Return a heap histogram. Used to capture histograms for the web UI */
+ def getHeapHistogram(): Array[String] = {
+ val pid = String.valueOf(ProcessHandle.current().pid())
+ val builder = new ProcessBuilder("jmap", "-histo:live", pid)
Review Comment:
@dongjoon-hyun I mean where to find `jmap`, not environment variables
propagation.
Consider the following case: multiple JDKs may be installed on the machine,
say JDK 8 and JDK 11 installed at
```
/opt/openjdk-8
/opt/openjdk-11
```
and JDK 8's commands are added to `PATH`, i.e. `PATH=/opt/openjdk-8/bin:$PATH`.
If the Spark executor process runs with `JAVA_HOME=/opt/openjdk-11`, then with
an invocation like the one below, the sub-process still searches `PATH` for
`jmap` even though it inherits `JAVA_HOME`, so `/opt/openjdk-8/bin/jmap` will
be used:
```
new ProcessBuilder("jmap", ...)
```
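One way to avoid this mismatch is to resolve the tool relative to the running JVM's own home directory rather than relying on `PATH`. The sketch below is a hedged illustration, not the PR's actual fix; the object and method names (`JmapResolver`, `resolveJmap`) are hypothetical:

```scala
import java.nio.file.Paths

object JmapResolver {
  // Resolve `jmap` from the JVM that this process is actually running on,
  // using the `java.home` system property, so the tool version matches the
  // JVM even when a different JDK's bin directory is first on PATH.
  def resolveJmap(): String = {
    val javaHome = System.getProperty("java.home")
    // On JDK 9+, `java.home` points at the JDK root, so the tool lives at
    // `$java.home/bin/jmap`. (On JDK 8, `java.home` usually points at the
    // bundled JRE, which does not ship `jmap`.)
    val candidate = Paths.get(javaHome, "bin", "jmap").toFile
    if (candidate.isFile) candidate.getAbsolutePath
    else "jmap" // fall back to the PATH lookup if no bundled tool is found
  }
}
```

The resolved absolute path can then be passed as the command in `new ProcessBuilder(...)`, sidestepping the `PATH` shadowing described above.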
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]