Github user LucaCanali commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22218#discussion_r212939411
  
    --- Diff: 
core/src/main/scala/org/apache/spark/executor/ExecutorSource.scala ---
    @@ -17,11 +17,13 @@
     
     package org.apache.spark.executor
     
    +import java.lang.management.ManagementFactory
     import java.util.concurrent.ThreadPoolExecutor
     
     import scala.collection.JavaConverters._
     
     import com.codahale.metrics.{Gauge, MetricRegistry}
    +import com.sun.management.OperatingSystemMXBean
    --- End diff --
    
    Indeed this is a very good point that I had overlooked. I have now directly 
checked, and this appears to work OK on OpenJDK (and on the Oracle JVM, of course). 
In addition, I tested manually with the IBM JDK (IBM J9 VM, Java 1.8.0_181), where 
one would indeed suspect incompatibilities, and surprisingly this appears to 
work in that case too. I believe this may come from recent work by IBM to make 
`com.ibm.lang.management.OperatingSystemMXBean.getProcessCpuTime` compatible 
with `com.sun.management.OperatingSystemMXBean.getProcessCpuTime`. See also 
[this 
link](https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/dcomibmlangmanagementosmxbeaniscputime100ns.html)
    
    I guess that if this is confirmed, we should be fine with a large fraction 
of the commonly used JDKs. In addition, we could handle the exception in case 
`getProcessCpuTime` is not available on the particular platform where the executor 
is running, for example returning the value -1 for this gauge in that case. Any 
thoughts/suggestions on this proposal?
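    To illustrate the fallback idea, here is a minimal hedged sketch (not the 
PR's actual code; the object and method names are made up for illustration): it 
asks `ManagementFactory` for the `com.sun.management.OperatingSystemMXBean` and 
returns -1 if the bean or the method is unavailable on the running JVM.
    
    ```scala
    import java.lang.management.ManagementFactory
    
    object ProcessCpuTimeSketch {
      // Returns process CPU time in nanoseconds, or -1L when the
      // com.sun.management bean is missing or the call fails on this JVM.
      def processCpuTime(): Long =
        try {
          ManagementFactory
            .getPlatformMXBean(classOf[com.sun.management.OperatingSystemMXBean])
            .getProcessCpuTime
        } catch {
          // getPlatformMXBean may return null on exotic JVMs, and the cast or
          // call itself may fail; either way we fall back to the -1 sentinel.
          case _: Throwable => -1L
        }
    }
    ```
    
    A Dropwizard `Gauge[Long]` wrapping this method would then report -1 on 
platforms without the capability instead of failing executor metrics registration.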



---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
