[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119938547

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,29 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
    --- End diff --

    this doesn't return a `None`, but the doc is still correct about the behavior.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119938185

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,29 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
    --- End diff --

    Why remove this line instead of fixing the doc?
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/17617
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119517532

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,30 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    +    val baseline = (Thread.currentThread().getId, f())
    +
    +    new Function0[Long] {
    +      private val bytesReadMap = new mutable.HashMap[Long, Long]()
    +
    +      /**
    +       * Returns a function that can be called to calculate Hadoop FileSystem bytes read.
    --- End diff --

    Done.
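[Editor's note] For readers following the thread, below is a self-contained sketch of the callback pattern this diff converges on, assembled from the fragments quoted in this digest. It is a hedged reconstruction, not the verbatim merged code: per-thread bytesRead values are keyed by thread id, and both the update and the sum run under one lock, addressing the atomicity concern raised later in the thread.

```scala
import scala.collection.JavaConverters._
import scala.collection.mutable

import org.apache.hadoop.fs.FileSystem

object BytesReadCallbackSketch {
  def getFSBytesReadOnThreadCallback(): () => Long = {
    // Reads the *calling* thread's FileSystem counters at invocation time.
    val f = () => FileSystem.getAllStatistics.asScala
      .map(_.getThreadStatistics.getBytesRead).sum
    // Baseline taken on the thread that creates the callback.
    val baseline = (Thread.currentThread().getId, f())

    new Function0[Long] {
      // Latest bytesRead observed for each thread that invoked the callback.
      private val bytesReadMap = new mutable.HashMap[Long, Long]()

      // May be invoked from multiple threads; the put and the sum are done
      // under the same lock so the snapshot is consistent.
      override def apply(): Long = bytesReadMap.synchronized {
        bytesReadMap.put(Thread.currentThread().getId, f())
        bytesReadMap.map { case (thread, bytes) =>
          bytes - (if (thread == baseline._1) baseline._2 else 0)
        }.sum
      }
    }
  }
}
```

The baseline subtraction applies only to the creating thread, because only that thread had accumulated counters when the callback was created.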
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119517540

    --- Diff: core/src/test/scala/org/apache/spark/metrics/InputOutputMetricsSuite.scala ---
    @@ -319,6 +319,37 @@ class InputOutputMetricsSuite extends SparkFunSuite with SharedSparkContext
         }
         assert(bytesRead >= tmpFile.length())
       }
    +
    +  test("input metrics with old Hadoop API in different thread") {
    +    val bytesRead = runAndReturnBytesRead {
    +      sc.textFile(tmpFilePath, 4).mapPartitions { iter =>
    +        val buf = new ArrayBuffer[String]()
    +        ThreadUtils.runInNewThread("testThread", false) {
    +          iter.flatMap(_.split(" ")).foreach(buf.append(_))
    +        }
    +
    +        buf.iterator
    +      }.count()
    +    }
    +    assert(bytesRead != 0)
    --- End diff --

    Done.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119495777

    --- Diff: core/src/test/scala/org/apache/spark/metrics/InputOutputMetricsSuite.scala ---
    @@ -319,6 +319,37 @@ class InputOutputMetricsSuite extends SparkFunSuite with SharedSparkContext
         }
         assert(bytesRead >= tmpFile.length())
       }
    +
    +  test("input metrics with old Hadoop API in different thread") {
    +    val bytesRead = runAndReturnBytesRead {
    +      sc.textFile(tmpFilePath, 4).mapPartitions { iter =>
    +        val buf = new ArrayBuffer[String]()
    +        ThreadUtils.runInNewThread("testThread", false) {
    +          iter.flatMap(_.split(" ")).foreach(buf.append(_))
    +        }
    +
    +        buf.iterator
    +      }.count()
    +    }
    +    assert(bytesRead != 0)
    --- End diff --

    this assert is unnecessary.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119495029

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,30 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    +    val baseline = (Thread.currentThread().getId, f())
    +
    +    new Function0[Long] {
    +      private val bytesReadMap = new mutable.HashMap[Long, Long]()
    +
    +      /**
    +       * Returns a function that can be called to calculate Hadoop FileSystem bytes read.
    --- End diff --

    move these comments before `new Function0[Long]` or before `def getFSBytesReadOnThreadCallback`.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119301228

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -21,8 +21,10 @@ import java.io.IOException
     import java.security.PrivilegedExceptionAction
     import java.text.DateFormat
     import java.util.{Arrays, Comparator, Date, Locale}
    +import java.util.concurrent.ConcurrentHashMap
    --- End diff --

    nit: unneeded import.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119276245

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    +    val baseline = (Thread.currentThread().getId, f())
    +    val bytesReadMap = new ConcurrentHashMap[Long, Long]()
    +
    +    () => {
    +      bytesReadMap.put(Thread.currentThread().getId, f())
    +      bytesReadMap.asScala.map { case (k, v) =>
    --- End diff --

    I see. Let me fix it.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119275824

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    +    val baseline = (Thread.currentThread().getId, f())
    +    val bytesReadMap = new ConcurrentHashMap[Long, Long]()
    +
    +    () => {
    --- End diff --

    That's a good idea, let me change the code.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119275510

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    +    val baseline = (Thread.currentThread().getId, f())
    +    val bytesReadMap = new ConcurrentHashMap[Long, Long]()
    +
    +    () => {
    --- End diff --

    I think it's better to create an anonymous `Function0` instance, treat `bytesReadMap` as a member variable, and document the multi-thread semantics of the `apply` method.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119275374

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    +    val baseline = (Thread.currentThread().getId, f())
    +    val bytesReadMap = new ConcurrentHashMap[Long, Long]()
    +
    +    () => {
    +      bytesReadMap.put(Thread.currentThread().getId, f())
    +      bytesReadMap.asScala.map { case (k, v) =>
    --- End diff --

    This is not atomic; shall we synchronize on `bytesReadMap` when calculating the sum?
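[Editor's note] To make the concern concrete, here is a small hedged illustration (not PR code): iteration over a `ConcurrentHashMap` is only weakly consistent, so a `put` racing with the sum can produce a total that mixes pre- and post-update values; holding one lock around both the update and the sum yields a consistent snapshot, which is the approach the sketch earlier in this digest adopts.

```scala
import java.util.concurrent.ConcurrentHashMap

import scala.collection.JavaConverters._

object AtomicSumSketch {
  private val bytesReadMap = new ConcurrentHashMap[Long, Long]()

  // Racy: a put() from another thread during iteration may or may not be seen.
  def unsafeSum(): Long = bytesReadMap.asScala.values.sum

  // Consistent, provided every writer also synchronizes on bytesReadMap.
  def record(bytes: Long): Unit = bytesReadMap.synchronized {
    bytesReadMap.put(Thread.currentThread().getId, bytes)
  }

  def safeSum(): Long = bytesReadMap.synchronized {
    bytesReadMap.asScala.values.sum
  }
}
```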
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r119274153

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -143,14 +144,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    --- End diff --

    let's update the document to say that the returned function may be called from multiple threads.
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r118830572

    --- Diff: core/src/test/scala/org/apache/spark/metrics/InputOutputMetricsSuite.scala ---
    @@ -319,6 +319,45 @@ class InputOutputMetricsSuite extends SparkFunSuite with SharedSparkContext
         }
         assert(bytesRead >= tmpFile.length())
       }
    +
    +  test("input metrics with old Hadoop API in different thread") {
    +    val bytesRead = runAndReturnBytesRead {
    +      sc.textFile(tmpFilePath, 4).mapPartitions { iter =>
    +        val buf = new ArrayBuffer[String]()
    +        val thread = new Thread() {
    +          override def run(): Unit = {
    +            iter.flatMap(_.split(" ")).foreach(buf.append(_))
    +          }
    +        }
    +        thread.start()
    +        thread.join()
    +
    +        buf.iterator
    +      }.count()
    +    }
    +    assert(bytesRead != 0)
    +    assert(bytesRead >= tmpFile.length())
    +  }
    +
    +  test("input metrics with new Hadoop API in different thread") {
    +    val bytesRead = runAndReturnBytesRead {
    +      sc.newAPIHadoopFile(tmpFilePath, classOf[NewTextInputFormat], classOf[LongWritable],
    +        classOf[Text]).mapPartitions { iter =>
    +        val buf = new ArrayBuffer[String]()
    +        val thread = new Thread() {
    --- End diff --

    nit: Same as above, we could rewrite this to:

    ```
    ThreadUtils.runInNewThread("TestThread") {
      iter.map(_._2.toString).flatMap(_.split(" ")).foreach(buf.append(_))
    }
    ```
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r118830567

    --- Diff: core/src/test/scala/org/apache/spark/metrics/InputOutputMetricsSuite.scala ---
    @@ -319,6 +319,45 @@ class InputOutputMetricsSuite extends SparkFunSuite with SharedSparkContext
         }
         assert(bytesRead >= tmpFile.length())
       }
    +
    +  test("input metrics with old Hadoop API in different thread") {
    +    val bytesRead = runAndReturnBytesRead {
    +      sc.textFile(tmpFilePath, 4).mapPartitions { iter =>
    +        val buf = new ArrayBuffer[String]()
    +        val thread = new Thread() {
    --- End diff --

    nit: We could use `ThreadUtils.runInNewThread()` to make this shorter, like:

    ```
    ThreadUtils.runInNewThread("TestThread") {
      iter.flatMap(_.split(" ")).foreach(buf.append(_))
    }
    ```
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r118809133

    --- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
    @@ -251,7 +251,13 @@ class HadoopRDD[K, V](
           null
         }
         // Register an on-task-completion callback to close the input stream.
    -    context.addTaskCompletionListener{ context => closeIfNeeded() }
    +    context.addTaskCompletionListener { context =>
    +      // Update the bytes read before closing is to make sure lingering bytesRead statistics in
    +      // this thread get correctly added.
    +      updateBytesRead()
    --- End diff --

    As I remember, close can be called from another thread, so I added this to make sure lingering bytesRead in the task running thread gets recorded (some bytes can be read while creating the InputFormat). Also, there is no harm in calling `updateBytesRead` again.
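[Editor's note] For context, a minimal sketch of the listener registration being discussed; `updateBytesRead` and `closeIfNeeded` are the helper names from the diff above, passed in as parameters here only to keep the sketch self-contained.

```scala
import org.apache.spark.TaskContext

object CompletionListenerSketch {
  def registerCleanup(
      context: TaskContext,
      updateBytesRead: () => Unit,
      closeIfNeeded: () => Unit): Unit = {
    context.addTaskCompletionListener { _ =>
      // Flush lingering bytesRead recorded on this thread before closing,
      // since close() may run on a different thread than the reads.
      updateBytesRead()
      closeIfNeeded()
    }
  }
}
```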
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r118808801

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -142,14 +143,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    --- End diff --

    In the previous code, the `threadStats` capture and the `f` function could run in two different threads, so the metrics we got could be wrong.
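[Editor's note] In code, the difference being described is a one-line subtlety (a hedged sketch restating the quoted diff; `getThreadStatistics` returns per-thread counters for whichever thread calls it):

```scala
import scala.collection.JavaConverters._

import org.apache.hadoop.fs.FileSystem

object EagerVsDeferredCapture {
  // Old: the per-thread StatisticsData objects are captured once, on the
  // thread that builds the callback, so every later invocation -- even from
  // a different reader thread -- sums the creating thread's counters.
  val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
  val oldCallback: () => Long = () => threadStats.map(_.getBytesRead).sum

  // New: the lookup is deferred to invocation time, so each call observes
  // the counters of the thread that invokes it.
  val newCallback: () => Long =
    () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
}
```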
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r118805573

    --- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
    @@ -251,7 +251,13 @@ class HadoopRDD[K, V](
           null
         }
         // Register an on-task-completion callback to close the input stream.
    -    context.addTaskCompletionListener{ context => closeIfNeeded() }
    +    context.addTaskCompletionListener { context =>
    +      // Update the bytes read before closing is to make sure lingering bytesRead statistics in
    +      // this thread get correctly added.
    +      updateBytesRead()
    --- End diff --

    Will this duplicate what we do in `close()`?
[GitHub] spark pull request #17617: [SPARK-20244][Core] Handle incorrect bytesRead me...
Github user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17617#discussion_r118805015

    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala ---
    @@ -142,14 +143,18 @@ class SparkHadoopUtil extends Logging {
        * Returns a function that can be called to find Hadoop FileSystem bytes read. If
        * getFSBytesReadOnThreadCallback is called from thread r at time t, the returned callback will
        * return the bytes read on r since t.
    -   *
    -   * @return None if the required method can't be found.
        */
       private[spark] def getFSBytesReadOnThreadCallback(): () => Long = {
    -    val threadStats = FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics)
    -    val f = () => threadStats.map(_.getBytesRead).sum
    -    val baselineBytesRead = f()
    -    () => f() - baselineBytesRead
    +    val f = () => FileSystem.getAllStatistics.asScala.map(_.getThreadStatistics.getBytesRead).sum
    --- End diff --

    Why are you changing this?