Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-50443081
Sorry to come back to this after a while. Disk faults can be transient as
well, right? I'm not sure we'd want to exit the executor simply because of
one disk fault.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-50478348
@rxin Thank you for your comment.
On second thought, it's not a good solution, and I noticed the root cause of
this issue is that FetchFailedException is not thrown
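The idea behind that root cause can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (`FetchFailed`, `fetchBlock`), not Spark's actual internals: if an IOException raised while reading a shuffle block is translated into a fetch-failure exception, the scheduler can recompute the lost map output instead of failing the task with a raw I/O error.

```scala
import java.io.IOException

// Hypothetical stand-in for Spark's fetch-failure signal.
case class FetchFailed(blockId: String, cause: Throwable)
  extends Exception(s"Failed to fetch block $blockId", cause)

object ShuffleReadSketch {
  // Wrap low-level I/O errors so the caller can treat them as a
  // recoverable fetch failure rather than a fatal task error.
  def fetchBlock(blockId: String)(read: String => Array[Byte]): Array[Byte] =
    try read(blockId)
    catch {
      case e: IOException => throw FetchFailed(blockId, e)
    }
}
```

A scheduler-side handler could then catch `FetchFailed` and re-register the missing map output, which is the recovery path a bare IOException never triggers.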
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-50506336
Thanks - do you mind closing this one?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-50506697
OK. Instead, please watch this PR: https://github.com/apache/spark/pull/1578
This may be a solution for this issue.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-49379957
@rxin, I noticed some issues related to this one.
In the following three situations, which may be disk faults, the executor
doesn't stop. So tasks assigned to the executor
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-49255082
My PR handles IOException as fatal, but I think that's not good because
IOException is not always fatal.
The problem I want to solve is the IOException thrown when writing
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-49102405
Thanks for submitting this. Is there any way we can construct a unit test
for this as well?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1383#discussion_r14968203
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1223,6 +1223,8 @@ private[spark] object Utils extends Logging {
/** Returns true
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-49104351
OK. I will add a comment for my change.
I will also add a test case for this issue to FailureSuite.scala. Is that
appropriate?
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/1383
[SPARK-1667] Jobs never finish successfully once bucket file missing
occurred
If a job executes a shuffle, bucket files are created in a temporary directory
(named like spark-local-*).
When the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-48800479
Can one of the admins verify this patch?