Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1480
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-70577766
Let's close this issue pending an update from @watermen
---
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-68792088
@watermen Can you update the patch as @andrewor14 mentioned?
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-55338293
@watermen A recently merged PR may be relevant to your patch: #2138. Since
both coarse-grained and fine-grained executors now clean up their environment in
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-54694581
Can one of the admins verify this patch?
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-50442106
But why is that? The JVM should always call shutdown hooks when it exits.
Is Mesos killing the process?
I'm curious because we might have other behavior that
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-50426714
@mateiz Without this, the DiskStore's shutdown hook is only called when
running Spark in Standalone mode. When running Spark over Mesos in "fine-grained"
mode or
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-50289129
Can you explain what happens without this? I thought the DiskStore's
shutdown hook is still called when the executor exits, so it will still clean
up blocks.
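The mechanism under discussion can be sketched as follows. This is a minimal, hypothetical illustration (the object and method names are not Spark's actual API) of how a JVM shutdown hook can remove an executor's local scratch directory, in the spirit of DiskStore's cleanup:

```scala
import java.io.File

// Hypothetical sketch: clean up a local directory when the JVM exits.
object LocalDirCleanup {
  // Recursively delete a directory tree.
  def deleteRecursively(dir: File): Unit = {
    if (dir.isDirectory) {
      dir.listFiles().foreach(deleteRecursively)
    }
    dir.delete()
  }

  // Register a hook that removes `localDir` on normal JVM exit.
  // Note: shutdown hooks do NOT run if the process is killed with
  // SIGKILL, which is why cleanup can fail to happen under an
  // external scheduler that force-kills executors.
  def registerCleanup(localDir: File): Unit = {
    Runtime.getRuntime.addShutdownHook(new Thread {
      override def run(): Unit = deleteRecursively(localDir)
    })
  }
}
```

This is also why the question of whether Mesos kills the process (rather than letting it exit) matters: a hook registered this way only fires on orderly shutdown.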
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-50289133
Jenkins, test this please
---
GitHub user watermen opened a pull request:
https://github.com/apache/spark/pull/1480
[SPARK-2572] Delete the local dir on executor automatically when using
spark on Mesos.
When running Spark over Mesos in "fine-grained" mode or
"coarse-grained" mode. After the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-49405622
Can one of the admins verify this patch?
---