Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17551
@barnardb IIRC the HistoryServer is embedded into the Master process only in Spark standalone mode, for convenience. You can always start a standalone HistoryServer process.
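For reference, the standalone History Server that jerryshao mentions rebuilds the UI from event logs written by finished applications. A minimal sketch of that setup, with an illustrative log directory (the path is an assumption, not from this thread):

```shell
# In conf/spark-defaults.conf, have applications write event logs and
# point the History Server at the same directory (path illustrative):
#   spark.eventLog.enabled          true
#   spark.eventLog.dir              hdfs:///spark-logs
#   spark.history.fs.logDirectory   hdfs:///spark-logs

# Start the History Server; it serves the UI on port 18080 by default.
./sbin/start-history-server.sh
```

With this in place, the application UI remains browsable after the SparkContext stops, without keeping the driver alive.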
Github user barnardb commented on the issue:
https://github.com/apache/spark/pull/17551
> It's still running your code, right? Why can't you add a configuration to
> your own code that tells it to wait some time before shutting down the
> SparkContext?
We're trying to support
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17551
> Our use case involves jobs running in a remote cluster without a Spark
> master.
It's still running your code, right? Why can't you add a configuration to your own code that tells it to wait some time before shutting down the SparkContext?
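vanzin's suggestion can be sketched at the application level. The config key and helper names below are hypothetical, not Spark settings; the actual mechanism is simply sleeping before `sc.stop()`, since the UI lives only as long as the SparkContext:

```python
import time

# Hypothetical app-level key (NOT a Spark configuration property):
# how long to keep the driver, and therefore its web UI, alive
# after the job's work is done.
LINGER_KEY = "myapp.ui.lingerSeconds"

def linger_seconds(conf, default=0):
    """Read the linger duration from the application's own config dict."""
    try:
        return int(conf.get(LINGER_KEY, default))
    except (TypeError, ValueError):
        return default

def stop_after_linger(sc, conf):
    """Sleep before sc.stop() so the Spark UI stays reachable for a while."""
    time.sleep(linger_seconds(conf))
    sc.stop()  # the UI goes away once the SparkContext is stopped
```

This keeps the delay entirely in user code, which is the thrust of the objection to adding yet another flag to Spark itself.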
Github user barnardb commented on the issue:
https://github.com/apache/spark/pull/17551
Our use case involves jobs running in a remote cluster without a Spark
master. I agree that the history server is the better way to solve this, but we'd
like to get a solution that doesn't depend on a
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/17551
I agree with @srowen and @jerryshao, this is what the history server is for.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17551
Agree with @srowen, the proposed solution overlaps the key functionality
of the history server. Usually we should stop the app and release its resources as
soon as the application finishes. This
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17551
Can one of the admins verify this patch?
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17551
I would oppose this change. It is what the history server is for. Or
profiling tools, or the simple debugging mechanisms you allude to. It doesn't
belong as yet another flag in Spark