Github user barnardb commented on the issue:

    https://github.com/apache/spark/pull/17551
  
    > It's still running your code, right? Why can't you add a configuration to 
your own code that tells it to wait some time before shutting down the 
SparkContext?
    
    We're trying to support arbitrary jobs running on the cluster, and to make 
it easy for users to inspect the jobs they run there. This PR was a quick way 
to achieve that, but I agree with the other commenters that it's quite hacky, 
and that the history server would be a nicer solution. Our problem with the 
history server right now is that while the current driver-side 
`EventLoggingListener` + history-server-side `FsHistoryProvider` 
implementations are great for environments with HDFS, they're much less 
convenient in a cluster without a distributed filesystem. I'd propose that I 
close this PR and instead work on an RPC-based listener-provider combination 
to use with the history server; a rough sketch of the idea follows.
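    
    For concreteness, here's a minimal sketch of the driver side of that 
idea: a listener that forwards each event to a history-server endpoint over 
HTTP instead of appending it to an event log file. Everything specific here 
is hypothetical (the class name, the `spark.history.rpc.endpoint` key, the 
wire format); a real implementation would presumably reuse Spark's event-log 
JSON format (`JsonProtocol`), which is currently `private[spark]`.
    
    ```scala
    import java.io.OutputStreamWriter
    import java.net.{HttpURLConnection, URL}
    
    import org.apache.spark.{SparkConf, SparkFirehoseListener}
    import org.apache.spark.scheduler.SparkListenerEvent
    
    // Hypothetical driver-side listener: SparkFirehoseListener funnels every
    // scheduler event through onEvent, and we POST each one to an assumed
    // history-server endpoint instead of writing to a shared filesystem.
    class RpcEventForwardingListener(conf: SparkConf) extends SparkFirehoseListener {
    
      // Assumed config key, for illustration only.
      private val endpoint = conf.get("spark.history.rpc.endpoint")
    
      override def onEvent(event: SparkListenerEvent): Unit = {
        // Placeholder serialization: a real implementation would ship the
        // same JSON that EventLoggingListener writes today.
        post(event.getClass.getName)
      }
    
      private def post(body: String): Unit = {
        val conn = new URL(endpoint).openConnection().asInstanceOf[HttpURLConnection]
        conn.setRequestMethod("POST")
        conn.setDoOutput(true)
        val out = new OutputStreamWriter(conn.getOutputStream, "UTF-8")
        try out.write(body) finally out.close()
        conn.getResponseCode // block until the server has accepted the event
        conn.disconnect()
      }
    }
    ```
    
    Since the class has a `SparkConf` constructor, it could be registered on 
the driver with the existing `spark.extraListeners` setting, and the server 
side would be a custom provider plugged in via `spark.history.provider`, 
replaying the received events much as `FsHistoryProvider` replays log files.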

