Github user paragpc commented on the pull request:

    https://github.com/apache/spark/pull/11736#issuecomment-196983711
  
    The only alternative I know of is manually copying files to the 
desired backup location. A manual copy doesn't provide real-time 
events, especially for Spark Streaming jobs, which can run for a long 
period of time. Copying files after job completion does not make sense if 
someone wants a real-time backup as the events happen. 
    In a cloud environment, this patch provides a common way for all users to 
configure backup of event logs using existing cloud backup agents, by 
pointing the backup directory at local disks (from where these backup 
agents would pick up the files). At the same time, it still allows users to 
configure the main event log directory of their choice. What do you think?
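    As a rough sketch of what that setup could look like (spark.eventLog.enabled 
and spark.eventLog.dir are existing Spark properties; the backup-directory 
property name below is only a placeholder for whatever this patch ends up 
calling it):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("streaming-job")
      // Existing Spark settings: write event logs to the main, user-chosen location.
      .set("spark.eventLog.enabled", "true")
      .set("spark.eventLog.dir", "hdfs:///spark/event-logs")
      // Placeholder name for the backup directory this patch would add; a cloud
      // backup agent watching the local disk would pick these files up in real time.
      .set("spark.eventLog.backupDir", "file:///var/backup/spark-event-logs")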

