Aniket Bhatnagar created SPARK-18421:
----------------------------------------

             Summary: Dynamic disk allocation
                 Key: SPARK-18421
                 URL: https://issues.apache.org/jira/browse/SPARK-18421
             Project: Spark
          Issue Type: New Feature
          Components: Spark Core
    Affects Versions: 2.0.1
            Reporter: Aniket Bhatnagar
            Priority: Minor


The dynamic allocation feature allows you to add executors and scale compute 
power. This is great; however, I feel we also need a way to scale storage 
dynamically. Currently, if the local disks cannot hold the spilled/shuffle 
data, the job is aborted (on YARN, the node manager kills the container), 
causing frustration and lost time. In deployments like AWS EMR, it is 
possible to run an agent that adds disks on the fly when it sees the disks 
running out of space, and it would be great if Spark could immediately start 
using the added disks, just as it does when new executors are added.
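For context, a minimal sketch of how scratch directories are wired up today, 
assuming the Scala API and hypothetical mount points /mnt/disk1 and 
/mnt/disk2: the directory list comes from spark.local.dir (or 
SPARK_LOCAL_DIRS; on YARN, yarn.nodemanager.local-dirs takes precedence) and 
is read once at executor startup, which is why a disk mounted later is never 
picked up.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object LocalDirDemo {
  def main(args: Array[String]): Unit = {
    // Shuffle/spill directories are fixed at startup: Spark reads this
    // comma-separated list once and never re-scans it, so a disk added
    // to the machine afterwards is never used for spills.
    val conf = new SparkConf()
      .setAppName("local-dir-demo")
      .setMaster("local[*]")
      .set("spark.local.dir", "/mnt/disk1,/mnt/disk2") // hypothetical mounts
    val sc = new SparkContext(conf)
    // ... job runs; spills can only land on the two directories above ...
    sc.stop()
  }
}
{code}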



