Dear Spark community,

I was watching this 
presentation<https://databricks.com/session/deep-dive-apache-spark-memory-management>
 about Spark memory management.

The speaker talks about how Spark achieves fairness between different tasks in one 
executor (around 12:00). He presents the idea of dynamically assigning memory between 
tasks, and he states that Spark spills other tasks' pages to disk if more tasks begin 
to execute.
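
To check that I understand the policy he describes, here is my own rough sketch of it in Scala (this is just my reading of the talk, not Spark's actual code): with N active tasks, each task can claim at most 1/N of the execution pool and is guaranteed at least 1/(2N) of it before it has to spill or wait.

// Rough sketch of the per-task bounds as I understood the talk; this is my own
// illustration, not code taken from Spark itself.
object FairShareSketch {
  def bounds(poolSize: Long, numActiveTasks: Int): (Long, Long) = {
    val maxPerTask = poolSize / numActiveTasks        // upper cap: 1/N of the pool
    val minPerTask = poolSize / (2 * numActiveTasks)  // guaranteed floor: 1/(2N)
    (minPerTask, maxPerTask)
  }

  def main(args: Array[String]): Unit = {
    // Example: a 4 GB execution pool shared first by 4 tasks, then by 8.
    Seq(4, 8).foreach { n =>
      val (min, max) = bounds(4L * 1024 * 1024 * 1024, n)
      println(s"$n active tasks: each task gets between $min and $max bytes")
    }
  }
}

So when new tasks start, the per-task cap shrinks, which is (as I understood the talk) why memory has to be freed up somehow, e.g. by spilling.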

I have read that tasks in Spark are essentially threads, and in Java we do not have 
the ability to manage the memory of individual threads and enforce memory fairness 
between them. I wonder how Spark achieves this?
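
My rough guess is that Spark does this bookkeeping itself, per task attempt, rather than relying on the JVM: something like the sketch below (all the names are mine, not Spark's), where a shared pool records how many bytes each task holds and makes a requesting task wait until it can be given at least its 1/(2N) share. Is this roughly what happens?

// Hypothetical sketch, not Spark's real classes: per-task memory accounting done
// in user space, with tasks blocking on the pool's monitor instead of relying on
// any JVM-level per-thread memory limit.
class PoolSketch(poolSize: Long) {
  private val perTask = scala.collection.mutable.Map.empty[Long, Long]

  // Try to acquire up to `wanted` bytes for task `taskId`; returns the bytes granted.
  def acquire(taskId: Long, wanted: Long): Long = synchronized {
    perTask.getOrElseUpdate(taskId, 0L)
    var granted = -1L
    while (granted < 0) {
      val n = perTask.size
      val free = poolSize - perTask.values.sum
      val cap = poolSize / n                       // no task may hold more than 1/N
      val floor = poolSize / (2 * n)               // every task is owed at least 1/(2N)
      val grant = math.min(wanted, math.min(free, cap - perTask(taskId)))
      if (grant > 0 || perTask(taskId) >= floor) {
        granted = math.max(grant, 0L)              // caller spills its own data if this is 0
        perTask(taskId) += granted
      } else {
        wait()                                     // below the floor: block until memory is released
      }
    }
    granted
  }

  // Give bytes back and wake up any tasks waiting for their share.
  def release(taskId: Long, bytes: Long): Unit = synchronized {
    perTask(taskId) = math.max(0L, perTask.getOrElse(taskId, 0L) - bytes)
    notifyAll()
  }
}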

I asked this question on Stack Overflow, which you can see 
here<https://stackoverflow.com/questions/68053227/how-spark-achieves-memory-fairness-between-tasks/68063303#68053227>,
 and the answers there suggest that there is no spilling of other tasks' data to disk. 
I am really confused about what is actually going on under the hood.

Thank you very much.

Sincerely,
Hatef Alipoor
