[jira] [Updated] (SPARK-6334) spark-local dir not getting cleared during ALS

2015-04-01 Thread Antony Mayi (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-6334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Antony Mayi updated SPARK-6334:
---
Attachment: gc.png

 spark-local dir not getting cleared during ALS
 --

 Key: SPARK-6334
 URL: https://issues.apache.org/jira/browse/SPARK-6334
 Project: Spark
  Issue Type: Bug
  Components: MLlib
Affects Versions: 1.2.0
Reporter: Antony Mayi
 Attachments: als-diskusage.png, gc.png


 when running bigger ALS training, spark spills loads of temp data into the
 local-dir (in my case yarn/local/usercache/antony.mayi/appcache/... - running
 on YARN from cdh 5.3.2), eventually causing the disks on all nodes to run
 out of space (in my case I have 12TB of available disk capacity before
 kicking off the ALS, but it all gets used and yarn kills the containers when
 they reach 90%).
 even with all the recommended options (configuring checkpointing and forcing GC
 when possible) the local-dir still doesn't get cleared.
 here is my (pseudo)code (pyspark):
 {code:python}
 # running from pyspark, so `sc` already exists; imports added so the snippet is complete
 from pyspark import StorageLevel
 from pyspark.mllib.recommendation import ALS

 sc.setCheckpointDir('/tmp')
 training = sc.pickleFile('/tmp/dataset').repartition(768).persist(StorageLevel.MEMORY_AND_DISK)
 model = ALS.trainImplicit(training, 50, 15, lambda_=0.1, blocks=-1, alpha=40)
 sc._jvm.System.gc()
 {code}
 the training RDD has about 3.5 billion items (~60GB on disk). after about
 6 hours the ALS consumes all 12TB of disk space in local-dir data and the
 job gets killed. my cluster has 192 cores and 1.5TB RAM; for this task I am
 using 37 executors of 4 cores / 28+4GB RAM each.
 this is the graph of the disk consumption pattern showing the space being
 eaten from 7% to 90% during the ALS (90% is when YARN kills the container):
 !als-diskusage.png!
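 for reference, this is roughly what I mean by the "recommended options" above - a
 minimal sketch of the cleaner/checkpoint settings I have been trying; the concrete
 values and the hdfs:///tmp/spark-checkpoints path are only examples, not a verified fix:
 {code:python}
 # sketch only - illustrative values; from the pyspark shell the same settings
 # would be passed as --conf options instead of building a new context
 from pyspark import SparkConf, SparkContext

 conf = (SparkConf()
         .setAppName('als-implicit')
         # make the ContextCleaner block on shuffle cleanup instead of just queueing it
         .set('spark.cleaner.referenceTracking.blocking.shuffle', 'true')
         # blunt periodic metadata cleanup, used here purely as a workaround
         .set('spark.cleaner.ttl', '3600'))
 sc = SparkContext(conf=conf)
 # checkpoint dir on HDFS so RDD lineage (and the shuffle files behind it) can be dropped
 sc.setCheckpointDir('hdfs:///tmp/spark-checkpoints')
 {code}
 the training itself then runs exactly as in the snippet above; the cleaner settings are
 the part I am not sure actually helps, hence this report.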





[jira] [Updated] (SPARK-6334) spark-local dir not getting cleared during ALS

2015-03-14 Thread Antony Mayi (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-6334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Antony Mayi updated SPARK-6334:
---
Attachment: als-diskusage.png

this is the disk usage pattern during ALS - 90% is when YARN kills the 
container:
!als-diskusage.png!




