[ https://issues.apache.org/jira/browse/SPARK-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amit Gupta reopened SPARK-7337:
-------------------------------

I am running it in "local" mode using the Java API. It should spill to disk. I 
can clearly see that the 500 tasks are not created by the next stage, hence the 
OutOfMemoryError.

Please let me know if you need further information. I ran the same logic in 
custom code with 50 partitions and it worked. When I ran FPGrowth on the same 
data with >50 partitions, it failed. Perhaps you need to check why 500 tasks 
are not created for "collect at FPGrowth.scala:131" (the source of the error).
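
For reference, a minimal sketch of this setup with the MLlib Java API (not my 
exact job; the input path and minSupport threshold below are placeholders):

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.fpm.FPGrowth;
import org.apache.spark.mllib.fpm.FPGrowthModel;

public class FPGrowthRepro {
  public static void main(String[] args) {
    // Local mode, as in the report. Everything runs in a single JVM, so
    // the driver heap must hold the intermediate results gathered by the
    // "collect at FPGrowth.scala:131" stage.
    SparkConf conf = new SparkConf()
        .setAppName("FPGrowthRepro")
        .setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Placeholder input: one space-separated transaction (basket) per line.
    JavaRDD<List<String>> transactions = sc
        .textFile("transactions.txt")
        .map(line -> Arrays.asList(line.split(" ")));

    FPGrowthModel<String> model = new FPGrowth()
        .setMinSupport(0.01)    // placeholder support threshold
        .setNumPartitions(500)  // the partition count from the report
        .run(transactions);

    // Materialize the frequent itemsets; the failing stage
    // "flatMap at FPGrowth.scala:150" runs while computing this RDD.
    System.out.println(model.freqItemsets().toJavaRDD().count());

    sc.stop();
  }
}
{code}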

> FPGrowth algo throwing OutOfMemoryError
> ---------------------------------------
>
>                 Key: SPARK-7337
>                 URL: https://issues.apache.org/jira/browse/SPARK-7337
>             Project: Spark
>          Issue Type: Bug
>          Components: MLlib
>    Affects Versions: 1.3.1
>         Environment: Ubuntu
>            Reporter: Amit Gupta
>         Attachments: FPGrowthBug.png
>
>
> When running the FPGrowth algorithm on data several GB in size with 
> numPartitions=500, it throws an OutOfMemoryError after some time.
> The algorithm runs correctly up to "collect at FPGrowth.scala:131", where it 
> creates 500 tasks. It fails at the next stage, "flatMap at FPGrowth.scala:150", 
> where instead of 500 tasks it creates only 17 internally calculated tasks.
> Please refer to the attached screenshot.


