[ https://issues.apache.org/jira/browse/SPARK-19255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829715#comment-15829715 ]

Ashok Kumar commented on SPARK-19255:
-------------------------------------

@Sean
Let me put my scenario another way.
Assume each data block is 128 MB and the total data size is 1 PB.
Total number of blocks = 1 PB / 128 MB ~ 8 million blocks.
So we can expect about 8 million tasks to be launched, and in this scenario the issue will occur.
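For concreteness, the block-count arithmetic as a quick Scala sketch (binary units):

    // 1 PB of input split into 128 MB blocks ~ 8 million map tasks.
    val totalBytes = 1L << 50          // 1 PB
    val blockBytes = 128L << 20        // 128 MB per block
    println(totalBytes / blockBytes)   // 8388608, roughly 8 million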

> SQL Listener is causing out of memory in the case of a large number of
> shuffle partitions
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-19255
>                 URL: https://issues.apache.org/jira/browse/SPARK-19255
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>         Environment: Linux
>            Reporter: Ashok Kumar
>            Priority: Minor
>         Attachments: spark_sqllistener_oom.png
>
>
> Test steps:
> 1. CREATE TABLE sample (imei string, age int, task bigint, num double, level
>    decimal(10,3), productdate timestamp, name string, point int) USING
>    com.databricks.spark.csv OPTIONS (path "data.csv", header "false",
>    inferSchema "false");
> 2. set spark.sql.shuffle.partitions=100000;
> 3. select count(*) from (select task, sum(age) from sample group by task) t;
> After running the above query, the number of objects in the map variable
> _stageIdToStageMetrics grows very large; the growth is proportional to the
> number of shuffle partitions.
> Please have a look at the attached screenshot.
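
That growth is easy to model. The sketch below is a simplified, self-contained stand-in for the listener's bookkeeping (the map name comes from the issue; the stub classes and the onTaskEnd signature are made up for illustration and are not Spark's actual API). One metrics entry is retained per completed task, so a single shuffle stage with 100000 partitions retains 100000 entries, and every additional stage multiplies that.

    import scala.collection.mutable

    // Stub for the per-task metric updates the listener keeps (illustrative only).
    case class TaskMetricsStub(accumulatorUpdates: Seq[(Long, Long)])

    // Per-stage bucket: one retained entry per finished task.
    class StageMetricsStub {
      val taskIdToMetrics = mutable.Map[Long, TaskMetricsStub]()
    }

    val _stageIdToStageMetrics = mutable.Map[Int, StageMetricsStub]()

    // Called once per finished task; nothing is evicted until the whole
    // execution is cleaned up, so entries accumulate.
    def onTaskEnd(stageId: Int, taskId: Long, updates: Seq[(Long, Long)]): Unit = {
      val stage = _stageIdToStageMetrics.getOrElseUpdate(stageId, new StageMetricsStub)
      stage.taskIdToMetrics(taskId) = TaskMetricsStub(updates)
    }

    // One shuffle stage with spark.sql.shuffle.partitions = 100000 tasks:
    (0L until 100000L).foreach(t => onTaskEnd(1, t, Seq((0L, 1L))))
    println(_stageIdToStageMetrics(1).taskIdToMetrics.size) // 100000

With 8 million tasks, as in the scenario above, the same bookkeeping would retain 8 million entries, which is where the heap goes.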


