[ 
https://issues.apache.org/jira/browse/SPARK-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-17457.
----------------------------------
    Resolution: Incomplete

> Spark SQL shows poor performance for group by and sort by on multiple columns 
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-17457
>                 URL: https://issues.apache.org/jira/browse/SPARK-17457
>             Project: Spark
>          Issue Type: Improvement
>    Affects Versions: 1.4.0
>            Reporter: Sabyasachi Nayak
>            Priority: Major
>              Labels: bulk-closed
>
> In one of our use cases, a Hive query run with Tez takes 45 minutes, but the 
> same query run through Spark SQL using HiveContext takes more than 2 hours. 
> The query has no joins, only group by and sort by on multiple columns.
> spark-submit --class DataLoadingSpark --master yarn --deploy-mode client \
>   --num-executors 60 --executor-memory 16G --driver-memory 4G --executor-cores 5 \
>   --conf spark.yarn.executor.memoryOverhead=2048 \
>   --conf spark.shuffle.consolidateFiles=true \
>   --conf spark.shuffle.memoryFraction=0.5 \
>   --conf spark.storage.memoryFraction=0.1 \
>   --conf spark.io.compression.codec=lzf \
>   --conf spark.driver.extraJavaOptions="-XX:MaxPermSize=1024m -XX:PermSize=512m -Dhdp.version=2.3.2.0-2950" \
>   --conf spark.shuffle.blockTransferService=nio \
>   DataLoadingSpark.jar --inputFile basket_txn.
> The Spark UI shows:
> Input is 500+ GB and shuffle write is also 500+ GB.
> Spark version: 1.4.0
> HDP 2.3.2.0-2950
> 50-node cluster, 1100 vcores
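
For context, the job described above has roughly the shape sketched below: a single aggregation with no joins, run through HiveContext on Spark 1.4 and submitted with the command shown. The original query is not included in the report, so the column names and the output table are assumptions for illustration only; only the class name `DataLoadingSpark` and the input table `basket_txn` come from the report.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Minimal sketch of the reported job shape (Spark 1.4 APIs):
// a group by plus sort by on multiple columns, no joins.
object DataLoadingSpark {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DataLoadingSpark"))
    val hiveContext = new HiveContext(sc)

    // Column names (col_a, col_b, col_c, amount) are hypothetical;
    // the report only says "group by and sort by on multiple columns".
    val result = hiveContext.sql(
      """SELECT col_a, col_b, col_c, SUM(amount) AS total
        |FROM basket_txn
        |GROUP BY col_a, col_b, col_c
        |SORT BY col_a, col_b""".stripMargin)

    // Output table name is also assumed for the sketch.
    result.write.saveAsTable("basket_txn_agg")
  }
}
```

With input and shuffle write both at 500+ GB, a query of this shape is dominated by the shuffle for the aggregation; the `spark.shuffle.*` settings in the submit command are the knobs most relevant to that stage.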



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
