Nicolas Poggi commented on SPARK-23310:

[~sitalke...@gmail.com] we have found around 18% higher time in TPC-DS, at 
least at scale factor 1000 (1TB): from 348.3s to 422.6s with the setting ON. 
The regression is in {{ReadAheadInputStream.read}}, and seems to happen when 
reading small amounts of data, due to per-call checks and locks. 
[~juliuszsompolski] can provide more internal details.
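A minimal sketch of the contention pattern (hypothetical class names, not Spark's actual implementation): when a stream wrapper acquires a lock on every {{read()}} call, callers that issue many small reads pay one lock round-trip per handful of bytes, so synchronization overhead dominates the actual copying.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration of per-read locking, mimicking the pattern
// that causes overhead in ReadAheadInputStream when reads are small.
class LockedBuffer {
    private final byte[] data;
    private int pos = 0;
    private final ReentrantLock lock = new ReentrantLock();

    LockedBuffer(byte[] data) { this.data = data; }

    // Every call pays for a lock acquire/release, regardless of how
    // few bytes it returns.
    int read(byte[] dst, int off, int len) {
        lock.lock();
        try {
            if (pos >= data.length) return -1;
            int n = Math.min(len, data.length - pos);
            System.arraycopy(data, pos, dst, off, n);
            pos += n;
            return n;
        } finally {
            lock.unlock();
        }
    }
}

public class SmallReadDemo {
    public static void main(String[] args) {
        LockedBuffer buf = new LockedBuffer(new byte[1024]);
        byte[] one = new byte[1];
        int calls = 0;
        // 1024 one-byte reads => 1024 lock round-trips for 1 KB of data.
        while (buf.read(one, 0, 1) != -1) calls++;
        System.out.println(calls); // 1024
    }
}
```

Reading the same 1 KB in one 1024-byte call would take a single lock round-trip, which is why the overhead only shows up for small reads.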

Overall for TPC-DS SF 1000, we don't see any significant improvement (over 5%) 
with the feature ON in the rest of the queries, although other workloads may 
well benefit.

> Perf regression introduced by SPARK-21113
> -----------------------------------------
>                 Key: SPARK-23310
>                 URL: https://issues.apache.org/jira/browse/SPARK-23310
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Yin Huai
>            Priority: Blocker
> While running all TPC-DS queries with SF set to 1000, we noticed that Q95 
> (https://github.com/databricks/spark-sql-perf/blob/master/src/main/resources/tpcds_2_4/q95.sql)
>  has noticeable regression (11%). After looking into it, we found that the 
> regression was introduced by SPARK-21113. Specifically, ReadAheadInputStream 
> suffers from lock contention. After setting 
> spark.unsafe.sorter.spill.read.ahead.enabled to false, the regression 
> disappears and the overall performance of all TPC-DS queries improves.
> I am proposing that we set spark.unsafe.sorter.spill.read.ahead.enabled to 
> false by default for Spark 2.3 and re-enable it after addressing the lock 
> contention issue.
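Until a default change lands, the same workaround can be applied per job by passing the config at submit time. A sketch (the job class and jar name are placeholders; only the config key comes from this issue):

```shell
# Disable spill read-ahead to sidestep the ReadAheadInputStream lock
# contention described above. Job class/jar below are hypothetical.
spark-submit \
  --conf spark.unsafe.sorter.spill.read.ahead.enabled=false \
  --class com.example.MyJob \
  myjob.jar
```

The flag only affects reads of spilled sorter data, so jobs that never spill should see no difference either way.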

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
