siknezevic commented on pull request #27246:
URL: https://github.com/apache/spark/pull/27246#issuecomment-646931795


   > > Could you please let me know whether it would be OK to hard-code the read buffer size to 1024?
   > 
   > You think the performance is independent of running platforms, e.g., CPU 
arch and disk I/O? I'm not 100% sure that the `1024` value is the best on our 
supported platforms...
   > 
   > > With a 10TB TPCDS data set I tested spilling with query q14a and a buffer size of 1024. Execution with a hard-coded read buffer size is faster by 37% (27 min vs 37 min) compared to the execution where the buffer size is parameterized and the same size of 1024 is used. Query q14a, for the 10TB data set, generates around 180 million joins per partition, and when the buffer size is parameterized, that translates into a 10 min longer execution time.
   > 
   > Why does the parameterized one have so much overhead?
   
   Not sure. It looks like the call into package.scala to read the parameter takes some time, and that time is large enough to cause a performance hit because it is executed for each joined row. With the 10TB data set there are around 180 million rows per partition.
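   To illustrate the difference being discussed, here is a minimal sketch (not the actual Spark patch; the config key, method names, and row shapes are made up) of resolving a configuration value once per row versus hoisting the lookup out of the hot loop:

   ```scala
   // Hypothetical sketch: cost of per-row config resolution vs. a hoisted lookup.
   object ConfigHoisting {
     // Stand-in for a config read that goes through lookup/parsing machinery
     // on every call, as a call into package.scala might.
     def readBufferSizeFromConf(conf: Map[String, String]): Int =
       conf.getOrElse("example.read.buffer.size", "1024").toInt

     // Per-row lookup: the config is re-resolved for each of the ~180M rows.
     def processPerRowLookup(rows: Seq[Array[Byte]], conf: Map[String, String]): Int = {
       var processed = 0
       for (row <- rows) {
         val bufferSize = readBufferSizeFromConf(conf) // paid on every row
         if (row.length <= bufferSize) processed += 1
       }
       processed
     }

     // Hoisted lookup: the config is resolved once, before the loop.
     def processHoisted(rows: Seq[Array[Byte]], conf: Map[String, String]): Int = {
       val bufferSize = readBufferSizeFromConf(conf) // resolved once
       var processed = 0
       for (row <- rows) {
         if (row.length <= bufferSize) processed += 1
       }
       processed
     }
   }
   ```

   Both variants compute the same result; the only difference is where the config read happens, which is exactly the cost that multiplies by the per-partition row count. Hard-coding the value (or caching it in a `val` at operator construction time) removes that per-row overhead entirely.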


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


