Re: Increase Timeout or optimize Spark UT?

2017-08-22 Thread Mark Hamstra
This is another argument for getting the code to the point where this can default to "true":

    SQLConf.scala: val ADAPTIVE_EXECUTION_ENABLED = buildConf("spark.sql.adaptive.enabled")

On Tue, Aug 22, 2017 at 12:27 PM, Reynold Xin wrote:
> +1
>
> On Tue, Aug 22, 2017 at
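For readers following along, a minimal sketch of opting in per session while the default remains "false" (the app name and master here are placeholders, not anything from the thread):

    import org.apache.spark.sql.SparkSession

    // Enable adaptive query execution explicitly for this session;
    // it is not yet the default in SQLConf.
    val spark = SparkSession.builder()
      .appName("adaptive-execution-demo")
      .master("local[*]")
      .config("spark.sql.adaptive.enabled", "true")
      .getOrCreate()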

Re: SPIP: Spark on Kubernetes

2017-08-22 Thread yonzhang2012
+1 (non-binding) I am specifically interested in setting up a testing environment for my company's Spark use, and I am also hoping for more comprehensive documentation on getting a development environment set up for bug fixes or new feature development; right now it is only briefly documented in

Re: Increase Timeout or optimize Spark UT?

2017-08-22 Thread Reynold Xin
+1

On Tue, Aug 22, 2017 at 12:25 PM, Maciej Szymkiewicz wrote:
> Hi,
>
> From my experience it is possible to cut quite a lot by reducing
> spark.sql.shuffle.partitions to some reasonable value (let's say
> comparable to the number of cores). 200 is serious overkill

Re: Increase Timeout or optimize Spark UT?

2017-08-22 Thread Maciej Szymkiewicz
Hi,

From my experience it is possible to cut quite a lot by reducing spark.sql.shuffle.partitions to some reasonable value (let's say comparable to the number of cores). 200 is serious overkill for most of the test cases anyway.

Best,
Maciej

On 21 August 2017 at 03:00, Dong Joon Hyun
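As a concrete illustration, a minimal sketch of a test-suite session configured this way (the app name, master, and partition count are placeholders; pick a value near your core count):

    import org.apache.spark.sql.SparkSession

    // Cap shuffle partitions near the local core count so unit tests
    // do not schedule 200 tiny tasks per shuffle (200 is the default).
    val spark = SparkSession.builder()
      .appName("fast-unit-tests")
      .master("local[4]")
      .config("spark.sql.shuffle.partitions", "4")
      .getOrCreate()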