Re: Configure spark.kryoserializer.buffer.max at runtime does not take effect

2016-11-17 Thread kant kodali
Yeah, I feel like this is a bug, since you can't really modify the settings
once you've been given a SparkSession or SparkContext. So the workaround
would be to use --conf. In your case it would be like this:

./spark-shell --conf spark.kryoserializer.buffer.max=1g
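For a PySpark application the same flag goes on the command line before any
session exists (the ./bin/ paths below are illustrative; adjust them for your
install):

```shell
# Set the option at launch, before a SparkSession is created;
# once one is live, .config(...) on a new builder will not change it.
./bin/pyspark --conf spark.kryoserializer.buffer.max=1g

# or for a standalone application:
./bin/spark-submit --conf spark.kryoserializer.buffer.max=1g your_app.py
```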



On Thu, Nov 17, 2016 at 1:59 PM, Koert Kuipers wrote:

> getOrCreate uses the existing SparkSession if one is available, in which
> case the settings will be ignored
>
> On Wed, Nov 16, 2016 at 10:55 PM, bluishpenguin wrote:
>
>> Hi all,
>> I would like to configure the following setting during runtime as below:
>>
>> spark = (SparkSession
>>     .builder
>>     .appName("ElasticSearchIndex")
>>     .config("spark.kryoserializer.buffer.max", "1g")
>>     .getOrCreate())
>>
>> But I still hit this error:
>> Caused by: org.apache.spark.SparkException: Kryo serialization failed:
>> Buffer overflow. Available: 0, required: 1614707. To avoid this, increase
>> spark.kryoserializer.buffer.max value.
>>
>> It works when I configure it along with the spark-submit command, as below:
>> spark-submit pyspark-shell-main --name "PySparkShell" --conf spark.kryoserializer.buffer.max=1g
>>
>> Any idea what I have done wrong?
>>
>> Thank you.
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Configure-spark-kryoserializer-buffer-max-at-runtime-does-not-take-effect-tp28094.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> -
>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>
>>
>


Re: Configure spark.kryoserializer.buffer.max at runtime does not take effect

2016-11-17 Thread Koert Kuipers
getOrCreate uses the existing SparkSession if one is available, in which
case the settings will be ignored
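The caching behavior can be sketched with a toy mock of the builder. This is
illustrative only, not Spark's real implementation; `FakeSession`, `Builder`,
and `_active_session` are made-up names standing in for the real internals:

```python
# Toy mock of SparkSession.builder.getOrCreate() -- illustrative only,
# NOT Spark's actual code. It shows why .config(...) is silently ignored
# once a session already exists: getOrCreate() returns the cached session
# without applying the new options.

_active_session = None  # module-level cache, like Spark's active session


class FakeSession:
    def __init__(self, conf):
        self.conf = dict(conf)


class Builder:
    def __init__(self):
        self._options = {}

    def config(self, key, value):
        self._options[key] = value
        return self

    def getOrCreate(self):
        global _active_session
        if _active_session is not None:
            # An existing session wins; self._options are dropped.
            return _active_session
        _active_session = FakeSession(self._options)
        return _active_session


# First caller creates the session with its settings.
s1 = Builder().config("spark.kryoserializer.buffer.max", "64m").getOrCreate()
# Second caller asks for 1g but gets the cached session back unchanged.
s2 = Builder().config("spark.kryoserializer.buffer.max", "1g").getOrCreate()

print(s1 is s2)                                    # True
print(s2.conf["spark.kryoserializer.buffer.max"])  # 64m
```

This is why the flag has to be set before the first session is created, e.g.
via --conf on the command line.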

On Wed, Nov 16, 2016 at 10:55 PM, bluishpenguin wrote:

> Hi all,
> I would like to configure the following setting during runtime as below:
>
> spark = (SparkSession
>     .builder
>     .appName("ElasticSearchIndex")
>     .config("spark.kryoserializer.buffer.max", "1g")
>     .getOrCreate())
>
> But I still hit this error:
> Caused by: org.apache.spark.SparkException: Kryo serialization failed:
> Buffer overflow. Available: 0, required: 1614707. To avoid this, increase
> spark.kryoserializer.buffer.max value.
>
> It works when I configure it along with the spark-submit command, as below:
> spark-submit pyspark-shell-main --name "PySparkShell" --conf spark.kryoserializer.buffer.max=1g
>
> Any idea what I have done wrong?
>
> Thank you.
>
>
>
>
>


Configure spark.kryoserializer.buffer.max at runtime does not take effect

2016-11-16 Thread bluishpenguin
Hi all, 
I would like to configure the following setting during runtime as below:

spark = (SparkSession
    .builder
    .appName("ElasticSearchIndex")
    .config("spark.kryoserializer.buffer.max", "1g")
    .getOrCreate())

But I still hit this error:
Caused by: org.apache.spark.SparkException: Kryo serialization failed:
Buffer overflow. Available: 0, required: 1614707. To avoid this, increase
spark.kryoserializer.buffer.max value.

It works when I configure it along with the spark-submit command, as below:
spark-submit pyspark-shell-main --name "PySparkShell" --conf spark.kryoserializer.buffer.max=1g

Any idea what I have done wrong?

Thank you.



