Hello all,

The data is generated by vendors, and on some days it is large enough to
overflow the default value of spark.kryoserializer.buffer.max.
How can I calculate an appropriate spark.kryoserializer.buffer.max from
the data size ahead of time, instead of waiting for the exception to be
raised at runtime?
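
For reference, here is a minimal sketch (in Scala) of how the setting is
applied when the session is built; the buffer sizes and app name below
are placeholder assumptions, not values validated against our data:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// The max buffer must be large enough to hold the single largest object
// Kryo serializes, and Spark requires it to stay below 2048m.
val conf = new SparkConf()
  .set("spark.kryoserializer.buffer", "64m")       // initial buffer size (placeholder)
  .set("spark.kryoserializer.buffer.max", "512m")  // sized with headroom (placeholder)

val spark = SparkSession.builder()
  .appName("vendor-feed")   // hypothetical app name
  .config(conf)
  .getOrCreate()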

Any suggestions are appreciated.

BR,
Arthur Li
