[ https://issues.apache.org/jira/browse/SPARK-25552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nuno Azevedo updated SPARK-25552:
---------------------------------
    Description: 
After upgrading from Spark 1.6.3 to 2.3.0, our jobs started to need about 50% 
more memory to run. The Spark properties used were the defaults in both 
versions.

 

For instance, we previously ran a job with Spark 1.6.3 and it ran fine with 
50 GB of memory.

!Spark1.6-50GB.png|width=800,height=456!

 

After upgrading to Spark 2.3.0, running the same job again with the same 
50 GB of memory failed with an out-of-memory error.

!Spark2.3-50GB.png|width=800,height=366!

 

Then we kept increasing the memory until the job was able to run, which 
required 70 GB.

!Spark2.3-70GB.png|width=800,height=366!

 

The Spark upgrade was the only change in our environment. After looking into 
what seems to be causing this, we noticed that the Kryo serializer is the main 
culprit for the rise in memory consumption.
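
A minimal way to isolate the serializer's contribution would be to run the same 
workload under 2.3.0 once with Kryo and once with plain Java serialization and 
compare the memory profiles. The sketch below is only an illustration of that 
idea using the standard {{spark.serializer}} and {{spark.kryoserializer.buffer.max}} 
settings; the object name and the sample workload are placeholders, not our 
actual job.

{code:scala}
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

// Minimal sketch, not the actual job: build two otherwise identical sessions,
// one with Kryo and one with the default Java serializer, so the memory
// footprint of the same workload can be compared between the two runs.
object SerializerComparison {

  def buildSession(useKryo: Boolean): SparkSession = {
    val conf = new SparkConf().setAppName(s"serializer-comparison-kryo=$useKryo")
    if (useKryo) {
      conf
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        // Keep Kryo's per-task buffer bounded (64m is the documented default).
        .set("spark.kryoserializer.buffer.max", "64m")
    } else {
      conf.set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
    }
    SparkSession.builder().config(conf).getOrCreate()
  }

  def main(args: Array[String]): Unit = {
    val spark = buildSession(useKryo = args.contains("kryo"))
    // Placeholder workload: a shuffle-heavy aggregation makes the serializer's
    // memory behaviour visible; replace with the real job.
    spark.range(0L, 10000000L).groupBy(col("id") % 1000).count().collect()
    spark.stop()
  }
}
{code}

Comparing the two runs in the Spark UI (executor and storage memory) should show 
whether the serializer alone accounts for the difference.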

> Upgrade from Spark 1.6.3 to 2.3.0 seems to make jobs use about 50% more memory
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-25552
>                 URL: https://issues.apache.org/jira/browse/SPARK-25552
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API, Spark Core
>    Affects Versions: 2.3.0
>         Environment: Originally found in an AWS Kubernetes environment with 
> Spark Embedded.
> Also happens at a small scale with Spark Embedded on both Linux and macOS.
>            Reporter: Nuno Azevedo
>            Priority: Major
>         Attachments: Spark1.6-50GB.png, Spark2.3-50GB.png, Spark2.3-70GB.png


