[jira] [Commented] (SPARK-28859) Remove value check of MEMORY_OFFHEAP_SIZE in declaration section

2019-10-22 Thread Yifan Xing (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957365#comment-16957365
 ] 

Yifan Xing commented on SPARK-28859:


Created a PR: [https://github.pie.apple.com/pie/apache-spark/pull/469]

 

[~holden] It looks like this Jira is assigned to [~yifan Xu], who is Yifan Xu. 
(I am [~yifan_xing] :)) Sorry for the confusingly similar names. Would you mind 
reassigning it?

I also don't have permission to update the ticket status. Would you mind 
updating it to `In Review`, or granting me permission to do so?

 

Thank you!

> Remove value check of MEMORY_OFFHEAP_SIZE in declaration section
> 
>
> Key: SPARK-28859
> URL: https://issues.apache.org/jira/browse/SPARK-28859
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.0.0
>Reporter: Yang Jie
>Assignee: yifan
>Priority: Minor
>
> Currently MEMORY_OFFHEAP_SIZE has a default value of 0, but it should be 
> greater than 0 when MEMORY_OFFHEAP_ENABLED is true. Should we check this 
> condition in code?
>  
> SPARK-28577 added this check before requesting memory resources from YARN. 
>  
>  
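For context on the check discussed in the quoted description, here is a minimal standalone Scala sketch (hypothetical names, not the actual Spark source) contrasting the per-value check that a config declaration section can express with the cross-config check that SPARK-28577 moved to the resource-request path:

{code:scala}
// Hypothetical standalone sketch, not the actual Spark declaration or check.
// A check attached to the config declaration only sees that single value, so
// it can reject negatives but cannot express "must be > 0 when off-heap
// memory is enabled"; that cross-config check has to live at a use site.
object OffHeapConfigCheckSketch {

  def validate(offHeapEnabled: Boolean, offHeapSizeBytes: Long): Unit = {
    // Per-value check: the most a declaration-level check could enforce.
    require(offHeapSizeBytes >= 0,
      "spark.memory.offHeap.size must not be negative")
    // Cross-config check: needs both settings, so it belongs where both are
    // known, e.g. before requesting memory from YARN (SPARK-28577).
    if (offHeapEnabled) {
      require(offHeapSizeBytes > 0,
        "spark.memory.offHeap.size must be > 0 when spark.memory.offHeap.enabled is true")
    }
  }

  def main(args: Array[String]): Unit = {
    validate(offHeapEnabled = false, offHeapSizeBytes = 0L)   // default: fine
    validate(offHeapEnabled = true, offHeapSizeBytes = 1024L) // explicit size: fine
    // validate(offHeapEnabled = true, offHeapSizeBytes = 0L) // would throw
  }
}
{code}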



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28859) Remove value check of MEMORY_OFFHEAP_SIZE in declaration section

2019-10-09 Thread Yifan Xing (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948098#comment-16948098
 ] 

Yifan Xing commented on SPARK-28859:


When MEMORY_OFFHEAP_ENABLED is false, do we still require the user to provide a 
value greater than 0 for MEMORY_OFFHEAP_SIZE?

 

Currently, MEMORY_OFFHEAP_SIZE's default value is 0. Do we want to remove the 
default value or use a different value?

> Remove value check of MEMORY_OFFHEAP_SIZE in declaration section
> 
>
> Key: SPARK-28859
> URL: https://issues.apache.org/jira/browse/SPARK-28859
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.0.0
>Reporter: Yang Jie
>Assignee: yifan
>Priority: Minor
>
> Currently MEMORY_OFFHEAP_SIZE has a default value of 0, but it should be 
> greater than 0 when MEMORY_OFFHEAP_ENABLED is true. Should we check this 
> condition in code?
>  
> SPARK-28577 added this check before requesting memory resources from YARN. 
>  
>  






[jira] [Commented] (SPARK-28859) Remove value check of MEMORY_OFFHEAP_SIZE in declaration section

2019-10-09 Thread Yifan Xing (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947865#comment-16947865
 ] 

Yifan Xing commented on SPARK-28859:


Looks like this is already done? If you check 
[resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala|https://github.com/apache/spark/pull/25309/files#diff-f8659513cf91c15097428c3d8dfbcc35],

the `executorOffHeapMemorySizeAsMb` function has the following code on lines 192-193:

{code:scala}
require(sizeInMB > 0,
  s"${MEMORY_OFFHEAP_SIZE.key} must be > 0 when ${MEMORY_OFFHEAP_ENABLED.key} == true")
{code}
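For readers without the diff open, here is a rough sketch of how a check of that shape can sit inside such a function (hypothetical name and unit conversion, not the actual Spark source; see the linked file for the real implementation):

{code:scala}
// Hypothetical sketch of the shape of that check, not the actual Spark
// source: convert the configured off-heap size from bytes to MiB and
// enforce > 0 only when off-heap memory is enabled.
def offHeapSizeAsMbSketch(offHeapEnabled: Boolean, offHeapSizeBytes: Long): Int = {
  if (offHeapEnabled) {
    val sizeInMB = (offHeapSizeBytes / (1024L * 1024L)).toInt
    require(sizeInMB > 0,
      "spark.memory.offHeap.size must be > 0 when spark.memory.offHeap.enabled == true")
    sizeInMB
  } else {
    0 // off-heap disabled: request no off-heap memory
  }
}
{code}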

> Remove value check of MEMORY_OFFHEAP_SIZE in declaration section
> 
>
> Key: SPARK-28859
> URL: https://issues.apache.org/jira/browse/SPARK-28859
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 3.0.0
>Reporter: Yang Jie
>Priority: Minor
>
> Currently MEMORY_OFFHEAP_SIZE has a default value of 0, but it should be 
> greater than 0 when MEMORY_OFFHEAP_ENABLED is true. Should we check this 
> condition in code?
>  
> SPARK-28577 added this check before requesting memory resources from YARN. 
>  
>  






[jira] [Commented] (SPARK-28845) Enable spark.sql.execution.sortBeforeRepartition only for retried stages

2019-09-23 Thread Yifan Xing (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936194#comment-16936194
 ] 

Yifan Xing commented on SPARK-28845:


Can you please clarify what you mean by "before repartition config"?

 

> Enable spark.sql.execution.sortBeforeRepartition only for retried stages
> 
>
> Key: SPARK-28845
> URL: https://issues.apache.org/jira/browse/SPARK-28845
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 3.0.0
>Reporter: Yuanjian Li
>Priority: Major
>
> To fix the correctness bug of SPARK-28699, we disabled radix sort for the 
> repartition scenario in Spark SQL. This causes a performance regression.
> To limit the performance overhead, we will do this optimization by enabling 
> the sort-before-repartition config only for retried stages. This work 
> depends on SPARK-25341.
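
As a rough illustration of the gating described in the quoted description (hypothetical method name, not Spark source), the idea is to pay the sort-before-repartition cost only when a stage is actually being retried:

{code:scala}
// Hypothetical sketch: sort before repartition only when the config is on
// AND the stage is a retry (attempt number > 0), so first attempts skip the
// extra sort and only retried stages pay for deterministic ordering.
def shouldSortBeforeRepartition(sortBeforeRepartitionEnabled: Boolean,
    stageAttemptNumber: Int): Boolean = {
  sortBeforeRepartitionEnabled && stageAttemptNumber > 0
}
{code}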


