Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23262
retest this please
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23262
@cloud-fan Updated, thanks.
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/23262#discussion_r240188043
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
---
@@ -416,7 +416,12 @@ case class
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23262
@HyukjinKwon Ok, removed it, thanks for review.
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23262
@HyukjinKwon @mgaido91 Thanks for the review. @cloud-fan @kiszk Would you like
to give some suggestions: should we remove the object `RDDConversions`, or leave it
there
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/23262#discussion_r240114106
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -53,7 +53,7 @@ object RDDConversions
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/23262#discussion_r240113822
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -33,7 +33,7 @@ object RDDConversions
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23262
retest this please
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/23262
[SPARK-26312][SQL] Convert the converters in RDDConversions into arrays to
improve their access performance
## What changes were proposed in this pull request?
`RDDConversions` would
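The performance idea in this PR can be illustrated with a hypothetical Python analogue (names here are illustrative, not Spark's actual API): materialize the per-field converters into an indexed sequence once, so the per-row hot loop does plain positional access instead of repeated lookups.

```python
# Hypothetical sketch: build the per-field converter sequence once,
# then use plain indexed access inside the per-row hot loop.
def make_converters(field_types):
    table = {"int": int, "float": float, "str": str}
    # Materialize once as a tuple for fast positional access.
    return tuple(table[t] for t in field_types)

def convert_row(converters, row):
    # Hot loop: converters[i] is a cheap array-style access.
    return tuple(converters[i](v) for i, v in enumerate(row))

converters = make_converters(["int", "float", "str"])
print(convert_row(converters, ["1", "2.5", 3]))  # (1, 2.5, '3')
```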
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23010
But we may forget to filter null values when we write SQL. The following
function guards against this situation and writes the value of null partitions as
__HIVE_DEFAULT_PARTITION__
def
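The kind of guard being described might look like the following sketch (a stand-in, not the actual Spark function; the constant matches Hive's conventional default-partition name):

```python
HIVE_DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__"

def partition_dir_value(value):
    """Map null/empty partition values to Hive's default partition name,
    so a null string partition does not break the directory layout."""
    if value is None or value == "":
        return HIVE_DEFAULT_PARTITION
    return str(value)

print(partition_dir_value(None))    # __HIVE_DEFAULT_PARTITION__
print(partition_dir_value(""))      # __HIVE_DEFAULT_PARTITION__
print(partition_dir_value("2018"))  # 2018
```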
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23010
@cloud-fan, thanks for the review. Do you mean we should filter out invalid
partitions in SQL before writing?
Github user eatoncys closed the pull request at:
https://github.com/apache/spark/pull/22561
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/23010
retest this please
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/23010
[SPARK-26012][SQL] Null and '' values should not cause dynamic partition
failure for string types
## What changes were proposed in this pull request?
Dynamic partition will fail
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/22561
@cloud-fan Yes, it has problems with the Not expression; we need to find a better
approach. Thanks for the review.
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/22561
@cloud-fan What should be proved is that the partitions returned by p'
contain the partitions returned by p. Here, let p' = p && x; if x is
true then p' == p;
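The claim can be checked concretely: replacing the non-partition conjunct x with True can only widen the set of partitions passing the filter, so pruning with the partition-only predicate is safe (illustrative Python, not the optimizer code):

```python
# p' is p with the non-partition conjunct x replaced by True.
# Every partition accepted by (p and x) is accepted by p alone,
# so pruning with p' never drops a partition the full predicate needs.
partitions = [{"dt": d, "rows": r} for d, r in
              [("2018-01", 10), ("2018-02", 0), ("2018-03", 5)]]

p = lambda part: part["dt"] >= "2018-02"   # partition predicate
x = lambda part: part["rows"] > 0          # non-partition predicate

full = {part["dt"] for part in partitions if p(part) and x(part)}
pruned = {part["dt"] for part in partitions if p(part)}  # x replaced by True

assert full <= pruned  # pruned is a superset: safe to prune with p alone
print(sorted(pruned))  # ['2018-02', '2018-03']
```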
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/22561#discussion_r225054113
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
---
@@ -39,21 +40,31 @@ private[sql] object
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/22561#discussion_r225053437
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
---
@@ -39,21 +40,31 @@ private[sql] object
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/22561#discussion_r225050369
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PruneFileSourcePartitions.scala
---
@@ -39,21 +40,31 @@ private[sql] object
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/22561
retest this please
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/22561
cc @gatorsmile @cloud-fan
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/22561
[SPARK-25548][SQL] In the PruneFileSourcePartitions optimizer, replace the
nonPartitionOps field with true in And(partitionOps, nonPartitionOps) so that
the partition can be pruned
## What
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/22053
@cloud-fan Unaligned accesses are not supported on the SPARC architecture,
which is discussed in the issue:
https://issues.apache.org/jira/browse/SPARK-16962
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/22053
@kiszk The comments are updated, thanks for the review.
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/22053
[SPARK-25069][Core] Use UnsafeAlignedOffset to keep the entire record of
8-byte items aligned, as is done in UnsafeExternalSorter
## What changes were proposed in this pull request
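The alignment arithmetic referred to above can be sketched as follows (illustrative only; UnsafeAlignedOffset itself is Spark's internal helper):

```python
def align_to_8(size_in_bytes):
    """Round a record size up to the next multiple of 8, so the next
    record starts on an 8-byte boundary (required on e.g. SPARC, where
    unaligned 8-byte loads fault)."""
    return (size_in_bytes + 7) & ~7

for n in (1, 8, 9, 13, 16):
    print(n, "->", align_to_8(n))  # 1->8, 8->8, 9->16, 13->16, 16->16
```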
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/21823#discussion_r204597636
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SameResultSuite.scala ---
@@ -58,4 +61,16 @@ class SameResultSuite extends QueryTest with
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21823
Can we merge it to master? @cloud-fan @gatorsmile
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/21823#discussion_r204199119
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SameResultSuite.scala ---
@@ -58,4 +61,16 @@ class SameResultSuite extends QueryTest with
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/21823#discussion_r203990617
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CanonicalizeSuite.scala
---
@@ -50,4 +52,30 @@ class CanonicalizeSuite
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/21823#discussion_r203972375
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -237,7 +239,7 @@ abstract class QueryPlan[PlanType
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21823
@cloud-fan Why not fix this in doCanonicalize? I think it is better to fix
it in doCanonicalize, but I'm not very
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21823
@cloud-fan Fixing this in dedupRight is OK, but maybe there are other
operations like dedupRight that change the case of the word
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21823
@cloud-fan
    case j @ Join(left, right, _, _) if !j.duplicateResolved =>
      j.copy(right = dedupRight(left, right))
dedupRight generates a new logical plan for the right ch
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21823
@cloud-fan Casting 'Key' to lower case is done by the ResolveReferences rule:
![image](https://user-images.githubusercontent.com/26834091/42987332-7798ba3e-8c2b-11e8-9bed-d8be2e
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21823
cc @cloud-fan @gatorsmile
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/21823
[SPARK-24870][SQL] Cache can't work normally if there are case letters in SQL
## What changes were proposed in this pull request?
Modified the canonicalized plan to be case-insensitive.
B
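The caching problem the PR title describes can be illustrated: if cached plans are keyed by a case-sensitive form, `SELECT Key` and `select key` miss each other; keying by a case-normalized canonical form fixes the lookup (a toy sketch, not Spark's plan canonicalization):

```python
# Toy cache keyed by a canonicalized (case-normalized) form of the query.
def canonicalize(sql):
    # Real canonicalization normalizes the plan tree; lowercasing and
    # collapsing whitespace in the text stands in for that here.
    return " ".join(sql.lower().split())

cache = {}
def cached_result(sql, compute):
    key = canonicalize(sql)
    if key not in cache:
        cache[key] = compute()
    return cache[key]

calls = []
run = lambda: calls.append(1) or "rows"
cached_result("SELECT Key FROM t", run)
cached_result("select key from t", run)
print(len(calls))  # 1: the second query hit the cache
```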
Github user eatoncys closed the pull request at:
https://github.com/apache/spark/pull/19819
Github user eatoncys closed the pull request at:
https://github.com/apache/spark/pull/21084
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21084
@jerryshao @hvanhovell Ok, I will close it, thanks.
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21084
@jerryshao There is no issue without @transient, but I think it is better
to keep it consistent with the other fields, and to make clear which fields do not
need to be serialized
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21084
@jiangxb1987 It does not take significant time to serialize the
taskMemoryManager, because the value is null on the driver side, but I think it is
better to keep it consistent with the other fields in the Task
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/21084
@hvanhovell The field 'taskMemoryManager' is only used on the executor side, so
there is no need to serialize it when sending the task from the driver t
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/21084
[SPARK-23998][Core] It may be better to add @transient to the field
'taskMemoryManager' in class Task, for it is only set and used on the executor
side
Add @transient to field 'taskMe
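A Python analogue of @transient (which in Scala/Java excludes a field from serialization): drop the executor-only field when the object's state is captured, and re-initialize it on the receiving side. This sketch uses `copy.deepcopy` as a stand-in for a serialize/deserialize round trip; the field names are illustrative.

```python
import copy

class Task:
    def __init__(self):
        self.partition_id = 7
        self.task_memory_manager = object()  # executor-only runtime state

    def __getstate__(self):
        # Analogue of @transient: omit the executor-only field from
        # the captured state.
        state = self.__dict__.copy()
        state.pop("task_memory_manager", None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.task_memory_manager = None  # re-set on the executor side

t2 = copy.deepcopy(Task())  # stand-in for a ship-to-executor round trip
print(t2.partition_id, t2.task_memory_manager)  # 7 None
```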
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/19819
[SPARK-22606][Streaming]Add threadId to the CachedKafkaConsumer key
## What changes were proposed in this pull request?
If the value of param 'spark.streaming.concurrentJobs' is mor
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/19022#discussion_r134905955
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ExpressionSetSuite.scala
---
@@ -210,4 +210,13 @@ class
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/19022#discussion_r134905739
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionSet.scala
---
@@ -17,7 +17,7 @@
package
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/19022#discussion_r134703026
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionSet.scala
---
@@ -59,6 +59,12 @@ class ExpressionSet protected
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/19022#discussion_r134695982
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpressionSet.scala
---
@@ -59,6 +59,12 @@ class ExpressionSet protected
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/19022
[Spark-21807][SQL] The getAliasedConstraints function in LogicalPlan will
take a long time when the number of expressions is greater than 100
## What changes were proposed in this pull request
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132806724
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,14 @@ object SQLConf {
"disable loggi
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132616342
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,14 @@ object SQLConf {
"disable loggi
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132616033
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +150,56 @@ class WholeStageCodegenSuite
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132610861
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,14 @@ object SQLConf {
"disable loggi
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132610543
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala
---
@@ -149,4 +149,75 @@ class WholeStageCodegenSuite
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132388819
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -370,6 +370,14 @@ case class WholeStageCodegenExec
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132376473
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -370,6 +370,14 @@ case class WholeStageCodegenExec
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132374541
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -370,6 +370,14 @@ case class WholeStageCodegenExec
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132370096
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeFormatter.scala
---
@@ -89,6 +89,14 @@ object CodeFormatter
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132368646
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,14 @@ object SQLConf {
"disable loggi
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132368484
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -370,6 +370,14 @@ case class WholeStageCodegenExec
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132365359
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,13 @@ object SQLConf {
"disable loggi
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132365401
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,19 @@ class CodegenContext
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132365436
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,19 @@ class CodegenContext
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132347436
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,18 @@ class CodegenContext
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132347148
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,18 @@ class CodegenContext
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132347198
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -572,6 +572,13 @@ object SQLConf {
"disable loggi
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r132347018
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/AggregateBenchmark.scala
---
@@ -301,6 +301,61 @@ class AggregateBenchmark
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18810
cc @gatorsmile
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131585903
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,16 @@ class CodegenContext
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131593593
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -370,6 +370,12 @@ case class WholeStageCodegenExec
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131585857
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,18 @@ class CodegenContext
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131340166
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,16 @@ class CodegenContext
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/18810
[SPARK-21603][sql] Whole-stage codegen will be much slower than when
whole-stage codegen is disabled if the function is too long
## What changes were proposed in this pull request?
Close the
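The gist of the proposed change can be sketched as a length-gated fallback: when the generated function grows past a threshold, skip whole-stage codegen for that plan, since the JVM JIT declines to compile very long methods and interpreted generated bytecode is then slower than the non-codegen path. The threshold name and value below are illustrative, not an actual Spark config key.

```python
# Sketch: fall back to the interpreted path when the generated function
# body exceeds a threshold (illustrative threshold, in characters of source).
MAX_CODEGEN_LENGTH = 8000

def choose_execution_path(generated_code):
    if len(generated_code) > MAX_CODEGEN_LENGTH:
        return "interpreted"   # disable whole-stage codegen for this plan
    return "codegen"

print(choose_execution_path("x = 1"))      # codegen
print(choose_execution_path("y" * 10000))  # interpreted
```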
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123475601
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123461583
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
cc @srowen
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123420930
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -258,23 +256,7 @@ private[deploy] class SparkSubmitArguments(args
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123420644
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +545,42 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r123420083
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -704,6 +707,43 @@ class MasterSuite extends SparkFunSuite
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18351
I think it is better to hide it. @fjh100456
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18351
I think it is better to hide it.
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122942651
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122939834
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -704,6 +707,43 @@ class MasterSuite extends SparkFunSuite
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122926099
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122925526
--- Diff:
core/src/test/scala/org/apache/spark/deploy/master/MasterSuite.scala ---
@@ -704,6 +707,43 @@ class MasterSuite extends SparkFunSuite
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122676148
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -543,6 +543,30 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
@jerryshao, I have added a unit test in MasterSuite; would you like to
review it again? Thanks.
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122638913
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -278,6 +278,14 @@ private[deploy] class SparkSubmitArguments(args
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122562905
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -658,19 +658,22 @@ private[deploy] class Master(
private def
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122562797
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -278,6 +278,12 @@ private[deploy] class SparkSubmitArguments(args
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122439758
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -658,19 +658,22 @@ private[deploy] class Master(
private def
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122424679
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -658,19 +658,22 @@ private[deploy] class Master(
private def
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18322#discussion_r122424299
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -278,6 +278,15 @@ private[deploy] class SparkSubmitArguments(args
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
@jerryshao I have added warning logs in SparkSubmit; would you like to
review it again? Thanks.
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
@jerryshao Ok, I will add warning logs in SparkSubmit, thanks.
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
@jerryshao I have modified "app.coresLeft>0" to "app.coresLeft >=
coresPerExecutor.getOrElse(1)".
Another question: would it be better to allocate ano
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
@jerryshao The problem is: If we start an app with the param
--total-executor-cores=4 and spark.executor.cores=3, the code
"app.coresLeft>0" is a
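The situation can be reproduced in a toy scheduler: with --total-executor-cores=4 and spark.executor.cores=3, one 3-core executor is launched, 1 core remains, and a `coresLeft > 0` check keeps treating the app as schedulable even though no further executor can ever fit. Checking `coresLeft >= coresPerExecutor` stops the futile rescheduling (a sketch, not the Master's actual code):

```python
def schedulable(cores_left, cores_per_executor, strict=True):
    # Old check: cores_left > 0 -- keeps rescheduling an app whose
    # leftover cores can never fit another executor.
    # New check: cores_left >= cores_per_executor.
    return cores_left >= cores_per_executor if strict else cores_left > 0

total, per_exec = 4, 3
cores_left = total - per_exec  # one 3-core executor launched, 1 core left
print(schedulable(cores_left, per_exec, strict=False))  # True  (old: keeps trying)
print(schedulable(cores_left, per_exec, strict=True))   # False (new: stops)
```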
Github user eatoncys commented on the issue:
https://github.com/apache/spark/pull/18322
@jerryshao I have not seen any issue here, and I have tested this again
using the latest master code; the problem still exists.
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/18322
[SPARK-21115][Core] If the cores left are fewer than coresPerExecutor, the
cores left will not be allocated, so it should not be checked in every schedule