[GitHub] [spark] cloud-fan commented on issue #26127: [SPARK-29348][SQL] Add observable Metrics for Streaming queries

2019-12-02 Thread GitBox
cloud-fan commented on issue #26127: [SPARK-29348][SQL] Add observable Metrics 
for Streaming queries
URL: https://github.com/apache/spark/pull/26127#issuecomment-561045758
 
 
   retest this please


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge 
UserDefinedType into a native DataType
URL: https://github.com/apache/spark/pull/26644#issuecomment-561045154
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r353027110
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   for error reporting, I don't think there is a perfect solution. We can say that the interval should be an int to make the operation legal, or that the int should be a timestamp.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26644: [SPARK-30004][SQL] Allow merge 
UserDefinedType into a native DataType
URL: https://github.com/apache/spark/pull/26644#issuecomment-561045161
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19579/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge 
UserDefinedType into a native DataType
URL: https://github.com/apache/spark/pull/26644#issuecomment-561045154
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26644: [SPARK-30004][SQL] Allow merge 
UserDefinedType into a native DataType
URL: https://github.com/apache/spark/pull/26644#issuecomment-561045161
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19579/
   Test PASSed.





[GitHub] [spark] cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r353026693
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   do we support interval + int?
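   As a language-neutral illustration of the question above (plain Python stdlib types, not Spark code): timestamp-plus-duration is well defined, while duration-plus-bare-number is typically rejected as ill-typed.

   ```python
   from datetime import datetime, timedelta

   ts = datetime(2019, 12, 2)
   iv = timedelta(days=1)

   # timestamp + interval is well defined
   assert ts + iv == datetime(2019, 12, 3)

   # interval + int is not: adding a bare number to a duration is rejected,
   # mirroring the type question raised above for Spark's interval type
   try:
       iv + 2
       supported = True
   except TypeError:
       supported = False
   assert not supported
   ```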





[GitHub] [spark] SparkQA commented on issue #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType

2019-12-02 Thread GitBox
SparkQA commented on issue #26644: [SPARK-30004][SQL] Allow merge 
UserDefinedType into a native DataType
URL: https://github.com/apache/spark/pull/26644#issuecomment-561044614
 
 
   **[Test build #114757 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114757/testReport)** for PR 26644 at commit [`3d68a75`](https://github.com/apache/spark/commit/3d68a758ff975ab1e3e671505a375e6e0cbd9c84).





[GitHub] [spark] Fokko commented on a change in pull request #26644: [SPARK-30004][SQL] Allow merge UserDefinedType into a native DataType

2019-12-02 Thread GitBox
Fokko commented on a change in pull request #26644: [SPARK-30004][SQL] Allow 
merge UserDefinedType into a native DataType
URL: https://github.com/apache/spark/pull/26644#discussion_r353026018
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/UserDefinedTypeSuite.scala
 ##
 @@ -287,4 +293,63 @@ class UserDefinedTypeSuite extends QueryTest with 
SharedSparkSession with Parque
 checkAnswer(spark.createDataFrame(data, schema).selectExpr("typeof(a)"),
   Seq(Row("array")))
   }
+
+  test("Allow merge UserDefinedType into a native DataType") {
 
 Review comment:
   After a good night of sleep, I've come up with an integration test that mimics the behavior in Delta.





[GitHub] [spark] cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r353025715
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -2836,35 +2869,17 @@ class Analyzer(
   }
 
   /**
-   * Performs the lookup of DataSourceV2 Tables. The order of resolution is:
-   *   1. Check if this relation is a temporary table.
-   *   2. Check if it has a catalog identifier. Here we try to load the table.
-   *  If we find the table, return the v2 relation and catalog.
-   *   3. Try resolving the relation using the V2SessionCatalog if that is 
defined.
-   *  If the V2SessionCatalog returns a V1 table definition,
-   *  return `None` so that we can fallback to the V1 code paths.
-   *  If the V2SessionCatalog returns a V2 table, return the v2 relation 
and V2SessionCatalog.
+   * Performs the lookup of DataSourceV2 Tables from v2 catalog.
*/
-  private def lookupV2RelationAndCatalog(
-  identifier: Seq[String]): Option[(DataSourceV2Relation, CatalogPlugin, 
Identifier)] =
+  private def lookupV2Relation(identifier: Seq[String]): 
Option[DataSourceV2Relation] =
 identifier match {
-  case CatalogObjectIdentifier(catalog, ident) if 
!CatalogV2Util.isSessionCatalog(catalog) =>
+  case NonSessionCatalogAndIdentifier(catalog, ident) =>
 
 Review comment:
   shall we also respect the current namespace here?





[GitHub] [spark] cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r353025223
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -776,34 +770,73 @@ class Analyzer(
   case _ => plan
 }
 
-def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTables(plan).resolveOperatorsUp {
-  case i @ InsertIntoStatement(u @ 
UnresolvedRelation(AsTableIdentifier(ident)), _, child, _, _)
-  if child.resolved =>
-EliminateSubqueryAliases(lookupTableFromCatalog(ident, u)) match {
+def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTempViews(plan).resolveOperatorsUp {
+  case i @ InsertIntoStatement(
+  u @ UnresolvedRelation(SessionCatalogAndIdentifier(catalog, ident)), 
_, _, _, _)
+if i.query.resolved =>
+val relation = ResolveTempViews(u) match {
+  case unresolved: UnresolvedRelation =>
+lookupRelation(catalog, ident, recurse = 
false).getOrElse(unresolved)
+  case tempView => tempView
+}
+
+EliminateSubqueryAliases(relation) match {
   case v: View =>
 u.failAnalysis(s"Inserting into a view is not allowed. View: 
${v.desc.identifier}.")
   case other => i.copy(table = other)
 }
+
   case u: UnresolvedRelation => resolveRelation(u)
 }
 
-// Look up the table with the given name from catalog. The database we 
used is decided by the
-// precedence:
-// 1. Use the database part of the table identifier, if it is defined;
-// 2. Use defaultDatabase, if it is defined(In this case, no temporary 
objects can be used,
-//and the default database is only used to look up a view);
-// 3. Use the currentDb of the SessionCatalog.
-private def lookupTableFromCatalog(
-tableIdentifier: TableIdentifier,
-u: UnresolvedRelation,
-defaultDatabase: Option[String] = None): LogicalPlan = {
-  val tableIdentWithDb = tableIdentifier.copy(
-database = tableIdentifier.database.orElse(defaultDatabase))
-  try {
-v1SessionCatalog.lookupRelation(tableIdentWithDb)
-  } catch {
-case _: NoSuchTableException | _: NoSuchDatabaseException =>
-  u
+// Look up a relation from a given session catalog with the following 
logic:
+// 1) If a relation is not found in the catalog, return None.
+// 2) If a relation is found,
+//   a) if it is a v1 table not running on files, create a v1 relation
+//   b) otherwise, create a v2 relation.
+// 3) Otherwise, return None.
+// If recurse is set to true, it will call `resolveRelation` recursively 
to resolve
+// relations with the correct database scope.
+private def lookupRelation(
+catalog: CatalogPlugin,
+ident: Identifier,
+recurse: Boolean): Option[LogicalPlan] = {
+  val newIdent = withNewNamespace(ident)
+  assert(newIdent.namespace.size == 1)
+
+  CatalogV2Util.loadTable(catalog, newIdent) match {
+case Some(v1Table: V1Table) =>
+  val tableIdent = TableIdentifier(newIdent.name, 
newIdent.namespace.headOption)
+  if (!isRunningDirectlyOnFiles(tableIdent)) {
 
 Review comment:
   Do we need this check? If we find a v1 table, we should read the table instead of treating the table name as a path and reading files directly.
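   The lookup rule being questioned can be sketched as plain pseudologic (hypothetical Python, not Spark's actual API; `is_path_like` stands in for the `isRunningDirectlyOnFiles` check quoted in the diff):

   ```python
   # Hypothetical sketch of the lookupRelation logic quoted in the diff:
   # a found v1 table yields a v1 relation unless its name would be treated
   # as a direct file path; any other found table yields a v2 relation.
   def lookup_relation(table, is_path_like):
       if table is None:
           return None                      # not found in the catalog
       kind, name = table
       if kind == "v1":
           if is_path_like(name):
               return None                  # fall back to reading files directly
           return ("v1_relation", name)
       return ("v2_relation", name)

   never_path = lambda name: False
   assert lookup_relation(None, never_path) is None
   assert lookup_relation(("v1", "t"), never_path) == ("v1_relation", "t")
   assert lookup_relation(("v2", "t"), never_path) == ("v2_relation", "t")
   ```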





[GitHub] [spark] cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r353024107
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -776,34 +770,73 @@ class Analyzer(
   case _ => plan
 }
 
-def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTables(plan).resolveOperatorsUp {
-  case i @ InsertIntoStatement(u @ 
UnresolvedRelation(AsTableIdentifier(ident)), _, child, _, _)
-  if child.resolved =>
-EliminateSubqueryAliases(lookupTableFromCatalog(ident, u)) match {
+def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTempViews(plan).resolveOperatorsUp {
+  case i @ InsertIntoStatement(
+  u @ UnresolvedRelation(SessionCatalogAndIdentifier(catalog, ident)), 
_, _, _, _)
+if i.query.resolved =>
+val relation = ResolveTempViews(u) match {
+  case unresolved: UnresolvedRelation =>
+lookupRelation(catalog, ident, recurse = 
false).getOrElse(unresolved)
+  case tempView => tempView
+}
+
+EliminateSubqueryAliases(relation) match {
   case v: View =>
 u.failAnalysis(s"Inserting into a view is not allowed. View: 
${v.desc.identifier}.")
   case other => i.copy(table = other)
 }
+
   case u: UnresolvedRelation => resolveRelation(u)
 }
 
-// Look up the table with the given name from catalog. The database we 
used is decided by the
-// precedence:
-// 1. Use the database part of the table identifier, if it is defined;
-// 2. Use defaultDatabase, if it is defined(In this case, no temporary 
objects can be used,
-//and the default database is only used to look up a view);
-// 3. Use the currentDb of the SessionCatalog.
-private def lookupTableFromCatalog(
-tableIdentifier: TableIdentifier,
-u: UnresolvedRelation,
-defaultDatabase: Option[String] = None): LogicalPlan = {
-  val tableIdentWithDb = tableIdentifier.copy(
-database = tableIdentifier.database.orElse(defaultDatabase))
-  try {
-v1SessionCatalog.lookupRelation(tableIdentWithDb)
-  } catch {
-case _: NoSuchTableException | _: NoSuchDatabaseException =>
-  u
+// Look up a relation from a given session catalog with the following 
logic:
+// 1) If a relation is not found in the catalog, return None.
+// 2) If a relation is found,
+//   a) if it is a v1 table not running on files, create a v1 relation
+//   b) otherwise, create a v2 relation.
+// 3) Otherwise, return None.
+// If recurse is set to true, it will call `resolveRelation` recursively 
to resolve
+// relations with the correct database scope.
+private def lookupRelation(
+catalog: CatalogPlugin,
+ident: Identifier,
+recurse: Boolean): Option[LogicalPlan] = {
+  val newIdent = withNewNamespace(ident)
+  assert(newIdent.namespace.size == 1)
+
+  CatalogV2Util.loadTable(catalog, newIdent) match {
+case Some(v1Table: V1Table) =>
+  val tableIdent = TableIdentifier(newIdent.name, 
newIdent.namespace.headOption)
+  if (!isRunningDirectlyOnFiles(tableIdent)) {
+val relation = v1SessionCatalog.getRelation(v1Table.v1Table)
+if (recurse) {
+  Some(resolveRelation(relation))
+} else {
+  Some(relation)
+}
+  } else {
+None
+  }
+case Some(table) =>
+  Some(DataSourceV2Relation.create(table))
+case None => None
+  }
+}
+
+// The namespace used for lookup is decided by the following precedence:
+// 1. Use the existing namespace if it is defined.
+// 2. Use defaultDatabase from AnalysisContext, if it is defined. In this 
case, no temporary
+//objects can be used, and the default database is only used to look 
up a view.
+// 3. Use the current namespace of the session catalog.
+private def withNewNamespace(ident: Identifier): Identifier = {
+  if (ident.namespace.nonEmpty) {
+ident
+  } else {
+val defaultNamespace = AnalysisContext.get.defaultDatabase match {
+  case Some(db) => Array(db)
+  case None => Array(v1SessionCatalog.getCurrentDatabase)
 
 Review comment:
   isn't it done by `V2SessionCatalog` already?
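   The precedence described in the quoted comment can be sketched as a small function (illustrative Python, not Spark's actual `withNewNamespace`):

   ```python
   # Hypothetical sketch of the namespace precedence in the quoted comment:
   # 1) keep an explicit namespace, 2) else use the AnalysisContext default
   # database, 3) else fall back to the session catalog's current database.
   def with_new_namespace(namespace, default_db, current_db):
       if namespace:
           return namespace
       if default_db is not None:
           return [default_db]
       return [current_db]

   assert with_new_namespace(["sales"], "view_db", "default") == ["sales"]
   assert with_new_namespace([], "view_db", "default") == ["view_db"]
   assert with_new_namespace([], None, "default") == ["default"]
   ```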


[GitHub] [spark] SparkQA commented on issue #26692: [SPARK-30060][CORE] Rename metrics enable/disable configs

2019-12-02 Thread GitBox
SparkQA commented on issue #26692: [SPARK-30060][CORE] Rename metrics 
enable/disable configs
URL: https://github.com/apache/spark/pull/26692#issuecomment-561042469
 
 
   **[Test build #114756 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114756/testReport)** for PR 26692 at commit [`49052de`](https://github.com/apache/spark/commit/49052de53438341b7d1669fcb52a3126e1422157).





[GitHub] [spark] cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r353023465
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -776,34 +770,73 @@ class Analyzer(
   case _ => plan
 }
 
-def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTables(plan).resolveOperatorsUp {
-  case i @ InsertIntoStatement(u @ 
UnresolvedRelation(AsTableIdentifier(ident)), _, child, _, _)
-  if child.resolved =>
-EliminateSubqueryAliases(lookupTableFromCatalog(ident, u)) match {
+def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTempViews(plan).resolveOperatorsUp {
+  case i @ InsertIntoStatement(
+  u @ UnresolvedRelation(SessionCatalogAndIdentifier(catalog, ident)), 
_, _, _, _)
+if i.query.resolved =>
+val relation = ResolveTempViews(u) match {
+  case unresolved: UnresolvedRelation =>
+lookupRelation(catalog, ident, recurse = 
false).getOrElse(unresolved)
+  case tempView => tempView
+}
+
+EliminateSubqueryAliases(relation) match {
   case v: View =>
 u.failAnalysis(s"Inserting into a view is not allowed. View: 
${v.desc.identifier}.")
   case other => i.copy(table = other)
 }
+
   case u: UnresolvedRelation => resolveRelation(u)
 }
 
-// Look up the table with the given name from catalog. The database we 
used is decided by the
-// precedence:
-// 1. Use the database part of the table identifier, if it is defined;
-// 2. Use defaultDatabase, if it is defined(In this case, no temporary 
objects can be used,
-//and the default database is only used to look up a view);
-// 3. Use the currentDb of the SessionCatalog.
-private def lookupTableFromCatalog(
-tableIdentifier: TableIdentifier,
-u: UnresolvedRelation,
-defaultDatabase: Option[String] = None): LogicalPlan = {
-  val tableIdentWithDb = tableIdentifier.copy(
-database = tableIdentifier.database.orElse(defaultDatabase))
-  try {
-v1SessionCatalog.lookupRelation(tableIdentWithDb)
-  } catch {
-case _: NoSuchTableException | _: NoSuchDatabaseException =>
-  u
+// Look up a relation from a given session catalog with the following 
logic:
 
 Review comment:
   nit: `a given session catalog` -> `the given session catalog`





[GitHub] [spark] cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r353023298
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -777,36 +772,84 @@ class Analyzer(
 }
 
 def apply(plan: LogicalPlan): LogicalPlan = 
ResolveTables(plan).resolveOperatorsUp {
-  case i @ InsertIntoStatement(u @ 
UnresolvedRelation(AsTableIdentifier(ident)), _, child, _, _)
-  if child.resolved =>
-EliminateSubqueryAliases(lookupTableFromCatalog(ident, u)) match {
+  case i @ InsertIntoStatement(
+  u @ UnresolvedRelation(CatalogObjectIdentifier(catalog, ident)), _, 
_, _, _)
+if i.query.resolved && CatalogV2Util.isSessionCatalog(catalog) =>
+val relation = ResolveTempViews(u) match {
 
 Review comment:
   good catch! Can we resolve temp views inside `InsertIntoStatement` in 
`ResolveTempViews` as well?





[GitHub] [spark] cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#discussion_r353022551
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -2836,35 +2869,17 @@ class Analyzer(
   }
 
   /**
-   * Performs the lookup of DataSourceV2 Tables. The order of resolution is:
-   *   1. Check if this relation is a temporary table.
-   *   2. Check if it has a catalog identifier. Here we try to load the table.
-   *  If we find the table, return the v2 relation and catalog.
-   *   3. Try resolving the relation using the V2SessionCatalog if that is 
defined.
-   *  If the V2SessionCatalog returns a V1 table definition,
-   *  return `None` so that we can fallback to the V1 code paths.
-   *  If the V2SessionCatalog returns a V2 table, return the v2 relation 
and V2SessionCatalog.
+   * Performs the lookup of DataSourceV2 Tables from v2 catalog.
*/
-  private def lookupV2RelationAndCatalog(
-  identifier: Seq[String]): Option[(DataSourceV2Relation, CatalogPlugin, 
Identifier)] =
+  private def lookupV2Relation(identifier: Seq[String]): 
Option[DataSourceV2Relation] =
 
 Review comment:
   shall we move it into `ResolveTables` if it's only called there?





[GitHub] [spark] AmplabJenkins commented on issue #26692: [SPARK-30060][CORE] Rename metrics enable/disable configs

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26692: [SPARK-30060][CORE] Rename metrics 
enable/disable configs
URL: https://github.com/apache/spark/pull/26692#issuecomment-561040751
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19578/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26692: [SPARK-30060][CORE] Rename metrics enable/disable configs

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26692: [SPARK-30060][CORE] Rename 
metrics enable/disable configs
URL: https://github.com/apache/spark/pull/26692#issuecomment-561040751
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19578/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26692: [SPARK-30060][CORE] Rename metrics enable/disable configs

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26692: [SPARK-30060][CORE] Rename 
metrics enable/disable configs
URL: https://github.com/apache/spark/pull/26692#issuecomment-561040741
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26692: [SPARK-30060][CORE] Rename metrics enable/disable configs

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26692: [SPARK-30060][CORE] Rename metrics 
enable/disable configs
URL: https://github.com/apache/spark/pull/26692#issuecomment-561040741
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] cloud-fan commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
cloud-fan commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset 
scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561039532
 
 
   This problem needs sufficient discussion. AFAIK, the issue with delay scheduling is: there is a timer per task set manager, and the timer gets reset as soon as one task from that task set manager gets scheduled on a preferred location.
   
   A stage may keep waiting for locality and not leverage available nodes in 
the cluster, if its task duration is shorter than the locality wait time (3 
seconds by default).
   
   A simple solution is to never reset the timer: once a stage has waited long enough for locality, it should not wait for locality anymore. However, this may hurt performance if the last task is scheduled to a non-preferred location and a preferred location becomes available right after the task gets scheduled, since locality can bring a 50x speed-up.
   
   I don't have a good idea now. cc @JoshRosen @tgravescs @vanzin @jiangxb1987 
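
   The per-task-set timer behavior can be sketched as a toy model (hypothetical code, not Spark's actual `TaskSetManager` logic): a stage whose tasks finish faster than the locality wait keeps resetting the timer and never falls back to free non-preferred nodes.

   ```python
   LOCALITY_WAIT = 3.0  # spark.locality.wait default, in seconds

   def stage_ever_falls_back(task_durations, locality_wait=LOCALITY_WAIT):
       """Toy model of the per-task-set timer described above: it resets on
       every launch at a preferred location. Returns True if the timer ever
       expires, i.e. the stage would finally use a non-preferred node."""
       elapsed = 0.0
       for duration in task_durations:
           elapsed += duration        # time spent waiting on the preferred slot
           if elapsed >= locality_wait:
               return True            # fall back to a non-preferred node
           elapsed = 0.0              # local launch resets the timer
       return False

   # Tasks shorter than the 3s wait keep resetting the timer, so idle
   # non-preferred nodes are never used:
   assert stage_ever_falls_back([1.0] * 100) is False
   # One long gap lets the timer expire:
   assert stage_ever_falls_back([1.0, 5.0, 1.0]) is True
   ```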





[GitHub] [spark] AmplabJenkins commented on issue #26743: Merge pull request #1 from apache/master

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26743: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/spark/pull/26743#issuecomment-561038373
 
 
   Can one of the admins verify this patch?





[GitHub] [spark] AmplabJenkins removed a comment on issue #26743: Merge pull request #1 from apache/master

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26743: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/spark/pull/26743#issuecomment-561038024
 
 
   Can one of the admins verify this patch?





[GitHub] [spark] AmplabJenkins commented on issue #26743: Merge pull request #1 from apache/master

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26743: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/spark/pull/26743#issuecomment-561038024
 
 
   Can one of the admins verify this patch?





[GitHub] [spark] yakoterry opened a new pull request #26743: Merge pull request #1 from apache/master

2019-12-02 Thread GitBox
yakoterry opened a new pull request #26743: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/spark/pull/26743
 
 
   merge with pull request
   
   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce any user-facing change?
   
   
   
   ### How was this patch tested?
   
   





[GitHub] [spark] AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support 
ANSI SQL filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#issuecomment-561035308
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26656: [SPARK-27986][SQL] Support 
ANSI SQL filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#issuecomment-561035319
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114752/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL 
filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#issuecomment-561035308
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL 
filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#issuecomment-561035319
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114752/
   Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression

2019-12-02 Thread GitBox
SparkQA removed a comment on issue #26656: [SPARK-27986][SQL] Support ANSI SQL 
filter clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#issuecomment-560981466
 
 
   **[Test build #114752 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114752/testReport)**
 for PR 26656 at commit 
[`4d1413f`](https://github.com/apache/spark/commit/4d1413f4c8e05e9bf47e60c955a7031b9aac7e83).





[GitHub] [spark] SparkQA commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter clause for aggregate expression

2019-12-02 Thread GitBox
SparkQA commented on issue #26656: [SPARK-27986][SQL] Support ANSI SQL filter 
clause for aggregate expression
URL: https://github.com/apache/spark/pull/26656#issuecomment-561034860
 
 
   **[Test build #114752 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114752/testReport)**
 for PR 26656 at commit 
[`4d1413f`](https://github.com/apache/spark/commit/4d1413f4c8e05e9bf47e60c955a7031b9aac7e83).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
dongjoon-hyun commented on a change in pull request #26738: [SPARK-30082][SQL] 
Do not replace Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#discussion_r353013375
 
 

 ##
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala
 ##
 @@ -456,11 +456,23 @@ final class DataFrameNaFunctions private[sql](df: 
DataFrame) {
 val keyExpr = df.col(col.name).expr
 def buildExpr(v: Any) = Cast(Literal(v), keyExpr.dataType)
 val branches = replacementMap.flatMap { case (source, target) =>
-  Seq(buildExpr(source), buildExpr(target))
+  if (isNaN(source) || isNaN(target)) {
+col.dataType match {
+  case IntegerType | LongType | ShortType | ByteType => Seq.empty
 
 Review comment:
   Thank you for your guide, @cloud-fan !





[GitHub] [spark] AmplabJenkins removed a comment on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26696: [WIP][SPARK-18886][CORE] Only 
reset scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561030004
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19577/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset 
scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561030004
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19577/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset 
scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561029996
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26696: [WIP][SPARK-18886][CORE] Only 
reset scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561029996
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
SparkQA commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset 
scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561029642
 
 
   **[Test build #114755 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114755/testReport)**
 for PR 26696 at commit 
[`06ca01f`](https://github.com/apache/spark/commit/06ca01f9b904ab25ced0506b204210a8555232fb).





[GitHub] [spark] cloud-fan commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
cloud-fan commented on issue #26696: [WIP][SPARK-18886][CORE] Only reset 
scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-561029067
 
 
   ok to test





[GitHub] [spark] AmplabJenkins removed a comment on issue #26696: [WIP][SPARK-18886][CORE] Only reset scheduling delay timer if allocated slots are fully utilized

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26696: [WIP][SPARK-18886][CORE] Only 
reset scheduling delay timer if allocated slots are fully utilized
URL: https://github.com/apache/spark/pull/26696#issuecomment-559284127
 
 
   Can one of the admins verify this patch?





[GitHub] [spark] cloud-fan commented on a change in pull request #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26738: [SPARK-30082][SQL] Do 
not replace Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#discussion_r353008601
 
 

 ##
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala
 ##
 @@ -456,11 +456,23 @@ final class DataFrameNaFunctions private[sql](df: 
DataFrame) {
 val keyExpr = df.col(col.name).expr
 def buildExpr(v: Any) = Cast(Literal(v), keyExpr.dataType)
 val branches = replacementMap.flatMap { case (source, target) =>
-  Seq(buildExpr(source), buildExpr(target))
+  if (isNaN(source) || isNaN(target)) {
+col.dataType match {
+  case IntegerType | LongType | ShortType | ByteType => Seq.empty
 
 Review comment:
   checked with scala
   ```
   scala> Float.NaN == 0
   res0: Boolean = false
   
   scala> Float.NaN.toInt == 0
   res1: Boolean = true
   ```
   
   This is also true in Spark. When comparing float and int, we cast int to 
float to compare, so `NaN != 0`.
   
   I think it's a bug that we cast the value to the column type before comparing. We shouldn't do any cast ourselves; instead, let the type coercion rules insert the proper cast for `CaseKeyWhen`.
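
   The same comparison pitfall can be illustrated in Python (a hedged sketch; Python's `int()` raises on NaN, so the JVM's narrowing cast is emulated by a hypothetical helper here):

   ```python
   import math

   NAN = float("nan")

   # Comparing in the float domain: NaN never equals 0.
   assert (NAN == 0) is False

   def jvm_float_to_int(x):
       """Emulate the JVM narrowing float->int conversion that
       Scala's Float.NaN.toInt performs (NaN maps to 0)."""
       return 0 if math.isnan(x) else int(x)

   # Casting the key down to an integer type first (what the buggy
   # cast-to-column-type path effectively did) collapses NaN to 0:
   assert jvm_float_to_int(NAN) == 0
   ```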





[GitHub] [spark] bmarcott commented on issue #26633: [SPARK-29994][CORE] Add WILDCARD task location

2019-12-02 Thread GitBox
bmarcott commented on issue #26633: [SPARK-29994][CORE] Add WILDCARD task 
location
URL: https://github.com/apache/spark/pull/26633#issuecomment-561027490
 
 
   Could someone help review my proposed solution for 
[SPARK-18886](https://issues.apache.org/jira/browse/SPARK-18886?jql=project%20%3D%20SPARK%20AND%20text%20~%20delay)
 here: 
   https://github.com/apache/spark/pull/26696
   
   The idea is to only reset scheduling delay timers if allocated slots, based 
on the scheduling policy (FIFO vs FAIR), are fully utilized.
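
   The proposed condition can be sketched roughly as follows (a hypothetical helper, not the PR's actual code):

   ```python
   def should_reset_delay_timer(allocated_slots, running_tasks):
       """Only reset the delay-scheduling timer when every slot the policy
       (FIFO or FAIR) allocated to this task set is actually running a task;
       otherwise keep the timer running so the stage eventually falls back."""
       return running_tasks >= allocated_slots

   assert should_reset_delay_timer(4, 4) is True
   assert should_reset_delay_timer(4, 2) is False
   ```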





[GitHub] [spark] yaooqinn commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
yaooqinn commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r353007005
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   ```
   org.apache.spark.sql.AnalysisException
   +cannot resolve '(INTERVAL '1 days' + 1)' due to data type mismatch: 
differing types in '(INTERVAL '1 days' + 1)'
   ```





[GitHub] [spark] yaooqinn commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
yaooqinn commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r353006365
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   they all go to `ADD` then fail with type checking





[GitHub] [spark] cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r353005949
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   then how about `if one side is interval and the other side is not interval`?





[GitHub] [spark] yaooqinn commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
yaooqinn commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r353005158
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   you missed `interval + interval`





[GitHub] [spark] dongjoon-hyun edited a comment on issue #26742: [SPARK-30051][BUILD] Clean up hadoop-3.2 dependency

2019-12-02 Thread GitBox
dongjoon-hyun edited a comment on issue #26742: [SPARK-30051][BUILD] Clean up 
hadoop-3.2 dependency
URL: https://github.com/apache/spark/pull/26742#issuecomment-561022246
 
 
   `spark-core` already embeds the required `jetty` libraries into 
`spark-core_2.13_*.jar`, and these files are not used even in our UTs. Ur, are 
we using these libraries?





[GitHub] [spark] dongjoon-hyun edited a comment on issue #26742: [SPARK-30051][BUILD] Clean up hadoop-3.2 dependency

2019-12-02 Thread GitBox
dongjoon-hyun edited a comment on issue #26742: [SPARK-30051][BUILD] Clean up 
hadoop-3.2 dependency
URL: https://github.com/apache/spark/pull/26742#issuecomment-561022246
 
 
   `spark-core` already embeds the required `jetty` libraries into 
`spark-core_2.13_*.jar`, and this is not used even in our UTs. Ur, are we using 
these libraries?





[GitHub] [spark] dongjoon-hyun commented on issue #26742: [SPARK-30051][BUILD] Clean up hadoop-3.2 dependency

2019-12-02 Thread GitBox
dongjoon-hyun commented on issue #26742: [SPARK-30051][BUILD] Clean up 
hadoop-3.2 dependency
URL: https://github.com/apache/spark/pull/26742#issuecomment-561023253
 
 
   If I misunderstand something, I'll close this PR~ Please let me know.





[GitHub] [spark] dongjoon-hyun commented on issue #26742: [SPARK-30051][BUILD] Clean up hadoop-3.2 dependency

2019-12-02 Thread GitBox
dongjoon-hyun commented on issue #26742: [SPARK-30051][BUILD] Clean up 
hadoop-3.2 dependency
URL: https://github.com/apache/spark/pull/26742#issuecomment-561022246
 
 
   `spark-core` already embeds the required `jetty` libraries into 
`spark-core_2.13_*.jar`, and this is not used even in our UTs. Ur, are we 
using these libraries?





[GitHub] [spark] AmplabJenkins removed a comment on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes predicate - overlaps

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26702: [SPARK-30070][SQL] Support 
ANSI datetimes predicate - overlaps
URL: https://github.com/apache/spark/pull/26702#issuecomment-561020932
 
 
   Merged build finished. Test PASSed.


[GitHub] [spark] AmplabJenkins removed a comment on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes predicate - overlaps

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26702: [SPARK-30070][SQL] Support 
ANSI datetimes predicate - overlaps
URL: https://github.com/apache/spark/pull/26702#issuecomment-561020937
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114750/
   Test PASSed.


[GitHub] [spark] AmplabJenkins commented on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes predicate - overlaps

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26702: [SPARK-30070][SQL] Support ANSI 
datetimes predicate - overlaps
URL: https://github.com/apache/spark/pull/26702#issuecomment-561020937
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114750/
   Test PASSed.


[GitHub] [spark] AmplabJenkins commented on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes predicate - overlaps

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26702: [SPARK-30070][SQL] Support ANSI 
datetimes predicate - overlaps
URL: https://github.com/apache/spark/pull/26702#issuecomment-561020932
 
 
   Merged build finished. Test PASSed.


[GitHub] [spark] SparkQA commented on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes predicate - overlaps

2019-12-02 Thread GitBox
SparkQA commented on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes 
predicate - overlaps
URL: https://github.com/apache/spark/pull/26702#issuecomment-561020469
 
 
   **[Test build #114750 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114750/testReport)**
 for PR 26702 at commit 
[`3b39ec1`](https://github.com/apache/spark/commit/3b39ec1bbeb9d76f2f2551094feb1a7c08573f13).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.


[GitHub] [spark] SparkQA removed a comment on issue #26702: [SPARK-30070][SQL] Support ANSI datetimes predicate - overlaps

2019-12-02 Thread GitBox
SparkQA removed a comment on issue #26702: [SPARK-30070][SQL] Support ANSI 
datetimes predicate - overlaps
URL: https://github.com/apache/spark/pull/26702#issuecomment-560970222
 
 
   **[Test build #114750 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114750/testReport)**
 for PR 26702 at commit 
[`3b39ec1`](https://github.com/apache/spark/commit/3b39ec1bbeb9d76f2f2551094feb1a7c08573f13).


[GitHub] [spark] cloud-fan commented on a change in pull request #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#discussion_r35386
 
 

 ##
 File path: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
 ##
 @@ -226,10 +226,10 @@ class ExpressionParserSuite extends AnalysisTest {
   }
 
   test("unary arithmetic expressions") {
-assertEqual("+a", 'a)
+assertEqual("+a", UnaryPositive('a))
 assertEqual("-a", -'a)
 assertEqual("~a", ~'a)
-assertEqual("-+~~a", -(~(~'a)))
+assertEqual("-+~~a", -UnaryPositive(~(~'a)))
 
 Review comment:
   shall we create a shortcut '+' for `UnaryPositive` as well? The `-` is 
defined in `org.apache.spark.sql.catalyst.dsl`
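A unary `+` shortcut could mirror the existing `-` shortcut via operator overloading. A minimal toy sketch of the idea (plain Python standing in for Scala's `catalyst.dsl` implicits; the `Expr` class and node names are illustrative only, not Spark's actual Catalyst types):

```python
class Expr:
    """Toy expression nodes; names mirror Catalyst's, but this is
    only an illustration, not Spark's `catalyst.dsl` package."""
    def __init__(self, name, *children):
        self.name = name
        self.children = children

    def __neg__(self):           # existing '-' shortcut -> UnaryMinus
        return Expr("UnaryMinus", self)

    def __pos__(self):           # proposed '+' shortcut -> UnaryPositive
        return Expr("UnaryPositive", self)

    def __repr__(self):
        inner = ", ".join(map(repr, self.children))
        return f"{self.name}({inner})" if self.children else self.name

a = Expr("a")
# With both shortcuts defined, +a builds UnaryPositive(a),
# just as -a builds UnaryMinus(a).
```

With such a shortcut, a test like `assertEqual("+a", +'a)` would read symmetrically with the existing `-` cases.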


[GitHub] [spark] cloud-fan commented on a change in pull request #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#discussion_r352999849
 
 

 ##
 File path: docs/sql-migration-guide.md
 ##
 @@ -254,6 +254,8 @@ license: |
 
 
 
+- Since Spark 3.0, the unary arithmetic operator plus (`+`) only accepts string, numeric and interval type values as inputs. Besides, `+` with an integral string representation is coerced to a double value, e.g. `+'1'` results in `1.0`. In Spark version 2.4 and earlier, there is no type checking for it; thus, all type values with a `+` prefix are valid, e.g. `+ array(1, 2)` is valid and results in `[1, 2]`. Also, there is no implicit type coercion for `+` with string, e.g. `+'1'` results in `1`.
+
+
 
 Review comment:
   In 2.4, `+'1'` results in the string `'1'`.
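The 3.0 behavior being documented above can be sketched as a tiny simulation (plain Python, not Spark code; the `unary_plus` function name and the simplified type model are hypothetical):

```python
def unary_plus(value):
    """Sketch of Spark 3.0's unary `+`: only string, numeric and
    interval inputs are accepted; string inputs are coerced to
    double, mirroring `+'1'` -> 1.0."""
    if isinstance(value, (int, float)):
        return value
    if isinstance(value, str):
        # implicit string-to-double coercion, as described for 3.0
        return float(value)
    # lists stand in for array(...) values, which 3.0 now rejects
    raise TypeError(f"cannot apply unary + to {type(value).__name__}")
```

Under 2.4 semantics, by contrast, `+` was a no-op for any input, so `+'1'` stayed the string `'1'` and `+ array(1, 2)` stayed `[1, 2]`.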


[GitHub] [spark] cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r352998689
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
 
 Review comment:
   It's better to reduce the coupling between the analyzer rule and the type coercion rule. I think here we should turn it into `TimeAdd` if one side is an interval, and the type coercion rule will cast date/string to timestamp for `TimeAdd`.


[GitHub] [spark] cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26412: [SPARK-29774][SQL] Date 
and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r352998814
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,68 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * For [[UnresolvedAdd]]:
+   * 1. If one side is timestamp/date/string and the other side is interval, 
turns it to
+   * [[TimeAdd]];
+   * 2. else if one side is date, turns it to [[DateAdd]] ;
+   * 3. else turns it to [[Add]].
+   *
+   * For [[UnresolvedSubtract]]:
+   * 1. If the left side is timestamp/date/string and the right side is an 
interval, turns it to
 
 Review comment:
   ditto


[GitHub] [spark] AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and 
Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561012407
 
 
   Test FAILed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114754/
   Test FAILed.


[GitHub] [spark] AmplabJenkins removed a comment on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#issuecomment-561012558
 
 
   Merged build finished. Test PASSed.


[GitHub] [spark] AmplabJenkins removed a comment on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#issuecomment-561012567
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114749/
   Test PASSed.


[GitHub] [spark] AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp 
type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561012407
 
 
   Test FAILed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114754/
   Test FAILed.


[GitHub] [spark] AmplabJenkins commented on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#issuecomment-561012558
 
 
   Merged build finished. Test PASSed.


[GitHub] [spark] AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and 
Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561012395
 
 
   Merged build finished. Test FAILed.


[GitHub] [spark] SparkQA removed a comment on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
SparkQA removed a comment on issue #26412: [SPARK-29774][SQL] Date and 
Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561007606
 
 
   **[Test build #114754 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114754/testReport)**
 for PR 26412 at commit 
[`0f5618b`](https://github.com/apache/spark/commit/0f5618b09a8d6527cee6f568b764b4ff059c4e0d).


[GitHub] [spark] SparkQA commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
SparkQA commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type 
+/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561012362
 
 
   **[Test build #114754 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114754/testReport)**
 for PR 26412 at commit 
[`0f5618b`](https://github.com/apache/spark/commit/0f5618b09a8d6527cee6f568b764b4ff059c4e0d).
* This patch **fails Spark unit tests**.
* This patch merges cleanly.
* This patch adds no public classes.


[GitHub] [spark] AmplabJenkins commented on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#issuecomment-561012567
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114749/
   Test PASSed.


[GitHub] [spark] AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp 
type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561012395
 
 
   Merged build finished. Test FAILed.


[GitHub] [spark] SparkQA removed a comment on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
SparkQA removed a comment on issue #26716: [SPARK-30083][SQL] 
visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#issuecomment-560963260
 
 
   **[Test build #114749 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114749/testReport)**
 for PR 26716 at commit 
[`170819c`](https://github.com/apache/spark/commit/170819c0c705593002192ce653b4e96af27f1198).


[GitHub] [spark] SparkQA commented on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary should wrap PLUS case with UnaryPositive for type checking

2019-12-02 Thread GitBox
SparkQA commented on issue #26716: [SPARK-30083][SQL] visitArithmeticUnary 
should wrap PLUS case with UnaryPositive for type checking
URL: https://github.com/apache/spark/pull/26716#issuecomment-561012104
 
 
   **[Test build #114749 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114749/testReport)**
 for PR 26716 at commit 
[`170819c`](https://github.com/apache/spark/commit/170819c0c705593002192ce653b4e96af27f1198).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.


[GitHub] [spark] dongjoon-hyun commented on a change in pull request #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
dongjoon-hyun commented on a change in pull request #26738: [SPARK-30082][SQL] 
Do not replace Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#discussion_r352996206
 
 

 ##
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/DataFrameNaFunctions.scala
 ##
 @@ -456,11 +456,23 @@ final class DataFrameNaFunctions private[sql](df: 
DataFrame) {
 val keyExpr = df.col(col.name).expr
 def buildExpr(v: Any) = Cast(Literal(v), keyExpr.dataType)
 val branches = replacementMap.flatMap { case (source, target) =>
-  Seq(buildExpr(source), buildExpr(target))
+  if (isNaN(source) || isNaN(target)) {
+col.dataType match {
+  case IntegerType | LongType | ShortType | ByteType => Seq.empty
 
 Review comment:
   Thank you for making a PR and I fully understand this issue, @johnhany97 .
   
   One concern is that the current behavior is consistent with Spark's `CAST` operation, which converts `NaN` to `0` during DOUBLE-to-INT casting. Since Apache Spark first casts the given value to the given column type, `NaN` becomes `0`.
   ```scala
   scala> Seq(Double.NaN, 0.0).toDF.selectExpr("cast(value as int)").show
   +-----+
   |value|
   +-----+
   |    0|
   |    0|
   +-----+
   ```
   So, it's a natural behavior in that sequence. However, I agree that the `na` function specifically needs this fix.
   
   Hi, @gatorsmile and @cloud-fan. What do you think about this PR?
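The intent of the fix under discussion can be sketched outside Spark (the `replacement_branches` helper below is hypothetical, mimicking the idea behind the patched `DataFrameNaFunctions.replace`, not its actual code):

```python
import math

def replacement_branches(col_dtype, replacement_map):
    """Sketch of the fix's intent: when the source or target of a
    replacement is NaN and the column is integral, emit no branch
    at all, so integer zeros are left untouched (instead of NaN
    being cast to 0 and silently matching them)."""
    integral = {"int", "long", "short", "byte"}
    branches = []
    for source, target in replacement_map.items():
        has_nan = (isinstance(source, float) and math.isnan(source)) or \
                  (isinstance(target, float) and math.isnan(target))
        if has_nan and col_dtype in integral:
            continue  # NaN is meaningless for integral columns
        branches.append((source, target))
    return branches
```

For a floating column the NaN branch is kept, so `df.na.replace` still rewrites NaNs there; for an integral column the branch is dropped entirely rather than degenerating into a replace-zero branch.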


[GitHub] [spark] iRakson commented on a change in pull request #26467: [SPARK-29477]Improve tooltip for Streaming tab

2019-12-02 Thread GitBox
iRakson commented on a change in pull request #26467: [SPARK-29477]Improve 
tooltip for Streaming tab
URL: https://github.com/apache/spark/pull/26467#discussion_r352995040
 
 

 ##
 File path: 
streaming/src/test/scala/org/apache/spark/streaming/UISeleniumSuite.scala
 ##
 @@ -162,7 +163,7 @@ class UISeleniumSuite
 outputOpIds.map(_.text) should be (List("0", "1"))
 
 // Check job ids
-val jobIdCells = findAll(cssSelector( """#batch-job-table a""")).toSeq
+val jobIdCells = findAll(cssSelector( """#jobId""")).toSeq
 
 Review comment:
   OK, I understand. Now I have removed all the user-facing changes and added a filter to filter out the jobIds. I guess this will be fine.


[GitHub] [spark] gengliangwang commented on a change in pull request #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
gengliangwang commented on a change in pull request #26412: [SPARK-29774][SQL] 
Date and Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#discussion_r352993670
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
 ##
 @@ -246,6 +247,54 @@ class Analyzer(
   CleanupAliases)
   )
 
+  /**
+   * 1. Turns Add/Subtract of DateType/TimestampType/StringType and 
CalendarIntervalType
+   *to TimeAdd/TimeSub.
+   * 2. Turns Add/Subtract of TimestampType/DateType/IntegerType
+   *and TimestampType/IntegerType/DateType to 
DateAdd/DateSub/SubtractDates and
+   *to SubtractTimestamps.
+   * 3. Turns Multiply/Divide of CalendarIntervalType and NumericType
+   *to MultiplyInterval/DivideInterval
+   */
+  case class ResolveBinaryArithmetic(conf: SQLConf) extends Rule[LogicalPlan] {
+override def apply(plan: LogicalPlan): LogicalPlan = 
plan.resolveOperatorsUp {
+  case p: LogicalPlan => p.transformExpressionsUp {
+case UnresolvedAdd(l, r) => (l.dataType, r.dataType) match {
+  case (TimestampType | DateType | StringType, CalendarIntervalType) =>
+Cast(TimeAdd(l, r), l.dataType)
+  case (CalendarIntervalType, TimestampType | DateType | StringType) =>
+Cast(TimeAdd(r, l), r.dataType)
+  case (DateType, _) => DateAdd(l, r)
 
 Review comment:
   @maropu It's true that there is no active work on that. We should revisit it and try creating a full plan next Q1/Q2.


[GitHub] [spark] cloud-fan commented on a change in pull request #26741: [SPARK-30104][SQL] Fix catalog resolution for 'global_temp'

2019-12-02 Thread GitBox
cloud-fan commented on a change in pull request #26741: [SPARK-30104][SQL] Fix 
catalog resolution for 'global_temp'
URL: https://github.com/apache/spark/pull/26741#discussion_r352993187
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2SQLSuite.scala
 ##
 @@ -1813,6 +1813,17 @@ class DataSourceV2SQLSuite
 }
   }
 
+  test("global temp db is used as a table name under v2 catalog") {
+val globalTempDB = 
spark.sessionState.conf.getConf(StaticSQLConf.GLOBAL_TEMP_DATABASE)
+val t = s"testcat.$globalTempDB"
+withTable(t) {
+  sql(s"CREATE TABLE $t (id bigint, data string) USING foo")
+  sql("USE testcat")
+  // The following should not throw AnalysisException, but should use 
`testcat.$globalTempDB`.
+  sql(s"DESCRIBE TABLE $globalTempDB")
 
 Review comment:
   what happens if we have a table `testcat` under catalog `testcat`, and then 
run `DESCRIBE TABLE testcat`?


[GitHub] [spark] AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and 
Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561007875
 
 
   Merged build finished. Test PASSed.


[GitHub] [spark] AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26412: [SPARK-29774][SQL] Date and 
Timestamp type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561007881
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19576/
   Test PASSed.


[GitHub] [spark] AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp 
type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561007875
 
 
   Merged build finished. Test PASSed.


[GitHub] [spark] AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp 
type +/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561007881
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19576/
   Test PASSed.





[GitHub] [spark] SparkQA commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type +/- null should be null as Postgres

2019-12-02 Thread GitBox
SparkQA commented on issue #26412: [SPARK-29774][SQL] Date and Timestamp type 
+/- null should be null as Postgres
URL: https://github.com/apache/spark/pull/26412#issuecomment-561007606
 
 
   **[Test build #114754 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114754/testReport)**
 for PR 26412 at commit 
[`0f5618b`](https://github.com/apache/spark/commit/0f5618b09a8d6527cee6f568b764b4ff059c4e0d).


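The SPARK-29774 thread above is just CI traffic, but its title states the intended semantics: date/timestamp arithmetic with a null operand should yield null, as in Postgres. A minimal Python sketch of that null-propagation rule, assuming `None` stands in for SQL null (the function name and types are illustrative, not Spark's API):

```python
from datetime import datetime, timedelta

def ts_add(ts, delta):
    # Null-propagation: if either operand is null (None here), the
    # result is null, matching the Postgres behavior the PR title cites.
    if ts is None or delta is None:
        return None
    return ts + delta

print(ts_add(datetime(2019, 12, 2), None))             # None
print(ts_add(None, timedelta(days=1)))                 # None
print(ts_add(datetime(2019, 12, 2), timedelta(days=1)))
```

The point is that the null check happens before any type-specific arithmetic, so both `date + null` and `timestamp + null` fall through to the same rule.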



[GitHub] [spark] AmplabJenkins removed a comment on issue #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26434: [SPARK-29544] [SQL] optimize 
skewed partition based on data size
URL: https://github.com/apache/spark/pull/26434#issuecomment-561004901
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114751/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26434: [SPARK-29544] [SQL] optimize skewed 
partition based on data size
URL: https://github.com/apache/spark/pull/26434#issuecomment-561004901
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114751/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26434: [SPARK-29544] [SQL] optimize skewed 
partition based on data size
URL: https://github.com/apache/spark/pull/26434#issuecomment-561004893
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26434: [SPARK-29544] [SQL] optimize 
skewed partition based on data size
URL: https://github.com/apache/spark/pull/26434#issuecomment-561004893
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size

2019-12-02 Thread GitBox
SparkQA removed a comment on issue #26434: [SPARK-29544] [SQL] optimize skewed 
partition based on data size
URL: https://github.com/apache/spark/pull/26434#issuecomment-560975134
 
 
   **[Test build #114751 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114751/testReport)**
 for PR 26434 at commit 
[`18cdcd9`](https://github.com/apache/spark/commit/18cdcd98771dfb708bea6939dd5082e7bfaf7670).





[GitHub] [spark] SparkQA commented on issue #26434: [SPARK-29544] [SQL] optimize skewed partition based on data size

2019-12-02 Thread GitBox
SparkQA commented on issue #26434: [SPARK-29544] [SQL] optimize skewed 
partition based on data size
URL: https://github.com/apache/spark/pull/26434#issuecomment-561004374
 
 
   **[Test build #114751 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114751/testReport)**
 for PR 26434 at commit 
[`18cdcd9`](https://github.com/apache/spark/commit/18cdcd98771dfb708bea6939dd5082e7bfaf7670).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.


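SPARK-29544's title, "optimize skewed partition based on data size", suggests splitting an oversized shuffle partition into roughly target-sized chunks of map-output blocks. A hypothetical sketch of that size-based splitting, under the assumption that consecutive blocks are packed greedily; this is not the PR's actual implementation:

```python
def split_by_size(block_sizes, target):
    # Greedily pack consecutive map-output blocks into chunks whose
    # total size stays at or under `target`; a single block larger
    # than `target` gets a chunk of its own.
    chunks, current, current_size = [], [], 0
    for i, size in enumerate(block_sizes):
        if current and current_size + size > target:
            chunks.append(current)
            current, current_size = [], 0
        current.append(i)
        current_size += size
    if current:
        chunks.append(current)
    return chunks

print(split_by_size([10, 10, 90, 5, 5], 20))  # [[0, 1], [2], [3, 4]]
```

Each resulting chunk can then be read by its own task, so one skewed partition no longer serializes the whole stage.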



[GitHub] [spark] srowen commented on issue #26742: [SPARK-30051][BUILD] Clean up hadoop-3.2 dependency

2019-12-02 Thread GitBox
srowen commented on issue #26742: [SPARK-30051][BUILD] Clean up hadoop-3.2 
dependency
URL: https://github.com/apache/spark/pull/26742#issuecomment-561002412
 
 
   What's the logic here -- it's not actually needed at compile time because?





[GitHub] [spark] AmplabJenkins removed a comment on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26738: [SPARK-30082][SQL] Do not 
replace Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#issuecomment-560999453
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26738: [SPARK-30082][SQL] Do not 
replace Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#issuecomment-560999461
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19575/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26738: [SPARK-30082][SQL] Do not replace 
Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#issuecomment-560999461
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder-K8s/19575/
   Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26738: [SPARK-30082][SQL] Do not replace 
Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#issuecomment-560999453
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA commented on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
SparkQA commented on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when 
replacing NaNs
URL: https://github.com/apache/spark/pull/26738#issuecomment-560999147
 
 
   **[Test build #114753 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114753/testReport)**
 for PR 26738 at commit 
[`ed6f08d`](https://github.com/apache/spark/commit/ed6f08dbbc0fb4f04e9ae9b59c119351bd4ca038).


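The behavior fixed by SPARK-30082 ("Do not replace Zeros when replacing NaNs") can be illustrated outside Spark: a NaN-replacement pass must test for NaN specifically and leave `0.0` untouched. A hedged sketch of the intended semantics, not the code in Spark's `DataFrameNaFunctions`:

```python
import math

def replace_nan(values, replacement):
    # Replace only genuine NaN entries; every other value, including
    # 0.0, passes through unchanged.
    return [replacement if isinstance(v, float) and math.isnan(v) else v
            for v in values]

print(replace_nan([0.0, float("nan"), 1.5], -1.0))  # [0.0, -1.0, 1.5]
```

Note that an equality test cannot do this job: `float("nan") == float("nan")` is `False`, so an explicit `isnan` check is required, and a conversion that maps NaN to 0 before the check would conflate the two values.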



[GitHub] [spark] AmplabJenkins commented on issue #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26684: [SPARK-30001][SQL] ResolveRelations 
should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#issuecomment-560999052
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114747/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#issuecomment-560999052
 
 
   Test PASSed.
   Refer to this link for build results (access rights to CI server needed): 
   https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/114747/
   Test PASSed.





[GitHub] [spark] AmplabJenkins removed a comment on issue #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
AmplabJenkins removed a comment on issue #26684: [SPARK-30001][SQL] 
ResolveRelations should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#issuecomment-560999045
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] AmplabJenkins commented on issue #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
AmplabJenkins commented on issue #26684: [SPARK-30001][SQL] ResolveRelations 
should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#issuecomment-560999045
 
 
   Merged build finished. Test PASSed.





[GitHub] [spark] SparkQA removed a comment on issue #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
SparkQA removed a comment on issue #26684: [SPARK-30001][SQL] ResolveRelations 
should handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#issuecomment-560950068
 
 
   **[Test build #114747 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114747/testReport)**
 for PR 26684 at commit 
[`985e84d`](https://github.com/apache/spark/commit/985e84db41650113241393d112680769ab524105).





[GitHub] [spark] SparkQA commented on issue #26684: [SPARK-30001][SQL] ResolveRelations should handle both V1 and V2 tables.

2019-12-02 Thread GitBox
SparkQA commented on issue #26684: [SPARK-30001][SQL] ResolveRelations should 
handle both V1 and V2 tables.
URL: https://github.com/apache/spark/pull/26684#issuecomment-560998658
 
 
   **[Test build #114747 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/114747/testReport)**
 for PR 26684 at commit 
[`985e84d`](https://github.com/apache/spark/commit/985e84db41650113241393d112680769ab524105).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds no public classes.





[GitHub] [spark] dongjoon-hyun commented on issue #26738: [SPARK-30082][SQL] Do not replace Zeros when replacing NaNs

2019-12-02 Thread GitBox
dongjoon-hyun commented on issue #26738: [SPARK-30082][SQL] Do not replace 
Zeros when replacing NaNs
URL: https://github.com/apache/spark/pull/26738#issuecomment-560998290
 
 
   ok to test




