[jira] [Updated] (SPARK-39639) Fix possible null pointer in MySQLDialect listIndexes

2022-06-30 Thread panbingkun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-39639:
---
Description: 
h3. when:
{quote}val indexComment = rs.getString("Index_comment") {color:#172b4d}*returns null*{color}
{quote}
h3. then:
{quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) {color:#172b4d}*throws NullPointerException*{color}
{quote}
h3. finally:
{quote}MySQLDialect.listIndexes {color:#172b4d}returns an incorrect result (ignores the existing index).{color}
{quote}
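
A minimal sketch of the kind of guard that avoids the NPE (illustrative only, 
not necessarily the exact patch):
{code:scala}
// Treat a SQL NULL comment as "no comment" instead of calling
// nonEmpty on a null reference.
val indexComment = rs.getString("Index_comment")
if (indexComment != null && indexComment.nonEmpty) {
  properties.put("COMMENT", indexComment)
}
{code}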

  was:
h3. when:
{quote}val indexComment = rs.getString("Index_comment") {color:#172b4d}*returns null*{color}
{quote}
FYI:

String ResultSet.getString(String columnLabel) throws SQLException;
returns the column value; if the value is SQL NULL, the value returned is null
h3. then:
{quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) {color:#172b4d}*throws NullPointerException*{color}
{quote}
h3. finally:
{quote}MySQLDialect.listIndexes {color:#172b4d}returns an incorrect result (ignores the existing index).{color}
{quote}


> Fix possible null pointer in MySQLDialect listIndexes
> -
>
> Key: SPARK-39639
> URL: https://issues.apache.org/jira/browse/SPARK-39639
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: panbingkun
>Priority: Minor
>
> h3. when:
> {quote}val indexComment = rs.getString("Index_comment") 
> {color:#172b4d}*returns null*{color}
> {quote}
> h3. then:
> {quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) 
> {color:#172b4d}*throws NullPointerException*{color}
> {quote}
> h3. finally:
> {quote}MySQLDialect.listIndexes {color:#172b4d}returns an incorrect result 
> (ignores the existing index).{color}
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-39639) Fix possible null pointer in MySQLDialect listIndexes

2022-06-30 Thread panbingkun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-39639:
---
Description: 
h3. when:
{quote}val indexComment = rs.getString("Index_comment") {color:#172b4d}*returns null*{color}
{quote}
FYI:

String ResultSet.getString(String columnLabel) throws SQLException;
returns the column value; if the value is SQL NULL, the value returned is null
h3. then:
{quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) {color:#172b4d}*throws NullPointerException*{color}
{quote}
h3. finally:
{quote}MySQLDialect.listIndexes {color:#172b4d}returns an incorrect result (ignores the existing index).{color}
{quote}

  was:
h3. when:
{quote}val indexComment = rs.getString("Index_comment") *returns null*
{quote}
FYI:
{quote}String ResultSet.getString(String columnLabel) throws SQLException;
returns the column value; if the value is SQL NULL, the value returned is null
{quote}
h3. then:
{quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) *throws NullPointerException*
{quote}
h3. finally:
{quote}MySQLDialect.listIndexes returns an incorrect result (ignores the existing index).
{quote}


> Fix possible null pointer in MySQLDialect listIndexes
> -
>
> Key: SPARK-39639
> URL: https://issues.apache.org/jira/browse/SPARK-39639
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: panbingkun
>Priority: Minor
>
> h3. when:
> {quote}val indexComment = rs.getString("Index_comment") 
> {color:#172b4d}*returns null*{color}
> {quote}
> FYI:
> String ResultSet.getString(String columnLabel) throws SQLException;
> returns the column value; if the value is SQL NULL, the value returned is null
> h3. then:
> {quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) 
> {color:#172b4d}*throws NullPointerException*{color}
> {quote}
> h3. finally:
> {quote}MySQLDialect.listIndexes {color:#172b4d}returns an incorrect result 
> (ignores the existing index).{color}
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-39639) Fix possible null pointer in MySQLDialect listIndexes

2022-06-30 Thread panbingkun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-39639:
---
Description: 
h3. when:
{quote}val indexComment = rs.getString("Index_comment") *returns null*
{quote}
FYI:
{quote}String ResultSet.getString(String columnLabel) throws SQLException;
returns the column value; if the value is SQL NULL, the value returned is null
{quote}
h3. then:
{quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) *throws NullPointerException*
{quote}
h3. finally:
{quote}MySQLDialect.listIndexes returns an incorrect result (ignores the existing index).
{quote}

> Fix possible null pointer in MySQLDialect listIndexes
> -
>
> Key: SPARK-39639
> URL: https://issues.apache.org/jira/browse/SPARK-39639
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: panbingkun
>Priority: Minor
>
> h3. when:
> {quote}val indexComment = rs.getString("Index_comment") *returns null*
> {quote}
> FYI:
> {quote}String ResultSet.getString(String columnLabel) throws SQLException;
> returns the column value; if the value is SQL NULL, the value returned is null
> {quote}
> h3. then:
> {quote}if (indexComment.nonEmpty) properties.put("COMMENT", indexComment) *throws NullPointerException*
> {quote}
> h3. finally:
> {quote}MySQLDialect.listIndexes returns an incorrect result (ignores the existing index).
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-39639) Fix possible null pointer in MySQLDialect listIndexes

2022-06-30 Thread panbingkun (Jira)
panbingkun created SPARK-39639:
--

 Summary: Fix possible null pointer in MySQLDialect listIndexes
 Key: SPARK-39639
 URL: https://issues.apache.org/jira/browse/SPARK-39639
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.4.0
Reporter: panbingkun
 Fix For: 3.4.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-39626) Upgrade RoaringBitmap from 0.9.28 to 0.9.30

2022-06-28 Thread panbingkun (Jira)

 [ 
https://issues.apache.org/jira/browse/SPARK-39626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-39626:
---
Summary: Upgrade RoaringBitmap from 0.9.28 to 0.9.30  (was: Upgrade 
RoaringBitmap from 0.9.28 to 0.9.30 for fix bug)



--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)



[jira] [Created] (SPARK-39626) Upgrade RoaringBitmap from 0.9.28 to 0.9.30 for fix bug

2022-06-28 Thread panbingkun (Jira)
panbingkun created SPARK-39626:
--

 Summary: Upgrade RoaringBitmap from 0.9.28 to 0.9.30 for fix bug
 Key: SPARK-39626
 URL: https://issues.apache.org/jira/browse/SPARK-39626
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.4.0
Reporter: panbingkun
Assignee: Unassigned
 Fix For: 3.4.0
 Created: 28/Jun/22 06:03
 Priority: Minor


https://github.com/RoaringBitmap/RoaringBitmap/compare/0.9.28...0.9.30 fixes 
previousValue returning a value smaller than the first value



--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)

[jira] [Created] (SPARK-39613) Upgrade shapeless to 2.3.9

2022-06-27 Thread panbingkun (Jira)
panbingkun created SPARK-39613:
--

 Summary: Upgrade shapeless to 2.3.9
 Key: SPARK-39613
 URL: https://issues.apache.org/jira/browse/SPARK-39613
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.4.0
Reporter: panbingkun
 Fix For: 3.4.0


https://github.com/milessabin/shapeless/releases/tag/v2.3.9



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] (SPARK-39527) V2Catalog rename not support newIdent with catalog

2022-06-22 Thread panbingkun (Jira)


[ https://issues.apache.org/jira/browse/SPARK-39527 ]


panbingkun deleted comment on SPARK-39527:


was (Author: panbingkun):
I'm working on it!

> V2Catalog rename not support newIdent with catalog
> --
>
> Key: SPARK-39527
> URL: https://issues.apache.org/jira/browse/SPARK-39527
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: angerszhu
>Priority: Major
>
> {code:java}
>   test("rename a table") {
> sql("ALTER TABLE h2.test.empty_table RENAME TO h2.test.empty_table2")
> checkAnswer(
>   sql("SHOW TABLES IN h2.test"),
>   Seq(Row("test", "empty_table2")))
>   }
> {code}
> {code:java}
> [info] - rename a table *** FAILED *** (2 seconds, 358 milliseconds)
> [info]   org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException: 
> Failed table renaming from test.empty_table to h2.test.empty_table2
> [info]   at 
> org.apache.spark.sql.jdbc.H2Dialect$.classifyException(H2Dialect.scala:117)
> [info]   at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.classifyException(JdbcUtils.scala:1176)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog.$anonfun$renameTable$1(JDBCTableCatalog.scala:102)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog.$anonfun$renameTable$1$adapted(JDBCTableCatalog.scala:100)
> [info]   at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.withConnection(JdbcUtils.scala:1184)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog.renameTable(JDBCTableCatalog.scala:100)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.RenameTableExec.run(RenameTableExec.scala:51)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:171)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
> [info]   at 
> org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
> [info]   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
> [info]   at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: 

[jira] [Commented] (SPARK-39527) V2Catalog rename not support newIdent with catalog

2022-06-22 Thread panbingkun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557393#comment-17557393
 ] 

panbingkun commented on SPARK-39527:


I'm working on it!

> V2Catalog rename not support newIdent with catalog
> --
>
> Key: SPARK-39527
> URL: https://issues.apache.org/jira/browse/SPARK-39527
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.3.0
>Reporter: angerszhu
>Priority: Major
>
> {code:java}
>   test("rename a table") {
> sql("ALTER TABLE h2.test.empty_table RENAME TO h2.test.empty_table2")
> checkAnswer(
>   sql("SHOW TABLES IN h2.test"),
>   Seq(Row("test", "empty_table2")))
>   }
> {code}
> {code:java}
> [info] - rename a table *** FAILED *** (2 seconds, 358 milliseconds)
> [info]   org.apache.spark.sql.catalyst.analysis.NoSuchNamespaceException: 
> Failed table renaming from test.empty_table to h2.test.empty_table2
> [info]   at 
> org.apache.spark.sql.jdbc.H2Dialect$.classifyException(H2Dialect.scala:117)
> [info]   at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.classifyException(JdbcUtils.scala:1176)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog.$anonfun$renameTable$1(JDBCTableCatalog.scala:102)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog.$anonfun$renameTable$1$adapted(JDBCTableCatalog.scala:100)
> [info]   at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.withConnection(JdbcUtils.scala:1184)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog.renameTable(JDBCTableCatalog.scala:100)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.RenameTableExec.run(RenameTableExec.scala:51)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
> [info]   at 
> org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:171)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
> [info]   at 
> org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
> [info]   at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
> [info]   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
> [info]   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
> [info]   at 
> org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
> [info]   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
> [info]   at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: 

[jira] [Commented] (SPARK-39098) Test the error class: PIVOT_VALUE_DATA_TYPE_MISMATCH

2022-05-06 Thread panbingkun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-39098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17532686#comment-17532686
 ] 

panbingkun commented on SPARK-39098:


Duplicate of https://issues.apache.org/jira/browse/SPARK-38748?

> Test the error class: PIVOT_VALUE_DATA_TYPE_MISMATCH
> 
>
> Key: SPARK-39098
> URL: https://issues.apache.org/jira/browse/SPARK-39098
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Priority: Minor
>  Labels: starter
>
> Add tests for the error class *PIVOT_VALUE_DATA_TYPE_MISMATCH* to 
> QueryCompilationErrorsSuite. The test should cover the exception thrown in 
> QueryCompilationErrors:
> {code:scala}
>   def pivotValDataTypeMismatchError(pivotVal: Expression, 
>       pivotCol: Expression): Throwable = {
>     new AnalysisException(
>       errorClass = "PIVOT_VALUE_DATA_TYPE_MISMATCH",
>       messageParameters = Array(
>         pivotVal.toString, pivotVal.dataType.simpleString, 
>         pivotCol.dataType.catalogString))
>   }
> {code}
> For example, here is a test for the error class {*}UNSUPPORTED_FEATURE{*}: 
> [https://github.com/apache/spark/blob/34e3029a43d2a8241f70f2343be8285cb7f231b9/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala#L151-L170]
> +The test must have a check of:+
>  # the entire error message
>  # sqlState if it is defined in the error-classes.json file
>  # the error class



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38690) Use error classes in the compilation errors of SHOW CREATE TABLE

2022-04-26 Thread panbingkun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17527960#comment-17527960
 ] 

panbingkun commented on SPARK-38690:


I will work on this. Thanks [~maxgekk] 

> Use error classes in the compilation errors of SHOW CREATE TABLE
> 
>
> Key: SPARK-38690
> URL: https://issues.apache.org/jira/browse/SPARK-38690
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Priority: Major
>
> Migrate the following errors in QueryCompilationErrors:
> * showCreateTableAsSerdeNotSupportedForV2TablesError
> * showCreateTableNotSupportedOnTempView
> * showCreateTableFailToExecuteUnsupportedFeatureError
> * showCreateTableNotSupportTransactionalHiveTableError
> * showCreateTableFailToExecuteUnsupportedConfError
> * showCreateTableAsSerdeNotAllowedOnSparkDataSourceTableError
> * showCreateTableOrViewFailToExecuteUnsupportedFeatureError
> to use error classes. Throw an implementation of SparkThrowable. Also write 
> a test for every error in QueryCompilationErrorsSuite.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38734) Test the error class: INDEX_OUT_OF_BOUNDS

2022-04-21 Thread panbingkun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17526173#comment-17526173
 ] 

panbingkun commented on SPARK-38734:


ignore it!

> Test the error class: INDEX_OUT_OF_BOUNDS
> -
>
> Key: SPARK-38734
> URL: https://issues.apache.org/jira/browse/SPARK-38734
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Priority: Minor
>  Labels: starter
>
> Add at least one test for the error class *INDEX_OUT_OF_BOUNDS* to 
> QueryExecutionErrorsSuite. The test should cover the exception thrown in 
> QueryExecutionErrors:
> {code:scala}
>   def indexOutOfBoundsOfArrayDataError(idx: Int): Throwable = {
>     new SparkIndexOutOfBoundsException(errorClass = "INDEX_OUT_OF_BOUNDS", 
>       Array(idx.toString))
>   }
> {code}
> For example, here is a test for the error class *UNSUPPORTED_FEATURE*: 
> https://github.com/apache/spark/blob/34e3029a43d2a8241f70f2343be8285cb7f231b9/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala#L151-L170
> +The test must have a check of:+
> # the entire error message
> # sqlState if it is defined in the error-classes.json file
> # the error class



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38960) Spark should fail fast if initial memory too large (set by "spark.executor.extraJavaOptions") for executor to start

2022-04-19 Thread panbingkun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524654#comment-17524654
 ] 

panbingkun commented on SPARK-38960:


I will do it

> Spark should fail fast if initial memory too large (set by 
> "spark.executor.extraJavaOptions") for executor to start
> --
>
> Key: SPARK-38960
> URL: https://issues.apache.org/jira/browse/SPARK-38960
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, Spark Submit, YARN
>Affects Versions: 3.4.0
>Reporter: panbingkun
>Priority: Minor
> Fix For: 3.4.0
>
>
> If you set the initial memory (set by 
> "spark.executor.extraJavaOptions=-Xms\{XXX}G") larger than the maximum 
> memory (set by "spark.executor.memory"), e.g.
>      *spark.executor.memory=1G*
>      *spark.executor.extraJavaOptions=-Xms2G*
>  
> then from the driver process you just see executor failures with no warning, 
> since the more meaningful errors are buried in the executor logs. 
> E.g., on YARN, you see:
> {noformat}
> Error occurred during initialization of VM
> Initial heap size set to a larger value than the maximum heap size{noformat}
> Instead, we should fail fast with a clear error message in the driver 
> logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-38960) Spark should fail fast if initial memory too large (set by "spark.executor.extraJavaOptions") for executor to start

2022-04-19 Thread panbingkun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-38960:
---
Description: 
If you set the initial memory (set by "spark.executor.extraJavaOptions=-Xms\{XXX}G") 
larger than the maximum memory (set by "spark.executor.memory"), e.g.

     *spark.executor.memory=1G*

     *spark.executor.extraJavaOptions=-Xms2G*

then from the driver process you just see executor failures with no warning, since 
the more meaningful errors are buried in the executor logs.

E.g., on YARN, you see:
{noformat}
Error occurred during initialization of VM
Initial heap size set to a larger value than the maximum heap size{noformat}
Instead, we should fail fast with a clear error message in the driver logs.
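
A hedged sketch of the kind of driver-side check intended (the method name and 
unit handling here are hypothetical, not Spark's actual API):
{code:scala}
// Hypothetical validation: extract -Xms from extraJavaOptions and fail
// fast if it exceeds spark.executor.memory, instead of letting every
// executor die with a JVM error buried in its own log.
def validateExecutorInitialMemory(executorMemoryMiB: Long, extraJavaOptions: String): Unit = {
  val XmsPattern = """-Xms(\d+)([gGmM])""".r
  XmsPattern.findFirstMatchIn(extraJavaOptions).foreach { m =>
    val value = m.group(1).toLong
    val initialMiB = if (m.group(2).equalsIgnoreCase("g")) value * 1024 else value
    require(initialMiB <= executorMemoryMiB,
      s"Initial heap size (-Xms${m.group(1)}${m.group(2)}) must not be larger than " +
        s"the maximum heap size (spark.executor.memory=${executorMemoryMiB}m)")
  }
}
{code}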

  was:
If you set the initial memory (set by "spark.executor.extraJavaOptions=-Xms\{XXX}G") 
larger than the maximum memory (set by "spark.executor.memory"), e.g.

spark.executor.memory=1G

spark.executor.extraJavaOptions=-Xms2G

then from the driver process you just see executor failures with no warning, since 
the more meaningful errors are buried in the executor logs.

E.g., on YARN, you see:

Error occurred during initialization of VM

Initial heap size set to a larger value than the maximum heap size

Instead, we should fail fast with a clear error message in the driver logs.


> Spark should fail fast if initial memory too large (set by 
> "spark.executor.extraJavaOptions") for executor to start
> --
>
> Key: SPARK-38960
> URL: https://issues.apache.org/jira/browse/SPARK-38960
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, Spark Submit, YARN
>Affects Versions: 3.4.0
>Reporter: panbingkun
>Priority: Minor
> Fix For: 3.4.0
>
>
> If you set the initial memory (set by 
> "spark.executor.extraJavaOptions=-Xms\{XXX}G") larger than the maximum 
> memory (set by "spark.executor.memory"), e.g.
>      *spark.executor.memory=1G*
>      *spark.executor.extraJavaOptions=-Xms2G*
>  
> then from the driver process you just see executor failures with no warning, 
> since the more meaningful errors are buried in the executor logs. 
> E.g., on YARN, you see:
> {noformat}
> Error occurred during initialization of VM
> Initial heap size set to a larger value than the maximum heap size{noformat}
> Instead, we should fail fast with a clear error message in the driver 
> logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-38960) Spark should fail fast if initial memory too large (set by "spark.executor.extraJavaOptions") for executor to start

2022-04-19 Thread panbingkun (Jira)
panbingkun created SPARK-38960:
--

 Summary: Spark should fail fast if initial memory too large (set by 
"spark.executor.extraJavaOptions") for executor to start
 Key: SPARK-38960
 URL: https://issues.apache.org/jira/browse/SPARK-38960
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, Spark Submit, YARN
Affects Versions: 3.4.0
Reporter: panbingkun
 Fix For: 3.4.0


If you set the initial memory (set by "spark.executor.extraJavaOptions=-Xms\{XXX}G") 
larger than the maximum memory (set by "spark.executor.memory"), e.g.

spark.executor.memory=1G

spark.executor.extraJavaOptions=-Xms2G

then from the driver process you just see executor failures with no warning, since 
the more meaningful errors are buried in the executor logs.

E.g., on YARN, you see:

Error occurred during initialization of VM

Initial heap size set to a larger value than the maximum heap size

Instead, we should fail fast with a clear error message in the driver logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38725) Test the error class: DUPLICATE_KEY

2022-04-12 Thread panbingkun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17521003#comment-17521003
 ] 

panbingkun commented on SPARK-38725:


I am working on this. Thanks [~maxgekk] 

> Test the error class: DUPLICATE_KEY
> ---
>
> Key: SPARK-38725
> URL: https://issues.apache.org/jira/browse/SPARK-38725
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.4.0
>Reporter: Max Gekk
>Priority: Minor
>  Labels: starter
>
> Add at least one test for the error class *DUPLICATE_KEY* to 
> QueryParsingErrorsSuite. The test should cover the exception thrown in 
> QueryParsingErrors:
> {code:scala}
>   def duplicateKeysError(key: String, ctx: ParserRuleContext): Throwable = {
>     // Found duplicate keys '$key'
>     new ParseException(errorClass = "DUPLICATE_KEY", 
>       messageParameters = Array(key), ctx)
>   }
> {code}
> For example, here is a test for the error class *UNSUPPORTED_FEATURE*: 
> https://github.com/apache/spark/blob/34e3029a43d2a8241f70f2343be8285cb7f231b9/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryCompilationErrorsSuite.scala#L151-L170
> +The test must have a check of:+
> # the entire error message
> # sqlState if it is defined in the error-classes.json file
> # the error class
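
A hedged sketch of what such a test could look like inside 
QueryParsingErrorsSuite (the triggering SQL and exact assertions are 
assumptions, not the final test):
{code:scala}
// Duplicate keys in OPTIONS go through ParserUtils.checkDuplicateKeys,
// which calls duplicateKeysError and raises DUPLICATE_KEY at parse time.
test("DUPLICATE_KEY: duplicate keys in table options") {
  val e = intercept[ParseException] {
    sql("CREATE TABLE t (i INT) USING parquet OPTIONS ('k'='v1', 'k'='v2')")
  }
  assert(e.getErrorClass === "DUPLICATE_KEY")
  assert(e.getMessage.contains("Found duplicate keys"))
}
{code}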



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21724) Missing since information in the documentation of date functions

2017-08-14 Thread panbingkun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125520#comment-16125520
 ] 

panbingkun commented on SPARK-21724:


In DescribeFunctionCommand, the version is available via info.getSince.
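
A rough sketch of how that could surface in DESCRIBE FUNCTION EXTENDED output 
(the surrounding command structure is simplified and assumed, not the actual 
DescribeFunctionCommand code):
{code:scala}
// Look up the function's ExpressionInfo and append a "Since" line to
// the extended description when the version is present.
val info = sparkSession.sessionState.catalog.lookupFunctionInfo(functionName)
val since = info.getSince
val sinceRow =
  if (since != null && since.nonEmpty) Seq(Row(s"Since: $since")) else Nil
val result = Seq(
  Row(s"Function: ${info.getName}"),
  Row(s"Usage: ${info.getUsage}")) ++ sinceRow
{code}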

> Missing since information in the documentation of date functions
> 
>
> Key: SPARK-21724
> URL: https://issues.apache.org/jira/browse/SPARK-21724
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation, SQL
>Affects Versions: 2.3.0
>Reporter: Hyukjin Kwon
>Priority: Minor
>
> Currently, we have missing version information for Spark SQL's builtin 
> functions. 
> Please see https://spark-test.github.io/sparksqldoc/
> For example, we could add the version information as below:
> {code}
> spark-sql> describe function extended datediff;
> Function: datediff
> ...
> Extended Usage:
> Examples:
>   ...
> Since: 1.5.0
> {code}
> and also in the SQL builtin function documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-21625) sqrt(negative number) should be null

2017-08-03 Thread panbingkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-21625:
---
Comment: was deleted

(was: case class Sqrt(child: Expression) extends UnaryMathExpression(math.sqrt, "SQRT") {
  protected override def nullSafeEval(input: Any): Any = {
    if (input.asInstanceOf[Double] < 0) {
      null
    } else {
      f(input.asInstanceOf[Double])
    }
  }

  override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode = {
    nullSafeCodeGen(ctx, ev, c => {
      s"""
        if ($c < 0) {
          ${ev.isNull} = true;
        } else {
          ${ev.value} = java.lang.Math.sqrt($c);
        }
      """
    })
  }
})

> sqrt(negative number) should be null
> 
>
> Key: SPARK-21625
> URL: https://issues.apache.org/jira/browse/SPARK-21625
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Yuming Wang
>
> Both Hive and MySQL are null:
> {code:sql}
> hive> select SQRT(-10.0);
> OK
> NULL
> Time taken: 0.384 seconds, Fetched: 1 row(s)
> {code}
> {code:sql}
> mysql> select sqrt(-10.0);
> +---+
> | sqrt(-10.0) |
> +---+
> |  NULL |
> +---+
> 1 row in set (0.00 sec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21625) sqrt(negative number) should be null

2017-08-03 Thread panbingkun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112571#comment-16112571
 ] 

panbingkun commented on SPARK-21625:


case class Sqrt(child: Expression) extends UnaryMathExpression(math.sqrt, "SQRT") {
  // Interpreted path: return null instead of NaN for negative inputs.
  protected override def nullSafeEval(input: Any): Any = {
    if (input.asInstanceOf[Double] < 0) {
      null
    } else {
      f(input.asInstanceOf[Double])
    }
  }

  // Codegen path: mark the result as null for negative inputs.
  override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode = {
    nullSafeCodeGen(ctx, ev, c => {
      s"""
        if ($c < 0) {
          ${ev.isNull} = true;
        } else {
          ${ev.value} = java.lang.Math.sqrt($c);
        }
      """
    })
  }
}
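
With such a change, Spark SQL would presumably match the Hive/MySQL behavior 
quoted below, e.g.:
{code:sql}
spark-sql> SELECT sqrt(-10.0);
NULL
{code}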

> sqrt(negative number) should be null
> 
>
> Key: SPARK-21625
> URL: https://issues.apache.org/jira/browse/SPARK-21625
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Yuming Wang
>
> Both Hive and MySQL are null:
> {code:sql}
> hive> select SQRT(-10.0);
> OK
> NULL
> Time taken: 0.384 seconds, Fetched: 1 row(s)
> {code}
> {code:sql}
> mysql> select sqrt(-10.0);
> +---+
> | sqrt(-10.0) |
> +---+
> |  NULL |
> +---+
> 1 row in set (0.00 sec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Issue Comment Deleted] (SPARK-21625) sqrt(negative number) should be null

2017-08-03 Thread panbingkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panbingkun updated SPARK-21625:
---
Comment: was deleted

(was: So, if Spark SQL is to follow this behavior, we must modify:
case class Sqrt(child: Expression) extends UnaryMathExpression(math.sqrt, "SQRT"))

> sqrt(negative number) should be null
> 
>
> Key: SPARK-21625
> URL: https://issues.apache.org/jira/browse/SPARK-21625
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Yuming Wang
>
> Both Hive and MySQL are null:
> {code:sql}
> hive> select SQRT(-10.0);
> OK
> NULL
> Time taken: 0.384 seconds, Fetched: 1 row(s)
> {code}
> {code:sql}
> mysql> select sqrt(-10.0);
> +---+
> | sqrt(-10.0) |
> +---+
> |  NULL |
> +---+
> 1 row in set (0.00 sec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21625) sqrt(negative number) should be null

2017-08-03 Thread panbingkun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112455#comment-16112455
 ] 

panbingkun commented on SPARK-21625:


So, if Spark SQL is to follow this behavior, we must modify:
case class Sqrt(child: Expression) extends UnaryMathExpression(math.sqrt, "SQRT")

> sqrt(negative number) should be null
> 
>
> Key: SPARK-21625
> URL: https://issues.apache.org/jira/browse/SPARK-21625
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Yuming Wang
>
> Both Hive and MySQL are null:
> {code:sql}
> hive> select SQRT(-10.0);
> OK
> NULL
> Time taken: 0.384 seconds, Fetched: 1 row(s)
> {code}
> {code:sql}
> mysql> select sqrt(-10.0);
> +---+
> | sqrt(-10.0) |
> +---+
> |  NULL |
> +---+
> 1 row in set (0.00 sec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21625) sqrt(negative number) should be null

2017-08-03 Thread panbingkun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112453#comment-16112453
 ] 

panbingkun commented on SPARK-21625:


In Java (via the Scala REPL):
scala> java.lang.Math.sqrt(-10.0);
res1: Double = NaN

> sqrt(negative number) should be null
> 
>
> Key: SPARK-21625
> URL: https://issues.apache.org/jira/browse/SPARK-21625
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Yuming Wang
>
> Both Hive and MySQL are null:
> {code:sql}
> hive> select SQRT(-10.0);
> OK
> NULL
> Time taken: 0.384 seconds, Fetched: 1 row(s)
> {code}
> {code:sql}
> mysql> select sqrt(-10.0);
> +---+
> | sqrt(-10.0) |
> +---+
> |  NULL |
> +---+
> 1 row in set (0.00 sec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21625) sqrt(negative number) should be null

2017-08-03 Thread panbingkun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112442#comment-16112442
 ] 

panbingkun commented on SPARK-21625:


In Scala:

scala> math.sqrt(-10.0);
res0: Double = NaN

> sqrt(negative number) should be null
> 
>
> Key: SPARK-21625
> URL: https://issues.apache.org/jira/browse/SPARK-21625
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.3.0
>Reporter: Yuming Wang
>
> Both Hive and MySQL are null:
> {code:sql}
> hive> select SQRT(-10.0);
> OK
> NULL
> Time taken: 0.384 seconds, Fetched: 1 row(s)
> {code}
> {code:sql}
> mysql> select sqrt(-10.0);
> +---+
> | sqrt(-10.0) |
> +---+
> |  NULL |
> +---+
> 1 row in set (0.00 sec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-21609) In the Master ui add "log directory" display, is conducive to users to quickly find the log directory path.

2017-08-02 Thread panbingkun (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110524#comment-16110524
 ] 

panbingkun commented on SPARK-21609:


Standalone mode?

> In the Master ui add "log directory" display, is conducive to users to 
> quickly find the log directory path.
> ---
>
> Key: SPARK-21609
> URL: https://issues.apache.org/jira/browse/SPARK-21609
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.3.0
>Reporter: guoxiaolongzte
>
> Add a "log directory" display to the Master UI so that users can quickly 
> find the log directory path.
> During Spark application development, we view not only the executor and 
> driver logs but also the master and worker logs. The current UI does not 
> show the master and worker log paths, so users cannot easily find them. 
> Therefore, I add a "log directory" display.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org