[jira] [Created] (SPARK-43859) Override toString in LateralColumnAliasReference

2023-05-28 Thread Yuming Wang (Jira)
Yuming Wang created SPARK-43859:
---

 Summary: Override toString in LateralColumnAliasReference
 Key: SPARK-43859
 URL: https://issues.apache.org/jira/browse/SPARK-43859
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.5.0
Reporter: Yuming Wang



{code:sql}
select id + 1 as a1, a1 + 2 as a3 from range(10);
{code}

Before:
{noformat}
Project [(id#2L + 1) AS a1#0, (lateralAliasReference('a1, a1, 'a1) + 2) AS a3#1]
+- Range (0, 10, step=1, splits=None)
{noformat}

After:
{noformat}
Project [(id#2L + 1) AS a1#0, (lateralAliasReference(a1) + 2) AS a3#1]
+- Range (0, 10, step=1, splits=None)
{noformat}
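As a rough, standalone sketch of the change (this is not the actual Catalyst class; the field names and types here are simplified), overriding {{toString}} on the expression is what controls how the reference is rendered in the plan:

{code:scala}
// Simplified stand-in for the Catalyst expression, used only to show the
// effect of overriding toString on plan rendering. Before this change the
// reference was rendered with all of its arguments (see the "Before" plan);
// the override keeps only the alias name (the "After" plan).
case class LateralColumnAliasReference(name: String, nameParts: Seq[String], tag: String) {
  override def toString: String = s"lateralAliasReference($name)"
}

object ToStringDemo extends App {
  println(LateralColumnAliasReference("a1", Seq("a1"), "'a1"))
  // prints: lateralAliasReference(a1)
}
{code}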








[jira] [Created] (SPARK-43858) make benchmark GA use Scala 2.13 as default

2023-05-28 Thread Yang Jie (Jira)
Yang Jie created SPARK-43858:


 Summary: make benchmark GA use Scala 2.13 as default
 Key: SPARK-43858
 URL: https://issues.apache.org/jira/browse/SPARK-43858
 Project: Spark
  Issue Type: Improvement
  Components: Project Infra
Affects Versions: 3.5.0
Reporter: Yang Jie









[jira] [Commented] (SPARK-43857) Generating benchmark results using Scala 2.13

2023-05-28 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727016#comment-17727016
 ] 

Yang Jie commented on SPARK-43857:
--

Friendly ping [~dongjoon]. If this one is a duplicate, please let me know.

 

> Generating benchmark results using Scala 2.13
> -
>
> Key: SPARK-43857
> URL: https://issues.apache.org/jira/browse/SPARK-43857
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, Tests
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Created] (SPARK-43857) Generating benchmark results using Scala 2.13

2023-05-28 Thread Yang Jie (Jira)
Yang Jie created SPARK-43857:


 Summary: Generating benchmark results using Scala 2.13
 Key: SPARK-43857
 URL: https://issues.apache.org/jira/browse/SPARK-43857
 Project: Spark
  Issue Type: Improvement
  Components: Build, Tests
Affects Versions: 3.5.0
Reporter: Yang Jie









[jira] [Created] (SPARK-43856) Assign a name to the error class _LEGACY_ERROR_TEMP_2425

2023-05-28 Thread jiaan.geng (Jira)
jiaan.geng created SPARK-43856:
--

 Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_2425
 Key: SPARK-43856
 URL: https://issues.apache.org/jira/browse/SPARK-43856
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.5.0
Reporter: jiaan.geng









[jira] [Created] (SPARK-43855) Assign a name to the error class _LEGACY_ERROR_TEMP_2423

2023-05-28 Thread jiaan.geng (Jira)
jiaan.geng created SPARK-43855:
--

 Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_2423
 Key: SPARK-43855
 URL: https://issues.apache.org/jira/browse/SPARK-43855
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.5.0
Reporter: jiaan.geng









[jira] [Created] (SPARK-43854) Assign a name to the error class _LEGACY_ERROR_TEMP_2421

2023-05-28 Thread jiaan.geng (Jira)
jiaan.geng created SPARK-43854:
--

 Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_2421
 Key: SPARK-43854
 URL: https://issues.apache.org/jira/browse/SPARK-43854
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.5.0
Reporter: jiaan.geng









[jira] [Created] (SPARK-43852) Assign a name to the error class _LEGACY_ERROR_TEMP_2418

2023-05-28 Thread jiaan.geng (Jira)
jiaan.geng created SPARK-43852:
--

 Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_2418
 Key: SPARK-43852
 URL: https://issues.apache.org/jira/browse/SPARK-43852
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.5.0
Reporter: jiaan.geng









[jira] [Created] (SPARK-43853) Assign a name to the error class _LEGACY_ERROR_TEMP_2419

2023-05-28 Thread jiaan.geng (Jira)
jiaan.geng created SPARK-43853:
--

 Summary: Assign a name to the error class _LEGACY_ERROR_TEMP_2419
 Key: SPARK-43853
 URL: https://issues.apache.org/jira/browse/SPARK-43853
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.5.0
Reporter: jiaan.geng









[jira] [Updated] (SPARK-43851) Support LCA in grouping expressions

2023-05-28 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated SPARK-43851:

Description: 
Teradata supports it:
{code:sql}
create table t1(a int) using  parquet;
select a + 1 as a1, a1 + 1 as a2 from t1 group by a1, a2;
{code}


{noformat}
[UNSUPPORTED_FEATURE.LATERAL_COLUMN_ALIAS_IN_GROUP_BY] The feature is not 
supported: Referencing a lateral column alias via GROUP BY alias/ALL is not 
supported yet.
{noformat}



  was:

{code:sql}
create table t1(a int) using  parquet;
select a + 1 as a1, a1 + 1 as a2 from t1 group by a1, a2;
{code}


{noformat}
[UNSUPPORTED_FEATURE.LATERAL_COLUMN_ALIAS_IN_GROUP_BY] The feature is not 
supported: Referencing a lateral column alias via GROUP BY alias/ALL is not 
supported yet.
{noformat}




> Support LCA in grouping expressions
> ---
>
> Key: SPARK-43851
> URL: https://issues.apache.org/jira/browse/SPARK-43851
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: Yuming Wang
>Priority: Major
>
> Teradata supports it:
> {code:sql}
> create table t1(a int) using  parquet;
> select a + 1 as a1, a1 + 1 as a2 from t1 group by a1, a2;
> {code}
> {noformat}
> [UNSUPPORTED_FEATURE.LATERAL_COLUMN_ALIAS_IN_GROUP_BY] The feature is not 
> supported: Referencing a lateral column alias via GROUP BY alias/ALL is not 
> supported yet.
> {noformat}






[jira] [Created] (SPARK-43851) Support LCA in grouping expressions

2023-05-28 Thread Yuming Wang (Jira)
Yuming Wang created SPARK-43851:
---

 Summary: Support LCA in grouping expressions
 Key: SPARK-43851
 URL: https://issues.apache.org/jira/browse/SPARK-43851
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.5.0
Reporter: Yuming Wang



{code:sql}
create table t1(a int) using  parquet;
select a + 1 as a1, a1 + 1 as a2 from t1 group by a1, a2;
{code}


{noformat}
[UNSUPPORTED_FEATURE.LATERAL_COLUMN_ALIAS_IN_GROUP_BY] The feature is not 
supported: Referencing a lateral column alias via GROUP BY alias/ALL is not 
supported yet.
{noformat}
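Until this is supported, the query can be expressed today by repeating the underlying expressions in GROUP BY. A minimal sketch of that workaround, assuming a SparkSession named {{spark}} and the {{t1}} table from the example above:

{code:scala}
// Desired syntax (currently rejected with
// UNSUPPORTED_FEATURE.LATERAL_COLUMN_ALIAS_IN_GROUP_BY):
//   select a + 1 as a1, a1 + 1 as a2 from t1 group by a1, a2
//
// Workaround: avoid the lateral alias and group by the expressions themselves.
spark.sql(
  """select a + 1 as a1, (a + 1) + 1 as a2
    |from t1
    |group by a + 1, (a + 1) + 1""".stripMargin).show()
{code}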








[jira] [Commented] (SPARK-43799) Protobuf: Support binary descriptor set API in Python

2023-05-28 Thread Snoot.io (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727005#comment-17727005
 ] 

Snoot.io commented on SPARK-43799:
--

User 'rangadi' has created a pull request for this issue:
https://github.com/apache/spark/pull/41343

> Protobuf: Support binary descriptor set API in Python
> -
>
> Key: SPARK-43799
> URL: https://issues.apache.org/jira/browse/SPARK-43799
> Project: Spark
>  Issue Type: Task
>  Components: Protobuf
>Affects Versions: 3.5.0
>Reporter: Raghu Angadi
>Priority: Major
>
> SPARK-43530 adds a new API to pass a binary FileDescriptorSet in Scala. We need 
> to add this support in Python.






[jira] [Commented] (SPARK-43821) Make the prompt for `findJar` method in IntegrationTestUtils clearer

2023-05-28 Thread Snoot.io (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727003#comment-17727003
 ] 

Snoot.io commented on SPARK-43821:
--

User 'panbingkun' has created a pull request for this issue:
https://github.com/apache/spark/pull/41336

> Make the prompt for `findJar` method in IntegrationTestUtils clearer
> 
>
> Key: SPARK-43821
> URL: https://issues.apache.org/jira/browse/SPARK-43821
> Project: Spark
>  Issue Type: Improvement
>  Components: Connect, Tests
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.5.0
>
>
> When I am running tests in ClientE2ETestSuite and they fail, I often cannot 
> locate the cause from the error prompt and can only search the code for the 
> specific reason.
>  * Before applying this patch, the error prompt is as follows:
> Exception encountered when invoking run on a nested suite - Failed to find 
> the jar inside folder: .../spark-community/connector/connect/server/target
>  
>  * After applying this patch, the error prompt is as follows:
> Exception encountered when invoking run on a nested suite - Failed to find 
> the jar: {color:#ff}spark-connect-assembly(.{*}).jar or 
> spark-connect(.{*})3.5.0-SNAPSHOT.jar {color}inside folder: 
> .../spark-community/connector/connect/server/target. {color:#ff}This file 
> can be generated by a command similar to the following: build/sbt 
> package|assembly{color}
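A minimal sketch of the idea behind the improved prompt (this is not the real IntegrationTestUtils code; the helper name, signature, and message wording are illustrative): report both the jar patterns that were searched and the command that produces the missing artifact.

{code:scala}
import java.io.File

// Illustrative helper only, modeled on the description above.
def findJar(targetDir: String, patterns: Seq[String]): File = {
  val files = Option(new File(targetDir).listFiles()).getOrElse(Array.empty[File])
  files.find(f => patterns.exists(p => f.getName.matches(p))).getOrElse {
    throw new IllegalStateException(
      s"Failed to find the jar: ${patterns.mkString(" or ")} inside folder: $targetDir. " +
        "This file can be generated by a command similar to: build/sbt package|assembly")
  }
}
{code}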






[jira] [Assigned] (SPARK-43821) Make the prompt for `findJar` method in IntegrationTestUtils clearer

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-43821:
-

Assignee: BingKun Pan

> Make the prompt for `findJar` method in IntegrationTestUtils clearer
> 
>
> Key: SPARK-43821
> URL: https://issues.apache.org/jira/browse/SPARK-43821
> Project: Spark
>  Issue Type: Improvement
>  Components: Connect, Tests
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>
> When I am running tests in ClientE2ETestSuite and they fail, I often cannot 
> locate the cause from the error prompt and can only search the code for the 
> specific reason.
>  * Before applying this patch, the error prompt is as follows:
> Exception encountered when invoking run on a nested suite - Failed to find 
> the jar inside folder: .../spark-community/connector/connect/server/target
>  
>  * After applying this patch, the error prompt is as follows:
> Exception encountered when invoking run on a nested suite - Failed to find 
> the jar: {color:#ff}spark-connect-assembly(.{*}).jar or 
> spark-connect(.{*})3.5.0-SNAPSHOT.jar {color}inside folder: 
> .../spark-community/connector/connect/server/target. {color:#ff}This file 
> can be generated by a command similar to the following: build/sbt 
> package|assembly{color}






[jira] [Resolved] (SPARK-43821) Make the prompt for `findJar` method in IntegrationTestUtils clearer

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-43821.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41336
[https://github.com/apache/spark/pull/41336]

> Make the prompt for `findJar` method in IntegrationTestUtils clearer
> 
>
> Key: SPARK-43821
> URL: https://issues.apache.org/jira/browse/SPARK-43821
> Project: Spark
>  Issue Type: Improvement
>  Components: Connect, Tests
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.5.0
>
>
> When I am running tests in ClientE2ETestSuite and they fail, I often cannot 
> locate the cause from the error prompt and can only search the code for the 
> specific reason.
>  * Before applying this patch, the error prompt is as follows:
> Exception encountered when invoking run on a nested suite - Failed to find 
> the jar inside folder: .../spark-community/connector/connect/server/target
>  
>  * After applying this patch, the error prompt is as follows:
> Exception encountered when invoking run on a nested suite - Failed to find 
> the jar: {color:#ff}spark-connect-assembly(.{*}).jar or 
> spark-connect(.{*})3.5.0-SNAPSHOT.jar {color}inside folder: 
> .../spark-community/connector/connect/server/target. {color:#ff}This file 
> can be generated by a command similar to the following: build/sbt 
> package|assembly{color}






[jira] [Updated] (SPARK-43850) Cleanup unused imports related suppression rules for Scala 2.13

2023-05-28 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-43850:
-
Summary: Cleanup unused imports related suppression rules for Scala 2.13  
(was: Cleanup Wunused imports related suppression rules for Scala 2.13)

> Cleanup unused imports related suppression rules for Scala 2.13
> ---
>
> Key: SPARK-43850
> URL: https://issues.apache.org/jira/browse/SPARK-43850
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Created] (SPARK-43850) Cleanup Wunused imports related suppression rules for Scala 2.13

2023-05-28 Thread Yang Jie (Jira)
Yang Jie created SPARK-43850:


 Summary: Cleanup Wunused imports related suppression rules for 
Scala 2.13
 Key: SPARK-43850
 URL: https://issues.apache.org/jira/browse/SPARK-43850
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.5.0
Reporter: Yang Jie









[jira] [Created] (SPARK-43849) Enable unused imports check for Scala 2.13

2023-05-28 Thread Yang Jie (Jira)
Yang Jie created SPARK-43849:


 Summary: Enable unused imports check for Scala 2.13
 Key: SPARK-43849
 URL: https://issues.apache.org/jira/browse/SPARK-43849
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.5.0
Reporter: Yang Jie









[jira] [Commented] (SPARK-43657) reuse SPARK_CONF_DIR config maps between driver and executor

2023-05-28 Thread Snoot.io (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727001#comment-17727001
 ] 

Snoot.io commented on SPARK-43657:
--

User 'advancedxy' has created a pull request for this issue:
https://github.com/apache/spark/pull/41257

> reuse SPARK_CONF_DIR config maps between driver and executor
> 
>
> Key: SPARK-43657
> URL: https://issues.apache.org/jira/browse/SPARK-43657
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.2.4, 3.3.2, 3.4.0
>Reporter: YE
>Priority: Major
>
> Currently, Spark on Kubernetes in cluster mode creates two config maps per 
> application: one for the driver and another for the executor. However, the 
> executor config map is almost identical to the driver config map, so there is 
> no need to create two duplicate config maps. Since ConfigMaps are objects on 
> K8S, creating extra ones has drawbacks:
>  # more config maps means more objects in etcd, which adds overhead to the 
> API server
>  # the Spark driver pod might run under limited permissions, meaning it may 
> only be allowed to create executor pods and not other resources, so the 
> driver might not be allowed to create config maps
> I will submit a PR to reuse the SPARK_CONF_DIR config map when running Spark 
> on Kubernetes in cluster mode.
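A hedged sketch of the proposal using the fabric8 Kubernetes client (this is not the actual Spark-on-K8s feature-step code; the ConfigMap name and data are made up): create one SPARK_CONF_DIR ConfigMap and have both the driver and the executor pod specs mount it by name.

{code:scala}
import io.fabric8.kubernetes.api.model.{ConfigMapBuilder, VolumeBuilder}

// One ConfigMap carrying the application's SPARK_CONF_DIR contents.
val confMapName = "spark-conf-map-demo"
val sparkConfMap = new ConfigMapBuilder()
  .withNewMetadata().withName(confMapName).endMetadata()
  .addToData("spark-defaults.conf", "spark.executor.instances 2\n")
  .build()

// Both the driver pod and the executor pods mount the same ConfigMap, instead
// of the executor builder creating a near-identical second copy.
val sparkConfVolume = new VolumeBuilder()
  .withName("spark-conf-volume")
  .withNewConfigMap().withName(confMapName).endConfigMap()
  .build()
{code}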






[jira] [Commented] (SPARK-43657) reuse SPARK_CONF_DIR config maps between driver and executor

2023-05-28 Thread Snoot.io (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727000#comment-17727000
 ] 

Snoot.io commented on SPARK-43657:
--

User 'advancedxy' has created a pull request for this issue:
https://github.com/apache/spark/pull/41257

> reuse SPARK_CONF_DIR config maps between driver and executor
> 
>
> Key: SPARK-43657
> URL: https://issues.apache.org/jira/browse/SPARK-43657
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.2.4, 3.3.2, 3.4.0
>Reporter: YE
>Priority: Major
>
> Currently, Spark on Kubernetes in cluster mode creates two config maps per 
> application: one for the driver and another for the executor. However, the 
> executor config map is almost identical to the driver config map, so there is 
> no need to create two duplicate config maps. Since ConfigMaps are objects on 
> K8S, creating extra ones has drawbacks:
>  # more config maps means more objects in etcd, which adds overhead to the 
> API server
>  # the Spark driver pod might run under limited permissions, meaning it may 
> only be allowed to create executor pods and not other resources, so the 
> driver might not be allowed to create config maps
> I will submit a PR to reuse the SPARK_CONF_DIR config map when running Spark 
> on Kubernetes in cluster mode.






[jira] [Assigned] (SPARK-43845) Setup Scala 2.12 Daily GitHub Action Job

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-43845:
-

Assignee: Dongjoon Hyun

> Setup Scala 2.12 Daily GitHub Action Job
> 
>
> Key: SPARK-43845
> URL: https://issues.apache.org/jira/browse/SPARK-43845
> Project: Spark
>  Issue Type: Test
>  Components: Project Infra
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>







[jira] [Resolved] (SPARK-43845) Setup Scala 2.12 Daily GitHub Action Job

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-43845.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41354
[https://github.com/apache/spark/pull/41354]

> Setup Scala 2.12 Daily GitHub Action Job
> 
>
> Key: SPARK-43845
> URL: https://issues.apache.org/jira/browse/SPARK-43845
> Project: Spark
>  Issue Type: Test
>  Components: Project Infra
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
> Fix For: 3.5.0
>
>







[jira] [Updated] (SPARK-43848) Web UI WholeStageCodegen duration is 0 ms

2023-05-28 Thread chengxingfu (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengxingfu updated SPARK-43848:

Component/s: SQL
 (was: Web UI)

> Web UI WholeStageCodegen duration is 0 ms
> -
>
> Key: SPARK-43848
> URL: https://issues.apache.org/jira/browse/SPARK-43848
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.2.0
> Environment: spark local mode
>Reporter: chengxingfu
>Priority: Major
> Attachments: 0ms.jpg
>
>
> In the Web UI, when I run a query with a LIMIT clause, the duration of the 
> WholeStageCodegen operator is always 0 ms.
> The corresponding feature is SPARK-13916, commit id 
> 76958d820f57d23e3cbb5b7205c680a5daea0499. durationMs is updated only when we 
> iterate to the last row of a partition, so when we only iterate over a few 
> rows, the duration will always be 0 ms.
> The code below reproduces the issue:
> spark.sql("use tpcds1g")
> spark.sql("""
> select i_item_sk from item
> limit 100
> """).collect






[jira] [Updated] (SPARK-43848) Web UI WholeStageCodegen duration is 0 ms

2023-05-28 Thread chengxingfu (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengxingfu updated SPARK-43848:

Description: 
In the Web UI when I’m running a query with limit clause, the duration of 
WholeStageCodegen operator is always 0 ms.
the corresponding feature is SPARK-13916, commitid: 
76958d820f57d23e3cbb5b7205c680a5daea0499 . durationMs update only when we 
iterate the last row of partition, but when we only iterate a few rows, the 
duration will always be 0 ms

below  code will repetition the issue:

spark.sql("use tpcds1g")
spark.sql("""
select i_item_sk from item
limit 100
""").collect

  was:
In the Web UI when I’m running a query with limit clause, the duration of 
WholeStageCodegen operator is always 0 ms.
the corresponding feature is SPARK-13916, commitid: 5e86e926 . durationMs 
update only when we iterate the last row of partition, but when we only iterate 
a few rows, the duration will always be 0 ms

below  code will repetition the issue:

spark.sql("use tpcds1g")
spark.sql("""
select i_item_sk from item
limit 100
""").collect


> Web UI WholeStageCodegen duration is 0 ms
> -
>
> Key: SPARK-43848
> URL: https://issues.apache.org/jira/browse/SPARK-43848
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.2.0
> Environment: spark local mode
>Reporter: chengxingfu
>Priority: Major
> Attachments: 0ms.jpg
>
>
> In the Web UI, when I run a query with a LIMIT clause, the duration of the 
> WholeStageCodegen operator is always 0 ms.
> The corresponding feature is SPARK-13916, commit id 
> 76958d820f57d23e3cbb5b7205c680a5daea0499. durationMs is updated only when we 
> iterate to the last row of a partition, so when we only iterate over a few 
> rows, the duration will always be 0 ms.
> The code below reproduces the issue:
> spark.sql("use tpcds1g")
> spark.sql("""
> select i_item_sk from item
> limit 100
> """).collect






[jira] [Updated] (SPARK-43848) Web UI WholeStageCodegen duration is 0 ms

2023-05-28 Thread chengxingfu (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengxingfu updated SPARK-43848:

Description: 
In the Web UI when I’m running a query with limit clause, the duration of 
WholeStageCodegen operator is always 0 ms.
the corresponding feature is SPARK-13916, commitid: 5e86e926 . durationMs 
update only when we iterate the last row of partition, but when we only iterate 
a few rows, the duration will always be 0 ms

below  code will repetition the issue:

spark.sql("use tpcds1g")
spark.sql("""
select i_item_sk from item
limit 100
""").collect

  was:
In the Web UI when I’m running a query with limit clause, the duration of 
WholeStageCodegen operator is always 0 ms.
the corresponding feature is SPARK-13916, commitid: 5e86e926 . durationMs 
update only when we iterate the last row of partition, but when we only iterate 
a few row, the duration will always be 0 ms

below  code will repetition the issue:

spark.sql("use tpcds1g")
spark.sql("""
select i_item_sk from item
limit 100
""").collect


> Web UI WholeStageCodegen duration is 0 ms
> -
>
> Key: SPARK-43848
> URL: https://issues.apache.org/jira/browse/SPARK-43848
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.2.0
> Environment: spark local mode
>Reporter: chengxingfu
>Priority: Major
> Attachments: 0ms.jpg
>
>
> In the Web UI, when I run a query with a LIMIT clause, the duration of the 
> WholeStageCodegen operator is always 0 ms.
> The corresponding feature is SPARK-13916, commit id 5e86e926. durationMs is 
> updated only when we iterate to the last row of a partition, so when we only 
> iterate over a few rows, the duration will always be 0 ms.
> The code below reproduces the issue:
> spark.sql("use tpcds1g")
> spark.sql("""
> select i_item_sk from item
> limit 100
> """).collect






[jira] [Updated] (SPARK-43848) Web UI WholeStageCodegen duration is 0 ms

2023-05-28 Thread chengxingfu (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengxingfu updated SPARK-43848:

Attachment: 0ms.jpg

> Web UI WholeStageCodegen duration is 0 ms
> -
>
> Key: SPARK-43848
> URL: https://issues.apache.org/jira/browse/SPARK-43848
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 3.2.0
> Environment: spark local mode
>Reporter: chengxingfu
>Priority: Major
> Attachments: 0ms.jpg
>
>
> In the Web UI, when I run a query with a LIMIT clause, the duration of the 
> WholeStageCodegen operator is always 0 ms.
> The corresponding feature is SPARK-13916, commit id 5e86e926. durationMs is 
> updated only when we iterate to the last row of a partition, so when we only 
> iterate over a few rows, the duration will always be 0 ms.
> The code below reproduces the issue:
> spark.sql("use tpcds1g")
> spark.sql("""
> select i_item_sk from item
> limit 100
> """).collect






[jira] [Created] (SPARK-43848) Web UI WholeStageCodegen duration is 0 ms

2023-05-28 Thread chengxingfu (Jira)
chengxingfu created SPARK-43848:
---

 Summary: Web UI WholeStageCodegen duration is 0 ms
 Key: SPARK-43848
 URL: https://issues.apache.org/jira/browse/SPARK-43848
 Project: Spark
  Issue Type: Bug
  Components: Web UI
Affects Versions: 3.2.0
 Environment: spark local mode
Reporter: chengxingfu


In the Web UI, when I run a query with a LIMIT clause, the duration of the 
WholeStageCodegen operator is always 0 ms.
The corresponding feature is SPARK-13916, commit id 5e86e926. durationMs is 
updated only when we iterate to the last row of a partition, so when we only 
iterate over a few rows, the duration will always be 0 ms.

The code below reproduces the issue:

spark.sql("use tpcds1g")
spark.sql("""
select i_item_sk from item
limit 100
""").collect

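A self-contained variant of the repro that does not need the TPC-DS data (a hedged sketch: the table name and row count are arbitrary, and it assumes the SQL tab of the local UI is reachable at the default port):

{code:scala}
import org.apache.spark.sql.SparkSession

object LimitDurationRepro extends App {
  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  // A query with LIMIT stops consuming each partition's iterator early, so the
  // durationMs metric is never updated and the SQL tab shows 0 ms for the
  // WholeStageCodegen node, as described above.
  spark.range(0, 10000000L).createOrReplaceTempView("item")
  spark.sql("select id as i_item_sk from item limit 100").collect()

  // Keep the application alive briefly to inspect http://localhost:4040/SQL/
  Thread.sleep(60000)
  spark.stop()
}
{code}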





[jira] [Assigned] (SPARK-42421) Use the utils to get the switch for dynamic allocation used in local checkpoint

2023-05-28 Thread Kent Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kent Yao reassigned SPARK-42421:


Assignee: Apache Spark

> Use the utils to get the switch for dynamic allocation used in local 
> checkpoint
> ---
>
> Key: SPARK-42421
> URL: https://issues.apache.org/jira/browse/SPARK-42421
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.3.0, 3.2.3
>Reporter: Wanqiang Ji
>Assignee: Apache Spark
>Priority: Minor
>
> Use _Utils#isDynamicAllocationEnabled_ to determine whether dynamic 
> allocation is enabled.
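A minimal sketch of the intent ({{Utils}} is Spark-internal, so this only compiles from inside Spark's own source tree; the config values are illustrative):

{code:scala}
package org.apache.spark.demo // under org.apache.spark so private[spark] Utils is visible

import org.apache.spark.SparkConf
import org.apache.spark.util.Utils

object DynamicAllocationCheck {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.master", "yarn")

    // Prefer the shared helper over re-reading spark.dynamicAllocation.enabled
    // at each call site (e.g. local checkpoint), so the corner cases live in one place.
    println(s"dynamic allocation enabled: ${Utils.isDynamicAllocationEnabled(conf)}")
  }
}
{code}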






[jira] [Resolved] (SPARK-42421) Use the utils to get the switch for dynamic allocation used in local checkpoint

2023-05-28 Thread Kent Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kent Yao resolved SPARK-42421.
--
Fix Version/s: 3.5.0
   3.4.1
   Resolution: Fixed

Issue resolved by pull request 39998
[https://github.com/apache/spark/pull/39998]

> Use the utils to get the switch for dynamic allocation used in local 
> checkpoint
> ---
>
> Key: SPARK-42421
> URL: https://issues.apache.org/jira/browse/SPARK-42421
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.3.0, 3.2.3
>Reporter: Wanqiang Ji
>Assignee: Apache Spark
>Priority: Minor
> Fix For: 3.5.0, 3.4.1
>
>
> Use _Utils#isDynamicAllocationEnabled_ to determine whether dynamic 
> allocation is enabled.






[jira] [Created] (SPARK-43847) Protobuf: Throw structured error when reading descriptor file in Python fails

2023-05-28 Thread Raghu Angadi (Jira)
Raghu Angadi created SPARK-43847:


 Summary: Protobuf: Throw structured error when reading descriptor 
file in Python fails
 Key: SPARK-43847
 URL: https://issues.apache.org/jira/browse/SPARK-43847
 Project: Spark
  Issue Type: Task
  Components: Protobuf
Affects Versions: 3.5.0
Reporter: Raghu Angadi


`_read_descriptor_set_file()` in `protobuf.functions` reads a binary descriptor 
set from a file. It should throw a structured Spark SQL error such as 
_PROTOBUF_DESCRIPTOR_FILE_NOT_FOUND_ when the file is missing; it currently 
throws a native Python error. 






[jira] [Created] (SPARK-43846) Use checkError() to check Exception in SessionCatalogSuite

2023-05-28 Thread BingKun Pan (Jira)
BingKun Pan created SPARK-43846:
---

 Summary: Use checkError() to check Exception in SessionCatalogSuite
 Key: SPARK-43846
 URL: https://issues.apache.org/jira/browse/SPARK-43846
 Project: Spark
  Issue Type: Sub-task
  Components: SQL, Tests
Affects Versions: 3.5.0
Reporter: BingKun Pan









[jira] [Created] (SPARK-43845) Setup Scala 2.12 Daily GitHub Action Job

2023-05-28 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-43845:
-

 Summary: Setup Scala 2.12 Daily GitHub Action Job
 Key: SPARK-43845
 URL: https://issues.apache.org/jira/browse/SPARK-43845
 Project: Spark
  Issue Type: Test
  Components: Project Infra
Affects Versions: 3.5.0
Reporter: Dongjoon Hyun









[jira] [Comment Edited] (SPARK-43805) Support SELECT * EXCEPT AND SELECT * REPLACE

2023-05-28 Thread Jia Fan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726627#comment-17726627
 ] 

Jia Fan edited comment on SPARK-43805 at 5/29/23 12:55 AM:
---

To tell the truth, I don't know whether Spark will accept this statement; it 
doesn't look like standard SQL. cc [~cloud_fan] [~dongjoon] 


was (Author: fanjia):
Tell the truth, I'm don't know will spark accept this statement? It doesn't 
look like standard sql. cc [~cloud_fan] [~dongjoon] 

> Support SELECT * EXCEPT AND  SELECT * REPLACE
> -
>
> Key: SPARK-43805
> URL: https://issues.apache.org/jira/browse/SPARK-43805
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: melin
>Priority: Major
>
> ref: 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select_except]
> https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select_replace
> [~fanjia] 
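For context, a hedged illustration of the BigQuery syntax referenced in the links and a rough DataFrame-API equivalent that exists today (run in spark-shell, where {{spark}} is predefined; the table and column names are made up):

{code:scala}
// BigQuery syntax from the linked docs (not valid Spark SQL at the time of
// this discussion):
//   SELECT * EXCEPT (c2) FROM t              -- all columns except c2
//   SELECT * REPLACE (c1 * 2 AS c1) FROM t   -- all columns, with c1 rewritten
import org.apache.spark.sql.functions.col
import spark.implicits._

val t = Seq((1, 2), (3, 4)).toDF("c1", "c2")
t.drop("c2").show()                        // roughly SELECT * EXCEPT (c2)
t.withColumn("c1", col("c1") * 2).show()   // roughly SELECT * REPLACE (c1 * 2 AS c1)
{code}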






[jira] [Resolved] (SPARK-43843) Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError

2023-05-28 Thread Bruce Robbins (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruce Robbins resolved SPARK-43843.
---
Resolution: Invalid

> Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError
> ---
>
> Key: SPARK-43843
> URL: https://issues.apache.org/jira/browse/SPARK-43843
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.5.0
> Environment: Scala version 2.13.8 (Java HotSpot(TM) 64-Bit Server VM, 
> Java 11.0.12)
>Reporter: Bruce Robbins
>Priority: Major
>
> I launched spark-shell as so:
> {noformat}
> bin/spark-shell --driver-memory 8g --jars `find . -name "spark-avro*.jar" | 
> grep -v test | head -1`
> {noformat}
> I got the below error trying to create an AVRO file:
> {noformat}
> scala> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
> val df: org.apache.spark.sql.DataFrame = [a: int, b: int]
> scala> df.write.mode("overwrite").format("avro").save("avro_file")
> df.write.mode("overwrite").format("avro").save("avro_file")
> java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
>   at 
> org.apache.spark.sql.avro.AvroFileFormat.supportFieldName(AvroFileFormat.scala:160)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1(DataSourceUtils.scala:75)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1$adapted(DataSourceUtils.scala:74)
>   at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
>   at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
>   at org.apache.spark.sql.types.StructType.foreach(StructType.scala:105)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.checkFieldNames(DataSourceUtils.scala:74)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:120)
> ...
> scala> 
> {noformat}






[jira] [Commented] (SPARK-43843) Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError

2023-05-28 Thread Bruce Robbins (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726988#comment-17726988
 ] 

Bruce Robbins commented on SPARK-43843:
---

Never mind, I had an old {{spark-avro_2.12-3.5.0-SNAPSHOT.jar}} lying around in 
my {{work}} directory, which the find in my {{--jars}} value picked up first.

> Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError
> ---
>
> Key: SPARK-43843
> URL: https://issues.apache.org/jira/browse/SPARK-43843
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.5.0
> Environment: Scala version 2.13.8 (Java HotSpot(TM) 64-Bit Server VM, 
> Java 11.0.12)
>Reporter: Bruce Robbins
>Priority: Major
>
> I launched spark-shell as so:
> {noformat}
> bin/spark-shell --driver-memory 8g --jars `find . -name "spark-avro*.jar" | 
> grep -v test | head -1`
> {noformat}
> I got the below error trying to create an AVRO file:
> {noformat}
> scala> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
> val df: org.apache.spark.sql.DataFrame = [a: int, b: int]
> scala> df.write.mode("overwrite").format("avro").save("avro_file")
> df.write.mode("overwrite").format("avro").save("avro_file")
> java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
>   at 
> org.apache.spark.sql.avro.AvroFileFormat.supportFieldName(AvroFileFormat.scala:160)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1(DataSourceUtils.scala:75)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1$adapted(DataSourceUtils.scala:74)
>   at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
>   at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
>   at org.apache.spark.sql.types.StructType.foreach(StructType.scala:105)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.checkFieldNames(DataSourceUtils.scala:74)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:120)
> ...
> scala> 
> {noformat}






[jira] [Resolved] (SPARK-43666) Fix BinaryOps.ge to work with Spark Connect Column

2023-05-28 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-43666.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41305
[https://github.com/apache/spark/pull/41305]

> Fix BinaryOps.ge to work with Spark Connect Column
> --
>
> Key: SPARK-43666
> URL: https://issues.apache.org/jira/browse/SPARK-43666
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, Pandas API on Spark
>Affects Versions: 3.5.0
>Reporter: Haejoon Lee
>Assignee: Haejoon Lee
>Priority: Major
> Fix For: 3.5.0
>
>
> Fix BinaryOps.ge to work with Spark Connect Column






[jira] [Assigned] (SPARK-43666) Fix BinaryOps.ge to work with Spark Connect Column

2023-05-28 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-43666:


Assignee: Haejoon Lee

> Fix BinaryOps.ge to work with Spark Connect Column
> --
>
> Key: SPARK-43666
> URL: https://issues.apache.org/jira/browse/SPARK-43666
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, Pandas API on Spark
>Affects Versions: 3.5.0
>Reporter: Haejoon Lee
>Assignee: Haejoon Lee
>Priority: Major
>
> Fix BinaryOps.ge to work with Spark Connect Column






[jira] [Resolved] (SPARK-43842) Upgrade `gcs-connector` to 2.2.14

2023-05-28 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-43842.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41352
[https://github.com/apache/spark/pull/41352]

> Upgrade `gcs-connector` to 2.2.14
> -
>
> Key: SPARK-43842
> URL: https://issues.apache.org/jira/browse/SPARK-43842
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43842) Upgrade `gcs-connector` to 2.2.14

2023-05-28 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-43842:


Assignee: Dongjoon Hyun

> Upgrade `gcs-connector` to 2.2.14
> -
>
> Key: SPARK-43842
> URL: https://issues.apache.org/jira/browse/SPARK-43842
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>







[jira] [Assigned] (SPARK-43840) Switch `scala-213` GitHub Action Job to `scala-212`

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-43840:
-

Assignee: Dongjoon Hyun

> Switch `scala-213` GitHub Action Job to `scala-212`
> ---
>
> Key: SPARK-43840
> URL: https://issues.apache.org/jira/browse/SPARK-43840
> Project: Spark
>  Issue Type: Test
>  Components: Project Infra
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
>







[jira] [Resolved] (SPARK-43840) Switch `scala-213` GitHub Action Job to `scala-212`

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-43840.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41351
[https://github.com/apache/spark/pull/41351]

> Switch `scala-213` GitHub Action Job to `scala-212`
> ---
>
> Key: SPARK-43840
> URL: https://issues.apache.org/jira/browse/SPARK-43840
> Project: Spark
>  Issue Type: Test
>  Components: Project Infra
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Minor
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-39979) IndexOutOfBoundsException on groupby + apply pandas grouped map udf function

2023-05-28 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-39979.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 39572
[https://github.com/apache/spark/pull/39572]

> IndexOutOfBoundsException on groupby + apply pandas grouped map udf function
> 
>
> Key: SPARK-39979
> URL: https://issues.apache.org/jira/browse/SPARK-39979
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.2.1
>Reporter: yaniv oren
>Assignee: Adam Binford
>Priority: Major
> Fix For: 3.5.0
>
>
> I'm grouping on a relatively small subset of groups, but the groups themselves 
> are large. I'm working with pyarrow version 2.0.0, and the machines' memory is 
> {color:#44}64 GiB.{color}
> I'm getting the following error:
> {code:java}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 387 
> in stage 162.0 failed 4 times, most recent failure: Lost task 387.3 in stage 
> 162.0 (TID 29957) (ip-172-21-129-187.eu-west-1.compute.internal executor 71): 
> java.lang.IndexOutOfBoundsException: index: 2147483628, length: 36 (expected: 
> range(0, 2147483648))
>   at org.apache.arrow.memory.ArrowBuf.checkIndex(ArrowBuf.java:699)
>   at org.apache.arrow.memory.ArrowBuf.setBytes(ArrowBuf.java:890)
>   at 
> org.apache.arrow.vector.BaseVariableWidthVector.setSafe(BaseVariableWidthVector.java:1087)
>   at 
> org.apache.spark.sql.execution.arrow.StringWriter.setValue(ArrowWriter.scala:251)
>   at 
> org.apache.spark.sql.execution.arrow.ArrowFieldWriter.write(ArrowWriter.scala:130)
>   at 
> org.apache.spark.sql.execution.arrow.ArrowWriter.write(ArrowWriter.scala:95)
>   at 
> org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:92)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1474)
>   at 
> org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:103)
>   at 
> org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:435)
>   at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2031)
>   at 
> org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:270)
>  {code}
> Why do I hit this 2 GB limit? According to SPARK-34588 this is supported; 
> perhaps it is related to SPARK-34020.
> Please assist.
> Note:
> Is it related to the usage of BaseVariableWidthVector and not 
> BaseLargeVariableWidthVector?
>  






[jira] [Assigned] (SPARK-39979) IndexOutOfBoundsException on groupby + apply pandas grouped map udf function

2023-05-28 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-39979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-39979:


Assignee: Adam Binford

> IndexOutOfBoundsException on groupby + apply pandas grouped map udf function
> 
>
> Key: SPARK-39979
> URL: https://issues.apache.org/jira/browse/SPARK-39979
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 3.2.1
>Reporter: yaniv oren
>Assignee: Adam Binford
>Priority: Major
>
> I'm grouping on a relatively small subset of groups, but the groups themselves 
> are large. I'm working with pyarrow version 2.0.0, and the machines' memory is 
> {color:#44}64 GiB.{color}
> I'm getting the following error:
> {code:java}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 387 
> in stage 162.0 failed 4 times, most recent failure: Lost task 387.3 in stage 
> 162.0 (TID 29957) (ip-172-21-129-187.eu-west-1.compute.internal executor 71): 
> java.lang.IndexOutOfBoundsException: index: 2147483628, length: 36 (expected: 
> range(0, 2147483648))
>   at org.apache.arrow.memory.ArrowBuf.checkIndex(ArrowBuf.java:699)
>   at org.apache.arrow.memory.ArrowBuf.setBytes(ArrowBuf.java:890)
>   at 
> org.apache.arrow.vector.BaseVariableWidthVector.setSafe(BaseVariableWidthVector.java:1087)
>   at 
> org.apache.spark.sql.execution.arrow.StringWriter.setValue(ArrowWriter.scala:251)
>   at 
> org.apache.spark.sql.execution.arrow.ArrowFieldWriter.write(ArrowWriter.scala:130)
>   at 
> org.apache.spark.sql.execution.arrow.ArrowWriter.write(ArrowWriter.scala:95)
>   at 
> org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:92)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1474)
>   at 
> org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:103)
>   at 
> org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:435)
>   at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2031)
>   at 
> org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:270)
>  {code}
> Why do I hit this 2 GB limit? According to SPARK-34588 this is supported; 
> perhaps it is related to SPARK-34020.
> Please assist.
> Note:
> Is it related to the usage of BaseVariableWidthVector and not 
> BaseLargeVariableWidthVector?
>  






[jira] [Commented] (SPARK-43805) Support SELECT * EXCEPT AND SELECT * REPLACE

2023-05-28 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726982#comment-17726982
 ] 

Dongjoon Hyun commented on SPARK-43805:
---

Do you have any other reference, [~melin]?

> Support SELECT * EXCEPT AND  SELECT * REPLACE
> -
>
> Key: SPARK-43805
> URL: https://issues.apache.org/jira/browse/SPARK-43805
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: melin
>Priority: Major
>
> ref: 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select_except]
> https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select_replace
> [~fanjia] 






[jira] [Commented] (SPARK-43805) Support SELECT * EXCEPT AND SELECT * REPLACE

2023-05-28 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726981#comment-17726981
 ] 

Dongjoon Hyun commented on SPARK-43805:
---

If there is no support from other DBMSs, we had better avoid esoteric 
syntax in order to avoid vendor lock-in.

> Support SELECT * EXCEPT AND  SELECT * REPLACE
> -
>
> Key: SPARK-43805
> URL: https://issues.apache.org/jira/browse/SPARK-43805
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: melin
>Priority: Major
>
> ref: 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select_except]
> https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select_replace
> [~fanjia] 






[jira] [Commented] (SPARK-43841) Non-existent column in projection of full outer join with USING results in StringIndexOutOfBoundsException

2023-05-28 Thread Bruce Robbins (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726980#comment-17726980
 ] 

Bruce Robbins commented on SPARK-43841:
---

PR at https://github.com/apache/spark/pull/41353

> Non-existent column in projection of full outer join with USING results in 
> StringIndexOutOfBoundsException
> --
>
> Key: SPARK-43841
> URL: https://issues.apache.org/jira/browse/SPARK-43841
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: Bruce Robbins
>Priority: Minor
>
> The following query throws a {{StringIndexOutOfBoundsException}}:
> {noformat}
> with v1 as (
>  select * from values (1, 2) as (c1, c2)
> ),
> v2 as (
>   select * from values (2, 3) as (c1, c2)
> )
> select v1.c1, v1.c2, v2.c1, v2.c2, b
> from v1
> full outer join v2
> using (c1);
> {noformat}
> The query should fail anyway, since {{b}} refers to a non-existent column. 
> But it should fail with a helpful error message, not with a 
> {{StringIndexOutOfBoundsException}}.
> The issue seems to be in 
> {{StringUtils#orderSuggestedIdentifiersBySimilarity}}. 
> {{orderSuggestedIdentifiersBySimilarity}} assumes that a list of candidate 
> attributes with a mix of prefixes will never have an attribute name with an 
> empty prefix. But in this case it does ({{c1}} from the {{coalesce}} has no 
> prefix, since it is not associated with any relation or subquery):
> {noformat}
> +- 'Project [c1#5, c2#6, c1#7, c2#8, 'b]
>+- Project [coalesce(c1#5, c1#7) AS c1#9, c2#6, c2#8] <== c1#9 has no 
> prefix, unlike c2#6 (v1.c2) or c2#8 (v2.c2)
>   +- Join FullOuter, (c1#5 = c1#7)
>  :- SubqueryAlias v1
>  :  +- CTERelationRef 0, true, [c1#5, c2#6]
>  +- SubqueryAlias v2
> +- CTERelationRef 1, true, [c1#7, c2#8]
> {noformat}
> Because of this, {{orderSuggestedIdentifiersBySimilarity}} returns a sorted 
> list of suggestions like this:
> {noformat}
> ArrayBuffer(.c1, v1.c2, v2.c2)
> {noformat}
> {{UnresolvedAttribute.parseAttributeName}} chokes on an attribute name that 
> starts with a namespace separator ('.').
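A toy illustration of the failure mode (this is not the real {{StringUtils}} code; it only shows how an attribute with an empty qualifier yields a suggestion that starts with the separator):

{code:scala}
// Candidate attributes as (qualifier parts, attribute name). The coalesced
// USING join key has no qualifier, unlike v1.c2 and v2.c2.
val candidates = Seq(
  (Seq.empty[String], "c1"),
  (Seq("v1"), "c2"),
  (Seq("v2"), "c2"))

val suggestions = candidates.map { case (qualifier, name) =>
  qualifier.mkString(".") + "." + name
}
println(suggestions) // List(.c1, v1.c2, v2.c2) - the leading '.' later breaks
                     // identifier parsing, as described in the report
{code}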






[jira] [Resolved] (SPARK-43742) refactor default column value resolution

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-43742.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41262
[https://github.com/apache/spark/pull/41262]

> refactor default column value resolution
> 
>
> Key: SPARK-43742
> URL: https://issues.apache.org/jira/browse/SPARK-43742
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: Wenchen Fan
>Assignee: Wenchen Fan
>Priority: Major
> Fix For: 3.5.0
>
>







[jira] [Assigned] (SPARK-43742) refactor default column value resolution

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-43742:
-

Assignee: Wenchen Fan

> refactor default column value resolution
> 
>
> Key: SPARK-43742
> URL: https://issues.apache.org/jira/browse/SPARK-43742
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: Wenchen Fan
>Assignee: Wenchen Fan
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-43843) Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError

2023-05-28 Thread Bruce Robbins (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruce Robbins updated SPARK-43843:
--
Environment: Scala version 2.13.8 (Java HotSpot(TM) 64-Bit Server VM, Java 
11.0.12)

> Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError
> ---
>
> Key: SPARK-43843
> URL: https://issues.apache.org/jira/browse/SPARK-43843
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.5.0
> Environment: Scala version 2.13.8 (Java HotSpot(TM) 64-Bit Server VM, 
> Java 11.0.12)
>Reporter: Bruce Robbins
>Priority: Major
>
> I launched spark-shell as follows:
> {noformat}
> bin/spark-shell --driver-memory 8g --jars `find . -name "spark-avro*.jar" | 
> grep -v test | head -1`
> {noformat}
> I got the error below when trying to create an AVRO file:
> {noformat}
> scala> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
> val df: org.apache.spark.sql.DataFrame = [a: int, b: int]
> scala> df.write.mode("overwrite").format("avro").save("avro_file")
> df.write.mode("overwrite").format("avro").save("avro_file")
> java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
>   at 
> org.apache.spark.sql.avro.AvroFileFormat.supportFieldName(AvroFileFormat.scala:160)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1(DataSourceUtils.scala:75)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1$adapted(DataSourceUtils.scala:74)
>   at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
>   at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
>   at org.apache.spark.sql.types.StructType.foreach(StructType.scala:105)
>   at 
> org.apache.spark.sql.execution.datasources.DataSourceUtils$.checkFieldNames(DataSourceUtils.scala:74)
>   at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:120)
> ...
> scala> 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-43844) Update ORC to 1.9.0

2023-05-28 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-43844:
-

 Summary: Update ORC to 1.9.0
 Key: SPARK-43844
 URL: https://issues.apache.org/jira/browse/SPARK-43844
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 3.5.0
Reporter: Dongjoon Hyun






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-43833) Upgrade Scala to 2.13.11

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-43833:
--
Parent: (was: SPARK-43831)
Issue Type: Bug  (was: Sub-task)

> Upgrade Scala to 2.13.11
> -
>
> Key: SPARK-43833
> URL: https://issues.apache.org/jira/browse/SPARK-43833
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-43843) Saving an AVRO file with Scala 2.13 results in NoClassDefFoundError

2023-05-28 Thread Bruce Robbins (Jira)
Bruce Robbins created SPARK-43843:
-

 Summary: Saving an AVRO file with Scala 2.13 results in 
NoClassDefFoundError
 Key: SPARK-43843
 URL: https://issues.apache.org/jira/browse/SPARK-43843
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.5.0
Reporter: Bruce Robbins


I launched spark-shell as follows:
{noformat}
bin/spark-shell --driver-memory 8g --jars `find . -name "spark-avro*.jar" | 
grep -v test | head -1`
{noformat}
I got the error below when trying to create an AVRO file:
{noformat}
scala> val df = Seq((1, 2), (3, 4)).toDF("a", "b")
val df = Seq((1, 2), (3, 4)).toDF("a", "b")
val df: org.apache.spark.sql.DataFrame = [a: int, b: int]

scala> df.write.mode("overwrite").format("avro").save("avro_file")
df.write.mode("overwrite").format("avro").save("avro_file")
java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
  at 
org.apache.spark.sql.avro.AvroFileFormat.supportFieldName(AvroFileFormat.scala:160)
  at 
org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1(DataSourceUtils.scala:75)
  at 
org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$checkFieldNames$1$adapted(DataSourceUtils.scala:74)
  at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
  at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
  at org.apache.spark.sql.types.StructType.foreach(StructType.scala:105)
  at 
org.apache.spark.sql.execution.datasources.DataSourceUtils$.checkFieldNames(DataSourceUtils.scala:74)
  at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:120)
...
scala> 
{noformat}
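Not part of the original report, just context for the stack trace above: in 
Scala 2.13, {{StringOps}} moved from {{scala.collection.immutable}} to 
{{scala.collection}}, so bytecode compiled against Scala 2.12 that references 
{{scala.collection.immutable.StringOps}} fails with {{NoClassDefFoundError}} 
when loaded on 2.13 (which could happen here if, say, the {{find ... | head -1}} 
glob picked up a spark-avro jar built for 2.12). The sketch below is 
hypothetical and is not Spark's {{AvroFileFormat}} code; it only illustrates a 
field-name check written against plain {{java.lang.String}} methods, so the 
check itself does not rely on the {{StringOps}} enrichment that changed 
packages between 2.12 and 2.13:
{code:scala}
// Hypothetical sketch (not Spark's AvroFileFormat): validate an Avro-style
// field name using only java.lang.String / java.lang.Character methods, so the
// check does not depend on scala.collection.immutable.StringOps (Scala 2.12)
// or scala.collection.StringOps (Scala 2.13).
object AvroFieldNameCheck {
  // Simplified restatement of Avro's naming rule: the first character is a
  // letter or '_', the remaining characters are letters, digits, or '_'.
  def supportFieldName(name: String): Boolean = {
    if (name.isEmpty) return false
    val head = name.charAt(0)
    if (!Character.isLetter(head) && head != '_') return false
    var i = 1
    while (i < name.length) {
      val c = name.charAt(i)
      if (!Character.isLetterOrDigit(c) && c != '_') return false
      i += 1
    }
    true
  }

  def main(args: Array[String]): Unit = {
    Seq("a", "_tmp", "1bad", "has-dash").foreach { n =>
      println(s"$n -> ${supportFieldName(n)}")
    }
  }
}
{code}
Either way, the more direct fix in this scenario is probably making sure the 
spark-avro jar on the classpath was built for the same Scala version as the 
running shell.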



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-43842) Upgrade `gcs-connector` to 2.2.14

2023-05-28 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-43842:
-

 Summary: Upgrade `gcs-connector` to 2.2.14
 Key: SPARK-43842
 URL: https://issues.apache.org/jira/browse/SPARK-43842
 Project: Spark
  Issue Type: Bug
  Components: Build
Affects Versions: 3.5.0
Reporter: Dongjoon Hyun






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-40497) Upgrade Scala to 2.13.11

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-40497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-40497:
--
Parent: SPARK-43831
Issue Type: Sub-task  (was: Improvement)

> Upgrade Scala to 2.13.11
> 
>
> Key: SPARK-40497
> URL: https://issues.apache.org/jira/browse/SPARK-40497
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.4.0
>Reporter: Yang Jie
>Priority: Major
>
> We tested and decided to skip the following releases. This issue aims to 
> upgrade to 2.13.11.
> - 2022-09-21: v2.13.9 released 
> [https://github.com/scala/scala/releases/tag/v2.13.9]
> - 2022-10-13: v2.13.10 released 
> [https://github.com/scala/scala/releases/tag/v2.13.10]
>  
> Scala 2.13.11 Milestone
> - https://github.com/scala/scala/milestone/100



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Closed] (SPARK-43833) Upgrade Scala to 2.13.11

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun closed SPARK-43833.
-

> Upgrade Scala to 2.13.11
> -
>
> Key: SPARK-43833
> URL: https://issues.apache.org/jira/browse/SPARK-43833
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43833) Upgrade Scala to 2.13.11

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-43833.
---
Resolution: Duplicate

> Upgrade Scala to 2.13.11
> -
>
> Key: SPARK-43833
> URL: https://issues.apache.org/jira/browse/SPARK-43833
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-43841) Non-existent column in projection of full outer join with USING results in StringIndexOutOfBoundsException

2023-05-28 Thread Bruce Robbins (Jira)
Bruce Robbins created SPARK-43841:
-

 Summary: Non-existent column in projection of full outer join with 
USING results in StringIndexOutOfBoundsException
 Key: SPARK-43841
 URL: https://issues.apache.org/jira/browse/SPARK-43841
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 3.5.0
Reporter: Bruce Robbins


The following query throws a {{StringIndexOutOfBoundsException}}:
{noformat}
with v1 as (
 select * from values (1, 2) as (c1, c2)
),
v2 as (
  select * from values (2, 3) as (c1, c2)
)
select v1.c1, v1.c2, v2.c1, v2.c2, b
from v1
full outer join v2
using (c1);
{noformat}
The query should fail anyway, since {{b}} refers to a non-existent column. But 
it should fail with a helpful error message, not with a 
{{StringIndexOutOfBoundsException}}.

The issue seems to be in {{StringUtils#orderSuggestedIdentifiersBySimilarity}}, 
which assumes that a list of candidate attributes with a mix of prefixes never 
contains an attribute name with an empty prefix. But in this case it does: 
{{c1}} from the {{coalesce}} has no prefix, since it is not associated with any 
relation or subquery:
{noformat}
+- 'Project [c1#5, c2#6, c1#7, c2#8, 'b]
   +- Project [coalesce(c1#5, c1#7) AS c1#9, c2#6, c2#8] <== c1#9 has no 
prefix, unlike c2#6 (v1.c2) or c2#8 (v2.c2)
  +- Join FullOuter, (c1#5 = c1#7)
 :- SubqueryAlias v1
 :  +- CTERelationRef 0, true, [c1#5, c2#6]
 +- SubqueryAlias v2
+- CTERelationRef 1, true, [c1#7, c2#8]
{noformat}
Because of this, {{orderSuggestedIdentifiersBySimilarity}} returns a sorted 
list of suggestions like this:
{noformat}
ArrayBuffer(.c1, v1.c2, v2.c2)
{noformat}
{{UnresolvedAttribute.parseAttributeName}} chokes on an attribute name that 
starts with a namespace separator ('.').
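To illustrate the shape of the problem (a hypothetical sketch, not the actual 
{{StringUtils}} implementation): when a candidate attribute has no qualifier, 
the suggestion should be rendered as just the bare name instead of 
{{prefix + "." + name}}, so an empty prefix can never produce a leading 
separator like {{.c1}} for the parser to choke on. The object, method, and 
similarity measure below are made up for illustration:
{code:scala}
// Hypothetical sketch, not Spark's orderSuggestedIdentifiersBySimilarity:
// render candidates with an optional qualifier so that a missing prefix never
// yields a leading "." that a later attribute-name parser would reject.
object SuggestionOrdering {
  // Mirrors the plan above: c1#9 from the coalesce has no qualifier, while the
  // two c2 attributes come from v1 and v2.
  val candidates: Seq[(Option[String], String)] =
    Seq((None, "c1"), (Some("v1"), "c2"), (Some("v2"), "c2"))

  // Render without fabricating a separator when there is no qualifier.
  def render(c: (Option[String], String)): String = c match {
    case (Some(q), name) => s"$q.$name"
    case (None, name)    => name
  }

  // Very rough stand-in for a real similarity measure: shared-prefix length.
  def similarity(target: String, s: String): Int =
    target.zip(s).takeWhile { case (a, b) => a == b }.length

  def orderSuggestions(target: String): Seq[String] =
    candidates.map(render).sortBy(s => -similarity(target, s))

  def main(args: Array[String]): Unit =
    println(orderSuggestions("b"))  // List(c1, v1.c2, v2.c2) -- no ".c1"
}
{code}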




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-43840) Switch `scala-213` GitHub Action Job to `scala-212`

2023-05-28 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created SPARK-43840:
-

 Summary: Switch `scala-213` GitHub Action Job to `scala-212`
 Key: SPARK-43840
 URL: https://issues.apache.org/jira/browse/SPARK-43840
 Project: Spark
  Issue Type: Test
  Components: Project Infra
Affects Versions: 3.5.0
Reporter: Dongjoon Hyun






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43836) Make Scala 2.13 as default Scala version in Spark 3.5

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun reassigned SPARK-43836:
-

Assignee: Dongjoon Hyun

> Make Scala 2.13 as default Scala version in Spark 3.5
> -
>
> Key: SPARK-43836
> URL: https://issues.apache.org/jira/browse/SPARK-43836
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-43836) Make Scala 2.13 as default Scala version in Spark 3.5

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun updated SPARK-43836:
--
Labels: releasenotes  (was: )

> Make Scala 2.13 as default Scala version in Spark 3.5
> -
>
> Key: SPARK-43836
> URL: https://issues.apache.org/jira/browse/SPARK-43836
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
>  Labels: releasenotes
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43836) Make Scala 2.13 as default Scala version in Spark 3.5

2023-05-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved SPARK-43836.
---
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41344
[https://github.com/apache/spark/pull/41344]

> Make Scala 2.13 as default Scala version in Spark 3.5
> -
>
> Key: SPARK-43836
> URL: https://issues.apache.org/jira/browse/SPARK-43836
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43839) Convert `_LEGACY_ERROR_TEMP_1337` to `UNSUPPORTED_FEATURE.TIME_TRAVEL`

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43839.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41349
[https://github.com/apache/spark/pull/41349]

> Convert `_LEGACY_ERROR_TEMP_1337` to `UNSUPPORTED_FEATURE.TIME_TRAVEL`
> --
>
> Key: SPARK-43839
> URL: https://issues.apache.org/jira/browse/SPARK-43839
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43839) Convert `_LEGACY_ERROR_TEMP_1337` to `UNSUPPORTED_FEATURE.TIME_TRAVEL`

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43839:


Assignee: BingKun Pan

> Convert `_LEGACY_ERROR_TEMP_1337` to `UNSUPPORTED_FEATURE.TIME_TRAVEL`
> --
>
> Key: SPARK-43839
> URL: https://issues.apache.org/jira/browse/SPARK-43839
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43834) Use error classes in the compilation errors of `ResolveDefaultColumns`

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43834:


Assignee: BingKun Pan

> Use error classes in the compilation errors of `ResolveDefaultColumns`
> --
>
> Key: SPARK-43834
> URL: https://issues.apache.org/jira/browse/SPARK-43834
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43834) Use error classes in the compilation errors of `ResolveDefaultColumns`

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43834.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41345
[https://github.com/apache/spark/pull/41345]

> Use error classes in the compilation errors of `ResolveDefaultColumns`
> --
>
> Key: SPARK-43834
> URL: https://issues.apache.org/jira/browse/SPARK-43834
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43837) Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43837:


Assignee: BingKun Pan

> Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]
> 
>
> Key: SPARK-43837
> URL: https://issues.apache.org/jira/browse/SPARK-43837
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43837) Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43837.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41346
[https://github.com/apache/spark/pull/41346]

> Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]
> 
>
> Key: SPARK-43837
> URL: https://issues.apache.org/jira/browse/SPARK-43837
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Assignee: BingKun Pan
>Priority: Minor
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-43839) Convert `_LEGACY_ERROR_TEMP_1337` to `UNSUPPORTED_FEATURE.TIME_TRAVEL`

2023-05-28 Thread BingKun Pan (Jira)
BingKun Pan created SPARK-43839:
---

 Summary: Convert `_LEGACY_ERROR_TEMP_1337` to 
`UNSUPPORTED_FEATURE.TIME_TRAVEL`
 Key: SPARK-43839
 URL: https://issues.apache.org/jira/browse/SPARK-43839
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 3.5.0
Reporter: BingKun Pan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-43837) Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]

2023-05-28 Thread Nikita Awasthi (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726932#comment-17726932
 ] 

Nikita Awasthi commented on SPARK-43837:


User 'panbingkun' has created a pull request for this issue:
https://github.com/apache/spark/pull/41346

> Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]
> 
>
> Key: SPARK-43837
> URL: https://issues.apache.org/jira/browse/SPARK-43837
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-43837) Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]

2023-05-28 Thread Nikita Awasthi (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-43837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17726931#comment-17726931
 ] 

Nikita Awasthi commented on SPARK-43837:


User 'panbingkun' has created a pull request for this issue:
https://github.com/apache/spark/pull/41346

> Assign a name to the error class _LEGACY_ERROR_TEMP_103[1-2]
> 
>
> Key: SPARK-43837
> URL: https://issues.apache.org/jira/browse/SPARK-43837
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43826) Assign a name to the error class _LEGACY_ERROR_TEMP_2416

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43826:


Assignee: jiaan.geng

> Assign a name to the error class _LEGACY_ERROR_TEMP_2416
> 
>
> Key: SPARK-43826
> URL: https://issues.apache.org/jira/browse/SPARK-43826
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43826) Assign a name to the error class _LEGACY_ERROR_TEMP_2416

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43826.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41339
[https://github.com/apache/spark/pull/41339]

> Assign a name to the error class _LEGACY_ERROR_TEMP_2416
> 
>
> Key: SPARK-43826
> URL: https://issues.apache.org/jira/browse/SPARK-43826
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43820) Assign a name to the error class _LEGACY_ERROR_TEMP_2411

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43820:


Assignee: jiaan.geng

> Assign a name to the error class _LEGACY_ERROR_TEMP_2411
> 
>
> Key: SPARK-43820
> URL: https://issues.apache.org/jira/browse/SPARK-43820
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43823) Assign a name to the error class _LEGACY_ERROR_TEMP_2414

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43823.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41339
[https://github.com/apache/spark/pull/41339]

> Assign a name to the error class _LEGACY_ERROR_TEMP_2414
> 
>
> Key: SPARK-43823
> URL: https://issues.apache.org/jira/browse/SPARK-43823
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43827) Assign a name to the error class _LEGACY_ERROR_TEMP_2417

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43827:


Assignee: jiaan.geng

> Assign a name to the error class _LEGACY_ERROR_TEMP_2417
> 
>
> Key: SPARK-43827
> URL: https://issues.apache.org/jira/browse/SPARK-43827
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43823) Assign a name to the error class _LEGACY_ERROR_TEMP_2414

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43823:


Assignee: jiaan.geng

> Assign a name to the error class _LEGACY_ERROR_TEMP_2414
> 
>
> Key: SPARK-43823
> URL: https://issues.apache.org/jira/browse/SPARK-43823
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43820) Assign a name to the error class _LEGACY_ERROR_TEMP_2411

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43820.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41339
[https://github.com/apache/spark/pull/41339]

> Assign a name to the error class _LEGACY_ERROR_TEMP_2411
> 
>
> Key: SPARK-43820
> URL: https://issues.apache.org/jira/browse/SPARK-43820
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43827) Assign a name to the error class _LEGACY_ERROR_TEMP_2417

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43827.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41339
[https://github.com/apache/spark/pull/41339]

> Assign a name to the error class _LEGACY_ERROR_TEMP_2417
> 
>
> Key: SPARK-43827
> URL: https://issues.apache.org/jira/browse/SPARK-43827
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-43822) Assign a name to the error class _LEGACY_ERROR_TEMP_2413

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk reassigned SPARK-43822:


Assignee: jiaan.geng

> Assign a name to the error class _LEGACY_ERROR_TEMP_2413
> 
>
> Key: SPARK-43822
> URL: https://issues.apache.org/jira/browse/SPARK-43822
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-43822) Assign a name to the error class _LEGACY_ERROR_TEMP_2413

2023-05-28 Thread Max Gekk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-43822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Gekk resolved SPARK-43822.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 41339
[https://github.com/apache/spark/pull/41339]

> Assign a name to the error class _LEGACY_ERROR_TEMP_2413
> 
>
> Key: SPARK-43822
> URL: https://issues.apache.org/jira/browse/SPARK-43822
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.5.0
>Reporter: jiaan.geng
>Assignee: jiaan.geng
>Priority: Major
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org