[jira] [Created] (SPARK-47910) Memory leak when interrupting shuffle write using zstd compression
JacobZheng created SPARK-47910: -- Summary: Memory leak when interrupting shuffle write using zstd compression Key: SPARK-47910 URL: https://issues.apache.org/jira/browse/SPARK-47910 Project: Spark Issue Type: Bug Components: Spark Core Affects Versions: 3.5.0, 3.4.0, 3.3.0, 3.2.0 Reporter: JacobZheng When spark.sql.execution.interruptOnCancel=true and spark.io.compression.codec=zstd, a memory leak occurs when tasks are cancelled at specific times. The reason is that cancelling a task interrupts the shuffle write, which then calls org.apache.spark.storage.DiskBlockObjectWriter#closeResources. This process closes only the ManualCloseOutputStream; the ZstdInputStreamNoFinalizer that wraps it is never closed. Moreover, ZstdInputStreamNoFinalizer does not implement a finalizer, so it will not be reclaimed by GC automatically. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
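The leaked-wrapper pattern described in SPARK-47910 can be sketched in plain Python. The classes below are hypothetical stand-ins (Spark's actual streams are Java/Scala objects); the point is only that closing the inner stream leaves the compressing wrapper's native resources alive, and without a finalizer nothing ever releases them.

```python
# Minimal sketch of the leak: hypothetical stand-in classes, not Spark's code.

class ManualCloseOutputStream:
    """Stand-in for the inner stream that closeResources() does close."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

class ZstdWrapperStream:
    """Stand-in for the zstd wrapper holding native memory; it has no
    finalizer, so GC alone never releases its buffer."""
    def __init__(self, inner):
        self.inner = inner
        self.native_buffer_released = False

    def close(self):
        self.native_buffer_released = True
        self.inner.close()

inner = ManualCloseOutputStream()
wrapper = ZstdWrapperStream(inner)

# Buggy cleanup path: only the inner stream is closed, as described above.
inner.close()
assert inner.closed and not wrapper.native_buffer_released  # native memory leaked

# Correct cleanup path: closing the outermost wrapper cascades inward.
wrapper.close()
assert wrapper.native_buffer_released
```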
[jira] [Updated] (SPARK-47909) Parent DataFrame class for Spark Connect and Spark Classic
[ https://issues.apache.org/jira/browse/SPARK-47909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47909: --- Labels: pull-request-available (was: ) > Parent DataFrame class for Spark Connect and Spark Classic > -- > > Key: SPARK-47909 > URL: https://issues.apache.org/jira/browse/SPARK-47909 > Project: Spark > Issue Type: Sub-task > Components: Connect, PySpark >Affects Versions: 4.0.0 >Reporter: Hyukjin Kwon >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Assigned] (SPARK-47901) Upgrade commons-text to 1.12.0
[ https://issues.apache.org/jira/browse/SPARK-47901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Jie reassigned SPARK-47901: Assignee: Yang Jie > Upgrade commons-text to 1.12.0 > -- > > Key: SPARK-47901 > URL: https://issues.apache.org/jira/browse/SPARK-47901 > Project: Spark > Issue Type: Improvement > Components: Build >Affects Versions: 4.0.0 >Reporter: Yang Jie >Assignee: Yang Jie >Priority: Major > Labels: pull-request-available > > https://github.com/apache/commons-text/blob/rel/commons-text-1.12.0/RELEASE-NOTES.txt -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47901) Upgrade commons-text to 1.12.0
[ https://issues.apache.org/jira/browse/SPARK-47901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Jie resolved SPARK-47901. -- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46127 [https://github.com/apache/spark/pull/46127] > Upgrade commons-text to 1.12.0 > -- > > Key: SPARK-47901 > URL: https://issues.apache.org/jira/browse/SPARK-47901 > Project: Spark > Issue Type: Improvement > Components: Build >Affects Versions: 4.0.0 >Reporter: Yang Jie >Assignee: Yang Jie >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > https://github.com/apache/commons-text/blob/rel/commons-text-1.12.0/RELEASE-NOTES.txt -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47596) Streaming: Migrate logWarn with variables to structured logging framework
[ https://issues.apache.org/jira/browse/SPARK-47596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gengliang Wang resolved SPARK-47596. Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46079 [https://github.com/apache/spark/pull/46079] > Streaming: Migrate logWarn with variables to structured logging framework > - > > Key: SPARK-47596 > URL: https://issues.apache.org/jira/browse/SPARK-47596 > Project: Spark > Issue Type: Sub-task > Components: Spark Core >Affects Versions: 4.0.0 >Reporter: Gengliang Wang >Assignee: BingKun Pan >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-47909) Parent DataFrame class for Spark Connect and Spark Classic
Hyukjin Kwon created SPARK-47909: Summary: Parent DataFrame class for Spark Connect and Spark Classic Key: SPARK-47909 URL: https://issues.apache.org/jira/browse/SPARK-47909 Project: Spark Issue Type: Sub-task Components: Connect, PySpark Affects Versions: 4.0.0 Reporter: Hyukjin Kwon -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47906) Fix docstring and type hint of `hll_union_agg`
[ https://issues.apache.org/jira/browse/SPARK-47906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47906: --- Labels: pull-request-available (was: ) > Fix docstring and type hint of `hll_union_agg` > -- > > Key: SPARK-47906 > URL: https://issues.apache.org/jira/browse/SPARK-47906 > Project: Spark > Issue Type: Improvement > Components: PySpark >Affects Versions: 4.0.0 >Reporter: Ruifeng Zheng >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-47907) Put removal of '!' as a synonym for 'NOT' on a keyword level under a config
Serge Rielau created SPARK-47907: Summary: Put removal of '!' as a synonym for 'NOT' on a keyword level under a config Key: SPARK-47907 URL: https://issues.apache.org/jira/browse/SPARK-47907 Project: Spark Issue Type: Bug Components: Spark Core Affects Versions: 4.0.0 Reporter: Serge Rielau Recently we dissolved the lexer equivalence between '!' and 'NOT'. '!' is a prefix operator and a synonym for NOT only in that case, but NOT is used in many more places in the grammar. Given that there are a handful of known scenarios where users have exploited the undocumented loophole, it's best to add a config. Usage found so far is: `c1 ! IN(1, 2)` `c1 ! BETWEEN 1 AND 2` `c1 ! LIKE 'a%'` But there are worse cases: c1 IS ! NULL CREATE TABLE T(c1 INT ! NULL) or even CREATE TABLE IF ! EXISTS T(c1 INT) -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
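The gating behavior being proposed can be sketched as follows. This is a toy model, not Spark's parser API: the function name, the `legacy_bang_syntax` flag, and the position labels are all illustrative assumptions.

```python
# Toy sketch of a config-gated '!' rule (hypothetical names, not Spark's
# parser): '!' always works as a prefix operator, but is accepted in
# keyword positions (IS ! NULL, IF ! EXISTS, ...) only under a legacy flag.

def resolve_bang(position, legacy_bang_syntax):
    """position: 'prefix' (e.g. !true) or 'keyword' (e.g. IS ! NULL)."""
    if position == "prefix":
        return "NOT"                 # always a valid synonym here
    if legacy_bang_syntax:
        return "NOT"                 # undocumented loophole kept under config
    raise SyntaxError("'!' is not a synonym for NOT here; use NOT")

assert resolve_bang("prefix", False) == "NOT"
assert resolve_bang("keyword", True) == "NOT"   # legacy behavior preserved
```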
[jira] [Assigned] (SPARK-47883) Make CollectTailExec execute lazily
[ https://issues.apache.org/jira/browse/SPARK-47883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ruifeng Zheng reassigned SPARK-47883: - Assignee: Ruifeng Zheng > Make CollectTailExec execute lazily > > > Key: SPARK-47883 > URL: https://issues.apache.org/jira/browse/SPARK-47883 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 4.0.0 >Reporter: Ruifeng Zheng >Assignee: Ruifeng Zheng >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47883) Make CollectTailExec execute lazily
[ https://issues.apache.org/jira/browse/SPARK-47883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ruifeng Zheng resolved SPARK-47883. --- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46101 [https://github.com/apache/spark/pull/46101] > Make CollectTailExec execute lazily > > > Key: SPARK-47883 > URL: https://issues.apache.org/jira/browse/SPARK-47883 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 4.0.0 >Reporter: Ruifeng Zheng >Assignee: Ruifeng Zheng >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47901) Upgrade commons-text to 1.12.0
[ https://issues.apache.org/jira/browse/SPARK-47901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47901: --- Labels: pull-request-available (was: ) > Upgrade commons-text to 1.12.0 > -- > > Key: SPARK-47901 > URL: https://issues.apache.org/jira/browse/SPARK-47901 > Project: Spark > Issue Type: Improvement > Components: Build >Affects Versions: 4.0.0 >Reporter: Yang Jie >Priority: Major > Labels: pull-request-available > > https://github.com/apache/commons-text/blob/rel/commons-text-1.12.0/RELEASE-NOTES.txt -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-47905) ANSI Mode: Trim binary values shall result binary type instead of string type
Kent Yao created SPARK-47905: Summary: ANSI Mode: Trim binary values shall result binary type instead of string type Key: SPARK-47905 URL: https://issues.apache.org/jira/browse/SPARK-47905 Project: Spark Issue Type: Bug Components: SQL Affects Versions: 4.0.0 Reporter: Kent Yao ``` when applied to binary strings is identical in syntax (apart from the default ) and semantics to the corresponding operation on character strings except that the returned value is a binary string. ``` -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
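The expected typing rule quoted from the standard can be illustrated with an analogy in plain Python, where trimming a bytes value naturally yields bytes rather than str. This only illustrates the in-type-equals-out-type behavior; Spark's fix lives in SQL type resolution.

```python
# Analogy: trimming a binary value should stay binary, mirroring the SQL
# standard text quoted above. Helper name and default pad are illustrative.

def trim_binary(value: bytes, pad: bytes = b"\x20") -> bytes:
    # strip the pad byte from both ends, preserving the binary type
    return value.strip(pad)

result = trim_binary(b"  abc  ")
assert result == b"abc"
assert isinstance(result, bytes)   # binary in, binary out -- not str
```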
[jira] [Resolved] (SPARK-47897) ExpressionSet performance regression in scala 2.12
[ https://issues.apache.org/jira/browse/SPARK-47897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kent Yao resolved SPARK-47897. -- Fix Version/s: 3.4.4 3.5.2 Resolution: Fixed Issue resolved by pull request 46114 [https://github.com/apache/spark/pull/46114] > ExpressionSet performance regression in scala 2.12 > -- > > Key: SPARK-47897 > URL: https://issues.apache.org/jira/browse/SPARK-47897 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.5.1 >Reporter: Zhen Wang >Assignee: Zhen Wang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.4, 3.5.2 > > > After SPARK-38836, the `++` and `--` methods in ExpressionSet of scala 2.12 > were removed, causing performance regression. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
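The shape of the regression can be sketched with a toy immutable deduplicating set. Without a specialized `++`, the generic fallback folds single-element `+` calls, copying the whole backing collection per element (quadratic); the bulk path filters once and copies once (this is a simplified illustration, not ExpressionSet's actual code, which also canonicalizes each expression).

```python
# Toy model: generic `++` fallback vs. a specialized bulk union.

class ImmutableDedupSet:
    def __init__(self, keys=frozenset(), items=()):
        self.keys = keys
        self.items = tuple(items)

    def plus(self, item):
        # `+`: returns a fresh copy of the whole set each call
        if item in self.keys:
            return self
        return ImmutableDedupSet(self.keys | {item}, self.items + (item,))

    def plus_all_generic(self, items):
        # fallback `++`: one full copy per element -> O(n^2) overall
        s = self
        for i in items:
            s = s.plus(i)
        return s

    def plus_all_bulk(self, items):
        # specialized `++`: filter in one pass, copy once -> O(n)
        fresh = []
        keys = set(self.keys)
        for i in items:
            if i not in keys:
                keys.add(i)
                fresh.append(i)
        return ImmutableDedupSet(frozenset(keys), self.items + tuple(fresh))

s = ImmutableDedupSet().plus_all_bulk([1, 2, 2, 3])
assert s.items == (1, 2, 3)
assert s.plus_all_generic([3, 4]).items == (1, 2, 3, 4)
```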
[jira] [Resolved] (SPARK-46935) Consolidate error documentation
[ https://issues.apache.org/jira/browse/SPARK-46935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenchen Fan resolved SPARK-46935. - Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 44971 [https://github.com/apache/spark/pull/44971] > Consolidate error documentation > --- > > Key: SPARK-46935 > URL: https://issues.apache.org/jira/browse/SPARK-46935 > Project: Spark > Issue Type: Improvement > Components: Documentation >Affects Versions: 4.0.0 >Reporter: Nicholas Chammas >Assignee: Nicholas Chammas >Priority: Minor > Labels: pull-request-available > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Assigned] (SPARK-46935) Consolidate error documentation
[ https://issues.apache.org/jira/browse/SPARK-46935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenchen Fan reassigned SPARK-46935: --- Assignee: Nicholas Chammas > Consolidate error documentation > --- > > Key: SPARK-46935 > URL: https://issues.apache.org/jira/browse/SPARK-46935 > Project: Spark > Issue Type: Improvement > Components: Documentation >Affects Versions: 4.0.0 >Reporter: Nicholas Chammas >Assignee: Nicholas Chammas >Priority: Minor > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47904) Preserve case in Avro schema when using enableStableIdentifiersForUnionType
[ https://issues.apache.org/jira/browse/SPARK-47904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47904: --- Labels: pull-request-available (was: ) > Preserve case in Avro schema when using enableStableIdentifiersForUnionType > --- > > Key: SPARK-47904 > URL: https://issues.apache.org/jira/browse/SPARK-47904 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 4.0.0, 3.5.2 >Reporter: Ivan Sadikov >Priority: Major > Labels: pull-request-available > > When enableStableIdentifiersForUnionType is enabled, all of the types are > lowercased which creates a problem when field types are case-sensitive: > {code:java} > Schema.createEnum("myENUM", "", null, List[String]("E1", "e2").asJava), > Schema.createRecord("myRecord2", "", null, false, List[Schema.Field](new > Schema.Field("F", Schema.create(Type.FLOAT))).asJava){code} > would become > {code:java} > struct> {code} > but instead should be > {code:java} > struct> {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47904) Preserve case in Avro schema when using enableStableIdentifiersForUnionType
[ https://issues.apache.org/jira/browse/SPARK-47904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Sadikov updated SPARK-47904: - Description: When enableStableIdentifiersForUnionType is enabled, all of the types are lowercased which creates a problem when field types are case-sensitive: {code:java} Schema.createEnum("myENUM", "", null, List[String]("E1", "e2").asJava), Schema.createRecord("myRecord2", "", null, false, List[Schema.Field](new Schema.Field("F", Schema.create(Type.FLOAT))).asJava){code} would become {code:java} struct> {code} but instead should be {code:java} struct> {code} was:When > Preserve case in Avro schema when using enableStableIdentifiersForUnionType > --- > > Key: SPARK-47904 > URL: https://issues.apache.org/jira/browse/SPARK-47904 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 4.0.0, 3.5.2 >Reporter: Ivan Sadikov >Priority: Major > > When enableStableIdentifiersForUnionType is enabled, all of the types are > lowercased which creates a problem when field types are case-sensitive: > {code:java} > Schema.createEnum("myENUM", "", null, List[String]("E1", "e2").asJava), > Schema.createRecord("myRecord2", "", null, false, List[Schema.Field](new > Schema.Field("F", Schema.create(Type.FLOAT))).asJava){code} > would become > {code:java} > struct> {code} > but instead should be > {code:java} > struct> {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
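The fix direction can be sketched with a hypothetical helper (not Spark's Avro code): the stable member name should embed the branch's type name with its original case instead of lowercasing it, which is what collapses `myENUM` into `myenum` in the struct above.

```python
# Illustrative sketch of the stable-identifier fix; the helper name and
# "member_" handling here follow the struct fields shown above.

def stable_member_name(type_name: str, fixed: bool = True) -> str:
    name = type_name if fixed else type_name.lower()
    return "member_" + name

assert stable_member_name("myENUM") == "member_myENUM"               # fixed
assert stable_member_name("myENUM", fixed=False) == "member_myenum"  # old bug
```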
[jira] [Created] (SPARK-47904) Preserve case in Avro schema when using enableStableIdentifiersForUnionType
Ivan Sadikov created SPARK-47904: Summary: Preserve case in Avro schema when using enableStableIdentifiersForUnionType Key: SPARK-47904 URL: https://issues.apache.org/jira/browse/SPARK-47904 Project: Spark Issue Type: Bug Components: SQL Affects Versions: 4.0.0, 3.5.2 Reporter: Ivan Sadikov -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47904) Preserve case in Avro schema when using enableStableIdentifiersForUnionType
[ https://issues.apache.org/jira/browse/SPARK-47904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Sadikov updated SPARK-47904: - Description: When > Preserve case in Avro schema when using enableStableIdentifiersForUnionType > --- > > Key: SPARK-47904 > URL: https://issues.apache.org/jira/browse/SPARK-47904 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 4.0.0, 3.5.2 >Reporter: Ivan Sadikov >Priority: Major > > When -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-47869) Upgrade built in hive to Hive-4.0
[ https://issues.apache.org/jira/browse/SPARK-47869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838794#comment-17838794 ] Simhadri Govindappa commented on SPARK-47869: - Hi [~dongjoon] and [~attilapiros] , We saw your PRs for upgrading HMS to Hive 4 in Spark. * [https://github.com/apache/spark/pull/45801] * [https://github.com/apache/spark/pull/43064] We are exploring marking the Hive-2.x line EOL in this mail thread: [https://lists.apache.org/thread/sxcrcf4v9j630tl9domp0bn4m33bdq0s] We noticed that the version of Hive in Spark is Hive-2.3.9 [https://github.com/apache/spark/blob/master/pom.xml#L135] We would be happy to help upgrade Spark to Hive-4.0 from the Hive side, if you could help us from the Spark side. Thanks! Simhadri G cc [~ayushtkn] > Upgrade built in hive to Hive-4.0 > - > > Key: SPARK-47869 > URL: https://issues.apache.org/jira/browse/SPARK-47869 > Project: Spark > Issue Type: Task > Components: Spark Core >Affects Versions: 3.5.1 >Reporter: Simhadri Govindappa >Priority: Major > > Hive 4.0 has been released. It brings in a lot of new features, bug fixes and > performance improvements. > We would like to update the version of Hive used in Spark to Hive 4.0 > [https://lists.apache.org/thread/2jqpvsx8n801zb5pmlhb8f4zloq27p82] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47903) Add remaining scalar types to the Python variant library
[ https://issues.apache.org/jira/browse/SPARK-47903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47903: --- Labels: pull-request-available (was: ) > Add remaining scalar types to the Python variant library > > > Key: SPARK-47903 > URL: https://issues.apache.org/jira/browse/SPARK-47903 > Project: Spark > Issue Type: Sub-task > Components: PS >Affects Versions: 4.0.0 >Reporter: Harsh Motwani >Priority: Major > Labels: pull-request-available > > Added support for reading the remaining scalar data types (binary, timestamp, > timestamp_ntz, date, float) to the Python Variant library. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-47903) Add remaining scalar types to the Python variant library
Harsh Motwani created SPARK-47903: - Summary: Add remaining scalar types to the Python variant library Key: SPARK-47903 URL: https://issues.apache.org/jira/browse/SPARK-47903 Project: Spark Issue Type: Sub-task Components: PS Affects Versions: 4.0.0 Reporter: Harsh Motwani Added support for reading the remaining scalar data types (binary, timestamp, timestamp_ntz, date, float) to the Python Variant library. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47898) Port HIVE-12270: Add DBTokenStore support to HS2 delegation token
[ https://issues.apache.org/jira/browse/SPARK-47898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun resolved SPARK-47898. --- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46115 [https://github.com/apache/spark/pull/46115] > Port HIVE-12270: Add DBTokenStore support to HS2 delegation token > - > > Key: SPARK-47898 > URL: https://issues.apache.org/jira/browse/SPARK-47898 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 4.0.0 >Reporter: Kent Yao >Assignee: Kent Yao >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47889) Setup gradle as build tool for operator repository
[ https://issues.apache.org/jira/browse/SPARK-47889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun resolved SPARK-47889. --- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 4 [https://github.com/apache/spark-kubernetes-operator/pull/4] > Setup gradle as build tool for operator repository > -- > > Key: SPARK-47889 > URL: https://issues.apache.org/jira/browse/SPARK-47889 > Project: Spark > Issue Type: Sub-task > Components: Kubernetes >Affects Versions: kubernetes-operator-0.1.0 >Reporter: Zhou JIANG >Assignee: Zhou JIANG >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Assigned] (SPARK-47889) Setup gradle as build tool for operator repository
[ https://issues.apache.org/jira/browse/SPARK-47889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun reassigned SPARK-47889: - Assignee: Zhou JIANG > Setup gradle as build tool for operator repository > -- > > Key: SPARK-47889 > URL: https://issues.apache.org/jira/browse/SPARK-47889 > Project: Spark > Issue Type: Sub-task > Components: Kubernetes >Affects Versions: kubernetes-operator-0.1.0 >Reporter: Zhou JIANG >Assignee: Zhou JIANG >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47889) Setup gradle as build tool for operator repository
[ https://issues.apache.org/jira/browse/SPARK-47889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun updated SPARK-47889: -- Fix Version/s: kubernetes-operator-0.1.0 (was: 4.0.0) > Setup gradle as build tool for operator repository > -- > > Key: SPARK-47889 > URL: https://issues.apache.org/jira/browse/SPARK-47889 > Project: Spark > Issue Type: Sub-task > Components: Kubernetes >Affects Versions: kubernetes-operator-0.1.0 >Reporter: Zhou JIANG >Assignee: Zhou JIANG >Priority: Major > Labels: pull-request-available > Fix For: kubernetes-operator-0.1.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47902) Compute Current Time* expressions should be foldable
[ https://issues.apache.org/jira/browse/SPARK-47902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47902: --- Labels: pull-request-available (was: ) > Compute Current Time* expressions should be foldable > > > Key: SPARK-47902 > URL: https://issues.apache.org/jira/browse/SPARK-47902 > Project: Spark > Issue Type: Bug > Components: Spark Core >Affects Versions: 4.0.0 >Reporter: Aleksandar Tomic >Priority: Major > Labels: pull-request-available > > Following PR - https://github.com/apache/spark/pull/44261 changed "compute > current time" family of expressions to be unevaluable, given that these > expressions are supposed to be replaced with literals by QO. Unevaluable > implies that these expressions are not foldable, even though they will be > replaced by literals. > If these expressions were used in places that require constant folding (e.g. > RAND()) new behavior would be to raise an error which is a regression > comparing to behavior prior to spark 4.0. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-47902) Compute Current Time* expressions should be foldable
Aleksandar Tomic created SPARK-47902: Summary: Compute Current Time* expressions should be foldable Key: SPARK-47902 URL: https://issues.apache.org/jira/browse/SPARK-47902 Project: Spark Issue Type: Bug Components: Spark Core Affects Versions: 4.0.0 Reporter: Aleksandar Tomic Following PR - https://github.com/apache/spark/pull/44261 changed "compute current time" family of expressions to be unevaluable, given that these expressions are supposed to be replaced with literals by QO. Unevaluable implies that these expressions are not foldable, even though they will be replaced by literals. If these expressions were used in places that require constant folding (e.g. RAND()) new behavior would be to raise an error which is a regression comparing to behavior prior to spark 4.0. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
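The interaction described above can be modeled with a toy foldability check. These classes are hypothetical stand-ins for Catalyst expressions: a context that requires a constant argument (like RAND's seed) rejects the expression if "unevaluable implies not foldable", even though the optimizer would have substituted a literal before execution.

```python
# Toy sketch of foldable vs. unevaluable (hypothetical classes, not
# Catalyst's API).

class Expr:
    foldable = False

class Literal(Expr):
    foldable = True

class CurrentTimestamp(Expr):
    # the proposed behavior: foldable even though unevaluable, because the
    # optimizer replaces it with a literal before evaluation
    foldable = True

def check_seed(expr):
    # stand-in for a context that requires constant folding, e.g. RAND(seed)
    if not expr.foldable:
        raise ValueError("seed must be foldable")
    return True

assert check_seed(Literal())
assert check_seed(CurrentTimestamp())  # no longer errors, as before Spark 4.0
```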
[jira] [Updated] (SPARK-25769) UnresolvedAttribute.sql() incorrectly escapes nested columns
[ https://issues.apache.org/jira/browse/SPARK-25769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-25769: --- Labels: pull-request-available (was: ) > UnresolvedAttribute.sql() incorrectly escapes nested columns > > > Key: SPARK-25769 > URL: https://issues.apache.org/jira/browse/SPARK-25769 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 2.3.2 >Reporter: Simeon Simeonov >Assignee: Kousuke Saruta >Priority: Major > Labels: pull-request-available > Fix For: 3.2.0 > > > {{UnresolvedAttribute.sql()}} output is incorrectly escaped for nested columns > {code:java} > import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute > // The correct output is a.b, without backticks, or `a`.`b`. > $"a.b".expr.asInstanceOf[UnresolvedAttribute].sql > // res1: String = `a.b` > // Parsing is correct; the bug is localized to sql() > $"a.b".expr.asInstanceOf[UnresolvedAttribute].nameParts > // res2: Seq[String] = ArrayBuffer(a, b) > {code} > The likely culprit is that the {{sql()}} implementation does not check for > {{nameParts}} being non-empty. > {code:java} > override def sql: String = name match { > case ParserUtils.escapedIdentifier(_) | > ParserUtils.qualifiedEscapedIdentifier(_, _) => name > case _ => quoteIdentifier(name) > } > {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
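The fix direction implied by the report can be sketched as: quote each element of `nameParts` separately and join with `.`, rather than quoting the dotted name as a single identifier. This is simplified Python, not Spark's Scala implementation (Spark's `quoteIdentifier` handling has more cases).

```python
# Sketch: per-part quoting keeps nested column a.b distinct from a single
# column literally named "a.b".

def quote_identifier(part: str) -> str:
    # escape embedded backticks by doubling them, then wrap in backticks
    return "`" + part.replace("`", "``") + "`"

def attribute_sql(name_parts):
    return ".".join(quote_identifier(p) for p in name_parts)

assert attribute_sql(["a", "b"]) == "`a`.`b`"   # nested column a.b
assert attribute_sql(["a.b"]) == "`a.b`"        # one oddly-named column
```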
[jira] [Assigned] (SPARK-47887) Remove unused import `spark/connect/common.proto` from `spark/connect/relations.proto`
[ https://issues.apache.org/jira/browse/SPARK-47887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun reassigned SPARK-47887: - Assignee: Yang Jie > Remove unused import `spark/connect/common.proto` from > `spark/connect/relations.proto` > -- > > Key: SPARK-47887 > URL: https://issues.apache.org/jira/browse/SPARK-47887 > Project: Spark > Issue Type: Improvement > Components: Connect >Affects Versions: 4.0.0 >Reporter: Yang Jie >Assignee: Yang Jie >Priority: Major > Labels: pull-request-available > > fix compile waring: > > {code:java} > spark/connect/relations.proto:26:1: warning: Import > spark/connect/common.proto is unused. {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47887) Remove unused import `spark/connect/common.proto` from `spark/connect/relations.proto`
[ https://issues.apache.org/jira/browse/SPARK-47887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun resolved SPARK-47887. --- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46106 [https://github.com/apache/spark/pull/46106] > Remove unused import `spark/connect/common.proto` from > `spark/connect/relations.proto` > -- > > Key: SPARK-47887 > URL: https://issues.apache.org/jira/browse/SPARK-47887 > Project: Spark > Issue Type: Improvement > Components: Connect >Affects Versions: 4.0.0 >Reporter: Yang Jie >Assignee: Yang Jie >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > fix compile waring: > > {code:java} > spark/connect/relations.proto:26:1: warning: Import > spark/connect/common.proto is unused. {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Resolved] (SPARK-47893) Upgrade ASM to 9.7
[ https://issues.apache.org/jira/browse/SPARK-47893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dongjoon Hyun resolved SPARK-47893. --- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46110 [https://github.com/apache/spark/pull/46110] > Upgrade ASM to 9.7 > -- > > Key: SPARK-47893 > URL: https://issues.apache.org/jira/browse/SPARK-47893 > Project: Spark > Issue Type: Improvement > Components: Build >Affects Versions: 4.0.0 >Reporter: BingKun Pan >Assignee: BingKun Pan >Priority: Minor > Labels: pull-request-available > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Updated] (SPARK-47463) An error occurred while pushing down the filter of if expression for iceberg datasource.
[ https://issues.apache.org/jira/browse/SPARK-47463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenchen Fan updated SPARK-47463: Fix Version/s: 3.5.2 > An error occurred while pushing down the filter of if expression for iceberg > datasource. > > > Key: SPARK-47463 > URL: https://issues.apache.org/jira/browse/SPARK-47463 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 4.0.0 > Environment: Spark 3.5.0 > Iceberg 1.4.3 >Reporter: Zhen Wang >Assignee: Zhen Wang >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0, 3.5.2 > > > Reproduce: > {code:java} > create table t1(c1 int) using iceberg; > select * from > (select if(c1 = 1, c1, null) as c1 from t1) t > where t.c1 > 0; {code} > Error: > {code:java} > org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase > optimization failed with an internal error. You hit a bug in Spark or the > Spark plugins you use. Please, report this bug to the corresponding > communities or vendors, and provide the full stack trace. 
> at > org.apache.spark.SparkException$.internalError(SparkException.scala:107) > at > org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:536) > at > org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:548) > at > org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:219) > at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) > at > org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:218) > at > org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:148) > at > org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:144) > at > org.apache.spark.sql.execution.QueryExecution.assertOptimized(QueryExecution.scala:162) > at > org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:182) > at > org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:179) > at > org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:238) > at > org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:284) > at > org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:252) > at > org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:117) > at > org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201) > at > org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108) > at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) > at > org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66) > at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4327) > at org.apache.spark.sql.Dataset.collect(Dataset.scala:3580) > at > 
org.apache.kyuubi.engine.spark.operation.ExecuteStatement.fullCollectResult(ExecuteStatement.scala:72) > at > org.apache.kyuubi.engine.spark.operation.ExecuteStatement.collectAsIterator(ExecuteStatement.scala:164) > at > org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:87) > at > scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) > at > org.apache.kyuubi.engine.spark.operation.SparkOperation.$anonfun$withLocalProperties$1(SparkOperation.scala:155) > at > org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201) > at > org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:139) > at > org.apache.kyuubi.engine.spark.operation.ExecuteStatement.executeStatement(ExecuteStatement.scala:81) > at > org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:103) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.AssertionError: assertion failed > at scala.Predef$.assert(Predef.scala:208) > at >
[jira] [Updated] (SPARK-47899) StageFailed event should attach the exception chain
[ https://issues.apache.org/jira/browse/SPARK-47899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Sahoo updated SPARK-47899: Component/s: Scheduler > StageFailed event should attach the exception chain > --- > > Key: SPARK-47899 > URL: https://issues.apache.org/jira/browse/SPARK-47899 > Project: Spark > Issue Type: Improvement > Components: Scheduler, Spark Core >Affects Versions: 3.4.0 >Reporter: Arjun Sahoo >Assignee: BingKun Pan >Priority: Minor > > As part of SPARK-39195, task is marked as failed but the exception chain was > not sent, ultimately the cause becomes `null` in SparkException. It is not > convenient to find the root cause from the detailed message. > {code} > /** >* Called by the OutputCommitCoordinator to cancel stage due to data > duplication may happen. >*/ > private[scheduler] def stageFailed(stageId: Int, reason: String): Unit = { > eventProcessLoop.post(StageFailed(stageId, reason, None)) > } > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
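The {code} snippet above posts `StageFailed(stageId, reason, None)`, so the original failure never reaches the resulting SparkException. A plain-Python illustration (not Spark code; the message strings are made up) of why dropping the exception chain leaves callers with a null cause:

```python
# Stand-in for the root failure that triggered the stage cancellation.
root = RuntimeError("authorized committer failed: data duplication possible")

# What posting StageFailed(stageId, reason, None) amounts to: only the
# reason string survives; the exception chain is dropped.
without_chain = Exception("Stage failed: " + str(root))
assert without_chain.__cause__ is None  # callers cannot recover the root cause

# Attaching the chain (what this ticket asks for) preserves the root cause.
try:
    raise Exception("Stage failed") from root
except Exception as err:
    with_chain = err
assert with_chain.__cause__ is root
```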
[jira] [Updated] (SPARK-47353) Mode
[ https://issues.apache.org/jira/browse/SPARK-47353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uroš Bojanić updated SPARK-47353: - Summary: Mode (was: TBD) > Mode > > > Key: SPARK-47353 > URL: https://issues.apache.org/jira/browse/SPARK-47353 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major >
[jira] [Created] (SPARK-47901) UPgrade commons-text to 1.12.0
Yang Jie created SPARK-47901: Summary: UPgrade commons-text to 1.12.0 Key: SPARK-47901 URL: https://issues.apache.org/jira/browse/SPARK-47901 Project: Spark Issue Type: Improvement Components: Build Affects Versions: 4.0.0 Reporter: Yang Jie https://github.com/apache/commons-text/blob/rel/commons-text-1.12.0/RELEASE-NOTES.txt
[jira] [Updated] (SPARK-47353) TBD
[ https://issues.apache.org/jira/browse/SPARK-47353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uroš Bojanić updated SPARK-47353: - Summary: TBD (was: NullIf) > TBD > --- > > Key: SPARK-47353 > URL: https://issues.apache.org/jira/browse/SPARK-47353 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major >
[jira] [Updated] (SPARK-47901) Upgrade commons-text to 1.12.0
[ https://issues.apache.org/jira/browse/SPARK-47901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Jie updated SPARK-47901: - Summary: Upgrade commons-text to 1.12.0 (was: UPgrade commons-text to 1.12.0) > Upgrade commons-text to 1.12.0 > -- > > Key: SPARK-47901 > URL: https://issues.apache.org/jira/browse/SPARK-47901 > Project: Spark > Issue Type: Improvement > Components: Build >Affects Versions: 4.0.0 >Reporter: Yang Jie >Priority: Major > > https://github.com/apache/commons-text/blob/rel/commons-text-1.12.0/RELEASE-NOTES.txt
[jira] [Updated] (SPARK-47351) TBD
[ https://issues.apache.org/jira/browse/SPARK-47351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uroš Bojanić updated SPARK-47351: - Summary: TBD (was: Between) > TBD > --- > > Key: SPARK-47351 > URL: https://issues.apache.org/jira/browse/SPARK-47351 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major >
[jira] [Updated] (SPARK-47408) TBD
[ https://issues.apache.org/jira/browse/SPARK-47408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uroš Bojanić updated SPARK-47408: - Summary: TBD (was: Distinct) > TBD > --- > > Key: SPARK-47408 > URL: https://issues.apache.org/jira/browse/SPARK-47408 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major >
[jira] [Updated] (SPARK-47354) TBD
[ https://issues.apache.org/jira/browse/SPARK-47354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uroš Bojanić updated SPARK-47354: - Summary: TBD (was: Case) > TBD > --- > > Key: SPARK-47354 > URL: https://issues.apache.org/jira/browse/SPARK-47354 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major >
[jira] [Updated] (SPARK-47355) TBD
[ https://issues.apache.org/jira/browse/SPARK-47355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uroš Bojanić updated SPARK-47355: - Summary: TBD (was: Min & Max) > TBD > --- > > Key: SPARK-47355 > URL: https://issues.apache.org/jira/browse/SPARK-47355 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major >
[jira] [Resolved] (SPARK-47895) group by all should be idempotent
[ https://issues.apache.org/jira/browse/SPARK-47895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenchen Fan resolved SPARK-47895. - Fix Version/s: 3.4.4 3.5.2 4.0.0 Resolution: Fixed Issue resolved by pull request 46113 [https://github.com/apache/spark/pull/46113] > group by all should be idempotent > - > > Key: SPARK-47895 > URL: https://issues.apache.org/jira/browse/SPARK-47895 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 3.4.0 >Reporter: Wenchen Fan >Assignee: Wenchen Fan >Priority: Major > Labels: pull-request-available > Fix For: 3.4.4, 3.5.2, 4.0.0 > >
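The resolved issue above is about analyzer idempotence: resolving GROUP BY ALL a second time must not change an already-resolved plan. A simplified, hypothetical Python sketch of that property (the dict "plan" and the resolver are illustrative stand-ins, not Spark's implementation):

```python
def resolve_group_by_all(query):
    """Hypothetical, simplified resolver: replace the GROUP BY ALL marker
    with the non-aggregate columns of the SELECT list."""
    if query["group_by"] == ["ALL"]:
        # Crude aggregate test for the sketch: treat anything with a call
        # syntax (parentheses) as an aggregate and exclude it from grouping.
        grouping = [c for c in query["select"] if "(" not in c]
        return {**query, "group_by": grouping}
    return query  # already resolved: re-running the rule must be a no-op

q = {"select": ["dept", "sum(salary)"], "group_by": ["ALL"]}
once = resolve_group_by_all(q)
twice = resolve_group_by_all(once)
assert once["group_by"] == ["dept"]
assert once == twice  # idempotent: a fixed point after one application
```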
[jira] [Assigned] (SPARK-47895) group by all should be idempotent
[ https://issues.apache.org/jira/browse/SPARK-47895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenchen Fan reassigned SPARK-47895: --- Assignee: Wenchen Fan > group by all should be idempotent > - > > Key: SPARK-47895 > URL: https://issues.apache.org/jira/browse/SPARK-47895 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 3.4.0 >Reporter: Wenchen Fan >Assignee: Wenchen Fan >Priority: Major > Labels: pull-request-available >
[jira] [Updated] (SPARK-47899) StageFailed event should attach the exception chain
[ https://issues.apache.org/jira/browse/SPARK-47899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Sahoo updated SPARK-47899: Description: As part of SPARK-39195, task is marked as failed but the exception chain was not sent, ultimately the cause becomes `null` in SparkException. It is not convenient to find the root cause from the detailed message. {code} /** * Called by the OutputCommitCoordinator to cancel stage due to data duplication may happen. */ private[scheduler] def stageFailed(stageId: Int, reason: String): Unit = { eventProcessLoop.post(StageFailed(stageId, reason, None)) } {code} was:As part of SPARK-39195, task is marked as failed but the exception chain was not sent, ultimately the cause becomes `null` in SparkException. Applications unable to get the root cause as the cause is null. > StageFailed event should attach the exception chain > --- > > Key: SPARK-47899 > URL: https://issues.apache.org/jira/browse/SPARK-47899 > Project: Spark > Issue Type: Improvement > Components: Spark Core >Affects Versions: 3.4.0 >Reporter: Arjun Sahoo >Assignee: BingKun Pan >Priority: Minor > > As part of SPARK-39195, task is marked as failed but the exception chain was > not sent, ultimately the cause becomes `null` in SparkException. It is not > convenient to find the root cause from the detailed message. > {code} > /** >* Called by the OutputCommitCoordinator to cancel stage due to data > duplication may happen. >*/ > private[scheduler] def stageFailed(stageId: Int, reason: String): Unit = { > eventProcessLoop.post(StageFailed(stageId, reason, None)) > } > {code}
[jira] [Assigned] (SPARK-47413) Substring, Right, Left (all collations)
[ https://issues.apache.org/jira/browse/SPARK-47413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot reassigned SPARK-47413: -- Assignee: (was: Apache Spark) > Substring, Right, Left (all collations) > --- > > Key: SPARK-47413 > URL: https://issues.apache.org/jira/browse/SPARK-47413 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Priority: Major > Labels: pull-request-available > > Enable collation support for the *Substring* built-in string function in > Spark (including *Right* and *Left* functions). First confirm what is the > expected behaviour for these functions when given collated strings, then move > on to the implementation that would enable handling strings of all collation > types. Implement the corresponding unit tests > (CollationStringExpressionsSuite) and E2E tests (CollationSuite) to reflect > how this function should be used with collation in SparkSQL, and feel free to > use your chosen Spark SQL Editor to experiment with the existing functions to > learn more about how they work. In addition, look into the possible use-cases > and implementation of similar functions within other open-source DBMS, > such as [PostgreSQL|https://www.postgresql.org/docs/]. > > The goal for this Jira ticket is to implement the {*}Substring{*}, > {*}Right{*}, and *Left* functions so that they support all collation types > currently supported in Spark. To understand what changes were introduced in > order to enable full collation support for other existing functions in Spark, > take a look at the Spark PRs and Jira tickets for completed tasks in this > parent (for example: Contains, StartsWith, EndsWith). > > Read more about ICU [Collation Concepts|http://example.com/] and > [Collator|http://example.com/] class. Also, refer to the Unicode Technical > Standard for > [collation|https://www.unicode.org/reports/tr35/tr35-collation.html#Collation_Type_Fallback]. 
[jira] [Updated] (SPARK-47899) StageFailed event should attach the exception chain
[ https://issues.apache.org/jira/browse/SPARK-47899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Sahoo updated SPARK-47899: Summary: StageFailed event should attach the exception chain (was: StageFailed event should attach with exception chain) > StageFailed event should attach the exception chain > --- > > Key: SPARK-47899 > URL: https://issues.apache.org/jira/browse/SPARK-47899 > Project: Spark > Issue Type: Improvement > Components: Spark Core >Affects Versions: 3.4.0 >Reporter: Arjun Sahoo >Assignee: BingKun Pan >Priority: Minor > > As part of SPARK-39195, task is marked as failed but the exception chain was > not sent, ultimately the cause becomes `null` in SparkException. Applications > unable to get the root cause as the cause is null.
[jira] [Assigned] (SPARK-47413) Substring, Right, Left (all collations)
[ https://issues.apache.org/jira/browse/SPARK-47413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot reassigned SPARK-47413: -- Assignee: Apache Spark > Substring, Right, Left (all collations) > --- > > Key: SPARK-47413 > URL: https://issues.apache.org/jira/browse/SPARK-47413 > Project: Spark > Issue Type: Sub-task > Components: SQL >Affects Versions: 4.0.0 >Reporter: Uroš Bojanić >Assignee: Apache Spark >Priority: Major > Labels: pull-request-available > > Enable collation support for the *Substring* built-in string function in > Spark (including *Right* and *Left* functions). First confirm what is the > expected behaviour for these functions when given collated strings, then move > on to the implementation that would enable handling strings of all collation > types. Implement the corresponding unit tests > (CollationStringExpressionsSuite) and E2E tests (CollationSuite) to reflect > how this function should be used with collation in SparkSQL, and feel free to > use your chosen Spark SQL Editor to experiment with the existing functions to > learn more about how they work. In addition, look into the possible use-cases > and implementation of similar functions within other open-source DBMS, > such as [PostgreSQL|https://www.postgresql.org/docs/]. > > The goal for this Jira ticket is to implement the {*}Substring{*}, > {*}Right{*}, and *Left* functions so that they support all collation types > currently supported in Spark. To understand what changes were introduced in > order to enable full collation support for other existing functions in Spark, > take a look at the Spark PRs and Jira tickets for completed tasks in this > parent (for example: Contains, StartsWith, EndsWith). > > Read more about ICU [Collation Concepts|http://example.com/] and > [Collator|http://example.com/] class. 
Also, refer to the Unicode Technical > Standard for > [collation|https://www.unicode.org/reports/tr35/tr35-collation.html#Collation_Type_Fallback].
[jira] [Assigned] (SPARK-47873) Write collated strings to hive as regular strings
[ https://issues.apache.org/jira/browse/SPARK-47873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot reassigned SPARK-47873: -- Assignee: (was: Apache Spark) > Write collated strings to hive as regular strings > - > > Key: SPARK-47873 > URL: https://issues.apache.org/jira/browse/SPARK-47873 > Project: Spark > Issue Type: Improvement > Components: Spark Core, SQL >Affects Versions: 4.0.0 >Reporter: Stefan Kandic >Priority: Major > Labels: pull-request-available > > As hive doesn't support collations we should write collated strings with a > regular string type but keep the collation in table metadata to properly read > them back.
[jira] [Assigned] (SPARK-47873) Write collated strings to hive as regular strings
[ https://issues.apache.org/jira/browse/SPARK-47873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot reassigned SPARK-47873: -- Assignee: Apache Spark > Write collated strings to hive as regular strings > - > > Key: SPARK-47873 > URL: https://issues.apache.org/jira/browse/SPARK-47873 > Project: Spark > Issue Type: Improvement > Components: Spark Core, SQL >Affects Versions: 4.0.0 >Reporter: Stefan Kandic >Assignee: Apache Spark >Priority: Major > Labels: pull-request-available > > As hive doesn't support collations we should write collated strings with a > regular string type but keep the collation in table metadata to properly read > them back.
[jira] [Updated] (SPARK-47899) StageFailed event should attach with exception chain
[ https://issues.apache.org/jira/browse/SPARK-47899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Sahoo updated SPARK-47899: Summary: StageFailed event should attach with exception chain (was: StageFailed event should attach with exception stack) > StageFailed event should attach with exception chain > > > Key: SPARK-47899 > URL: https://issues.apache.org/jira/browse/SPARK-47899 > Project: Spark > Issue Type: Improvement > Components: Spark Core >Affects Versions: 3.4.0 >Reporter: Arjun Sahoo >Assignee: BingKun Pan >Priority: Minor > > As part of SPARK-39195, task is marked as failed but the exception chain was > not sent, ultimately the cause becomes `null` in SparkException. Applications > unable to get the root cause as the cause is null.
[jira] [Updated] (SPARK-47899) StageFailed event should attach with exception stack
[ https://issues.apache.org/jira/browse/SPARK-47899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Sahoo updated SPARK-47899: Description: As part of SPARK-39195, task is marked as failed but the exception chain was not sent, ultimately the cause becomes `null` in SparkException. Applications unable to get the root cause as the cause is null. > StageFailed event should attach with exception stack > > > Key: SPARK-47899 > URL: https://issues.apache.org/jira/browse/SPARK-47899 > Project: Spark > Issue Type: Improvement > Components: Spark Core >Affects Versions: 3.4.0 >Reporter: Arjun Sahoo >Assignee: BingKun Pan >Priority: Minor > > As part of SPARK-39195, task is marked as failed but the exception chain was > not sent, ultimately the cause becomes `null` in SparkException. Applications > unable to get the root cause as the cause is null.
[jira] [Updated] (SPARK-47899) StageFailed event should attach with exception stack
[ https://issues.apache.org/jira/browse/SPARK-47899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Sahoo updated SPARK-47899: Fix Version/s: (was: 3.4.0) > StageFailed event should attach with exception stack > > > Key: SPARK-47899 > URL: https://issues.apache.org/jira/browse/SPARK-47899 > Project: Spark > Issue Type: Improvement > Components: Spark Core >Affects Versions: 3.4.0 >Reporter: Arjun Sahoo >Assignee: BingKun Pan >Priority: Minor >
[jira] [Created] (SPARK-47899) StageFailed event should attach with exception stack
Arjun Sahoo created SPARK-47899: --- Summary: StageFailed event should attach with exception stack Key: SPARK-47899 URL: https://issues.apache.org/jira/browse/SPARK-47899 Project: Spark Issue Type: Improvement Components: Spark Core Affects Versions: 3.4.0 Reporter: Arjun Sahoo Assignee: BingKun Pan Fix For: 3.4.0
[jira] [Updated] (SPARK-47898) Port HIVE-12270: Add DBTokenStore support to HS2 delegation token
[ https://issues.apache.org/jira/browse/SPARK-47898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated SPARK-47898: --- Labels: pull-request-available (was: ) > Port HIVE-12270: Add DBTokenStore support to HS2 delegation token > - > > Key: SPARK-47898 > URL: https://issues.apache.org/jira/browse/SPARK-47898 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 4.0.0 >Reporter: Kent Yao >Priority: Major > Labels: pull-request-available >
[jira] [Created] (SPARK-47898) Port HIVE-12270: Add DBTokenStore support to HS2 delegation token
Kent Yao created SPARK-47898: Summary: Port HIVE-12270: Add DBTokenStore support to HS2 delegation token Key: SPARK-47898 URL: https://issues.apache.org/jira/browse/SPARK-47898 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 4.0.0 Reporter: Kent Yao
[jira] [Created] (SPARK-47897) ExpressionSet performance regression in scala 2.12
Zhen Wang created SPARK-47897: - Summary: ExpressionSet performance regression in scala 2.12 Key: SPARK-47897 URL: https://issues.apache.org/jira/browse/SPARK-47897 Project: Spark Issue Type: Improvement Components: SQL Affects Versions: 3.5.1 Reporter: Zhen Wang After [SPARK-38836|https://issues.apache.org/jira/browse/SPARK-38836], the `++` and `--` methods in ExpressionSet were removed, causing performance regression.
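A hedged plain-Python sketch of the regression pattern the report above describes (not Spark code; counting backing-collection copies stands in for ExpressionSet's per-element rebuild cost): when a specialized bulk `++` is removed, an immutable set falls back to adding one element at a time, copying its backing collection on every step.

```python
copies = {"n": 0}

class ImmutableSet:
    def __init__(self, items):
        copies["n"] += 1              # each construction copies the backing set
        self._items = frozenset(items)

    def add(self, x):                 # `+`: returns a new set (one full copy)
        return ImmutableSet(self._items | {x})

    def union_elementwise(self, xs):  # the inherited default: fold over `+`
        s = self
        for x in xs:
            s = s.add(x)
        return s

    def union_bulk(self, xs):         # a specialized `++`: one copy total
        return ImmutableSet(self._items | set(xs))

base = ImmutableSet(range(3))

copies["n"] = 0
a = base.union_elementwise(range(3, 8))
elementwise_copies = copies["n"]      # one copy per added element

copies["n"] = 0
b = base.union_bulk(range(3, 8))
bulk_copies = copies["n"]             # a single copy

assert a._items == b._items == frozenset(range(8))
assert elementwise_copies == 5 and bulk_copies == 1
```

The same result, but the element-wise path does linear-many full copies per union, which compounds to quadratic work when unions are chained.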
[jira] [Assigned] (SPARK-47850) Support converting insert for unpartitioned Hive table
[ https://issues.apache.org/jira/browse/SPARK-47850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Jie reassigned SPARK-47850: Assignee: Cheng Pan > Support converting insert for unpartitioned Hive table > -- > > Key: SPARK-47850 > URL: https://issues.apache.org/jira/browse/SPARK-47850 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 4.0.0 >Reporter: Cheng Pan >Assignee: Cheng Pan >Priority: Major > Labels: pull-request-available >
[jira] [Assigned] (SPARK-47767) Show offset value in TakeOrderedAndProjectExec
[ https://issues.apache.org/jira/browse/SPARK-47767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hyukjin Kwon reassigned SPARK-47767: Assignee: guihuawen > Show offset value in TakeOrderedAndProjectExec > -- > > Key: SPARK-47767 > URL: https://issues.apache.org/jira/browse/SPARK-47767 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.4.0, 3.5.0, 4.0.0 >Reporter: guihuawen >Assignee: guihuawen >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > Show the offset value in TakeOrderedAndProjectExec. > > For example: > > explain select * from test_limit_offset order by a limit 2 offset 1; > plan > == Physical Plan == > TakeOrderedAndProject(limit=3, orderBy=[a#171 ASC NULLS FIRST|#171 ASC NULLS > FIRST], output=[a#171|#171]) > +- Scan hive spark_catalog.bigdata_qa.test_limit_offset [a#171|#171], > HiveTableRelation [`spark_catalog`.`test`.`test_limit_offset`, > org.apache.hadoop.hive.ql.io.orc.OrcSerde, Data Cols: [a#171|#171], Partition > Cols: []] > > No offset is displayed. If it is displayed, it will be more user-friendly >
[jira] [Resolved] (SPARK-47850) Support converting insert for unpartitioned Hive table
[ https://issues.apache.org/jira/browse/SPARK-47850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Jie resolved SPARK-47850. -- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46052 [https://github.com/apache/spark/pull/46052] > Support converting insert for unpartitioned Hive table > -- > > Key: SPARK-47850 > URL: https://issues.apache.org/jira/browse/SPARK-47850 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 4.0.0 >Reporter: Cheng Pan >Assignee: Cheng Pan >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > >
[jira] [Resolved] (SPARK-47767) Show offset value in TakeOrderedAndProjectExec
[ https://issues.apache.org/jira/browse/SPARK-47767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hyukjin Kwon resolved SPARK-47767. -- Resolution: Fixed Issue resolved by pull request 45931 [https://github.com/apache/spark/pull/45931] > Show offset value in TakeOrderedAndProjectExec > -- > > Key: SPARK-47767 > URL: https://issues.apache.org/jira/browse/SPARK-47767 > Project: Spark > Issue Type: Improvement > Components: SQL >Affects Versions: 3.4.0, 3.5.0, 4.0.0 >Reporter: guihuawen >Assignee: guihuawen >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > Show the offset value in TakeOrderedAndProjectExec. > > For example: > > explain select * from test_limit_offset order by a limit 2 offset 1; > plan > == Physical Plan == > TakeOrderedAndProject(limit=3, orderBy=[a#171 ASC NULLS FIRST|#171 ASC NULLS > FIRST], output=[a#171|#171]) > +- Scan hive spark_catalog.bigdata_qa.test_limit_offset [a#171|#171], > HiveTableRelation [`spark_catalog`.`test`.`test_limit_offset`, > org.apache.hadoop.hive.ql.io.orc.OrcSerde, Data Cols: [a#171|#171], Partition > Cols: []] > > No offset is displayed. If it is displayed, it will be more user-friendly >
[jira] [Assigned] (SPARK-47852) Support DataFrameQueryContext for reverse operations
[ https://issues.apache.org/jira/browse/SPARK-47852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hyukjin Kwon reassigned SPARK-47852: Assignee: Haejoon Lee > Support DataFrameQueryContext for reverse operations > > > Key: SPARK-47852 > URL: https://issues.apache.org/jira/browse/SPARK-47852 > Project: Spark > Issue Type: Bug > Components: PySpark >Affects Versions: 4.0.0 >Reporter: Haejoon Lee >Assignee: Haejoon Lee >Priority: Major > Labels: pull-request-available > > To improve error message for reverse ops
[jira] [Resolved] (SPARK-47858) Refactoring the structure for DataFrame error context
[ https://issues.apache.org/jira/browse/SPARK-47858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hyukjin Kwon resolved SPARK-47858. -- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46063 [https://github.com/apache/spark/pull/46063] > Refactoring the structure for DataFrame error context > - > > Key: SPARK-47858 > URL: https://issues.apache.org/jira/browse/SPARK-47858 > Project: Spark > Issue Type: Bug > Components: PySpark >Affects Versions: 4.0.0 >Reporter: Haejoon Lee >Assignee: Haejoon Lee >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > The current implementation for PySpark DataFrame error context could be more > flexible by addressing some hacky spots.
[jira] [Resolved] (SPARK-47852) Support DataFrameQueryContext for reverse operations
[ https://issues.apache.org/jira/browse/SPARK-47852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hyukjin Kwon resolved SPARK-47852. -- Fix Version/s: 4.0.0 Resolution: Fixed Issue resolved by pull request 46063 [https://github.com/apache/spark/pull/46063] > Support DataFrameQueryContext for reverse operations > > > Key: SPARK-47852 > URL: https://issues.apache.org/jira/browse/SPARK-47852 > Project: Spark > Issue Type: Bug > Components: PySpark >Affects Versions: 4.0.0 >Reporter: Haejoon Lee >Assignee: Haejoon Lee >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > To improve error message for reverse ops
[jira] [Assigned] (SPARK-47858) Refactoring the structure for DataFrame error context
[ https://issues.apache.org/jira/browse/SPARK-47858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hyukjin Kwon reassigned SPARK-47858: Assignee: Haejoon Lee > Refactoring the structure for DataFrame error context > - > > Key: SPARK-47858 > URL: https://issues.apache.org/jira/browse/SPARK-47858 > Project: Spark > Issue Type: Bug > Components: PySpark >Affects Versions: 4.0.0 >Reporter: Haejoon Lee >Assignee: Haejoon Lee >Priority: Major > Labels: pull-request-available > > The current implementation for PySpark DataFrame error context could be more > flexible by addressing some hacky spots.
[jira] [Resolved] (SPARK-47864) Enhance "Installation" page to cover all installable options
[ https://issues.apache.org/jira/browse/SPARK-47864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-47864.
----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 46096
[https://github.com/apache/spark/pull/46096]

> Enhance "Installation" page to cover all installable options
> ------------------------------------------------------------
>
>                 Key: SPARK-47864
>                 URL: https://issues.apache.org/jira/browse/SPARK-47864
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 4.0.0
>            Reporter: Haejoon Lee
>            Assignee: Haejoon Lee
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
> Like Installation page from Pandas, we might need to cover all installable
> options with related dependencies from our Installation documentation.
[jira] [Assigned] (SPARK-47864) Enhance "Installation" page to cover all installable options
[ https://issues.apache.org/jira/browse/SPARK-47864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon reassigned SPARK-47864:
------------------------------------

    Assignee: Haejoon Lee

> Enhance "Installation" page to cover all installable options
> ------------------------------------------------------------
>
>                 Key: SPARK-47864
>                 URL: https://issues.apache.org/jira/browse/SPARK-47864
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 4.0.0
>            Reporter: Haejoon Lee
>            Assignee: Haejoon Lee
>            Priority: Major
>              Labels: pull-request-available
>
> Like Installation page from Pandas, we might need to cover all installable
> options with related dependencies from our Installation documentation.
[jira] [Updated] (SPARK-47896) Upgrade netty to `4.1.109.Final`
[ https://issues.apache.org/jira/browse/SPARK-47896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-47896:
----------------------------------
        Parent: SPARK-47046
    Issue Type: Sub-task  (was: Improvement)

> Upgrade netty to `4.1.109.Final`
> --------------------------------
>
>                 Key: SPARK-47896
>                 URL: https://issues.apache.org/jira/browse/SPARK-47896
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: BingKun Pan
>            Assignee: BingKun Pan
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 4.0.0
[jira] [Assigned] (SPARK-47896) Upgrade netty to `4.1.109.Final`
[ https://issues.apache.org/jira/browse/SPARK-47896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-47896:
-------------------------------------

    Assignee: BingKun Pan

> Upgrade netty to `4.1.109.Final`
> --------------------------------
>
>                 Key: SPARK-47896
>                 URL: https://issues.apache.org/jira/browse/SPARK-47896
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: BingKun Pan
>            Assignee: BingKun Pan
>            Priority: Minor
>              Labels: pull-request-available
[jira] [Resolved] (SPARK-47896) Upgrade netty to `4.1.109.Final`
[ https://issues.apache.org/jira/browse/SPARK-47896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun resolved SPARK-47896.
-----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 46112
[https://github.com/apache/spark/pull/46112]

> Upgrade netty to `4.1.109.Final`
> --------------------------------
>
>                 Key: SPARK-47896
>                 URL: https://issues.apache.org/jira/browse/SPARK-47896
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 4.0.0
>            Reporter: BingKun Pan
>            Assignee: BingKun Pan
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 4.0.0