[jira] [Updated] (SPARK-45724) Add a separate LogAppender for tests that need to print `RuleExecutor.dumpTimeSpent()` to avoid truncation of Metrics

2023-10-30 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45724:
-
Summary: Add a separate LogAppender for tests that need to print 
`RuleExecutor.dumpTimeSpent()` to avoid truncation of Metrics  (was: Add a 
separate LogAppender for tests that need to print 
`RuleExecutor.dumpTimeSpent()` to avoid truncation of Metrics information.)

> Add a separate LogAppender for tests that need to print 
> `RuleExecutor.dumpTimeSpent()` to avoid truncation of Metrics
> -
>
> Key: SPARK-45724
> URL: https://issues.apache.org/jira/browse/SPARK-45724
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL, Tests
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
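The motivation behind this ticket is that a shared test log appender that caps captured message length will truncate the long metrics text produced by `RuleExecutor.dumpTimeSpent()`; a dedicated appender with no cap keeps the full dump. A dependency-free sketch of that idea (the names `CapturingAppender` and `maxMessageLength` are illustrative, not Spark's actual test `LogAppender` API):

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch only: `CapturingAppender` and `maxMessageLength` are
// hypothetical names, not Spark's actual test utility.
final class CapturingAppender(maxMessageLength: Int = Int.MaxValue) {
  private val events = ArrayBuffer.empty[String]

  // A shared appender might truncate long messages to keep test output small;
  // a dedicated appender for metrics dumps simply keeps the full text.
  def append(message: String): Unit =
    events += message.take(maxMessageLength)

  def loggingEvents: Seq[String] = events.toSeq
}

object CapturingAppenderDemo {
  def run(): (String, String) = {
    val metricsDump = "=== Metrics of Analyzer/Optimizer Rules ===" + " x" * 50
    val shared = new CapturingAppender(maxMessageLength = 16) // truncates
    val dedicated = new CapturingAppender()                   // keeps everything
    shared.append(metricsDump)
    dedicated.append(metricsDump)
    (shared.loggingEvents.head, dedicated.loggingEvents.head)
  }
}
```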




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45724) Add a separate LogAppender for tests that need to print `RuleExecutor.dumpTimeSpent()` to avoid truncation of Metrics information.

2023-10-30 Thread Yang Jie (Jira)
Yang Jie created SPARK-45724:


 Summary: Add a separate LogAppender for tests that need to print 
`RuleExecutor.dumpTimeSpent()` to avoid truncation of Metrics information.
 Key: SPARK-45724
 URL: https://issues.apache.org/jira/browse/SPARK-45724
 Project: Spark
  Issue Type: Improvement
  Components: SQL, Tests
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Updated] (SPARK-45701) Clean up the deprecated API usage related to `SetOps`

2023-10-28 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45701:
-
Description: 
* method - in trait SetOps is deprecated (since 2.13.0)
 * method -- in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
 method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
immutable Set or fall back to Set.union
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
 origin=scala.collection.SetOps.+, version=2.13.0
[warn]       if (set.contains(t)) set + i else set + t
[warn]                                ^ {code}

  was:
* method - in trait SetOps is deprecated (since 2.13.0)
 * method – in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
 method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
immutable Set or fall back to Set.union
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
 origin=scala.collection.SetOps.+, version=2.13.0
[warn]       if (set.contains(t)) set + i else set + t
[warn]                                ^ {code}


> Clean up the deprecated API usage related to `SetOps`
> -
>
> Key: SPARK-45701
> URL: https://issues.apache.org/jira/browse/SPARK-45701
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * method - in trait SetOps is deprecated (since 2.13.0)
>  * method -- in trait SetOps is deprecated (since 2.13.0)
>  * method + in trait SetOps is deprecated (since 2.13.0)
>  * method retain in trait SetOps is deprecated (since 2.13.0)
>  
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
>  method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
> immutable Set or fall back to Set.union
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
>  origin=scala.collection.SetOps.+, version=2.13.0
> [warn]       if (set.contains(t)) set + i else set + t
> [warn]                                ^ {code}
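The deprecated `SetOps` methods flagged above have direct replacements: mutate in place, or work with an immutable `Set`, where `+`/`-` remain fully supported. A small sketch of both options (names are illustrative, not Spark's actual change):

```scala
import scala.collection.mutable

object SetOpsMigration {
  // Deprecated in 2.13: `mutableSet + elem` (copy-producing alias on mutable sets).
  // Replacement 1: convert to an immutable Set, where `+` is not deprecated.
  def addedCopy(set: mutable.Set[Int], elem: Int): Set[Int] =
    set.toSet + elem

  // Replacement 2: mutate in place with `+=`, which is not deprecated.
  def addInPlace(set: mutable.Set[Int], elem: Int): mutable.Set[Int] = {
    set += elem
    set
  }
}
```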






[jira] [Updated] (SPARK-45684) Clean up the deprecated API usage related to `SeqOps`

2023-10-27 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45684:
-
Description: 
* method transform in trait SeqOps is deprecated (since 2.13.0)
 * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
 * method union in trait SeqOps is deprecated (since 2.13.0)

{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
 method transform in trait SeqOps is deprecated (since 2.13.0): Use 
`mapInPlace` on an `IndexedSeq` instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
[warn]       centers.transform(_ / numCoefficientSets)
[warn]               ^ {code}

  was:
* method transform in trait SeqOps is deprecated (since 2.13.0)
 * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)
 * method union in trait SeqOps is deprecated (since 2.13.0)

{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
 method transform in trait SeqOps is deprecated (since 2.13.0): Use 
`mapInPlace` on an `IndexedSeq` instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
[warn]       centers.transform(_ / numCoefficientSets)
[warn]               ^ {code}


> Clean up the deprecated API usage related to `SeqOps`
> -
>
> Key: SPARK-45684
> URL: https://issues.apache.org/jira/browse/SPARK-45684
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * method transform in trait SeqOps is deprecated (since 2.13.0)
>  * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
>  * method union in trait SeqOps is deprecated (since 2.13.0)
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
>  method transform in trait SeqOps is deprecated (since 2.13.0): Use 
> `mapInPlace` on an `IndexedSeq` instead
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
> origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
> [warn]       centers.transform(_ / numCoefficientSets)
> [warn]               ^ {code}
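The `SeqOps` deprecations above are mechanical to fix: `transform` becomes `mapInPlace` on a mutable indexed sequence (exactly what the compiler message suggests), and `reverseMap` becomes an explicit reverse plus map. A sketch under those assumptions (simplified from the `LogisticRegression` line quoted above):

```scala
import scala.collection.mutable

object SeqOpsMigration {
  // Deprecated in 2.13: `centers.transform(_ / n)` on a mutable sequence.
  // Replacement: `mapInPlace` on a mutable IndexedSeq/Buffer.
  def scaleInPlace(centers: mutable.ArrayBuffer[Double], n: Double): mutable.ArrayBuffer[Double] = {
    centers.mapInPlace(_ / n)
    centers
  }

  // Deprecated `reverseMap` becomes an explicit reverseIterator + map.
  def reverseDoubled(xs: Seq[Int]): Seq[Int] =
    xs.reverseIterator.map(_ * 2).toSeq
}
```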






[jira] [Assigned] (SPARK-45651) Snapshots of some packages are not published any more

2023-10-27 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45651:


Assignee: Enrico Minack

> Snapshots of some packages are not published any more
> -
>
> Key: SPARK-45651
> URL: https://issues.apache.org/jira/browse/SPARK-45651
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Enrico Minack
>Assignee: Enrico Minack
>Priority: Major
>  Labels: pull-request-available
>
> Snapshots of some packages are no longer being published; e.g. 
> spark-sql_2.13-4.0.0 has not been published since Sep 13th: 
> https://repository.apache.org/content/groups/snapshots/org/apache/spark/spark-sql_2.13/4.0.0-SNAPSHOT/
> There have been some attempts to fix CI: SPARK-45535 SPARK-45536
> The assumption is that memory consumption during the build exceeds the 
> available memory of the GitHub host.
> The following could be attempted:
> - enable manual triggering of the {{publish_snapshots.yml}} workflow
> - enable some memory-use logging to prove that exhausted memory is the root 
> cause
> - attempt to reduce the memory footprint and check the impact in the above 
> logging
> - revert the memory-use logging






[jira] [Resolved] (SPARK-45651) Snapshots of some packages are not published any more

2023-10-27 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45651.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43555
[https://github.com/apache/spark/pull/43555]

> Snapshots of some packages are not published any more
> -
>
> Key: SPARK-45651
> URL: https://issues.apache.org/jira/browse/SPARK-45651
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Enrico Minack
>Assignee: Enrico Minack
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> Snapshots of some packages are no longer being published; e.g. 
> spark-sql_2.13-4.0.0 has not been published since Sep 13th: 
> https://repository.apache.org/content/groups/snapshots/org/apache/spark/spark-sql_2.13/4.0.0-SNAPSHOT/
> There have been some attempts to fix CI: SPARK-45535 SPARK-45536
> The assumption is that memory consumption during the build exceeds the 
> available memory of the GitHub host.
> The following could be attempted:
> - enable manual triggering of the {{publish_snapshots.yml}} workflow
> - enable some memory-use logging to prove that exhausted memory is the root 
> cause
> - attempt to reduce the memory footprint and check the impact in the above 
> logging
> - revert the memory-use logging






[jira] [Updated] (SPARK-45701) Clean up the deprecated API usage related to `SetOps`

2023-10-27 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45701:
-
Description: 
* method - in trait SetOps is deprecated (since 2.13.0)
 * method – in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
 method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
immutable Set or fall back to Set.union
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
 origin=scala.collection.SetOps.+, version=2.13.0
[warn]       if (set.contains(t)) set + i else set + t
[warn]                                ^ {code}

  was:
* method - in trait SetOps is deprecated (since 2.13.0)
 * method – in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
 method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
immutable Set or fall back to Set.union
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
 origin=scala.collection.SetOps.+, version=2.13.0
[warn]       if (set.contains(t)) set + i else set + t
[warn]                                ^ {code}


> Clean up the deprecated API usage related to `SetOps`
> -
>
> Key: SPARK-45701
> URL: https://issues.apache.org/jira/browse/SPARK-45701
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * method - in trait SetOps is deprecated (since 2.13.0)
>  * method – in trait SetOps is deprecated (since 2.13.0)
>  * method + in trait SetOps is deprecated (since 2.13.0)
>  * method retain in trait SetOps is deprecated (since 2.13.0)
>  
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
>  method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
> immutable Set or fall back to Set.union
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
>  origin=scala.collection.SetOps.+, version=2.13.0
> [warn]       if (set.contains(t)) set + i else set + t
> [warn]                                ^ {code}






[jira] [Updated] (SPARK-45685) Use `LazyList` instead of `Stream`

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45685:
-
Description: 
* class Stream in package immutable is deprecated (since 2.13.0)
 * object Stream in package immutable is deprecated (since 2.13.0)
 * type Stream in package scala is deprecated (since 2.13.0)
 * value Stream in package scala is deprecated (since 2.13.0)
 * method append in class Stream is deprecated (since 2.13.0)
 * method toStream in trait IterableOnceOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/GenTPCDSData.scala:49:20:
 class Stream in package immutable is deprecated (since 2.13.0): Use LazyList 
(which is fully lazy) instead of Stream (which has a lazy tail only)
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.sql.BlockingLineStream.BlockingStreamed.stream, 
origin=scala.collection.immutable.Stream, version=2.13.0
[warn]     val stream: () => Stream[T])
[warn]                    ^ {code}

  was:
* class Stream in package immutable is deprecated (since 2.13.0)object Stream in
 * package immutable is deprecated (since 2.13.0)
 * type Stream in package scala is deprecated (since 2.13.0)
 * value Stream in package scala is deprecated (since 2.13.0)
 * method append in class Stream is deprecated (since 2.13.0)
 * method toStream in trait IterableOnceOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/GenTPCDSData.scala:49:20:
 class Stream in package immutable is deprecated (since 2.13.0): Use LazyList 
(which is fully lazy) instead of Stream (which has a lazy tail only)
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.sql.BlockingLineStream.BlockingStreamed.stream, 
origin=scala.collection.immutable.Stream, version=2.13.0
[warn]     val stream: () => Stream[T])
[warn]                    ^ {code}


> Use `LazyList` instead of `Stream`
> --
>
> Key: SPARK-45685
> URL: https://issues.apache.org/jira/browse/SPARK-45685
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * class Stream in package immutable is deprecated (since 2.13.0)
>  * object Stream in package immutable is deprecated (since 2.13.0)
>  * type Stream in package scala is deprecated (since 2.13.0)
>  * value Stream in package scala is deprecated (since 2.13.0)
>  * method append in class Stream is deprecated (since 2.13.0)
>  * method toStream in trait IterableOnceOps is deprecated (since 2.13.0)
>  
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/GenTPCDSData.scala:49:20:
>  class Stream in package immutable is deprecated (since 2.13.0): Use LazyList 
> (which is fully lazy) instead of Stream (which has a lazy tail only)
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.BlockingLineStream.BlockingStreamed.stream, 
> origin=scala.collection.immutable.Stream, version=2.13.0
> [warn]     val stream: () => Stream[T])
> [warn]                    ^ {code}
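The replacement named in the warning above is direct: `Stream` (lazy tail only) becomes `LazyList` (fully lazy), and `toStream` becomes `to(LazyList)`. A minimal sketch of the migrated style:

```scala
object LazyListMigration {
  // Deprecated in 2.13: scala.Stream (strict head, lazy tail).
  // Replacement: LazyList, which is lazy in both head and tail.
  def naturals: LazyList[Int] = LazyList.from(0)

  // Deprecated `xs.toStream` becomes `xs.to(LazyList)`.
  def firstSquares(n: Int): List[Int] =
    naturals.map(i => i * i).take(n).toList
}
```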






[jira] [Updated] (SPARK-45699) Fix "Widening conversion from `TypeA` to `TypeB` is deprecated because it loses precision"

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45699:
-
Summary: Fix "Widening conversion from `TypeA` to `TypeB` is deprecated 
because it loses precision"  (was: Fix "Widening conversion from `TypeA` to 
`TypeB` is deprecated because it loses precision. Write `.toTypeB` instead")

> Fix "Widening conversion from `TypeA` to `TypeB` is deprecated because it 
> loses precision"
> --
>
> Key: SPARK-45699
> URL: https://issues.apache.org/jira/browse/SPARK-45699
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:1199:67:
>  Widening conversion from Long to Double is deprecated because it loses 
> precision. Write `.toDouble` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.scheduler.TaskSetManager.checkSpeculatableTasks.threshold
> [error]       val threshold = max(speculationMultiplier * medianDuration, 
> minTimeToSpeculation)
> [error]                                                                   ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:1207:60:
>  Widening conversion from Long to Double is deprecated because it loses 
> precision. Write `.toDouble` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.scheduler.TaskSetManager.checkSpeculatableTasks
> [error]       foundTasks = checkAndSubmitSpeculatableTasks(timeMs, threshold, 
> customizedThreshold = true)
> [error]                                                            ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:137:48:
>  Widening conversion from Int to Float is deprecated because it loses 
> precision. Write `.toFloat` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.connect.client.arrow.IntVectorReader.getFloat
> [error]   override def getFloat(i: Int): Float = getInt(i)
> [error]                                                ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:146:49:
>  Widening conversion from Long to Float is deprecated because it loses 
> precision. Write `.toFloat` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.connect.client.arrow.BigIntVectorReader.getFloat
> [error]   override def getFloat(i: Int): Float = getLong(i)
> [error]                                                 ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:147:51:
>  Widening conversion from Long to Double is deprecated because it loses 
> precision. Write `.toDouble` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.connect.client.arrow.BigIntVectorReader.getDouble
> [error]   override def getDouble(i: Int): Double = getLong(i)
> [error]                                                   ^ {code}
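Each fix in this ticket is mechanical: make the lossy widening explicit with `.toDouble` or `.toFloat`, as the compiler message instructs. A sketch mirroring (in simplified form) the `TaskSetManager` and `ArrowVectorReader` lines quoted above:

```scala
object WideningMigration {
  // Scala 2.13 deprecates implicit lossy widenings (Long -> Double/Float,
  // Int -> Float); the fix is an explicit .toDouble / .toFloat.
  def speculationThreshold(multiplier: Double,
                           medianDuration: Long,
                           minTimeToSpeculation: Long): Double =
    math.max(multiplier * medianDuration.toDouble, minTimeToSpeculation.toDouble)

  // Instead of returning the Long `i` where a Float is expected:
  def getFloat(i: Long): Float = i.toFloat
}
```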






[jira] [Updated] (SPARK-45699) Fix "Widening conversion from `TypeA` to `TypeB` is deprecated because it loses precision. Write `.toTypeB` instead"

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45699:
-
Summary: Fix "Widening conversion from `TypeA` to `TypeB` is deprecated 
because it loses precision. Write `.toTypeB` instead"  (was: Fix "Widening 
conversion from `OType` to `NType` is deprecated because it loses precision. 
Write `.toXX` instead")

> Fix "Widening conversion from `TypeA` to `TypeB` is deprecated because it 
> loses precision. Write `.toTypeB` instead"
> 
>
> Key: SPARK-45699
> URL: https://issues.apache.org/jira/browse/SPARK-45699
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:1199:67:
>  Widening conversion from Long to Double is deprecated because it loses 
> precision. Write `.toDouble` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.scheduler.TaskSetManager.checkSpeculatableTasks.threshold
> [error]       val threshold = max(speculationMultiplier * medianDuration, 
> minTimeToSpeculation)
> [error]                                                                   ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:1207:60:
>  Widening conversion from Long to Double is deprecated because it loses 
> precision. Write `.toDouble` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.scheduler.TaskSetManager.checkSpeculatableTasks
> [error]       foundTasks = checkAndSubmitSpeculatableTasks(timeMs, threshold, 
> customizedThreshold = true)
> [error]                                                            ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:137:48:
>  Widening conversion from Int to Float is deprecated because it loses 
> precision. Write `.toFloat` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.connect.client.arrow.IntVectorReader.getFloat
> [error]   override def getFloat(i: Int): Float = getInt(i)
> [error]                                                ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:146:49:
>  Widening conversion from Long to Float is deprecated because it loses 
> precision. Write `.toFloat` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.connect.client.arrow.BigIntVectorReader.getFloat
> [error]   override def getFloat(i: Int): Float = getLong(i)
> [error]                                                 ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:147:51:
>  Widening conversion from Long to Double is deprecated because it loses 
> precision. Write `.toDouble` instead. [quickfixable]
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.connect.client.arrow.BigIntVectorReader.getDouble
> [error]   override def getDouble(i: Int): Double = getLong(i)
> [error]                                                   ^ {code}






[jira] [Created] (SPARK-45704) Fix `legacy-binding`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45704:


 Summary: Fix `legacy-binding`
 Key: SPARK-45704
 URL: https://issues.apache.org/jira/browse/SPARK-45704
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/deploy/client/StandaloneAppClient.scala:93:11:
 reference to stop is ambiguous;
[error] it is both defined in the enclosing class StandaloneAppClient and 
inherited in the enclosing class ClientEndpoint as method stop (defined in 
trait RpcEndpoint, inherited through parent trait ThreadSafeRpcEndpoint)
[error] In Scala 2, symbols inherited from a superclass shadow symbols defined 
in an outer scope.
[error] Such references are ambiguous in Scala 3. To continue using the 
inherited symbol, write `this.stop`.
[error] Or use `-Wconf:msg=legacy-binding:s` to silence this warning. 
[quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=other, 
site=org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint.onStart
[error]           stop()
[error]           ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/deploy/client/StandaloneAppClient.scala:171:9:
 reference to stop is ambiguous;
[error] it is both defined in the enclosing class StandaloneAppClient and 
inherited in the enclosing class ClientEndpoint as method stop (defined in 
trait RpcEndpoint, inherited through parent trait ThreadSafeRpcEndpoint)
[error] In Scala 2, symbols inherited from a superclass shadow symbols defined 
in an outer scope.
[error] Such references are ambiguous in Scala 3. To continue using the 
inherited symbol, write `this.stop`.
[error] Or use `-Wconf:msg=legacy-binding:s` to silence this warning. 
[quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=other, 
site=org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint.receive
[error]         stop()
[error]         ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/deploy/client/StandaloneAppClient.scala:206:9:
 reference to stop is ambiguous;
[error] it is both defined in the enclosing class StandaloneAppClient and 
inherited in the enclosing class ClientEndpoint as method stop (defined in 
trait RpcEndpoint, inherited through parent trait ThreadSafeRpcEndpoint)
[error] In Scala 2, symbols inherited from a superclass shadow symbols defined 
in an outer scope.
[error] Such references are ambiguous in Scala 3. To continue using the 
inherited symbol, write `this.stop`.
[error] Or use `-Wconf:msg=legacy-binding:s` to silence this warning. 
[quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=other, 
site=org.apache.spark.deploy.client.StandaloneAppClient.ClientEndpoint.receiveAndReply
[error]         stop()
[error]         ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala:100:21:
 the type test for pattern org.apache.spark.RangePartitioner[K,V] cannot be 
checked at runtime because it has type parameters eliminated by erasure
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.rdd.OrderedRDDFunctions.filterByRange.rddToFilter
[error]       case Some(rp: RangePartitioner[K, V]) =>
[error]                     ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala:322:9:
 reference to stop is ambiguous;
[error] it is both defined in the enclosing class CoarseGrainedSchedulerBackend 
and inherited in the enclosing class DriverEndpoint as method stop (defined in 
trait RpcEndpoint, inherited through parent trait IsolatedThreadSafeRpcEndpoint)
[error] In Scala 2, symbols inherited from a superclass shadow symbols defined 
in an outer scope.
[error] Such references are ambiguous in Scala 3. To continue using the 
inherited symbol, write `this.stop`.
[error] Or use `-Wconf:msg=legacy-binding:s` to silence this warning. 
[quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=other, 
site=org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.DriverEndpoint.receiveAndReply
[error]         stop()
[error]         ^
[info] compiling 29 Scala sources and 267 Java sources to 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/target/scala-2.13/classes
 ...
[warn] -target is deprecated: Use -release instead to compile against the 
correct platform API.
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/sp
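The ambiguity described in the errors above can be reproduced and fixed in miniature: `stop` is defined on the enclosing class and also inherited from a trait, and writing `this.stop()` selects the inherited member explicitly, as the compiler message suggests. A minimal sketch (all names here are illustrative stand-ins for the Spark classes):

```scala
object LegacyBindingMigration {
  var calls: List[String] = Nil

  // Stand-in for the inherited trait (e.g. RpcEndpoint in Spark).
  trait RpcEndpointLike { def stop(): Unit = { calls ::= "endpoint.stop" } }

  // Stand-in for the enclosing class (e.g. StandaloneAppClient).
  class Client {
    def stop(): Unit = { calls ::= "client.stop" }

    class Endpoint extends RpcEndpointLike {
      // A bare `stop()` here is ambiguous in Scala 3 (and warned in 2.13):
      // it could be the inherited RpcEndpointLike.stop or the enclosing
      // Client.stop. `this.stop()` names the inherited member explicitly.
      def onDisconnected(): Unit = this.stop()
    }
  }
}
```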

[jira] [Created] (SPARK-45703) Fix `abstract type TypeA in type pattern Some[TypeA] is unchecked since it is eliminated by erasure`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45703:


 Summary: Fix `abstract type TypeA in type pattern Some[TypeA] is 
unchecked since it is eliminated by erasure`
 Key: SPARK-45703
 URL: https://issues.apache.org/jira/browse/SPARK-45703
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala:105:19:
 abstract type ScalaInputType in type pattern Some[ScalaInputType] is unchecked 
since it is eliminated by erasure
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter.toCatalyst
[error]         case opt: Some[ScalaInputType] => toCatalystImpl(opt.get)
[error]                   ^ {code}
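One way to resolve this warning is to destructure the `Option` instead of testing the erased `Some[ScalaInputType]` type: matching on the constructor survives erasure, and the single remaining cast documents where the type assumption lives. A minimal standalone sketch (the converter and its names are hypothetical, not Spark's actual `CatalystTypeConverter`):

```scala
// Sketch: avoid `case opt: Some[T]` (unchecked after erasure) by
// destructuring the Option instead.
object ErasureSafeMatch {
  // Hypothetical converter standing in for a generic toCatalyst-style method.
  def toCatalyst[T](value: Any)(convert: T => String): String = value match {
    // Before: case opt: Some[T] => convert(opt.get)   // unchecked warning
    // After: match the constructor, then cast once where the caller's
    // contract guarantees the element type.
    case Some(inner) => convert(inner.asInstanceOf[T])
    case other       => other.toString
  }
}
```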



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45702) Fix `the type test for pattern TypeA cannot be checked at runtime`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45702:


 Summary: Fix `the type test for pattern TypeA cannot be checked at 
runtime`
 Key: SPARK-45702
 URL: https://issues.apache.org/jira/browse/SPARK-45702
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala:100:21:
 the type test for pattern org.apache.spark.RangePartitioner[K,V] cannot be 
checked at runtime because it has type parameters eliminated by erasure
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.rdd.OrderedRDDFunctions.filterByRange.rddToFilter
[error]       case Some(rp: RangePartitioner[K, V]) =>
[error]  {code}
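The usual fix here is to replace the erased type arguments with wildcards so the runtime test only checks the class, which is all the JVM can verify anyway. A self-contained sketch with a hypothetical `Box` class standing in for `RangePartitioner`:

```scala
// Sketch: type arguments are erased, so test the class with wildcards.
class Box[K, V](val k: K, val v: V)

object ErasedTypeTest {
  def describe(p: Option[AnyRef]): String = p match {
    // Before: case Some(b: Box[K, V]) => ...   // unchecked warning
    // After: wildcard the erased parameters.
    case Some(b: Box[_, _]) => s"box(${b.k}, ${b.v})"
    case _                  => "none"
  }
}
```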



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45701) Clean up the deprecated API usage related to `SetOps`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45701:


 Summary: Clean up the deprecated API usage related to `SetOps`
 Key: SPARK-45701
 URL: https://issues.apache.org/jira/browse/SPARK-45701
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


* method - in trait SetOps is deprecated (since 2.13.0)
 * method -- in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala:70:32:
 method + in trait SetOps is deprecated (since 2.13.0): Consider requiring an 
immutable Set or fall back to Set.union
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.storage.BlockReplicationUtils.getSampleIds.indices.$anonfun,
 origin=scala.collection.SetOps.+, version=2.13.0
[warn]       if (set.contains(t)) set + i else set + t
[warn]                                ^ {code}
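As the deprecation message suggests, this call site can fall back to `Set.union` (or require an immutable `Set`, whose `+` is not deprecated). A small sketch mirroring the `getSampleIds` pattern above (names are illustrative):

```scala
import scala.collection.mutable

object SetOpsCleanup {
  // Before (deprecated on a mutable Set):
  //   if (set.contains(t)) set + i else set + t
  // After: union with a one-element immutable Set, as the message suggests.
  def addSample(set: mutable.Set[Int], i: Int, t: Int): mutable.Set[Int] =
    if (set.contains(t)) set.union(Set(i)) else set.union(Set(t))
}
```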



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-45684) Clean up the deprecated API usage related to `SeqOps`

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45684:
-
Description: 
* method transform in trait SeqOps is deprecated (since 2.13.0)
 * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)
 * method union in trait SeqOps is deprecated (since 2.13.0)

{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
 method transform in trait SeqOps is deprecated (since 2.13.0): Use 
`mapInPlace` on an `IndexedSeq` instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
[warn]       centers.transform(_ / numCoefficientSets)
[warn]               ^ {code}

  was:
* method transform in trait SeqOps is deprecated (since 2.13.0)
 * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)
 * method - in trait SetOps is deprecated (since 2.13.0)
 * method -- in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)

{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
 method transform in trait SeqOps is deprecated (since 2.13.0): Use 
`mapInPlace` on an `IndexedSeq` instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
[warn]       centers.transform(_ / numCoefficientSets)
[warn]               ^ {code}


> Clean up the deprecated API usage related to `SeqOps`
> -
>
> Key: SPARK-45684
> URL: https://issues.apache.org/jira/browse/SPARK-45684
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> * method transform in trait SeqOps is deprecated (since 2.13.0)
>  * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
>  * method retain in trait SetOps is deprecated (since 2.13.0)
>  * method union in trait SeqOps is deprecated (since 2.13.0)
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
>  method transform in trait SeqOps is deprecated (since 2.13.0): Use 
> `mapInPlace` on an `IndexedSeq` instead
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
> origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
> [warn]       centers.transform(_ / numCoefficientSets)
> [warn]               ^ {code}
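For the `transform` case quoted above, the replacement named in the deprecation message is `mapInPlace` on a mutable `IndexedSeq`, which has the same in-place semantics. A minimal before/after sketch (the buffer here is illustrative, not MLlib's actual `centers`):

```scala
import scala.collection.mutable.ArrayBuffer

object SeqOpsCleanup {
  // Before: centers.transform(_ / n)   // deprecated since 2.13.0
  // After:  centers.mapInPlace(_ / n)  // same in-place update, no warning
  def scaleInPlace(centers: ArrayBuffer[Double], n: Double): ArrayBuffer[Double] =
    centers.mapInPlace(_ / n)
}
```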



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45700) Fix `The outer reference in this type test cannot be checked at run time`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45700:


 Summary: Fix `The outer reference in this type test cannot be 
checked at run time`
 Key: SPARK-45700
 URL: https://issues.apache.org/jira/browse/SPARK-45700
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:324:12:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.createScalaTestCase
[error]       case udfTestCase: UDFTest
[error]            ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:506:12:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries
[error]       case udfTestCase: UDFTest =>
[error]            ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:508:12:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries
[error]       case udtfTestCase: UDTFSetTest =>
[error]            ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:514:13:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries
[error]       case _: PgSQLTest =>
[error]             ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:522:13:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries
[error]       case _: AnsiTest =>
[error]             ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:524:13:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries
[error]       case _: TimestampNTZTest =>
[error]             ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:584:12:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries.clue
[error]       case udfTestCase: UDFTest
[error]            ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/SQLQueryTestSuite.scala:596:12:
 The outer reference in this type test cannot be checked at run time.
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=unchecked, 
site=org.apache.spark.sql.SQLQueryTestSuite.runQueries.clue
[error]       case udtfTestCase: UDTFSetTest
[error]            ^ {code}
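This warning fires when a pattern tests a trait nested in an enclosing class, because the captured outer instance cannot be verified at run time. One common remedy, sketched below with hypothetical marker traits, is to host the markers somewhere that captures no outer reference (e.g. in an object), so the class test alone is sufficient:

```scala
// Sketch: marker traits defined in an object have no outer instance,
// so type tests against them are fully checkable at run time.
object Suites {
  trait UDFTest   // hypothetical stand-ins for the suite's marker traits
  trait PgSQLTest
}

object OuterRefDemo {
  import Suites._
  def kind(tc: AnyRef): String = tc match {
    case _: UDFTest   => "udf"
    case _: PgSQLTest => "pgsql"
    case _            => "plain"
  }
}
```

Where moving the traits is not practical, annotating the single pattern with `@unchecked` (e.g. `case _: UDFTest @unchecked =>`) can silence the check at that site instead.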



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45699) Fix "Widening conversion from `OType` to `NType` is deprecated because it loses precision. Write `.toXX` instead"

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45699:


 Summary: Fix "Widening conversion from `OType` to `NType` is 
deprecated because it loses precision. Write `.toXX` instead"
 Key: SPARK-45699
 URL: https://issues.apache.org/jira/browse/SPARK-45699
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:1199:67:
 Widening conversion from Long to Double is deprecated because it loses 
precision. Write `.toDouble` instead. [quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.scheduler.TaskSetManager.checkSpeculatableTasks.threshold
[error]       val threshold = max(speculationMultiplier * medianDuration, 
minTimeToSpeculation)
[error]                                                                   ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala:1207:60:
 Widening conversion from Long to Double is deprecated because it loses 
precision. Write `.toDouble` instead. [quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.scheduler.TaskSetManager.checkSpeculatableTasks
[error]       foundTasks = checkAndSubmitSpeculatableTasks(timeMs, threshold, 
customizedThreshold = true)
[error]                                                            ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:137:48:
 Widening conversion from Int to Float is deprecated because it loses 
precision. Write `.toFloat` instead. [quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.sql.connect.client.arrow.IntVectorReader.getFloat
[error]   override def getFloat(i: Int): Float = getInt(i)
[error]                                                ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:146:49:
 Widening conversion from Long to Float is deprecated because it loses 
precision. Write `.toFloat` instead. [quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.sql.connect.client.arrow.BigIntVectorReader.getFloat
[error]   override def getFloat(i: Int): Float = getLong(i)
[error]                                                 ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowVectorReader.scala:147:51:
 Widening conversion from Long to Double is deprecated because it loses 
precision. Write `.toDouble` instead. [quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.sql.connect.client.arrow.BigIntVectorReader.getDouble
[error]   override def getDouble(i: Int): Double = getLong(i)
[error]                                                   ^ {code}
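The fix named in each message is mechanical: make the lossy widening explicit with `.toDouble` / `.toFloat`. A minimal sketch of the two shapes seen above (the names and signatures are illustrative, not Spark's actual ones):

```scala
object WideningFix {
  // Before: `multiplier * medianDurationMs` relied on the deprecated
  // implicit Long -> Double widening, which can lose precision silently.
  // After: spell out the conversion.
  def threshold(multiplier: Double, medianDurationMs: Long, minMs: Long): Double =
    math.max(multiplier * medianDurationMs.toDouble, minMs.toDouble)

  // Before: `override def getFloat(i: Int): Float = getInt(i)` widened
  // Int -> Float implicitly. After: convert explicitly.
  def getFloat(i: Int): Float = i.toFloat
}
```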



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45687) Fix `Passing an explicit array value to a Scala varargs method is deprecated`

2023-10-26 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780150#comment-17780150
 ] 

Yang Jie commented on SPARK-45687:
--

We need to distinguish between the call sites: some should be changed to 
`.toIndexedSeq`, others to `ArraySeq.unsafeWrapArray`

> Fix `Passing an explicit array value to a Scala varargs method is deprecated`
> -
>
> Key: SPARK-45687
> URL: https://issues.apache.org/jira/browse/SPARK-45687
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> Passing an explicit array value to a Scala varargs method is deprecated 
> (since 2.13.0) and will result in a defensive copy; Use the more efficient 
> non-copying ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
>  
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala:945:21:
>  Passing an explicit array value to a Scala varargs method is deprecated 
> (since 2.13.0) and will result in a defensive copy; Use the more efficient 
> non-copying ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.hive.execution.AggregationQuerySuite, version=2.13.0
> [warn]         df.agg(udaf(allColumns: _*)),
> [warn]                     ^
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/ObjectHashAggregateSuite.scala:156:48:
>  Passing an explicit array value to a Scala varargs method is deprecated 
> (since 2.13.0) and will result in a defensive copy; Use the more efficient 
> non-copying ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite, 
> version=2.13.0
> [warn]         df.agg(aggFunctions.head, aggFunctions.tail: _*),
> [warn]                                                ^
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/ObjectHashAggregateSuite.scala:161:76:
>  Passing an explicit array value to a Scala varargs method is deprecated 
> (since 2.13.0) and will result in a defensive copy; Use the more efficient 
> non-copying ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite, 
> version=2.13.0
> [warn]         df.groupBy($"id" % 4 as "mod").agg(aggFunctions.head, 
> aggFunctions.tail: _*),
> [warn]                                                                        
>     ^
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/ObjectHashAggregateSuite.scala:171:50:
>  Passing an explicit array value to a Scala varargs method is deprecated 
> (since 2.13.0) and will result in a defensive copy; Use the more efficient 
> non-copying ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite, 
> version=2.13.0
> [warn]           df.agg(aggFunctions.head, aggFunctions.tail: _*),
> [warn]  {code}
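Both replacements offered by the deprecation message can be made explicit at the call site, choosing deliberately between copying and wrapping. A self-contained sketch (the `sum` varargs method is hypothetical):

```scala
import scala.collection.immutable.ArraySeq

object VarargsFix {
  def sum(xs: Int*): Int = xs.sum

  // Before: sum(arr: _*) on an Array triggers the deprecated defensive copy.
  // After: wrap without copying (caller must not mutate arr afterwards) ...
  def sumNoCopy(arr: Array[Int]): Int = sum(ArraySeq.unsafeWrapArray(arr): _*)

  // ... or copy explicitly when the array may still be mutated.
  def sumCopied(arr: Array[Int]): Int = sum(arr.toIndexedSeq: _*)
}
```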



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45686) Fix `method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is deprecated`

2023-10-26 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780149#comment-17780149
 ] 

Yang Jie commented on SPARK-45686:
--

We need to distinguish between the call sites: some should be changed to 
`.toIndexedSeq`, others to `ArraySeq.unsafeWrapArray`

> Fix `method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated`
> 
>
> Key: SPARK-45686
> URL: https://issues.apache.org/jira/browse/SPARK-45686
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:57:31:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(s1.indices, s1.values, s2.indices, 
> s2.values)
> [error]                               ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:57:54:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(s1.indices, s1.values, s2.indices, 
> s2.values)
> [error]                                                      ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:59:31:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(s1.indices, s1.values, 0 until d1.size, 
> d1.values)
> [error]                               ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:61:59:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(0 until d1.size, d1.values, s1.indices, 
> s1.values)
> [error]  {code}
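Unlike the varargs ticket, here the deprecated implicit conversion fires whenever an `Array` is passed where an `immutable.IndexedSeq` (or plain `Seq`) parameter is expected. The remedy is the same: wrap explicitly. A hypothetical sketch:

```scala
import scala.collection.immutable.ArraySeq

object ImplicitArrayConversionFix {
  // Hypothetical method expecting a Seq, like Vectors.equals above.
  def firstOrZero(xs: Seq[Int]): Int = xs.headOption.getOrElse(0)

  // Before: firstOrZero(arr) relied on the deprecated implicit
  // Array -> immutable.IndexedSeq copy. After: wrap without copying.
  def firstOrZeroFixed(arr: Array[Int]): Int =
    firstOrZero(ArraySeq.unsafeWrapArray(arr))
}
```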



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-45686) Fix `method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is deprecated`

2023-10-26 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780149#comment-17780149
 ] 

Yang Jie edited comment on SPARK-45686 at 10/27/23 3:21 AM:


We need to distinguish between the call sites: some should be changed to 
`.toIndexedSeq`, others to `ArraySeq.unsafeWrapArray`


was (Author: luciferyang):
We need to distinguish the situations, some need to be changed to 
`.toIndexedSeq`, some need to be changed to `ArraySeq.unsafeWrapArray`
 
 
 
 
 
 
 
 

> Fix `method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated`
> 
>
> Key: SPARK-45686
> URL: https://issues.apache.org/jira/browse/SPARK-45686
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:57:31:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(s1.indices, s1.values, s2.indices, 
> s2.values)
> [error]                               ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:57:54:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(s1.indices, s1.values, s2.indices, 
> s2.values)
> [error]                                                      ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:59:31:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(s1.indices, s1.values, 0 until d1.size, 
> d1.values)
> [error]                               ^
> [error] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:61:59:
>  method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
> deprecated (since 2.13.0): implicit conversions from Array to 
> immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` 
> explicitly if you want to copy, or use the more efficient non-copying 
> ArraySeq.unsafeWrapArray
> [error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=<part of the message>, cat=deprecation, 
> site=org.apache.spark.ml.linalg.Vector.equals, 
> origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
> version=2.13.0
> [error]             Vectors.equals(0 until d1.size, d1.values, s1.indices, 
> s1.values)
> [error]  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45314) Drop Scala 2.12 and make Scala 2.13 by default

2023-10-26 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780148#comment-17780148
 ] 

Yang Jie commented on SPARK-45314:
--

Friendly ping [~ivoson] [~panbingkun] [~zhiyuan] [~laglangyue]. I have created 
some tickets here; feel free to pick them up if you are interested ~

> Drop Scala 2.12 and make Scala 2.13 by default
> --
>
> Key: SPARK-45314
> URL: https://issues.apache.org/jira/browse/SPARK-45314
> Project: Spark
>  Issue Type: Umbrella
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Hyukjin Kwon
>Assignee: Yang Jie
>Priority: Critical
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45698) Clean up the deprecated API usage related to `Buffer`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45698:


 Summary: Clean up the deprecated API usage related to `Buffer`
 Key: SPARK-45698
 URL: https://issues.apache.org/jira/browse/SPARK-45698
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


* method append in trait Buffer is deprecated (since 2.13.0)
 * method prepend in trait Buffer is deprecated (since 2.13.0)
 * method trimEnd in trait Buffer is deprecated (since 2.13.4)
 * method trimStart in trait Buffer is deprecated (since 2.13.4)

{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/test/scala/org/apache/spark/deploy/IvyTestUtils.scala:319:18:
 method append in trait Buffer is deprecated (since 2.13.0): Use appendAll 
instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.deploy.IvyTestUtils.createLocalRepository, 
origin=scala.collection.mutable.Buffer.append, version=2.13.0
[warn]         allFiles.append(rFiles: _*)
[warn]                  ^ 

[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/util/SizeEstimator.scala:183:13:
 method trimEnd in trait Buffer is deprecated (since 2.13.4): use 
dropRightInPlace instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, 
site=org.apache.spark.util.SizeEstimator.SearchState.dequeue, 
origin=scala.collection.mutable.Buffer.trimEnd, version=2.13.4
[warn]       stack.trimEnd(1)
[warn]             ^{code}
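Each of the four deprecated `Buffer` methods has a direct in-place replacement: `appendAll`, `prependAll`, `dropRightInPlace`, and `dropInPlace`. A compact sketch of all four substitutions:

```scala
import scala.collection.mutable.ArrayBuffer

object BufferCleanup {
  def demo(): ArrayBuffer[Int] = {
    val buf = ArrayBuffer(1, 2, 3)
    buf.appendAll(Seq(4, 5))   // instead of buf.append(4, 5)
    buf.prependAll(Seq(0))     // instead of buf.prepend(0)
    buf.dropRightInPlace(1)    // instead of buf.trimEnd(1)
    buf.dropInPlace(1)         // instead of buf.trimStart(1)
    buf
  }
}
```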



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45697) Fix `Unicode escapes in triple quoted strings are deprecated`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45697:


 Summary: Fix `Unicode escapes in triple quoted strings are 
deprecated`
 Key: SPARK-45697
 URL: https://issues.apache.org/jira/browse/SPARK-45697
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala:1686:44:
 Unicode escapes in triple quoted strings are deprecated; use the literal 
character instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, version=2.13.2
[warn]         |  COLLECTION ITEMS TERMINATED BY '\u0002'
[warn]                                            ^ {code}
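One way to fix this without embedding a raw control character in the source is to keep the escape in an ordinary (single-quoted) string, where it is still processed, and interpolate it into the triple-quoted string. A minimal sketch:

```scala
object UnicodeEscapeFix {
  // Before (deprecated): the \u0002 escape written directly inside a
  // triple-quoted string literal.
  // After: process the escape in a normal string literal ...
  val sep: String = "\u0002"
  // ... and interpolate it where the triple-quoted text needs it.
  val clause: String = s"""COLLECTION ITEMS TERMINATED BY '$sep'"""
}
```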



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45667) Clean up the deprecated API usage related to `IterableOnceExtensionMethods`.

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45667.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43532
[https://github.com/apache/spark/pull/43532]

> Clean up the deprecated API usage related to `IterableOnceExtensionMethods`.
> 
>
> Key: SPARK-45667
> URL: https://issues.apache.org/jira/browse/SPARK-45667
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Assigned] (SPARK-45667) Clean up the deprecated API usage related to `IterableOnceExtensionMethods`.

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45667:


Assignee: Yang Jie

> Clean up the deprecated API usage related to `IterableOnceExtensionMethods`.
> 
>
> Key: SPARK-45667
> URL: https://issues.apache.org/jira/browse/SPARK-45667
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Created] (SPARK-45696) Fix `method tryCompleteWith in trait Promise is deprecated`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45696:


 Summary: Fix `method tryCompleteWith in trait Promise is 
deprecated`
 Key: SPARK-45696
 URL: https://issues.apache.org/jira/browse/SPARK-45696
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/FutureAction.scala:190:32:
 method tryCompleteWith in trait Promise is deprecated (since 2.13.0): Since 
this method is semantically equivalent to `completeWith`, use that instead.
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, site=org.apache.spark.ComplexFutureAction.p, 
origin=scala.concurrent.Promise.tryCompleteWith, version=2.13.0
[warn]   private val p = Promise[T]().tryCompleteWith(run(jobSubmitter))
[warn]                                ^ {code}
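Since the deprecation message notes the two methods are semantically equivalent, the fix is a rename. An illustrative sketch, not the Spark `FutureAction` code itself:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val p = Promise[Int]()
// Before (deprecated since 2.13.0): p.tryCompleteWith(Future(42))
// After: completeWith is documented as semantically equivalent
p.completeWith(Future(42))
val result = Await.result(p.future, 2.seconds)
```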






[jira] [Created] (SPARK-45694) Fix `method signum in trait ScalaNumberProxy is deprecated`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45694:


 Summary: Fix `method signum in trait ScalaNumberProxy is 
deprecated`
 Key: SPARK-45694
 URL: https://issues.apache.org/jira/browse/SPARK-45694
 Project: Spark
  Issue Type: Sub-task
  Components: SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:194:25:
 method signum in trait ScalaNumberProxy is deprecated (since 2.13.0): use 
`sign` method instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.catalyst.expressions.EquivalentExpressions.updateExprTree.uc,
 origin=scala.runtime.ScalaNumberProxy.signum, version=2.13.0
[warn]       val uc = useCount.signum
[warn]   {code}
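The replacement is the one the compiler names: `sign` instead of `signum`. A minimal illustrative sketch (hypothetical values, not the Spark source):

```scala
val useCount = -3
// Before (deprecated since 2.13.0): val uc = useCount.signum
// After: sign returns -1 for negative, 0 for zero, 1 for positive
val uc = useCount.sign
```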






[jira] [Created] (SPARK-45691) Clean up the deprecated API usage related to `RightProjection/LeftProjection/Either`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45691:


 Summary:   Clean up the deprecated API usage related to 
`RightProjection/LeftProjection/Either`
 Key: SPARK-45691
 URL: https://issues.apache.org/jira/browse/SPARK-45691
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


* method get in class RightProjection is deprecated (since 2.13.0)
 * method get in class LeftProjection is deprecated (since 2.13.0)
 * method right in class Either is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/GroupBasedRowLevelOperationScanPlanning.scala:54:28:
 method get in class LeftProjection is deprecated (since 2.13.0): use 
`Either.swap.getOrElse` instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.execution.datasources.v2.GroupBasedRowLevelOperationScanPlanning.apply,
 origin=scala.util.Either.LeftProjection.get, version=2.13.0
[warn]         pushedFilters.left.get.mkString(", ")
[warn]                            ^ {code}
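As the warning suggests, `left.get` becomes `swap.getOrElse`, which also forces the caller to supply a default. A hedged sketch with made-up values (the real `pushedFilters` type in Spark may differ):

```scala
val pushedFilters: Either[Seq[String], Seq[String]] = Left(Seq("a > 1"))
// Before (deprecated since 2.13.0): pushedFilters.left.get
// After: swap turns the Left into a Right, then getOrElse reads it safely
val filters = pushedFilters.swap.getOrElse(Seq.empty)
```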






[jira] [Created] (SPARK-45690) Clean up type use of `BufferedIterator/CanBuildFrom/Traversable`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45690:


 Summary: Clean up type use of 
`BufferedIterator/CanBuildFrom/Traversable`
 Key: SPARK-45690
 URL: https://issues.apache.org/jira/browse/SPARK-45690
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie


* type BufferedIterator in package scala is deprecated (since 2.13.0)
 * type CanBuildFrom in package generic is deprecated (since 2.13.0)
 * type Traversable in package scala is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/main/scala/org/apache/spark/sql/execution/GroupedIterator.scala:67:12:
 type BufferedIterator in package scala is deprecated (since 2.13.0): Use 
scala.collection.BufferedIterator instead of scala.BufferedIterator
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.execution.GroupedIterator.input, 
origin=scala.BufferedIterator, version=2.13.0
[warn]     input: BufferedIterator[InternalRow],
[warn]            ^ {code}
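For these type aliases the fix is only an import: the alias in package `scala` is deprecated, the class in `scala.collection` is not. An illustrative sketch, not the `GroupedIterator` code:

```scala
import scala.collection.BufferedIterator // non-deprecated location of the type

// Before (deprecated since 2.13.0), the bare name resolved to scala.BufferedIterator
def peekFirst(input: BufferedIterator[Int]): Option[Int] =
  if (input.hasNext) Some(input.head) else None // head peeks without consuming

val it = Iterator(1, 2, 3).buffered
```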






[jira] [Created] (SPARK-45689) Clean up the deprecated API usage related to `StringContext/StringOps`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45689:


 Summary:   Clean up the deprecated API usage related to 
`StringContext/StringOps`
 Key: SPARK-45689
 URL: https://issues.apache.org/jira/browse/SPARK-45689
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/javaCode.scala:258:30:
 method treatEscapes in object StringContext is deprecated (since 2.13.0): use 
processEscapes
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.catalyst.expressions.codegen.Block.foldLiteralArgs, 
origin=scala.StringContext.treatEscapes, version=2.13.0
[warn]     buf.append(StringContext.treatEscapes(strings.next()))
[warn]                              ^
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/javaCode.scala:270:32:
 method treatEscapes in object StringContext is deprecated (since 2.13.0): use 
processEscapes
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.catalyst.expressions.codegen.Block.foldLiteralArgs, 
origin=scala.StringContext.treatEscapes, version=2.13.0
[warn]       buf.append(StringContext.treatEscapes(strings.next()))
[warn]   {code}
 

 
 * method checkLengths in class StringContext is deprecated (since 2.13.0)
 * method treatEscapes in object StringContext is deprecated (since 2.13.0)
 * method replaceAllLiterally in class StringOps is deprecated (since 2.13.2)
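Both replacements are direct renames to methods that already exist in 2.13. A minimal illustrative sketch:

```scala
// Before (deprecated since 2.13.0): StringContext.treatEscapes("a\\tb")
val processed = StringContext.processEscapes("a\\tb") // escape becomes a real tab
// Before (deprecated since 2.13.2): "a.b.c".replaceAllLiterally(".", "/")
val replaced = "a.b.c".replace(".", "/") // plain String#replace is literal already
```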






[jira] [Created] (SPARK-45688) Clean up the deprecated API usage related to `MapOps`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45688:


 Summary: Clean up the deprecated API usage related to `MapOps`
 Key: SPARK-45688
 URL: https://issues.apache.org/jira/browse/SPARK-45688
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


* method - in trait MapOps is deprecated (since 2.13.0)
 * method -- in trait MapOps is deprecated (since 2.13.0)
 * method + in trait MapOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/deploy/worker/CommandUtils.scala:84:27:
 method + in trait MapOps is deprecated (since 2.13.0): Consider requiring an 
immutable Map or fall back to Map.concat.
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.deploy.worker.CommandUtils.buildLocalCommand.newEnvironment,
 origin=scala.collection.MapOps.+, version=2.13.0
[warn]       command.environment + ((libraryPathName, 
libraryPaths.mkString(File.pathSeparator)))
[warn]                           ^
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/deploy/worker/CommandUtils.scala:91:22:
 method + in trait MapOps is deprecated (since 2.13.0): Consider requiring an 
immutable Map or fall back to Map.concat.
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.deploy.worker.CommandUtils.buildLocalCommand, 
origin=scala.collection.MapOps.+, version=2.13.0
[warn]       newEnvironment += (SecurityManager.ENV_AUTH_SECRET -> 
securityMgr.getSecretKey())
[warn]                      ^ {code}
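The deprecation applies to `+`/`-`/`--` on the generic `MapOps` trait; on an immutable `Map` the named alternatives are straightforward. A hedged sketch with invented keys, not the `CommandUtils` code:

```scala
val env: Map[String, String] = Map("PATH" -> "/bin")
// Before (deprecated on the generic trait): env + ("SECRET" -> "x"), env - "PATH"
val withSecret = env.concat(Map("SECRET" -> "x")) // replaces `+`
val without = env.removed("PATH")                 // replaces `-` on immutable Map
```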






[jira] [Created] (SPARK-45687) Fix `Passing an explicit array value to a Scala varargs method is deprecated`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45687:


 Summary: Fix `Passing an explicit array value to a Scala varargs 
method is deprecated`
 Key: SPARK-45687
 URL: https://issues.apache.org/jira/browse/SPARK-45687
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


Passing an explicit array value to a Scala varargs method is deprecated (since 
2.13.0) and will result in a defensive copy; Use the more efficient non-copying 
ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/AggregationQuerySuite.scala:945:21:
 Passing an explicit array value to a Scala varargs method is deprecated (since 
2.13.0) and will result in a defensive copy; Use the more efficient non-copying 
ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.hive.execution.AggregationQuerySuite, version=2.13.0
[warn]         df.agg(udaf(allColumns: _*)),
[warn]                     ^
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/ObjectHashAggregateSuite.scala:156:48:
 Passing an explicit array value to a Scala varargs method is deprecated (since 
2.13.0) and will result in a defensive copy; Use the more efficient non-copying 
ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite, 
version=2.13.0
[warn]         df.agg(aggFunctions.head, aggFunctions.tail: _*),
[warn]                                                ^
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/ObjectHashAggregateSuite.scala:161:76:
 Passing an explicit array value to a Scala varargs method is deprecated (since 
2.13.0) and will result in a defensive copy; Use the more efficient non-copying 
ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite, 
version=2.13.0
[warn]         df.groupBy($"id" % 4 as "mod").agg(aggFunctions.head, 
aggFunctions.tail: _*),
[warn]                                                                          
  ^
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/ObjectHashAggregateSuite.scala:171:50:
 Passing an explicit array value to a Scala varargs method is deprecated (since 
2.13.0) and will result in a defensive copy; Use the more efficient non-copying 
ArraySeq.unsafeWrapArray or an explicit toIndexedSeq call
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.hive.execution.ObjectHashAggregateSuite, 
version=2.13.0
[warn]           df.agg(aggFunctions.head, aggFunctions.tail: _*),
[warn]  {code}
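The non-copying fix is to wrap the array in `ArraySeq.unsafeWrapArray` before splatting it. A minimal sketch with a hypothetical varargs method, not the actual test suites:

```scala
import scala.collection.immutable.ArraySeq

def total(xs: Int*): Int = xs.sum
val values = Array(1, 2, 3)
// Before (deprecated since 2.13.0, makes a defensive copy): total(values: _*)
val result = total(ArraySeq.unsafeWrapArray(values): _*) // wraps without copying
```

`unsafeWrapArray` is "unsafe" only in that later mutation of `values` would show through the wrapper; for arrays that are not mutated afterwards it is the efficient choice.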






[jira] [Created] (SPARK-45686) Fix `method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is deprecated`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45686:


 Summary: Fix `method copyArrayToImmutableIndexedSeq in class 
LowPriorityImplicits2 is deprecated`
 Key: SPARK-45686
 URL: https://issues.apache.org/jira/browse/SPARK-45686
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:57:31:
 method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
deprecated (since 2.13.0): implicit conversions from Array to 
immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` explicitly 
if you want to copy, or use the more efficient non-copying 
ArraySeq.unsafeWrapArray
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=, cat=deprecation, 
site=org.apache.spark.ml.linalg.Vector.equals, 
origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
version=2.13.0
[error]             Vectors.equals(s1.indices, s1.values, s2.indices, s2.values)
[error]                               ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:57:54:
 method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
deprecated (since 2.13.0): implicit conversions from Array to 
immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` explicitly 
if you want to copy, or use the more efficient non-copying 
ArraySeq.unsafeWrapArray
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=, cat=deprecation, 
site=org.apache.spark.ml.linalg.Vector.equals, 
origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
version=2.13.0
[error]             Vectors.equals(s1.indices, s1.values, s2.indices, s2.values)
[error]                                                      ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:59:31:
 method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
deprecated (since 2.13.0): implicit conversions from Array to 
immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` explicitly 
if you want to copy, or use the more efficient non-copying 
ArraySeq.unsafeWrapArray
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=, cat=deprecation, 
site=org.apache.spark.ml.linalg.Vector.equals, 
origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
version=2.13.0
[error]             Vectors.equals(s1.indices, s1.values, 0 until d1.size, 
d1.values)
[error]                               ^
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala:61:59:
 method copyArrayToImmutableIndexedSeq in class LowPriorityImplicits2 is 
deprecated (since 2.13.0): implicit conversions from Array to 
immutable.IndexedSeq are implemented by copying; use `toIndexedSeq` explicitly 
if you want to copy, or use the more efficient non-copying 
ArraySeq.unsafeWrapArray
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=, cat=deprecation, 
site=org.apache.spark.ml.linalg.Vector.equals, 
origin=scala.LowPriorityImplicits2.copyArrayToImmutableIndexedSeq, 
version=2.13.0
[error]             Vectors.equals(0 until d1.size, d1.values, s1.indices, 
s1.values)
[error]  {code}
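The same `ArraySeq.unsafeWrapArray` wrapper (or an explicit `toIndexedSeq` when a copy is intended) silences this implicit-conversion deprecation. An illustrative sketch with a hypothetical helper, not the `Vectors.equals` code:

```scala
import scala.collection.immutable.ArraySeq

def firstIndex(indices: IndexedSeq[Int]): Int = indices.head
val idx = Array(0, 2, 5)
// Before: firstIndex(idx) relied on the deprecated, copying implicit conversion
val i = firstIndex(ArraySeq.unsafeWrapArray(idx)) // wraps without copying
// idx.toIndexedSeq is the explicit alternative when a copy is actually wanted
```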






[jira] [Created] (SPARK-45685) Use `LazyList` instead of `Stream`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45685:


 Summary: Use `LazyList` instead of `Stream`
 Key: SPARK-45685
 URL: https://issues.apache.org/jira/browse/SPARK-45685
 Project: Spark
  Issue Type: Sub-task
  Components: Build, Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


* class Stream in package immutable is deprecated (since 2.13.0)
 * object Stream in package immutable is deprecated (since 2.13.0)
 * type Stream in package scala is deprecated (since 2.13.0)
 * value Stream in package scala is deprecated (since 2.13.0)
 * method append in class Stream is deprecated (since 2.13.0)
 * method toStream in trait IterableOnceOps is deprecated (since 2.13.0)

 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/core/src/test/scala/org/apache/spark/sql/GenTPCDSData.scala:49:20:
 class Stream in package immutable is deprecated (since 2.13.0): Use LazyList 
(which is fully lazy) instead of Stream (which has a lazy tail only)
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.BlockingLineStream.BlockingStreamed.stream, 
origin=scala.collection.immutable.Stream, version=2.13.0
[warn]     val stream: () => Stream[T])
[warn]                    ^ {code}
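`LazyList` is a drop-in replacement in most call sites, with the advantage the deprecation message notes: it is lazy in both head and tail, where `Stream` had a strict head. A minimal illustrative sketch:

```scala
// Before (deprecated since 2.13.0):
// val naturals: Stream[Int] = Stream.from(1)
// After: LazyList is fully lazy
val naturals: LazyList[Int] = LazyList.from(1)
val firstThree = naturals.take(3).toList // forces only three elements
```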






[jira] [Created] (SPARK-45684) Clean up the deprecated API usage related to `SeqOps`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45684:


 Summary: Clean up the deprecated API usage related to `SeqOps`
 Key: SPARK-45684
 URL: https://issues.apache.org/jira/browse/SPARK-45684
 Project: Spark
  Issue Type: Sub-task
  Components: Build, Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


* method transform in trait SeqOps is deprecated (since 2.13.0)
 * method reverseMap in trait SeqOps is deprecated (since 2.13.0)
 * method retain in trait SetOps is deprecated (since 2.13.0)
 * method - in trait SetOps is deprecated (since 2.13.0)
 * method -- in trait SetOps is deprecated (since 2.13.0)
 * method + in trait SetOps is deprecated (since 2.13.0)

{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala:675:15:
 method transform in trait SeqOps is deprecated (since 2.13.0): Use 
`mapInPlace` on an `IndexedSeq` instead
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.ml.classification.LogisticRegression.train.$anonfun, 
origin=scala.collection.mutable.SeqOps.transform, version=2.13.0
[warn]       centers.transform(_ / numCoefficientSets)
[warn]               ^ {code}
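For the `SeqOps` case the compiler names the fix: `mapInPlace` on a mutable indexed sequence. A hedged sketch with invented values, not the `LogisticRegression` code:

```scala
import scala.collection.mutable.ArrayBuffer

val centers = ArrayBuffer(2.0, 4.0, 6.0)
// Before (deprecated since 2.13.0): centers.transform(_ / 2)
centers.mapInPlace(_ / 2) // same in-place mutation, non-deprecated name
```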






[jira] [Updated] (SPARK-45682) Fix "method + in class Byte/Short/Char/Long/Double/Int is deprecated"

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45682:
-
Description: 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/test/scala/org/apache/spark/rdd/PipedRDDSuite.scala:127:42:
 method + in class Int is deprecated (since 2.13.0): Adding a number and a 
String is deprecated. Use the string interpolation `s"$num$str"`
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, site=org.apache.spark.rdd.PipedRDDSuite, 
origin=scala.Int.+, version=2.13.0
[warn]       (i: Int, f: String => Unit) => f(i + "_")) {code}
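The deprecation message itself names the replacement: string interpolation. A minimal illustrative sketch:

```scala
val i = 7
// Before (deprecated since 2.13.0): i + "_"
val tag = s"${i}_" // string interpolation, as the warning suggests
```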

> Fix "method + in class Byte/Short/Char/Long/Double/Int is deprecated"
> ---
>
> Key: SPARK-45682
> URL: https://issues.apache.org/jira/browse/SPARK-45682
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/test/scala/org/apache/spark/rdd/PipedRDDSuite.scala:127:42:
>  method + in class Int is deprecated (since 2.13.0): Adding a number and a 
> String is deprecated. Use the string interpolation `s"$num$str"`
> [warn] Applicable -Wconf / @nowarn filters for this warning: msg=<part of the message>, cat=deprecation, site=org.apache.spark.rdd.PipedRDDSuite, 
> origin=scala.Int.+, version=2.13.0
> [warn]       (i: Int, f: String => Unit) => f(i + "_")) {code}






[jira] [Created] (SPARK-45683) Fix `method any2stringadd in object Predef is deprecated`

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45683:


 Summary: Fix `method any2stringadd in object Predef is deprecated`
 Key: SPARK-45683
 URL: https://issues.apache.org/jira/browse/SPARK-45683
 Project: Spark
  Issue Type: Sub-task
  Components: Build, Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Expression.scala:720:17:
 method any2stringadd in object Predef is deprecated (since 2.13.0): Implicit 
injection of + is deprecated. Convert to String to call +
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, 
site=org.apache.spark.sql.catalyst.expressions.BinaryExpression.nullSafeCodeGen.nullSafeEval,
 origin=scala.Predef.any2stringadd, version=2.13.0
[warn]         leftGen.code + ctx.nullSafeExec(left.nullable, leftGen.isNull) {
[warn]                 ^ {code}
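The `any2stringadd` implicit let `+` be called on any value with a `String` on the right; the fix is an explicit conversion before the concatenation. A hedged sketch with made-up values (the Spark site concatenates generated-code blocks, not plain strings):

```scala
val code: Any = "x + 1"
// Before (deprecated since 2.13.0): code + ";" via the any2stringadd implicit
val line = code.toString + ";" // convert to String explicitly, then concatenate
```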






[jira] [Created] (SPARK-45682) Fix "method + in class Byte/Short/Char/Long/Double/Int is deprecated"

2023-10-26 Thread Yang Jie (Jira)
Yang Jie created SPARK-45682:


 Summary: Fix "method + in class Byte/Short/Char/Long/Double/Int 
is deprecated"
 Key: SPARK-45682
 URL: https://issues.apache.org/jira/browse/SPARK-45682
 Project: Spark
  Issue Type: Sub-task
  Components: Build, Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Assigned] (SPARK-45596) Use java.lang.ref.Cleaner instead of org.apache.spark.sql.connect.client.util.Cleaner

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45596:


Assignee: Min Zhao

> Use java.lang.ref.Cleaner instead of 
> org.apache.spark.sql.connect.client.util.Cleaner
> -
>
> Key: SPARK-45596
> URL: https://issues.apache.org/jira/browse/SPARK-45596
> Project: Spark
>  Issue Type: Improvement
>  Components: Connect
>Affects Versions: 4.0.0
>Reporter: Min Zhao
>Assignee: Min Zhao
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2023-10-19-02-25-57-966.png
>
>
> Now that we have updated the JDK to 17, we should replace this class with 
> [[java.lang.ref.Cleaner]].
>  
> !image-2023-10-19-02-25-57-966.png!






[jira] [Resolved] (SPARK-45596) Use java.lang.ref.Cleaner instead of org.apache.spark.sql.connect.client.util.Cleaner

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45596.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43439
[https://github.com/apache/spark/pull/43439]

> Use java.lang.ref.Cleaner instead of 
> org.apache.spark.sql.connect.client.util.Cleaner
> -
>
> Key: SPARK-45596
> URL: https://issues.apache.org/jira/browse/SPARK-45596
> Project: Spark
>  Issue Type: Improvement
>  Components: Connect
>Affects Versions: 4.0.0
>Reporter: Min Zhao
>Assignee: Min Zhao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: image-2023-10-19-02-25-57-966.png
>
>
> Now that we have updated the JDK to 17, we should replace this class with 
> [[java.lang.ref.Cleaner]].
>  
> !image-2023-10-19-02-25-57-966.png!






[jira] [Resolved] (SPARK-45659) Add `since` field to Java API marked as `@Deprecated`.

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45659.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43522
[https://github.com/apache/spark/pull/43522]

> Add `since` field to Java API marked as `@Deprecated`.
> --
>
> Key: SPARK-45659
> URL: https://issues.apache.org/jira/browse/SPARK-45659
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL, SS
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> Spark 3.0.0:
> - SPARK-26861
>   - org.apache.spark.sql.expressions.javalang.typed
> - SPARK-27606
>   - org.apache.spark.sql.catalyst.expressions.ExpressionDescription#extended: 
>   - 
> org.apache.spark.sql.catalyst.expressions.ExpressionInfo#ExpressionInfo(String,
>  String, String, String, String)
> Spark 3.2.0
> - SPARK-33717
>   - 
> org.apache.spark.launcher.SparkLauncher#DEPRECATED_CHILD_CONNECTION_TIMEOUT
> - SPARK-33779
>   - org.apache.spark.sql.connector.write.WriteBuilder#buildForBatch
>   - org.apache.spark.sql.connector.write.WriteBuilder#buildForStreaming
> Spark 3.4.0
> - SPARK-39805
>   - org.apache.spark.sql.streaming.Trigger
> - SPARK-42398
>   - 
> org.apache.spark.sql.connector.catalog.TableCatalog#createTable(Identifier, 
> StructType, Transform[], Map) 
>   - 
> org.apache.spark.sql.connector.catalog.StagingTableCatalog#stageCreate(Identifier,
>  StructType, Transform[], Map)
>   - org.apache.spark.sql.connector.catalog.Table#schema
>  






[jira] [Assigned] (SPARK-45659) Add `since` field to Java API marked as `@Deprecated`.

2023-10-26 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45659:


Assignee: Yang Jie

> Add `since` field to Java API marked as `@Deprecated`.
> --
>
> Key: SPARK-45659
> URL: https://issues.apache.org/jira/browse/SPARK-45659
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL, SS
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>
> Spark 3.0.0:
> - SPARK-26861
>   - org.apache.spark.sql.expressions.javalang.typed
> - SPARK-27606
>   - org.apache.spark.sql.catalyst.expressions.ExpressionDescription#extended: 
>   - 
> org.apache.spark.sql.catalyst.expressions.ExpressionInfo#ExpressionInfo(String,
>  String, String, String, String)
> Spark 3.2.0
> - SPARK-33717
>   - 
> org.apache.spark.launcher.SparkLauncher#DEPRECATED_CHILD_CONNECTION_TIMEOUT
> - SPARK-33779
>   - org.apache.spark.sql.connector.write.WriteBuilder#buildForBatch
>   - org.apache.spark.sql.connector.write.WriteBuilder#buildForStreaming
> Spark 3.4.0
> - SPARK-39805
>   - org.apache.spark.sql.streaming.Trigger
> - SPARK-42398
>   - 
> org.apache.spark.sql.connector.catalog.TableCatalog#createTable(Identifier, 
> StructType, Transform[], Map) 
>   - 
> org.apache.spark.sql.connector.catalog.StagingTableCatalog#stageCreate(Identifier,
>  StructType, Transform[], Map)
>   - org.apache.spark.sql.connector.catalog.Table#schema
>  






[jira] [Resolved] (SPARK-45663) Replace `IterableOnceOps#aggregate` with `IterableOnceOps#foldLeft`

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45663.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43527
[https://github.com/apache/spark/pull/43527]

> Replace `IterableOnceOps#aggregate` with `IterableOnceOps#foldLeft`
> ---
>
> Key: SPARK-45663
> URL: https://issues.apache.org/jira/browse/SPARK-45663
> Project: Spark
>  Issue Type: Sub-task
>  Components: MLlib, Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> {code:java}
> @deprecated("`aggregate` is not relevant for sequential collections. Use 
> `foldLeft(z)(seqop)` instead.", "2.13.0")
> def aggregate[B](z: => B)(seqop: (B, A) => B, combop: (B, B) => B): B = 
> foldLeft(z)(seqop) {code}






[jira] [Assigned] (SPARK-45663) Replace `IterableOnceOps#aggregate` with `IterableOnceOps#foldLeft`

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45663:


Assignee: Yang Jie

> Replace `IterableOnceOps#aggregate` with `IterableOnceOps#foldLeft`
> ---
>
> Key: SPARK-45663
> URL: https://issues.apache.org/jira/browse/SPARK-45663
> Project: Spark
>  Issue Type: Sub-task
>  Components: MLlib, Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> @deprecated("`aggregate` is not relevant for sequential collections. Use 
> `foldLeft(z)(seqop)` instead.", "2.13.0")
> def aggregate[B](z: => B)(seqop: (B, A) => B, combop: (B, B) => B): B = 
> foldLeft(z)(seqop) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-44407) Clean up the compilation warnings related to `it will become a keyword in Scala 3`

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-44407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-44407:


Assignee: Yang Jie

> Clean up the compilation warnings related to `it will become a keyword in 
> Scala 3`
> --
>
> Key: SPARK-44407
> URL: https://issues.apache.org/jira/browse/SPARK-44407
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
>  [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use 
> it as an identifier, it will become a keyword in Scala 3.
> [warn]   @BeanProperty var enum: java.time.Month = _ {code}
> `enum` will become a keyword in Scala 3; the same applies to {{export}} and 
> {{given}}.
>  
> Scala 2.13
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
>            ^
>        warning: Wrap `enum` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val enum: Int = 1
> scala> val export: Int = 1
>            ^
>        warning: Wrap `export` in backticks to use it as an identifier, it 
> will become a keyword in Scala 3. [quickfixable]
> val export: Int = 1
> scala> val given: Int = 1
>            ^
>        warning: Wrap `given` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val given: Int = 1 {code}
>  
> Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val enum: Int = 1
>   |    
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
> scala> val export: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val export: Int = 1
>   |    ^^
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
> scala> val given: Int = 1
> -- [E040] Syntax Error: 
> 
> 1 |val given: Int = 1
>   |         ^
>   |         an identifier expected, but ':' found
>   |
>   | longer explanation available when compiling with `-explain` {code}
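The usual fix, as the 2.13 warning itself suggests, is to wrap the identifier in backticks; this compiles cleanly on both Scala 2.13 and Scala 3. A minimal sketch (the values are illustrative):

```scala
// Backtick-quoting lets enum/export/given remain usable as identifiers
// under Scala 3, where they are (or will be) keywords.
val `enum`: Int = 1
val `export`: Int = 2
val `given`: Int = 3
// `enum` + `export` + `given` == 6
```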
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-44407) Clean up the compilation warnings related to `it will become a keyword in Scala 3`

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-44407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-44407.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43529
[https://github.com/apache/spark/pull/43529]

> Clean up the compilation warnings related to `it will become a keyword in 
> Scala 3`
> --
>
> Key: SPARK-44407
> URL: https://issues.apache.org/jira/browse/SPARK-44407
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
>  [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use 
> it as an identifier, it will become a keyword in Scala 3.
> [warn]   @BeanProperty var enum: java.time.Month = _ {code}
> `enum` will become a keyword in Scala 3; the same applies to {{export}} and 
> {{given}}.
>  
> Scala 2.13
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
>            ^
>        warning: Wrap `enum` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val enum: Int = 1
> scala> val export: Int = 1
>            ^
>        warning: Wrap `export` in backticks to use it as an identifier, it 
> will become a keyword in Scala 3. [quickfixable]
> val export: Int = 1
> scala> val given: Int = 1
>            ^
>        warning: Wrap `given` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val given: Int = 1 {code}
>  
> Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val enum: Int = 1
>   |    
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
> scala> val export: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val export: Int = 1
>   |    ^^
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
> scala> val given: Int = 1
> -- [E040] Syntax Error: 
> 
> 1 |val given: Int = 1
>   |         ^
>   |         an identifier expected, but ':' found
>   |
>   | longer explanation available when compiling with `-explain` {code}
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45665) Uses different ORACLE_DOCKER_IMAGE_NAME in the scheduled builds in other branches

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45665.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43496
[https://github.com/apache/spark/pull/43496]

> Uses different ORACLE_DOCKER_IMAGE_NAME in the scheduled builds in other 
> branches
> -
>
> Key: SPARK-45665
> URL: https://issues.apache.org/jira/browse/SPARK-45665
> Project: Spark
>  Issue Type: Improvement
>  Components: Project Infra
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-45665) Uses different ORACLE_DOCKER_IMAGE_NAME in the scheduled builds in other branches

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45665:


Assignee: Yang Jie

> Uses different ORACLE_DOCKER_IMAGE_NAME in the scheduled builds in other 
> branches
> -
>
> Key: SPARK-45665
> URL: https://issues.apache.org/jira/browse/SPARK-45665
> Project: Spark
>  Issue Type: Improvement
>  Components: Project Infra
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45650) fix dev/mima get scala 2.12

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45650.
--
Resolution: Not A Problem

> fix dev/mima get scala 2.12 
> 
>
> Key: SPARK-45650
> URL: https://issues.apache.org/jira/browse/SPARK-45650
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: tangjiafu
>Priority: Major
>
> Currently, when CI executes ./dev/mima, it generates an incompatibility error 
> related to Scala 2.12. Sorry, I don't know how to fix it.
> [info] [launcher] getting org.scala-sbt sbt 1.9.3  (this may take some 
> time)...
> [info] [launcher] getting Scala 2.12.18 (for sbt)...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45650) fix dev/mima get scala 2.12

2023-10-25 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779724#comment-17779724
 ] 

Yang Jie commented on SPARK-45650:
--

{code:java}
[error] spark-sql-api: Failed binary compatibility check against 
org.apache.spark:spark-sql-api_2.13:3.5.0! Found 6 potential problems (filtered 
28)
[error]  * interface org.apache.spark.sql.types.DoubleType#DoubleAsIfIntegral 
does not have a correspondent in current version
[error]    filter with: 
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.types.DoubleType$DoubleAsIfIntegral")
[error]  * object org.apache.spark.sql.types.DoubleType#DoubleAsIfIntegral does 
not have a correspondent in current version
[error]    filter with: 
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.types.DoubleType$DoubleAsIfIntegral$")
[error]  * interface org.apache.spark.sql.types.DoubleType#DoubleIsConflicted 
does not have a correspondent in current version
[error]    filter with: 
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.types.DoubleType$DoubleIsConflicted")
[error]  * interface org.apache.spark.sql.types.FloatType#FloatAsIfIntegral 
does not have a correspondent in current version
[error]    filter with: 
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.types.FloatType$FloatAsIfIntegral")
[error]  * object org.apache.spark.sql.types.FloatType#FloatAsIfIntegral does 
not have a correspondent in current version
[error]    filter with: 
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.types.FloatType$FloatAsIfIntegral$")
[error]  * interface org.apache.spark.sql.types.FloatType#FloatIsConflicted 
does not have a correspondent in current version
[error]    filter with: 
ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.types.FloatType$FloatIsConflicted")
 {code}
[https://github.com/laglangyue/spark/actions/runs/6614029427/job/17963169741]

 

As the compilation log shows, this is because the changes in your PR broke the 
related API compatibility. You need to either modify the code to maintain API 
compatibility, or add the corresponding rules to MimaExcludes to skip the check. 
This is not a bug in the mima script, so I will close this issue.
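For reference, those exclusion rules live in `project/MimaExcludes.scala` in the Spark repository. A sketch of what the added entries might look like, reusing two of the filter strings printed verbatim in the log above (the `v40excludes` name is illustrative, and this is a build-configuration fragment rather than a standalone program):

```scala
// Sketch of MimaExcludes entries; the filter strings come from the
// dev/mima error log above. Requires the mima-core library on the
// build classpath, as in Spark's own sbt build.
import com.typesafe.tools.mima.core._

lazy val v40excludes = Seq(
  ProblemFilters.exclude[MissingClassProblem](
    "org.apache.spark.sql.types.DoubleType$DoubleAsIfIntegral"),
  ProblemFilters.exclude[MissingClassProblem](
    "org.apache.spark.sql.types.FloatType$FloatAsIfIntegral")
)
```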

> fix dev/mima get scala 2.12 
> 
>
> Key: SPARK-45650
> URL: https://issues.apache.org/jira/browse/SPARK-45650
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: tangjiafu
>Priority: Major
>
> Currently, when CI executes ./dev/mima, it generates an incompatibility error 
> related to Scala 2.12. Sorry, I don't know how to fix it.
> [info] [launcher] getting org.scala-sbt sbt 1.9.3  (this may take some 
> time)...
> [info] [launcher] getting Scala 2.12.18 (for sbt)...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45667) Clean up the deprecated API usage related to `IterableOnceExtensionMethods`.

2023-10-25 Thread Yang Jie (Jira)
Yang Jie created SPARK-45667:


 Summary: Clean up the deprecated API usage related to 
`IterableOnceExtensionMethods`.
 Key: SPARK-45667
 URL: https://issues.apache.org/jira/browse/SPARK-45667
 Project: Spark
  Issue Type: Sub-task
  Components: Connect, Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-45209) Flame Graph Support For Executor Thread Dump Page

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45209:


Assignee: Kent Yao

> Flame Graph Support For Executor Thread Dump Page
> -
>
> Key: SPARK-45209
> URL: https://issues.apache.org/jira/browse/SPARK-45209
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, Web UI
>Affects Versions: 4.0.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45209) Flame Graph Support For Executor Thread Dump Page

2023-10-25 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45209.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 42988
[https://github.com/apache/spark/pull/42988]

> Flame Graph Support For Executor Thread Dump Page
> -
>
> Key: SPARK-45209
> URL: https://issues.apache.org/jira/browse/SPARK-45209
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, Web UI
>Affects Versions: 4.0.0
>Reporter: Kent Yao
>Assignee: Kent Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45665) Uses different ORACLE_DOCKER_IMAGE_NAME in the scheduled builds in other branches

2023-10-25 Thread Yang Jie (Jira)
Yang Jie created SPARK-45665:


 Summary: Uses different ORACLE_DOCKER_IMAGE_NAME in the scheduled 
builds in other branches
 Key: SPARK-45665
 URL: https://issues.apache.org/jira/browse/SPARK-45665
 Project: Spark
  Issue Type: Improvement
  Components: Project Infra
Affects Versions: 4.0.0
Reporter: Yang Jie






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45663) Replace `IterableOnceOps#aggregate` with `IterableOnceOps#foldLeft`

2023-10-25 Thread Yang Jie (Jira)
Yang Jie created SPARK-45663:


 Summary: Replace `IterableOnceOps#aggregate` with 
`IterableOnceOps#foldLeft`
 Key: SPARK-45663
 URL: https://issues.apache.org/jira/browse/SPARK-45663
 Project: Spark
  Issue Type: Sub-task
  Components: MLlib, Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
@deprecated("`aggregate` is not relevant for sequential collections. Use 
`foldLeft(z)(seqop)` instead.", "2.13.0")
def aggregate[B](z: => B)(seqop: (B, A) => B, combop: (B, B) => B): B = 
foldLeft(z)(seqop) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45650) fix dev/mima get scala 2.12

2023-10-25 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17779383#comment-17779383
 ] 

Yang Jie commented on SPARK-45650:
--

Do you have a more detailed error stack? `[info] [launcher] getting Scala 
2.12.18 (for sbt)...` appears because sbt 1.x itself runs on Scala 2.12; it 
does not affect the result of dev/mima.

> fix dev/mima get scala 2.12 
> 
>
> Key: SPARK-45650
> URL: https://issues.apache.org/jira/browse/SPARK-45650
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: tangjiafu
>Priority: Major
>
> Currently, when CI executes ./dev/mima, it generates an incompatibility error 
> related to Scala 2.12. Sorry, I don't know how to fix it.
> [info] [launcher] getting org.scala-sbt sbt 1.9.3  (this may take some 
> time)...
> [info] [launcher] getting Scala 2.12.18 (for sbt)...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45659) Add `since` field to Java API marked as `@Deprecated`.

2023-10-24 Thread Yang Jie (Jira)
Yang Jie created SPARK-45659:


 Summary: Add `since` field to Java API marked as `@Deprecated`.
 Key: SPARK-45659
 URL: https://issues.apache.org/jira/browse/SPARK-45659
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core, SQL, SS
Affects Versions: 4.0.0
Reporter: Yang Jie


Spark 3.0.0:
- SPARK-26861
  - org.apache.spark.sql.expressions.javalang.typed
- SPARK-27606
  - org.apache.spark.sql.catalyst.expressions.ExpressionDescription#extended
  - org.apache.spark.sql.catalyst.expressions.ExpressionInfo#ExpressionInfo(String, String, String, String, String)

Spark 3.2.0:
- SPARK-33717
  - org.apache.spark.launcher.SparkLauncher#DEPRECATED_CHILD_CONNECTION_TIMEOUT
- SPARK-33779
  - org.apache.spark.sql.connector.write.WriteBuilder#buildForBatch
  - org.apache.spark.sql.connector.write.WriteBuilder#buildForStreaming

Spark 3.4.0:
- SPARK-39805
  - org.apache.spark.sql.streaming.Trigger
- SPARK-42398
  - org.apache.spark.sql.connector.catalog.TableCatalog#createTable(Identifier, StructType, Transform[], Map)
  - org.apache.spark.sql.connector.catalog.StagingTableCatalog#stageCreate(Identifier, StructType, Transform[], Map)
  - org.apache.spark.sql.connector.catalog.Table#schema

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-45646) Remove hardcoding time variables prior to Hive 2.0

2023-10-24 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45646:


Assignee: Cheng Pan

> Remove hardcoding time variables prior to Hive 2.0
> --
>
> Key: SPARK-45646
> URL: https://issues.apache.org/jira/browse/SPARK-45646
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45646) Remove hardcoding time variables prior to Hive 2.0

2023-10-24 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45646.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43506
[https://github.com/apache/spark/pull/43506]

> Remove hardcoding time variables prior to Hive 2.0
> --
>
> Key: SPARK-45646
> URL: https://issues.apache.org/jira/browse/SPARK-45646
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 4.0.0
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-45630) Replace `s.c.mutable.MapOps#retain` with `s.c.mutable.MapOps#filterInPlace`

2023-10-24 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45630:


Assignee: Yang Jie

> Replace `s.c.mutable.MapOps#retain` with `s.c.mutable.MapOps#filterInPlace`
> ---
>
> Key: SPARK-45630
> URL: https://issues.apache.org/jira/browse/SPARK-45630
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL, YARN
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
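In Scala 2.13, `retain` on mutable maps is deprecated in favor of `filterInPlace`, which takes the same predicate. A minimal sketch of the replacement (the sample map is illustrative):

```scala
import scala.collection.mutable

val m = mutable.Map(1 -> "a", 2 -> "b", 3 -> "c")

// Before (deprecated since Scala 2.13):
// m.retain((k, _) => k % 2 == 1)

// After: same predicate signature, in-place filtering.
m.filterInPlace((k, _) => k % 2 == 1)
// m now contains only keys 1 and 3
```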




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45630) Replace `s.c.mutable.MapOps#retain` with `s.c.mutable.MapOps#filterInPlace`

2023-10-24 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45630.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43482
[https://github.com/apache/spark/pull/43482]

> Replace `s.c.mutable.MapOps#retain` with `s.c.mutable.MapOps#filterInPlace`
> ---
>
> Key: SPARK-45630
> URL: https://issues.apache.org/jira/browse/SPARK-45630
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core, SQL, YARN
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-45645) Fix `FileSystem.isFile & FileSystem.isDirectory is deprecated`

2023-10-24 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45645:
-
Parent: (was: SPARK-45314)
Issue Type: Improvement  (was: Sub-task)

> Fix `FileSystem.isFile & FileSystem.isDirectory is deprecated`
> --
>
> Key: SPARK-45645
> URL: https://issues.apache.org/jira/browse/SPARK-45645
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-45642) Fix `FileSystem.isFile & FileSystem.isDirectory is deprecated`

2023-10-23 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778915#comment-17778915
 ] 

Yang Jie commented on SPARK-45642:
--

[~panbingkun] I moved this one out of `Drop Scala 2.12 and make Scala 2.13 by 
default`, as it is not related to Scala 2.13.
 
 
 
 
 

> Fix `FileSystem.isFile & FileSystem.isDirectory is deprecated`
> --
>
> Key: SPARK-45642
> URL: https://issues.apache.org/jira/browse/SPARK-45642
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-45642) Fix `FileSystem.isFile & FileSystem.isDirectory is deprecated`

2023-10-23 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45642:
-
Parent: (was: SPARK-45314)
Issue Type: Improvement  (was: Sub-task)

> Fix `FileSystem.isFile & FileSystem.isDirectory is deprecated`
> --
>
> Key: SPARK-45642
> URL: https://issues.apache.org/jira/browse/SPARK-45642
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: BingKun Pan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45643) Replace `s.c.mutable.MapOps#transform` with `s.c.mutable.MapOps#mapValuesInPlace`

2023-10-23 Thread Yang Jie (Jira)
Yang Jie created SPARK-45643:


 Summary: Replace `s.c.mutable.MapOps#transform` with 
`s.c.mutable.MapOps#mapValuesInPlace`
 Key: SPARK-45643
 URL: https://issues.apache.org/jira/browse/SPARK-45643
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
@deprecated("Use mapValuesInPlace instead", "2.13.0")
@inline final def transform(f: (K, V) => V): this.type = mapValuesInPlace(f) 
{code}
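As the deprecation note above states, `transform` on a mutable map is a direct alias for `mapValuesInPlace`, so the rename is one-to-one. A minimal sketch (the sample map is illustrative):

```scala
import scala.collection.mutable

val counts = mutable.Map("a" -> 1, "b" -> 2)

// Before (deprecated since Scala 2.13):
// counts.transform((_, v) => v * 10)

// After: identical semantics, values updated in place.
counts.mapValuesInPlace((_, v) => v * 10)
// counts is now Map("a" -> 10, "b" -> 20)
```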



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45623) Move `mllib` and `mllib-local` to separate test groups.

2023-10-23 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45623.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43432
[https://github.com/apache/spark/pull/43432]

> Move `mllib` and `mllib-local` to separate test groups.
> ---
>
> Key: SPARK-45623
> URL: https://issues.apache.org/jira/browse/SPARK-45623
> Project: Spark
>  Issue Type: Improvement
>  Components: Project Infra
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> Improve the stability of GitHub Action tests.
>  
>  
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45636) Upgrade jersey to 2.41

2023-10-23 Thread Yang Jie (Jira)
Yang Jie created SPARK-45636:


 Summary: Upgrade jersey to 2.41
 Key: SPARK-45636
 URL: https://issues.apache.org/jira/browse/SPARK-45636
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie


https://github.com/eclipse-ee4j/jersey/releases/tag/2.41



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-45627) Fix `symbol literal is deprecated`

2023-10-23 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45627:


Assignee: BingKun Pan

> Fix `symbol literal is deprecated`
> --
>
> Key: SPARK-45627
> URL: https://issues.apache.org/jira/browse/SPARK-45627
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, GraphX, MLlib, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
>
> For the code `val symbol = 'symbol`, it's a compile warning in Scala 2.13, 
> but it's a compile error in Scala 3.
>  - Scala 2.13 
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val symbol = 'symbol
>                     ^
>        warning: symbol literal is deprecated; use Symbol("symbol") instead 
> [quickfixable]
> val symbol: Symbol = Symbol(symbol) {code}
>  
>  
>  * Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> val symbol = 'symbol
> -- Error: 
> --
> 1 |val symbol = 'symbol
>   |             ^
>   |symbol literal 'symbol is no longer supported,
>   |use a string literal "symbol" or an application Symbol("symbol") instead,
>   |or enclose in braces '{symbol} if you want a quoted expression.
>   |For now, you can also `import language.deprecated.symbolLiterals` to accept
>   |the idiom, but this possibility might no longer be available in the 
> future. {code}
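As both compilers suggest, the portable fix is the explicit `Symbol(...)` factory, which works unchanged on Scala 2.13 and Scala 3 (the symbol name here is illustrative):

```scala
// Before (deprecation warning on Scala 2.13, error on Scala 3):
// val s = 'symbol

// After: explicit factory, valid on both versions.
val s = Symbol("symbol")
// s.name == "symbol"
```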
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-45627) Fix `symbol literal is deprecated`

2023-10-23 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45627.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43487
[https://github.com/apache/spark/pull/43487]

> Fix `symbol literal is deprecated`
> --
>
> Key: SPARK-45627
> URL: https://issues.apache.org/jira/browse/SPARK-45627
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, GraphX, MLlib, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: BingKun Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> For the code `val symbol = 'symbol`, it's a compile warning in Scala 2.13, 
> but it's a compile error in Scala 3.
>  - Scala 2.13 
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val symbol = 'symbol
>                     ^
>        warning: symbol literal is deprecated; use Symbol("symbol") instead 
> [quickfixable]
> val symbol: Symbol = Symbol(symbol) {code}
>  
>  
>  * Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
>                                                                               
>                                                                               
>                                                                               
>           
> scala> val symbol = 'symbol
> -- Error: 
> --
> 1 |val symbol = 'symbol
>   |             ^
>   |symbol literal 'symbol is no longer supported,
>   |use a string literal "symbol" or an application Symbol("symbol") instead,
>   |or enclose in braces '{symbol} if you want a quoted expression.
>   |For now, you can also `import language.deprecated.symbolLiterals` to accept
>   |the idiom, but this possibility might no longer be available in the 
> future. {code}
>  
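
The quickfix the 2.13 compiler suggests is the `Symbol(...)` factory, which
compiles identically under both versions. A minimal sketch (illustrative, not
taken from the Spark pull request):

```scala
object SymbolExample {
  // Before (deprecated in Scala 2.13, removed in Scala 3):
  //   val symbol = 'symbol
  // After: construct the Symbol explicitly via its factory
  val symbol: Symbol = Symbol("symbol")
}
```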






[jira] [Created] (SPARK-45630) Replace `s.c.mutable.MapOps#retain` with `s.c.mutable.MapOps#filterInPlace`

2023-10-22 Thread Yang Jie (Jira)
Yang Jie created SPARK-45630:


 Summary: Replace `s.c.mutable.MapOps#retain` with 
`s.c.mutable.MapOps#filterInPlace`
 Key: SPARK-45630
 URL: https://issues.apache.org/jira/browse/SPARK-45630
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL, YARN
Affects Versions: 4.0.0
Reporter: Yang Jie
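
The replacement named in the summary is mechanical: `retain` was deprecated in
Scala 2.13 in favor of the identically-behaving `filterInPlace`. A minimal
sketch (illustrative names, not Spark code):

```scala
import scala.collection.mutable

object FilterInPlaceExample {
  // Keeps only entries whose value is greater than 1, mutating the map.
  def keepLarge(counts: mutable.Map[String, Int]): mutable.Map[String, Int] = {
    // Before (deprecated since 2.13): counts.retain((_, v) => v > 1)
    counts.filterInPlace((_, v) => v > 1) // mutates in place, returns the map
  }
}
```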









[jira] [Created] (SPARK-45629) Fix `Implicit definition should have explicit type`

2023-10-22 Thread Yang Jie (Jira)
Yang Jie created SPARK-45629:


 Summary: Fix `Implicit definition should have explicit type`
 Key: SPARK-45629
 URL: https://issues.apache.org/jira/browse/SPARK-45629
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, SQL
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[error] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/main/scala/org/apache/spark/deploy/FaultToleranceTest.scala:343:16:
 Implicit definition should have explicit type (inferred 
org.json4s.DefaultFormats.type) [quickfixable]
[error] Applicable -Wconf / @nowarn filters for this fatal warning: msg=, cat=other-implicit-type, 
site=org.apache.spark.deploy.TestMasterInfo.formats
[error]   implicit val formats = org.json4s.DefaultFormats
[error]   {code}
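
The usual fix is to give the implicit a deliberately widened explicit type
rather than letting the compiler infer a singleton type. A self-contained
sketch (the `Formats`/`DefaultFormats` names mirror json4s but are stand-ins
here, so the snippet has no third-party dependency):

```scala
object ImplicitTypeExample {
  trait Formats
  object DefaultFormats extends Formats

  // Before (fatal warning above; inferred type DefaultFormats.type):
  //   implicit val formats = DefaultFormats
  // After: state the intended, widened type explicitly
  implicit val formats: Formats = DefaultFormats
}
```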






[jira] [Updated] (SPARK-44407) Clean up the compilation warnings related to `it will become a keyword in Scala 3`

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-44407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-44407:
-
Summary: Clean up the compilation warnings related to `it will become a 
keyword in Scala 3`  (was: Clean up the compilation warnings related to xx)

> Clean up the compilation warnings related to `it will become a keyword in 
> Scala 3`
> --
>
> Key: SPARK-44407
> URL: https://issues.apache.org/jira/browse/SPARK-44407
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
>  [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use 
> it as an identifier, it will become a keyword in Scala 3.
> [warn]   @BeanProperty var enum: java.time.Month = _ {code}
> enum will become a keyword in Scala 3, this also includes {{export}} and 
> {{{}given{}}}.
>  
> Scala 2.13
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
>            ^
>        warning: Wrap `enum` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val enum: Int = 1
> scala> val export: Int = 1
>            ^
>        warning: Wrap `export` in backticks to use it as an identifier, it 
> will become a keyword in Scala 3. [quickfixable]
> val export: Int = 1
> scala> val given: Int = 1
>            ^
>        warning: Wrap `given` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val given: Int = 1 {code}
>  
> Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
>                                                                               
>                                                                               
>                                                                               
>            
> scala> val enum: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val enum: Int = 1
>   |    
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
>                                                                               
>                                                                               
>                                                                               
>            
> scala> val export: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val export: Int = 1
>   |    ^^
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
>                                                                               
>                                                                               
>                                                                               
>            
> scala> val given: Int = 1
> -- [E040] Syntax Error: 
> 
> 1 |val given: Int = 1
>   |         ^
>   |         an identifier expected, but ':' found
>   |
>   | longer explanation available when compiling with `-explain` {code}
>  
>  
>  
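
A backtick-quoted identifier compiles cleanly under both Scala 2.13 and
Scala 3, which is the portable way to keep these names. A minimal sketch:

```scala
object BacktickExample {
  // `enum`, `export`, and `given` become keywords in Scala 3; wrapping them
  // in backticks lets them remain ordinary identifiers in both versions.
  val `enum`: Int = 1
  val `export`: Int = 2
  val `given`: Int = 3
}
```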






[jira] [Updated] (SPARK-44407) Clean up the compilation warnings related to xx

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-44407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-44407:
-
Summary: Clean up the compilation warnings related to xx  (was: Prohibit 
using `enum` as a variable or function name)

> Clean up the compilation warnings related to xx
> ---
>
> Key: SPARK-44407
> URL: https://issues.apache.org/jira/browse/SPARK-44407
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
>  [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use 
> it as an identifier, it will become a keyword in Scala 3.
> [warn]   @BeanProperty var enum: java.time.Month = _ {code}
> enum will become a keyword in Scala 3, this also includes {{export}} and 
> {{{}given{}}}.
>  
> Scala 2.13
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
>            ^
>        warning: Wrap `enum` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val enum: Int = 1
> scala> val export: Int = 1
>            ^
>        warning: Wrap `export` in backticks to use it as an identifier, it 
> will become a keyword in Scala 3. [quickfixable]
> val export: Int = 1
> scala> val given: Int = 1
>            ^
>        warning: Wrap `given` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val given: Int = 1 {code}
>  
> Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
>                                                                               
>                                                                               
>                                                                               
>            
> scala> val enum: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val enum: Int = 1
>   |    
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
>                                                                               
>                                                                               
>                                                                               
>            
> scala> val export: Int = 1
> -- [E032] Syntax Error: 
> 
> 1 |val export: Int = 1
>   |    ^^
>   |    pattern expected
>   |
>   | longer explanation available when compiling with `-explain`
>                                                                               
>                                                                               
>                                                                               
>            
> scala> val given: Int = 1
> -- [E040] Syntax Error: 
> 
> 1 |val given: Int = 1
>   |         ^
>   |         an identifier expected, but ':' found
>   |
>   | longer explanation available when compiling with `-explain` {code}
>  
>  
>  






[jira] [Updated] (SPARK-44407) Prohibit using `enum` as a variable or function name

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-44407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-44407:
-
Description: 
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
 [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use it 
as an identifier, it will become a keyword in Scala 3.
[warn]   @BeanProperty var enum: java.time.Month = _ {code}
enum will become a keyword in Scala 3, this also includes {{export}} and 
{{{}given{}}}.

 

Scala 2.13
{code:java}
Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
Type in expressions for evaluation. Or try :help.


scala> val enum: Int = 1
           ^
       warning: Wrap `enum` in backticks to use it as an identifier, it will 
become a keyword in Scala 3. [quickfixable]
val enum: Int = 1


scala> val export: Int = 1
           ^
       warning: Wrap `export` in backticks to use it as an identifier, it will 
become a keyword in Scala 3. [quickfixable]
val export: Int = 1


scala> val given: Int = 1
           ^
       warning: Wrap `given` in backticks to use it as an identifier, it will 
become a keyword in Scala 3. [quickfixable]
val given: Int = 1 {code}
 
Scala 3
 
{code:java}
Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
Type in expressions for evaluation. Or try :help.
                                                                                
                                                                                
                                                                                
     
scala> val enum: Int = 1
-- [E032] Syntax Error: 
1 |val enum: Int = 1
  |    
  |    pattern expected
  |
  | longer explanation available when compiling with `-explain`
                                                                                
                                                                                
                                                                                
     
scala> val export: Int = 1
-- [E032] Syntax Error: 
1 |val export: Int = 1
  |    ^^
  |    pattern expected
  |
  | longer explanation available when compiling with `-explain`
                                                                                
                                                                                
                                                                                
     
scala> val given: Int = 1
-- [E040] Syntax Error: 
1 |val given: Int = 1
  |         ^
  |         an identifier expected, but ':' found
  |
  | longer explanation available when compiling with `-explain` {code}
 
 
 

  was:
{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
 [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use it 
as an identifier, it will become a keyword in Scala 3.
[warn]   @BeanProperty var enum: java.time.Month = _ {code}
enum will become a keyword in Scala 3.


> Prohibit using `enum` as a variable or function name
> 
>
> Key: SPARK-44407
> URL: https://issues.apache.org/jira/browse/SPARK-44407
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
>  [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use 
> it as an identifier, it will become a keyword in Scala 3.
> [warn]   @BeanProperty var enum: java.time.Month = _ {code}
> enum will become a keyword in Scala 3, this also includes {{export}} and 
> {{{}given{}}}.
>  
> Scala 2.13
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val enum: Int = 1
>            ^
>        warning: Wrap `enum` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]
> val enum: Int = 1
> scala> val export: Int = 1
>            ^
>        warning: Wrap `export` in backticks to use it as an identifier, it 
> will become a keyword in Scala 3. [quickfixable]
> val export: Int = 1
> scala> val given: Int = 1
>            ^
>        warning: Wrap `given` in backticks to use it as an identifier, it will 
> become a keyword in Scala 3. [quickfixable]

[jira] [Updated] (SPARK-44407) Prohibit using `enum` as a variable or function name

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-44407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-44407:
-
Parent: SPARK-45314
Issue Type: Sub-task  (was: Improvement)

> Prohibit using `enum` as a variable or function name
> 
>
> Key: SPARK-44407
> URL: https://issues.apache.org/jira/browse/SPARK-44407
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.5.0
>Reporter: Yang Jie
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> [warn] 
> /Users/yangjie01/SourceCode/git/spark-mine-sbt/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/JavaTypeInferenceSuite.scala:74:21:
>  [deprecation @  | origin= | version=2.13.7] Wrap `enum` in backticks to use 
> it as an identifier, it will become a keyword in Scala 3.
> [warn]   @BeanProperty var enum: java.time.Month = _ {code}
> enum will become a keyword in Scala 3.






[jira] [Resolved] (SPARK-45625) Upgrade log4j2 to 2.21.0

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45625.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43478
[https://github.com/apache/spark/pull/43478]

> Upgrade log4j2 to 2.21.0
> 
>
> Key: SPARK-45625
> URL: https://issues.apache.org/jira/browse/SPARK-45625
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> https://github.com/apache/logging-log4j2/releases/tag/rel%2F2.21.0






[jira] [Assigned] (SPARK-45625) Upgrade log4j2 to 2.21.0

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-45625:


Assignee: Yang Jie

> Upgrade log4j2 to 2.21.0
> 
>
> Key: SPARK-45625
> URL: https://issues.apache.org/jira/browse/SPARK-45625
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
>
> https://github.com/apache/logging-log4j2/releases/tag/rel%2F2.21.0






[jira] [Updated] (SPARK-45627) Fix `symbol literal is deprecated`

2023-10-22 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45627:
-
Description: 
For the code `val symbol = 'symbol`, it's a compile warning in Scala 2.13, but 
it's a compile error in Scala 3.
 - Scala 2.13 

 
{code:java}
Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
Type in expressions for evaluation. Or try :help.
scala> val symbol = 'symbol
                    ^
       warning: symbol literal is deprecated; use Symbol("symbol") instead 
[quickfixable]
val symbol: Symbol = Symbol(symbol) {code}
 

 
 * Scala 3

 
{code:java}
Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
Type in expressions for evaluation. Or try :help.
                                                                                
                                                                                
                                                                                
    
scala> val symbol = 'symbol
-- Error: --
1 |val symbol = 'symbol
  |             ^
  |symbol literal 'symbol is no longer supported,
  |use a string literal "symbol" or an application Symbol("symbol") instead,
  |or enclose in braces '{symbol} if you want a quoted expression.
  |For now, you can also `import language.deprecated.symbolLiterals` to accept
  |the idiom, but this possibility might no longer be available in the future. 
{code}
 

  was:
For the code `val symbol = 'symbol`, it's a compile warning in Scala 2.13, but 
it's a compile error in Scala 3.

- Scala 2.13 

 
{code:java}
Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
Type in expressions for evaluation. Or try :help.
scala> val symbol = 'symbol
                    ^
       warning: symbol literal is deprecated; use Symbol("symbol") instead 
[quickfixable]
val symbol: Symbol = Symbol(symbol) {code}
 

 

- Scala 3
{code:java}
Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
Type in expressions for evaluation. Or try :help.
                                                                                
                                                                                
                                                                                
    
scala> val symbol = 'symbol
-- Error: --
1 |val symbol = 'symbol
  |             ^
  |symbol literal 'symbol is no longer supported,
  |use a string literal "symbol" or an application Symbol("symbol") instead,
  |or enclose in braces '{symbol} if you want a quoted expression.
  |For now, you can also `import language.deprecated.symbolLiterals` to accept
  |the idiom, but this possibility might no longer be available in the future. 
{code}
 


> Fix `symbol literal is deprecated`
> --
>
> Key: SPARK-45627
> URL: https://issues.apache.org/jira/browse/SPARK-45627
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, GraphX, MLlib, Spark Core, SQL
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> For the code `val symbol = 'symbol`, it's a compile warning in Scala 2.13, 
> but it's a compile error in Scala 3.
>  - Scala 2.13 
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> val symbol = 'symbol
>                     ^
>        warning: symbol literal is deprecated; use Symbol("symbol") instead 
> [quickfixable]
> val symbol: Symbol = Symbol(symbol) {code}
>  
>  
>  * Scala 3
>  
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
>                                                                               
>                                                                               
>                                                                               
>           
> scala> val symbol = 'symbol
> -- Error: 
> --
> 1 |val symbol = 'symbol
>   |             ^
>   |symbol literal 'symbol is no longer supported,
>   |use a string literal "symbol" or an application Symbol("symbol") instead,
>   |or enclose in braces '{symbol} if you want a quoted expression.
>   |For now, you can also `import language.deprecated.symbolLiterals` to accept
>   |the idiom, but this possibility might no longer be available in the 
> future. {code}
>  






[jira] [Commented] (SPARK-45533) Use `j.l.r.Cleaner` instead of `finalize` for `RocksDBIterator/LevelDBIterator`

2023-10-22 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778394#comment-17778394
 ] 

Yang Jie commented on SPARK-45533:
--

Okay, I'm just confirming that you will follow up on this issue. Thank you 
very much, [~zhaomin] 
 
 
 
 
 

> Use `j.l.r.Cleaner` instead of `finalize` for 
> `RocksDBIterator/LevelDBIterator`
> ---
>
> Key: SPARK-45533
> URL: https://issues.apache.org/jira/browse/SPARK-45533
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
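> A common shape for a `Cleaner`-based replacement keeps the cleanup state in a
> separate object so the cleanup action cannot keep the tracked object
> reachable. A general-pattern sketch (not Spark's actual implementation; the
> `TrackedResource` name is illustrative):

```scala
import java.lang.ref.Cleaner
import java.util.concurrent.atomic.AtomicBoolean

object CleanerExample {
  private val cleaner = Cleaner.create()

  // The cleanup action lives outside the tracked object: if it captured the
  // object itself, the object could never become phantom reachable.
  private final class State extends Runnable {
    val closed = new AtomicBoolean(false)
    override def run(): Unit = closed.set(true) // release native handles here
  }

  final class TrackedResource extends AutoCloseable {
    private val state = new State
    private val cleanable = cleaner.register(this, state)
    def isClosed: Boolean = state.closed.get()
    // Explicit close runs the action at most once; otherwise the Cleaner
    // runs it after the object becomes unreachable.
    override def close(): Unit = cleanable.clean()
  }
}
```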







[jira] [Created] (SPARK-45625) Upgrade log4j2 to 2.21.0

2023-10-22 Thread Yang Jie (Jira)
Yang Jie created SPARK-45625:


 Summary: Upgrade log4j2 to 2.21.0
 Key: SPARK-45625
 URL: https://issues.apache.org/jira/browse/SPARK-45625
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie


https://github.com/apache/logging-log4j2/releases/tag/rel%2F2.21.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-45624) Use `AccessibleObject#canAccess` instead of `AccessibleObject#isAccessible`

2023-10-22 Thread Yang Jie (Jira)
Yang Jie created SPARK-45624:


 Summary: Use `AccessibleObject#canAccess` instead of 
`AccessibleObject#isAccessible`
 Key: SPARK-45624
 URL: https://issues.apache.org/jira/browse/SPARK-45624
 Project: Spark
  Issue Type: Sub-task
  Components: Spark Core, Tests
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
[warn] 
/Users/yangjie01/SourceCode/git/spark-mine-sbt/core/src/test/scala/org/apache/spark/util/UtilsSuite.scala:258:15:
 method isAccessible in class AccessibleObject is deprecated (since 9)
[warn] Applicable -Wconf / @nowarn filters for this warning: msg=, cat=deprecation, site=org.apache.spark.util.UtilsSuite.getFieldValue, 
origin=java.lang.reflect.AccessibleObject.isAccessible, version=9
[warn]     if (field.isAccessible()) {
[warn]    {code}
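
Unlike `isAccessible`, `canAccess` takes the instance on which the reflective
access would be performed. A self-contained sketch of the migration (the
`Holder`/`secret` names are illustrative, not from `UtilsSuite`):

```scala
import java.lang.reflect.Field

// Illustrative class with a private field to read reflectively.
class Holder {
  private val secret: Int = 42
}

object CanAccessExample {
  def readPrivateField(holder: Holder): Int = {
    val field: Field = classOf[Holder].getDeclaredField("secret")
    // Before (deprecated since Java 9): if (!field.isAccessible()) ...
    // After: pass the receiver the access would target
    if (!field.canAccess(holder)) {
      field.setAccessible(true)
    }
    field.get(holder).asInstanceOf[Int]
  }
}
```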






[jira] [Commented] (SPARK-45533) Use `j.l.r.Cleaner` instead of `finalize` for `RocksDBIterator/LevelDBIterator`

2023-10-22 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778305#comment-17778305
 ] 

Yang Jie commented on SPARK-45533:
--

[~zhaomin] Are you still interested in working on this task?
 
 
 
 
 

> Use `j.l.r.Cleaner` instead of `finalize` for 
> `RocksDBIterator/LevelDBIterator`
> ---
>
> Key: SPARK-45533
> URL: https://issues.apache.org/jira/browse/SPARK-45533
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>







[jira] [Resolved] (SPARK-30848) Remove manual backport of Murmur3 MurmurHash3.productHash fix from Scala 2.13

2023-10-20 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-30848.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43161
[https://github.com/apache/spark/pull/43161]

> Remove manual backport of Murmur3 MurmurHash3.productHash fix from Scala 2.13
> -
>
> Key: SPARK-30848
> URL: https://issues.apache.org/jira/browse/SPARK-30848
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Hyukjin Kwon
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> SPARK-30847 introduced a manual backport to work around a Scala issue in hash 
> implementation. Once we drop Scala 2.12, we can remove the fix.






[jira] [Assigned] (SPARK-30848) Remove manual backport of Murmur3 MurmurHash3.productHash fix from Scala 2.13

2023-10-20 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie reassigned SPARK-30848:


Assignee: BingKun Pan

> Remove manual backport of Murmur3 MurmurHash3.productHash fix from Scala 2.13
> -
>
> Key: SPARK-30848
> URL: https://issues.apache.org/jira/browse/SPARK-30848
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Hyukjin Kwon
>Assignee: BingKun Pan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> SPARK-30847 introduced a manual backport to work around a Scala issue in hash 
> implementation. Once we drop Scala 2.12, we can remove the fix.






[jira] [Resolved] (SPARK-45602) Replace `s.c.MapOps.filterKeys` with `s.c.MapOps.view.filterKeys`

2023-10-20 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45602.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43445
[https://github.com/apache/spark/pull/43445]

> Replace `s.c.MapOps.filterKeys` with `s.c.MapOps.view.filterKeys`
> -
>
> Key: SPARK-45602
> URL: https://issues.apache.org/jira/browse/SPARK-45602
> Project: Spark
>  Issue Type: Sub-task
>  Components: Kubernetes, Spark Core, SQL, YARN
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> {code:java}
> /** Filters this map by retaining only keys satisfying a predicate.
>   *  @param  p   the predicate used to test keys
>   *  @return an immutable map consisting only of those key value pairs of 
> this map where the key satisfies
>   *  the predicate `p`. The resulting map wraps the original map 
> without copying any elements.
>   */
> @deprecated("Use .view.filterKeys(f). A future version will include a strict 
> version of this method (for now, .view.filterKeys(p).toMap).", "2.13.0")
> def filterKeys(p: K => Boolean): MapView[K, V] = new MapView.FilterKeys(this, 
> p) {code}
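> 
> Since `filterKeys` now returns a lazy `MapView`, the migration inserts
> `.view` and, where a strict map is needed, finishes with `.toMap`. A minimal
> sketch (illustrative, not Spark code):

```scala
object FilterKeysExample {
  val m: Map[String, Int] = Map("a" -> 1, "b" -> 2, "ab" -> 3)
  // Before (deprecated since 2.13): m.filterKeys(_.startsWith("a"))
  // After: go through the view; .toMap materializes a strict result
  val filtered: Map[String, Int] = m.view.filterKeys(_.startsWith("a")).toMap
}
```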






[jira] [Updated] (SPARK-45610) Fix "Auto-application to `()` is deprecated."

2023-10-20 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45610:
-
Summary: Fix "Auto-application to `()` is deprecated."  (was: Handle 
"Auto-application to `()` is deprecated.")

> Fix "Auto-application to `()` is deprecated."
> -
>
> Key: SPARK-45610
> URL: https://issues.apache.org/jira/browse/SPARK-45610
> Project: Spark
>  Issue Type: Sub-task
>  Components: GraphX, MLlib, Spark Core, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> For the following case, a compile warning will be issued in Scala 2.13:
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@7061622
> scala> val ret = foo.isEmpty
>                      ^
>        warning: Auto-application to `()` is deprecated. Supply the empty 
> argument list `()` explicitly to invoke method isEmpty,
>        or remove the empty argument list from its definition (Java-defined 
> methods are exempt).
>        In Scala 3, an unapplied method like this will be eta-expanded into a 
> function. [quickfixable]
> val ret: Boolean = true {code}
> But for Scala 3, it is a compile error:
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> // defined class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@591f6f83
> scala> val ret = foo.isEmpty
> -- [E100] Syntax Error: 
> 
> 1 |val ret = foo.isEmpty
>   |          ^^^
>   |          method isEmpty in class Foo must be called with () argument
>   |
>   | longer explanation available when compiling with `-explain`
> 1 error found {code}
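The call-site fix is mechanical: supply the empty argument list explicitly (or drop it from the definition when the method is side-effect free). A minimal sketch, reusing the hypothetical `Foo` from the transcript above:

```scala
class Foo {
  def isEmpty(): Boolean = true
}

val foo = new Foo

// Deprecated in Scala 2.13 and a compile error in Scala 3:
//   val ret = foo.isEmpty
// Fixed: invoke the nullary method with an explicit ().
val ret = foo.isEmpty()

println(ret)
```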






[jira] [Created] (SPARK-45615) Remove redundant "Auto-application to `()` is deprecated" compile suppression rules.

2023-10-20 Thread Yang Jie (Jira)
Yang Jie created SPARK-45615:


 Summary: Remove redundant "Auto-application to `()` is deprecated" 
compile suppression rules.
 Key: SPARK-45615
 URL: https://issues.apache.org/jira/browse/SPARK-45615
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie


Due to https://github.com/scalatest/scalatest/issues/2297, we need to wait 
for an upgrade to a newer ScalaTest release (likely 3.2.18) before these 
suppression rules can be removed.






[jira] [Resolved] (SPARK-45591) Upgrade ASM to 9.6

2023-10-19 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45591.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43431
[https://github.com/apache/spark/pull/43431]

> Upgrade ASM to 9.6
> --
>
> Key: SPARK-45591
> URL: https://issues.apache.org/jira/browse/SPARK-45591
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>







[jira] [Comment Edited] (SPARK-45610) Handle "Auto-application to `()` is deprecated."

2023-10-19 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1539#comment-1539
 ] 

Yang Jie edited comment on SPARK-45610 at 10/20/23 2:31 AM:


Okay, I can start preparing this PR.


was (Author: luciferyang):
Okay, I can start preparing this PR.
 
 
 
 
 

> Handle "Auto-application to `()` is deprecated."
> 
>
> Key: SPARK-45610
> URL: https://issues.apache.org/jira/browse/SPARK-45610
> Project: Spark
>  Issue Type: Sub-task
>  Components: GraphX, MLlib, Spark Core, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> For the following case, a compile warning will be issued in Scala 2.13:
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@7061622
> scala> val ret = foo.isEmpty
>                      ^
>        warning: Auto-application to `()` is deprecated. Supply the empty 
> argument list `()` explicitly to invoke method isEmpty,
>        or remove the empty argument list from its definition (Java-defined 
> methods are exempt).
>        In Scala 3, an unapplied method like this will be eta-expanded into a 
> function. [quickfixable]
> val ret: Boolean = true {code}
> But for Scala 3, it is a compile error:
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> // defined class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@591f6f83
> scala> val ret = foo.isEmpty
> -- [E100] Syntax Error: 
> 
> 1 |val ret = foo.isEmpty
>   |          ^^^
>   |          method isEmpty in class Foo must be called with () argument
>   |
>   | longer explanation available when compiling with `-explain`
> 1 error found {code}






[jira] [Commented] (SPARK-45610) Handle "Auto-application to `()` is deprecated."

2023-10-19 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1539#comment-1539
 ] 

Yang Jie commented on SPARK-45610:
--

Okay, I can start preparing this PR.
 
 
 
 
 

> Handle "Auto-application to `()` is deprecated."
> 
>
> Key: SPARK-45610
> URL: https://issues.apache.org/jira/browse/SPARK-45610
> Project: Spark
>  Issue Type: Sub-task
>  Components: GraphX, MLlib, Spark Core, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> For the following case, a compile warning will be issued in Scala 2.13:
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@7061622
> scala> val ret = foo.isEmpty
>                      ^
>        warning: Auto-application to `()` is deprecated. Supply the empty 
> argument list `()` explicitly to invoke method isEmpty,
>        or remove the empty argument list from its definition (Java-defined 
> methods are exempt).
>        In Scala 3, an unapplied method like this will be eta-expanded into a 
> function. [quickfixable]
> val ret: Boolean = true {code}
> But for Scala 3, it is a compile error:
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> // defined class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@591f6f83
> scala> val ret = foo.isEmpty
> -- [E100] Syntax Error: 
> 
> 1 |val ret = foo.isEmpty
>   |          ^^^
>   |          method isEmpty in class Foo must be called with () argument
>   |
>   | longer explanation available when compiling with `-explain`
> 1 error found {code}






[jira] [Commented] (SPARK-45610) Handle "Auto-application to `()` is deprecated."

2023-10-19 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1396#comment-1396
 ] 

Yang Jie commented on SPARK-45610:
--

In Spark, this involves a massive number of cases. Since it is a compile 
error in Scala 3, we will likely have to fix it when we prepare to 
support Scala 3.

As there is no concrete plan for Scala 3 support at the moment, should we wait 
until the schedule for it is confirmed before proceeding with the fix?

 
I would like to know your thoughts. [~srowen]  [~dongjoon] [~gurwls223] 

> Handle "Auto-application to `()` is deprecated."
> 
>
> Key: SPARK-45610
> URL: https://issues.apache.org/jira/browse/SPARK-45610
> Project: Spark
>  Issue Type: Sub-task
>  Components: GraphX, MLlib, Spark Core, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> For the following case, a compile warning will be issued in Scala 2.13:
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@7061622
> scala> val ret = foo.isEmpty
>                      ^
>        warning: Auto-application to `()` is deprecated. Supply the empty 
> argument list `()` explicitly to invoke method isEmpty,
>        or remove the empty argument list from its definition (Java-defined 
> methods are exempt).
>        In Scala 3, an unapplied method like this will be eta-expanded into a 
> function. [quickfixable]
> val ret: Boolean = true {code}
> But for Scala 3, it is a compile error:
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> // defined class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@591f6f83
> scala> val ret = foo.isEmpty
> -- [E100] Syntax Error: 
> 
> 1 |val ret = foo.isEmpty
>   |          ^^^
>   |          method isEmpty in class Foo must be called with () argument
>   |
>   | longer explanation available when compiling with `-explain`
> 1 error found {code}






[jira] [Updated] (SPARK-45610) Handle "Auto-application to `()` is deprecated."

2023-10-19 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45610:
-
Description: 
For the following case, a compile warning will be issued in Scala 2.13:

 
{code:java}
Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
Type in expressions for evaluation. Or try :help.


scala> class Foo {
     |     def isEmpty(): Boolean = true
     |     def isTrue(x: Boolean): Boolean = x
     |   }
class Foo


scala> val foo = new Foo
val foo: Foo = Foo@7061622


scala> val ret = foo.isEmpty
                     ^
       warning: Auto-application to `()` is deprecated. Supply the empty 
argument list `()` explicitly to invoke method isEmpty,
       or remove the empty argument list from its definition (Java-defined 
methods are exempt).
       In Scala 3, an unapplied method like this will be eta-expanded into a 
function. [quickfixable]
val ret: Boolean = true {code}
But for Scala 3, it is a compile error:
{code:java}
Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
Type in expressions for evaluation. Or try :help.
scala> class Foo {
     |     def isEmpty(): Boolean = true
     |     def isTrue(x: Boolean): Boolean = x
     |   }
// defined class Foo
scala> val foo = new Foo
val foo: Foo = Foo@591f6f83
scala> val ret = foo.isEmpty
-- [E100] Syntax Error: 
1 |val ret = foo.isEmpty
  |          ^^^
  |          method isEmpty in class Foo must be called with () argument
  |
  | longer explanation available when compiling with `-explain`
1 error found {code}

> Handle "Auto-application to `()` is deprecated."
> 
>
> Key: SPARK-45610
> URL: https://issues.apache.org/jira/browse/SPARK-45610
> Project: Spark
>  Issue Type: Sub-task
>  Components: GraphX, MLlib, Spark Core, SQL, Structured Streaming
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> For the following case, a compile warning will be issued in Scala 2.13:
>  
> {code:java}
> Welcome to Scala 2.13.12 (OpenJDK 64-Bit Server VM, Java 17.0.8).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@7061622
> scala> val ret = foo.isEmpty
>                      ^
>        warning: Auto-application to `()` is deprecated. Supply the empty 
> argument list `()` explicitly to invoke method isEmpty,
>        or remove the empty argument list from its definition (Java-defined 
> methods are exempt).
>        In Scala 3, an unapplied method like this will be eta-expanded into a 
> function. [quickfixable]
> val ret: Boolean = true {code}
> But for Scala 3, it is a compile error:
> {code:java}
> Welcome to Scala 3.3.1 (17.0.8, Java OpenJDK 64-Bit Server VM).
> Type in expressions for evaluation. Or try :help.
> scala> class Foo {
>      |     def isEmpty(): Boolean = true
>      |     def isTrue(x: Boolean): Boolean = x
>      |   }
> // defined class Foo
> scala> val foo = new Foo
> val foo: Foo = Foo@591f6f83
> scala> val ret = foo.isEmpty
> -- [E100] Syntax Error: 
> 
> 1 |val ret = 

[jira] [Created] (SPARK-45610) Handle "Auto-application to `()` is deprecated."

2023-10-19 Thread Yang Jie (Jira)
Yang Jie created SPARK-45610:


 Summary: Handle "Auto-application to `()` is deprecated."
 Key: SPARK-45610
 URL: https://issues.apache.org/jira/browse/SPARK-45610
 Project: Spark
  Issue Type: Sub-task
  Components: GraphX, MLlib, Spark Core, SQL, Structured Streaming
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Updated] (SPARK-45605) Replace `s.c.MapOps.mapValues` with `s.c.MapOps.view.mapValues`

2023-10-19 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45605:
-
Description: 
{code:java}
@deprecated("Use .view.mapValues(f). A future version will include a strict 
version of this method (for now, .view.mapValues(f).toMap).", "2.13.0")
def mapValues[W](f: V => W): MapView[K, W] = new MapView.MapValues(this, f) 
{code}

  was:
{code:java}
// code placeholder
{code}


> Replace `s.c.MapOps.mapValues` with `s.c.MapOps.view.mapValues`
> --
>
> Key: SPARK-45605
> URL: https://issues.apache.org/jira/browse/SPARK-45605
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, DStreams, Examples, MLlib, Spark Core, SS
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> @deprecated("Use .view.mapValues(f). A future version will include a strict 
> version of this method (for now, .view.mapValues(f).toMap).", "2.13.0")
> def mapValues[W](f: V => W): MapView[K, W] = new MapView.MapValues(this, f) 
> {code}
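For call sites of the deprecated method, the mechanical migration goes through `.view`, adding `.toMap` wherever a strict `Map` is still required. A sketch with illustrative values:

```scala
val m = Map("a" -> 1, "b" -> 2)

// Deprecated in Scala 2.13:
//   m.mapValues(_ * 10)
// Lazy replacement: a MapView that re-applies the function on each access.
val view = m.view.mapValues(_ * 10)

// Strict replacement: materialize when a plain Map is needed.
val strict = m.view.mapValues(_ * 10).toMap

println(strict)
```

Note that the view recomputes the function on every lookup, so `.toMap` is the safer drop-in when the function is expensive or side-effecting.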






[jira] [Updated] (SPARK-45605) Replace `s.c.MapOps.mapValues` with `s.c.MapOps.view.mapValues`

2023-10-19 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie updated SPARK-45605:
-
Description: 
{code:java}
// code placeholder
{code}

> Replace `s.c.MapOps.mapValues` with `s.c.MapOps.view.mapValues`
> --
>
> Key: SPARK-45605
> URL: https://issues.apache.org/jira/browse/SPARK-45605
> Project: Spark
>  Issue Type: Sub-task
>  Components: Connect, DStreams, Examples, MLlib, Spark Core, SS
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
> {code:java}
> // code placeholder
> {code}






[jira] [Created] (SPARK-45605) Replace `s.c.MapOps.mapValues` with `s.c.MapOps.view.mapValues`

2023-10-19 Thread Yang Jie (Jira)
Yang Jie created SPARK-45605:


 Summary: Replace `s.c.MapOps.mapValues` with 
`s.c.MapOps.view.mapValues`
 Key: SPARK-45605
 URL: https://issues.apache.org/jira/browse/SPARK-45605
 Project: Spark
  Issue Type: Sub-task
  Components: SS, Connect, DStreams, Examples, MLlib, Spark Core
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Created] (SPARK-45602) Replace `s.c.MapOps.filterKeys` with `s.c.MapOps.view.filterKeys`

2023-10-18 Thread Yang Jie (Jira)
Yang Jie created SPARK-45602:


 Summary: Replace `s.c.MapOps.filterKeys` with 
`s.c.MapOps.view.filterKeys`
 Key: SPARK-45602
 URL: https://issues.apache.org/jira/browse/SPARK-45602
 Project: Spark
  Issue Type: Sub-task
  Components: Kubernetes, Spark Core, SQL, YARN
Affects Versions: 4.0.0
Reporter: Yang Jie


{code:java}
/** Filters this map by retaining only keys satisfying a predicate.
  *  @param  p   the predicate used to test keys
  *  @return an immutable map consisting only of those key value pairs of this 
map where the key satisfies
  *  the predicate `p`. The resulting map wraps the original map 
without copying any elements.
  */
@deprecated("Use .view.filterKeys(f). A future version will include a strict 
version of this method (for now, .view.filterKeys(p).toMap).", "2.13.0")
def filterKeys(p: K => Boolean): MapView[K, V] = new MapView.FilterKeys(this, 
p) {code}
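The migration mirrors the `mapValues` case: go through `.view`, adding `.toMap` at call sites that need strict semantics. A sketch with illustrative keys:

```scala
val conf = Map("spark.executor.memory" -> "4g", "log.level" -> "INFO")

// Deprecated in Scala 2.13:
//   conf.filterKeys(_.startsWith("spark."))
// Lazy replacement: wraps the original map without copying any elements.
val view = conf.view.filterKeys(_.startsWith("spark."))

// Strict replacement for call sites that need a real Map.
val sparkOnly = conf.view.filterKeys(_.startsWith("spark.")).toMap

println(sparkOnly)
```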






[jira] [Created] (SPARK-45591) Upgrade ASM to 9.6

2023-10-18 Thread Yang Jie (Jira)
Yang Jie created SPARK-45591:


 Summary: Upgrade ASM to 9.6
 Key: SPARK-45591
 URL: https://issues.apache.org/jira/browse/SPARK-45591
 Project: Spark
  Issue Type: Improvement
  Components: Build
Affects Versions: 4.0.0
Reporter: Yang Jie









[jira] [Commented] (SPARK-45533) Use `j.l.r.Cleaner` instead of `finalize` for `RocksDBIterator/LevelDBIterator`

2023-10-17 Thread Yang Jie (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-45533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17776484#comment-17776484
 ] 

Yang Jie commented on SPARK-45533:
--

Feel free to pick up this one. Thanks. [~zhaomin] 

> Use `j.l.r.Cleaner` instead of `finalize` for 
> `RocksDBIterator/LevelDBIterator`
> ---
>
> Key: SPARK-45533
> URL: https://issues.apache.org/jira/browse/SPARK-45533
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Priority: Major
>
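The `Cleaner`-based pattern the ticket proposes can be sketched as follows; the class and handle names are illustrative, not Spark's actual code. The key constraint is that the registered action must not capture the tracked object itself, or it would never become phantom-reachable:

```scala
import java.lang.ref.Cleaner
import java.util.concurrent.atomic.AtomicBoolean

// Stand-in for a native iterator handle that must be released.
final class NativeHandle {
  val closed = new AtomicBoolean(false)
  def release(): Unit = closed.set(true)
}

object DbIterator {
  // One shared Cleaner (and its daemon thread) per class, not per instance.
  private[this] val cleaner = Cleaner.create()
  def register(obj: AnyRef, action: Runnable): Cleaner.Cleanable =
    cleaner.register(obj, action)
}

final class DbIterator(handle: NativeHandle) extends AutoCloseable {
  // Register a cleanup action that captures only the handle, never `this`.
  private val cleanable =
    DbIterator.register(this, () => handle.release())

  // Explicit close runs the action at most once and deregisters it;
  // otherwise the Cleaner runs it after the iterator becomes unreachable,
  // replacing the deprecated finalize() hook.
  override def close(): Unit = cleanable.clean()
}

val handle = new NativeHandle
val it = new DbIterator(handle)
it.close()
println(handle.closed.get())
```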







[jira] [Resolved] (SPARK-45546) Do not compile docs for snapshots deploy

2023-10-17 Thread Yang Jie (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-45546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Jie resolved SPARK-45546.
--
Fix Version/s: 4.0.0
   Resolution: Fixed

Issue resolved by pull request 43378
[https://github.com/apache/spark/pull/43378]

> Do not compile docs for snapshots deploy
> 
>
> Key: SPARK-45546
> URL: https://issues.apache.org/jira/browse/SPARK-45546
> Project: Spark
>  Issue Type: Improvement
>  Components: Build
>Affects Versions: 4.0.0
>Reporter: Yang Jie
>Assignee: Yang Jie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>






