Hisoka-X commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314466006
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,37 @@ abstract class JdbcDialect extends Serializable with Logging {
cloud-fan commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314508461
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,38 @@ abstract class JdbcDialect extends Serializable with Logging {
Hisoka-X commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314507599
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,37 @@ abstract class JdbcDialect extends Serializable with Logging {
panbingkun opened a new pull request, #42797:
URL: https://github.com/apache/spark/pull/42797
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
Hisoka-X commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314497081
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,37 @@ abstract class JdbcDialect extends Serializable with Logging {
LuciferYang commented on PR #42796:
URL: https://github.com/apache/spark/pull/42796#issuecomment-1704693024
test first
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
LuciferYang opened a new pull request, #42796:
URL: https://github.com/apache/spark/pull/42796
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
cloud-fan commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314481185
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,37 @@ abstract class JdbcDialect extends Serializable with Logging {
LuciferYang commented on PR #42795:
URL: https://github.com/apache/spark/pull/42795#issuecomment-1704662375
cc @srowen
panbingkun commented on PR #42795:
URL: https://github.com/apache/spark/pull/42795#issuecomment-1704661116
cc @LuciferYang
panbingkun opened a new pull request, #42795:
URL: https://github.com/apache/spark/pull/42795
### What changes were proposed in this pull request?
This PR aims to upgrade jetty from 9.4.51.v20230217 to 9.4.52.v20230823.
(Backport to Spark 3.5.0)
### Why are the changes needed?
-
Hisoka-X commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314466479
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -529,6 +560,16 @@ abstract class JdbcDialect extends Serializable with Logging {
}
cloud-fan commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314462378
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,37 @@ abstract class JdbcDialect extends Serializable with Logging {
cloud-fan commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314462193
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -529,6 +560,16 @@ abstract class JdbcDialect extends Serializable with Logging {
cloud-fan commented on code in PR #41855:
URL: https://github.com/apache/spark/pull/41855#discussion_r1314461546
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -180,6 +180,37 @@ abstract class JdbcDialect extends Serializable with Logging {
cloud-fan commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r1314459460
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, arg
panbingkun commented on PR #42761:
URL: https://github.com/apache/spark/pull/42761#issuecomment-1704645884
> Merged into master. There are conflicts with 3.5, could you please give a
> separate PR? @panbingkun
Sure, let me do it now.
MaxGekk commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r1314453024
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, args:
cloud-fan commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r1314446890
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, arg
cloud-fan commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r1314446317
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, arg
MaxGekk commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r1314446198
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, args:
MaxGekk commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r131739
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, args:
zhengruifeng opened a new pull request, #42794:
URL: https://github.com/apache/spark/pull/42794
### What changes were proposed in this pull request?
Make function `repeat` accept column-type `n`
### Why are the changes needed?
1. to follow this guide:
https://github.com/
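PR #42794's change means the repeat count for `repeat` can come from another column rather than a literal, so it can vary per row. A hedged pure-Python sketch of those per-row semantics (`repeat_rows` is a hypothetical helper for illustration, not the Spark implementation):

```python
def repeat_rows(strings, counts):
    """Repeat each string by the count taken from a parallel column."""
    return [s * n for s, n in zip(strings, counts)]

# e.g. repeat_rows(["ab", "c"], [2, 3]) -> ["abab", "ccc"]
```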
itholic commented on PR #42793:
URL: https://github.com/apache/spark/pull/42793#issuecomment-1704604204
Since many features are [deprecated from Pandas
2.1.0](https://pandas.pydata.org/docs/whatsnew/v2.1.0.html#deprecations), let
me investigate if there is any corresponding featur
itholic opened a new pull request, #42793:
URL: https://github.com/apache/spark/pull/42793
### What changes were proposed in this pull request?
This PR proposes to support pandas 2.1.0 for PySpark. See [What's new in
2.1.0](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html)
cloud-fan commented on PR #42777:
URL: https://github.com/apache/spark/pull/42777#issuecomment-1704598173
late LGTM
cloud-fan commented on code in PR #42752:
URL: https://github.com/apache/spark/pull/42752#discussion_r1314426545
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/parameters.scala:
##
@@ -96,7 +96,11 @@ case class PosParameterizedQuery(child: LogicalPlan, arg
cloud-fan commented on code in PR #42778:
URL: https://github.com/apache/spark/pull/42778#discussion_r1314425064
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -1305,11 +1305,20 @@ object TransposeWindow extends Rule[LogicalPlan] {
sadikovi commented on PR #42792:
URL: https://github.com/apache/spark/pull/42792#issuecomment-1704587062
cc @cloud-fan @HyukjinKwon
sadikovi commented on PR #42790:
URL: https://github.com/apache/spark/pull/42790#issuecomment-1704587030
cc @cloud-fan @HyukjinKwon
sadikovi commented on PR #42667:
URL: https://github.com/apache/spark/pull/42667#issuecomment-1704586475
I have opened backport PRs (linked in this PR).
sadikovi opened a new pull request, #42792:
URL: https://github.com/apache/spark/pull/42792
### What changes were proposed in this pull request?
Backport of https://github.com/apache/spark/pull/42667 to branch-3.4.
The PR improves JSON parsing when `spark.sql.json.enablePartialR
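For context, `spark.sql.json.enablePartialResults` lets the JSON reader keep the fields it could parse instead of nulling out the whole record when one field is bad. A rough pure-Python sketch of that idea under stated assumptions (`parse_partial` and its dict-based schema are hypothetical, not Spark's parser):

```python
import json

def parse_partial(line, schema):
    """Parse a JSON line; keep fields matching the expected type, null the rest."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        # Whole record is unparsable: every field becomes null.
        return {field: None for field in schema}
    result = {}
    for field, expected_type in schema.items():
        value = record.get(field)
        # Partial result: only the mismatched field is nulled.
        result[field] = value if isinstance(value, expected_type) else None
    return result

row = parse_partial('{"id": 1, "name": 42}', {"id": int, "name": str})
# "id" parses; "name" has the wrong type and becomes None
```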
zhengruifeng commented on code in PR #42791:
URL: https://github.com/apache/spark/pull/42791#discussion_r1314419285
##
python/pyspark/sql/connect/functions.py:
##
@@ -552,15 +552,23 @@ def cbrt(col: "ColumnOrName") -> Column:
cbrt.__doc__ = pysparkfuncs.cbrt.__doc__
-def ce
zhengruifeng opened a new pull request, #42791:
URL: https://github.com/apache/spark/pull/42791
### What changes were proposed in this pull request?
Add the missing `scale` parameter in `ceil/ceiling`
### Why are the changes needed?
for parity, this parameter existed in both
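The `scale` parameter being added here controls which decimal position the value is rounded up to, as in SQL's `ceil(3.1415, 2)`. A hedged pure-Python illustration of those semantics (a sketch, not the Spark implementation):

```python
from decimal import Decimal, ROUND_CEILING

def ceil_scale(value, scale=0):
    """Round `value` up at `scale` decimal places, like SQL ceil(expr, scale).

    A negative scale rounds up to the left of the decimal point.
    """
    exp = Decimal(1).scaleb(-scale)  # 10**-scale as an exact Decimal
    return float(Decimal(str(value)).quantize(exp, rounding=ROUND_CEILING))

ceil_scale(3.1415, 2)   # 3.15
ceil_scale(1234.5, -2)  # 1300.0
```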
sadikovi opened a new pull request, #42790:
URL: https://github.com/apache/spark/pull/42790
### What changes were proposed in this pull request?
Backport of https://github.com/apache/spark/pull/42667 to branch-3.5.
The PR improves JSON parsing when `spark.sql.json.enablePartialR
LuciferYang commented on PR #42761:
URL: https://github.com/apache/spark/pull/42761#issuecomment-1704572375
Merged into master. There are conflicts with 3.5, could you please give a
separate PR? @panbingkun
LuciferYang closed pull request #42761: [SPARK-45042][BUILD] Upgrade jetty to
9.4.52.v20230823
URL: https://github.com/apache/spark/pull/42761
LuciferYang commented on PR #42753:
URL: https://github.com/apache/spark/pull/42753#issuecomment-1704568162
Thanks @HyukjinKwon @dongjoon-hyun ~
LuciferYang commented on PR #42766:
URL: https://github.com/apache/spark/pull/42766#issuecomment-1704566844
@gengliangwang Do you still remember why `shadeTestJar` was changed to false?
LuciferYang commented on PR #42598:
URL: https://github.com/apache/spark/pull/42598#issuecomment-1704566124
https://github.com/apache/spark/assets/1475305/913cfb25-6bab-4a33-ba73-60a2f4f4f43a
Is there a problem with your GitHub Action configuration? Why does the GA
page look like a
LuciferYang opened a new pull request, #42789:
URL: https://github.com/apache/spark/pull/42789
### What changes were proposed in this pull request?
This pr refine docstring of `max_by/min_by` and add some new examples.
### Why are the changes needed?
To improve PySpark documentat
HyukjinKwon commented on code in PR #42770:
URL: https://github.com/apache/spark/pull/42770#discussion_r1314400083
##
python/pyspark/sql/dataframe.py:
##
@@ -1809,18 +1810,27 @@ def repartition( # type: ignore[misc]
Repartition the data into 10 partitions.
-
HyukjinKwon commented on PR #42667:
URL: https://github.com/apache/spark/pull/42667#issuecomment-1704548723
Merged to master. It has some conflicts in branch-3.5 and 3.4.
HyukjinKwon closed pull request #42667: [SPARK-44940][SQL] Improve performance
of JSON parsing when "spark.sql.json.enablePartialResults" is enabled
URL: https://github.com/apache/spark/pull/42667
itholic opened a new pull request, #42788:
URL: https://github.com/apache/spark/pull/42788
### What changes were proposed in this pull request?
This PR adds warning messages throughout the Pandas API on Spark wherever
the `numeric_only` parameter is used with a different default value.
sadikovi commented on PR #42667:
URL: https://github.com/apache/spark/pull/42667#issuecomment-1704544738
Shall we merge this or do you have any concerns or questions? I will be more
than happy to answer them or follow up on the suggestions.
We may also need to backport to Spark 3.5/3.
HeartSaVioR commented on PR #42774:
URL: https://github.com/apache/spark/pull/42774#issuecomment-1704534176
Merged via
[44ab0fc](https://github.com/apache/spark/commit/44ab0fc0068f815c7eddcd34ae4343bbfd97b64d)
HeartSaVioR closed pull request #42774: [SPARK-45045][SS][3.5] Revert back the
behavior of idle progress for StreamingQuery API from SPARK-43183
URL: https://github.com/apache/spark/pull/42774
HeartSaVioR closed pull request #42773: [SPARK-45045][SS] Revert back the
behavior of idle progress for StreamingQuery API from SPARK-43183
URL: https://github.com/apache/spark/pull/42773
HeartSaVioR commented on PR #42774:
URL: https://github.com/apache/spark/pull/42774#issuecomment-1704533200
Thanks for reviewing! Merging to 3.5.
HeartSaVioR commented on PR #42773:
URL: https://github.com/apache/spark/pull/42773#issuecomment-1704533152
Thanks for reviewing! Merging to master.
zhengruifeng commented on code in PR #42770:
URL: https://github.com/apache/spark/pull/42770#discussion_r1314388163
##
python/pyspark/sql/dataframe.py:
##
@@ -1809,18 +1810,27 @@ def repartition( # type: ignore[misc]
Repartition the data into 10 partitions.
-
itholic opened a new pull request, #42787:
URL: https://github.com/apache/spark/pull/42787
### What changes were proposed in this pull request?
This PR proposes to fix the behavior of `MultiIndex.append` so that it does
not check names.
### Why are the changes needed?
To match
Hisoka-X commented on code in PR #42783:
URL: https://github.com/apache/spark/pull/42783#discussion_r1314377696
##
python/pyspark/sql/functions.py:
##
@@ -15748,6 +15749,33 @@ def java_method(*cols: "ColumnOrName") -> Column:
return _invoke_function_over_seq_of_columns("jav
zhengruifeng opened a new pull request, #42786:
URL: https://github.com/apache/spark/pull/42786
### What changes were proposed in this pull request?
backport https://github.com/apache/spark/pull/42775 to 3.5
### Why are the changes needed?
to make `func(col)` consistent with
ueshin opened a new pull request, #42785:
URL: https://github.com/apache/spark/pull/42785
### What changes were proposed in this pull request?
This is a backport of https://github.com/apache/spark/pull/42784.
Fixes Arrow-optimized Python UDF to delay wrapping the function with
zhengruifeng commented on PR #42775:
URL: https://github.com/apache/spark/pull/42775#issuecomment-1704506729
merged to master, will send a separate PR for 3.5
zhengruifeng closed pull request #42775: [SPARK-45052][SQL][PYTHON][CONNECT]
Make function aliases output column name consistent with SQL
URL: https://github.com/apache/spark/pull/42775
zhengruifeng commented on code in PR #42775:
URL: https://github.com/apache/spark/pull/42775#discussion_r1314370284
##
sql/core/src/main/scala/org/apache/spark/sql/functions.scala:
##
@@ -1052,15 +1049,15 @@ object functions {
* @group agg_funcs
* @since 3.5.0
*/
-
zhengruifeng commented on code in PR #42783:
URL: https://github.com/apache/spark/pull/42783#discussion_r1314365390
##
python/pyspark/sql/functions.py:
##
@@ -15748,6 +15749,33 @@ def java_method(*cols: "ColumnOrName") -> Column:
return _invoke_function_over_seq_of_columns(
HyukjinKwon commented on code in PR #42775:
URL: https://github.com/apache/spark/pull/42775#discussion_r1314366796
##
sql/core/src/main/scala/org/apache/spark/sql/functions.scala:
##
@@ -1052,15 +1049,15 @@ object functions {
* @group agg_funcs
* @since 3.5.0
*/
- d
zhengruifeng commented on code in PR #42775:
URL: https://github.com/apache/spark/pull/42775#discussion_r1314358804
##
sql/core/src/main/scala/org/apache/spark/sql/functions.scala:
##
@@ -1052,15 +1049,15 @@ object functions {
* @group agg_funcs
* @since 3.5.0
*/
-
ueshin opened a new pull request, #42784:
URL: https://github.com/apache/spark/pull/42784
### What changes were proposed in this pull request?
Fixes Arrow-optimized Python UDF to delay wrapping the function with
`fail_on_stopiteration`.
Also removed unnecessary verification `ve
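The `fail_on_stopiteration` wrapper mentioned above exists so that a stray `StopIteration` from user code surfaces as a `RuntimeError` instead of silently terminating the iterator driving the UDF; the fix in this PR delays when that wrapping happens. A simplified sketch of what such a wrapper does (assumptions: this mirrors the idea, not PySpark's exact code):

```python
import functools

def fail_on_stopiteration(f):
    """Re-raise StopIteration from user code as RuntimeError so it cannot
    be swallowed by the iterator machinery evaluating the UDF."""
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except StopIteration as e:
            raise RuntimeError(
                "StopIteration raised inside user-defined function") from e
    return wrapper

@fail_on_stopiteration
def bad_udf(x):
    raise StopIteration  # user bug: must not escape as StopIteration
```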
zekai-li commented on PR #42529:
URL: https://github.com/apache/spark/pull/42529#issuecomment-1704480296
@tgravescs could you take a look?
HyukjinKwon closed pull request #42782: [SPARK-45058][PYTHON][DOCS] Refine
docstring of DataFrame.distinct
URL: https://github.com/apache/spark/pull/42782
HyukjinKwon commented on PR #42782:
URL: https://github.com/apache/spark/pull/42782#issuecomment-1704462452
Merged to master.
HyukjinKwon closed pull request #42776: [SPARK-45053][PYTHON][MINOR] Log
improvement in python version mismatch
URL: https://github.com/apache/spark/pull/42776
HyukjinKwon commented on PR #42776:
URL: https://github.com/apache/spark/pull/42776#issuecomment-1704461253
Merged to master.
zhengruifeng commented on code in PR #42775:
URL: https://github.com/apache/spark/pull/42775#discussion_r1314347624
##
sql/core/src/main/scala/org/apache/spark/sql/functions.scala:
##
@@ -1052,15 +1049,15 @@ object functions {
* @group agg_funcs
* @since 3.5.0
*/
-
Hisoka-X commented on PR #41855:
URL: https://github.com/apache/spark/pull/41855#issuecomment-1704461031
> Nah, let's not add a new API to 3.5.0 at this moment.
Got it, let me change `since` to 4.0.0.
HyukjinKwon commented on code in PR #42771:
URL: https://github.com/apache/spark/pull/42771#discussion_r1314347699
##
connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/GrpcExceptionConverter.scala:
##
@@ -107,7 +107,7 @@ private[client] object GrpcExcep
HyukjinKwon closed pull request #42768: [SPARK-44667][INFRA][FOLLOWUP]
Uninstall `deepspeed` libraries for non-ML jobs
URL: https://github.com/apache/spark/pull/42768
HyukjinKwon commented on PR #42768:
URL: https://github.com/apache/spark/pull/42768#issuecomment-1704458951
Merged to master.
zhengruifeng commented on code in PR #42775:
URL: https://github.com/apache/spark/pull/42775#discussion_r1314346571
##
python/pyspark/sql/functions.py:
##
@@ -2385,25 +2416,54 @@ def signum(col: "ColumnOrName") -> Column:
Examples
->>> df = spark.range(1
HyukjinKwon commented on PR #42767:
URL: https://github.com/apache/spark/pull/42767#issuecomment-1704458536
Merged to master.
HyukjinKwon commented on PR #42766:
URL: https://github.com/apache/spark/pull/42766#issuecomment-1704458471
cc @gengliangwang FYI
HyukjinKwon closed pull request #42758: [SPARK-45038][PYTHON][DOCS] Refine
docstring of `max`
URL: https://github.com/apache/spark/pull/42758
HyukjinKwon commented on PR #42758:
URL: https://github.com/apache/spark/pull/42758#issuecomment-1704457917
Merged to master.
HyukjinKwon closed pull request #42753: [SPARK-45032][CONNECT] Fix compilation
warnings related to `Top-level wildcard is not allowed and will error under
-Xsource:3`
URL: https://github.com/apache/spark/pull/42753
HyukjinKwon commented on PR #42753:
URL: https://github.com/apache/spark/pull/42753#issuecomment-1704457593
Merged to master.
zhengruifeng commented on code in PR #42770:
URL: https://github.com/apache/spark/pull/42770#discussion_r1314345886
##
python/pyspark/sql/dataframe.py:
##
@@ -1809,18 +1810,27 @@ def repartition( # type: ignore[misc]
Repartition the data into 10 partitions.
-
HyukjinKwon commented on PR #41855:
URL: https://github.com/apache/spark/pull/41855#issuecomment-1704455930
Nah, let's not add a new API to 3.5.0 at this moment.
HyukjinKwon closed pull request #42687: [SPARK-45061][SS][CONNECT] Clean up
running Python StreamingQueryListener processes when session expires
URL: https://github.com/apache/spark/pull/42687
HyukjinKwon commented on PR #42687:
URL: https://github.com/apache/spark/pull/42687#issuecomment-1704455336
Merged to master and branch-3.5.
WweiL commented on PR #42687:
URL: https://github.com/apache/spark/pull/42687#issuecomment-1704454592
@HyukjinKwon Done!
HyukjinKwon commented on PR #42687:
URL: https://github.com/apache/spark/pull/42687#issuecomment-1704453122
@WweiL mind creating a separate JIRA? SPARK-44433 has already landed in
3.5.0, and this follow-up won't be available in the same version.
HyukjinKwon commented on PR #42687:
URL: https://github.com/apache/spark/pull/42687#issuecomment-1704452754
Merged to master.
HyukjinKwon commented on code in PR #42770:
URL: https://github.com/apache/spark/pull/42770#discussion_r1314343527
##
python/pyspark/sql/dataframe.py:
##
@@ -1809,18 +1810,27 @@ def repartition( # type: ignore[misc]
Repartition the data into 10 partitions.
-
github-actions[bot] closed pull request #40312: [SPARK-42695][SQL] Skew join
handling in stream side of broadcast hash join
URL: https://github.com/apache/spark/pull/40312