cloud-fan commented on code in PR #36530:
URL: https://github.com/apache/spark/pull/36530#discussion_r873346931
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala:
##
@@ -211,6 +219,15 @@ object EliminateOuterJoin extends Rule[LogicalPlan] with
cloud-fan commented on code in PR #36530:
URL: https://github.com/apache/spark/pull/36530#discussion_r873344595
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/joins.scala:
##
@@ -139,6 +139,14 @@ object ReorderJoin extends Rule[LogicalPlan] with
cloud-fan commented on code in PR #36295:
URL: https://github.com/apache/spark/pull/36295#discussion_r873341127
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsPushDownOffset.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #36295:
URL: https://github.com/apache/spark/pull/36295#discussion_r873340929
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsPushDownOffset.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation
MaxGekk commented on PR #36479:
URL: https://github.com/apache/spark/pull/36479#issuecomment-1127239102
@panbingkun Since this PR modified error classes, could you backport it to
branch-3.3, please?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
MaxGekk closed pull request #36479: [SPARK-38688][SQL][TESTS] Use error classes
in the compilation errors of deserializer
URL: https://github.com/apache/spark/pull/36479
cloud-fan closed pull request #36412: [SPARK-39073][SQL] Keep rowCount after
hive table partition pruning if table only have hive statistics
URL: https://github.com/apache/spark/pull/36412
cloud-fan commented on PR #36412:
URL: https://github.com/apache/spark/pull/36412#issuecomment-1127235625
thanks, merging to master!
cloud-fan commented on code in PR #36412:
URL: https://github.com/apache/spark/pull/36412#discussion_r87309
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/PruneHiveTablePartitions.scala:
##
@@ -80,10 +80,15 @@ private[sql] class
MaxGekk closed pull request #36550: [SPARK-39187][SQL] Remove
`SparkIllegalStateException`
URL: https://github.com/apache/spark/pull/36550
cloud-fan closed pull request #36121: [SPARK-38836][SQL] Improve the
performance of ExpressionSet
URL: https://github.com/apache/spark/pull/36121
MaxGekk commented on PR #36550:
URL: https://github.com/apache/spark/pull/36550#issuecomment-1127234215
Merging to master. Thank you, @HyukjinKwon and @cloud-fan for review.
cloud-fan commented on PR #36121:
URL: https://github.com/apache/spark/pull/36121#issuecomment-1127234077
thanks, merging to master!
AnywalkerGiser commented on PR #36537:
URL: https://github.com/apache/spark/pull/36537#issuecomment-1127233836
@HyukjinKwon It hasn't been tested on master; I found the problem in 3.0.1,
and I can test it on master later.
cloud-fan commented on code in PR #36541:
URL: https://github.com/apache/spark/pull/36541#discussion_r873317698
##
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala:
##
@@ -82,52 +82,45 @@ abstract class SparkStrategies extends
cloud-fan commented on code in PR #36531:
URL: https://github.com/apache/spark/pull/36531#discussion_r873314783
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala:
##
@@ -2117,7 +2265,9 @@ case class Cast(
child: Expression,
dataType:
gengliangwang commented on code in PR #36557:
URL: https://github.com/apache/spark/pull/36557#discussion_r873307369
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/decimalExpressions.scala:
##
@@ -128,7 +128,7 @@ case class PromotePrecision(child:
gengliangwang opened a new pull request, #36557:
URL: https://github.com/apache/spark/pull/36557
### What changes were proposed in this pull request?
Similar to https://github.com/apache/spark/pull/36525, this PR provides
query context for decimal precision overflow error
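As background on the error being given query context: a decimal precision overflow happens when an arithmetic result no longer fits the declared precision. Python's stdlib `decimal` module can reproduce the same failure mode; this is only an analogy, not Spark's implementation:

```python
from decimal import Context, Decimal, Inexact

# A 5-digit context that raises instead of silently rounding, loosely
# analogous to Spark's ANSI-mode overflow check for a 5-digit DecimalType.
ctx = Context(prec=5, traps=[Inexact])

try:
    ctx.multiply(Decimal("99999"), Decimal("99999"))  # 10-digit result
except Inexact:
    print("result does not fit the declared precision")
```

The PR's improvement is to report *where* in the query such an overflow originated, on top of the error itself.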
AnywalkerGiser commented on code in PR #36537:
URL: https://github.com/apache/spark/pull/36537#discussion_r873305033
##
python/pyspark/sql/types.py:
##
@@ -191,14 +191,25 @@ def needConversion(self):
def toInternal(self, dt):
if dt is not None:
-
AngersZh commented on PR #36056:
URL: https://github.com/apache/spark/pull/36056#issuecomment-1127179471
Gentle ping @cloud-fan Could you take a look?
AngersZh commented on PR #35799:
URL: https://github.com/apache/spark/pull/35799#issuecomment-1127178691
Any more suggestions?
HyukjinKwon commented on PR #36537:
URL: https://github.com/apache/spark/pull/36537#issuecomment-1127177497
@AnywalkerGiser mind creating a PR against `master` branch?
HyukjinKwon commented on code in PR #36537:
URL: https://github.com/apache/spark/pull/36537#discussion_r873298180
##
python/pyspark/sql/types.py:
##
@@ -191,14 +191,25 @@ def needConversion(self):
def toInternal(self, dt):
if dt is not None:
-seconds
HyukjinKwon commented on code in PR #36537:
URL: https://github.com/apache/spark/pull/36537#discussion_r873297988
##
python/pyspark/tests/test_rdd.py:
##
@@ -669,6 +670,12 @@ def test_sample(self):
wr_s21 = rdd.sample(True, 0.4, 21).collect()
HyukjinKwon commented on code in PR #36537:
URL: https://github.com/apache/spark/pull/36537#discussion_r873297660
##
python/pyspark/sql/types.py:
##
@@ -191,14 +191,25 @@ def needConversion(self):
def toInternal(self, dt):
if dt is not None:
-seconds
HyukjinKwon commented on code in PR #36537:
URL: https://github.com/apache/spark/pull/36537#discussion_r873297554
##
python/pyspark/sql/types.py:
##
@@ -191,14 +191,25 @@ def needConversion(self):
def toInternal(self, dt):
if dt is not None:
-seconds
AngersZh commented on code in PR #36550:
URL: https://github.com/apache/spark/pull/36550#discussion_r873294811
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala:
##
@@ -582,8 +582,8 @@ trait CheckAnalysis extends PredicateHelper with
beliefer opened a new pull request, #36556:
URL: https://github.com/apache/spark/pull/36556
### What changes were proposed in this pull request?
This PR backports https://github.com/apache/spark/pull/36521 to branch-3.3.
### Why are the changes needed?
Let function push-down
AnywalkerGiser commented on PR #36537:
URL: https://github.com/apache/spark/pull/36537#issuecomment-1127149821
Is there a maintainer who can approve this?
beliefer commented on PR #36521:
URL: https://github.com/apache/spark/pull/36521#issuecomment-1127146479
@cloud-fan @huaxingao Thanks a lot! I will create a backport to 3.3.
beliefer closed pull request #36520: [SPARK-38633][SQL] Support push down
AnsiCast to JDBC data source V2
URL: https://github.com/apache/spark/pull/36520
beliefer commented on code in PR #36531:
URL: https://github.com/apache/spark/pull/36531#discussion_r873277888
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala:
##
@@ -275,6 +376,53 @@ object Cast {
case _ => null
}
}
+
+ //
LuciferYang commented on PR #36515:
URL: https://github.com/apache/spark/pull/36515#issuecomment-1127140077
thanks @huaxingao @sunchao
zhengruifeng commented on PR #36555:
URL: https://github.com/apache/spark/pull/36555#issuecomment-1127136933
@HyukjinKwon Sure! will update soon
beobest2 commented on PR #36509:
URL: https://github.com/apache/spark/pull/36509#issuecomment-1127127677
@bjornjorgensen Seems like a good idea! I can simply add a column to
display parameters that only exist in pandas. However, it is necessary to
discuss whether or not it meets the
HyukjinKwon commented on PR #36555:
URL: https://github.com/apache/spark/pull/36555#issuecomment-1127098019
@zhengruifeng mind showing the example of this argument usage in the PR
description?
HyukjinKwon closed pull request #36554: [SPARK-39186][PYTHON][FOLLOWUP] Improve
the numerical stability of pandas-on-Spark's skewness
URL: https://github.com/apache/spark/pull/36554
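For context on the numerical-stability concern behind SPARK-39186: skewness computed by expanding raw power sums cancels catastrophically when the mean is large relative to the spread; centering the data first avoids that. A minimal pure-Python sketch of the centered form (illustrative only, not the pandas-on-Spark code):

```python
def skewness(xs):
    """Sample skewness from centered moments, which is more numerically
    stable than expanding raw power sums like sum(x**3)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

print(skewness([1.0, 2.0, 3.0, 4.0, 100.0]))  # positive: right-skewed
```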
HyukjinKwon commented on PR #36554:
URL: https://github.com/apache/spark/pull/36554#issuecomment-1127097656
Merged to master.
github-actions[bot] closed pull request #35357: [SPARK-21195][CORE]
MetricSystem should pick up dynamically registered metrics in sources
URL: https://github.com/apache/spark/pull/35357
bjornjorgensen commented on PR #36509:
URL: https://github.com/apache/spark/pull/36509#issuecomment-1127032945
Yes, very good.
I was thinking: the pandas API on Spark has some options that pandas does not,
e.g. to_json() has `ignoreNullFields=True` and `num_files=1`.
Can we add
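For reference, `ignoreNullFields` makes the JSON writer omit null-valued fields. A rough stdlib-only Python sketch of that behavior (the real option lives in Spark's JSON datasource, and `num_files`, which controls the output file count, is not modeled here):

```python
import json

def to_json_ignore_null(record: dict) -> str:
    """Serialize a dict to JSON, omitting keys whose value is None,
    loosely mirroring Spark's ignoreNullFields=True option."""
    return json.dumps({k: v for k, v in record.items() if v is not None})

print(to_json_ignore_null({"a": 1, "b": None, "c": "x"}))
# → {"a": 1, "c": "x"}
```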
tiagovrtr commented on PR #33675:
URL: https://github.com/apache/spark/pull/33675#issuecomment-1126996196
This patch seems only to bring in the latest changes from master; is there
anything else to do here?
mridulm commented on code in PR #36512:
URL: https://github.com/apache/spark/pull/36512#discussion_r873204359
##
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##
@@ -933,10 +933,29 @@ private[spark] class BlockManager(
})
Some(new
MaxGekk commented on code in PR #36479:
URL: https://github.com/apache/spark/pull/36479#discussion_r873201532
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
##
@@ -147,14 +147,17 @@ object QueryCompilationErrors extends QueryErrorsBase
huaxingao commented on PR #36515:
URL: https://github.com/apache/spark/pull/36515#issuecomment-1126965449
Thanks! Merged to master.
huaxingao closed pull request #36515: [SPARK-39156][SQL] Clean up the usage of
`ParquetLogRedirector` in `ParquetFileFormat`.
URL: https://github.com/apache/spark/pull/36515
zhengruifeng opened a new pull request, #36555:
URL: https://github.com/apache/spark/pull/36555
### What changes were proposed in this pull request?
interpolate supports param `limit_area`
### Why are the changes needed?
to increase api coverage
### Does this PR
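For readers unfamiliar with the parameter: in pandas, `limit_area='inside'` restricts interpolation to NaNs that lie strictly between two valid values (leading/trailing NaNs stay unfilled), while `'outside'` does the opposite. A hypothetical pure-Python sketch of the `'inside'` semantics (the function name is illustrative; this is not the pandas-on-Spark implementation):

```python
import math

def interpolate_inside(values):
    """Linearly fill NaNs that lie strictly between two valid values,
    mimicking pandas' interpolate(limit_area='inside')."""
    out = list(values)
    valid = [i for i, v in enumerate(out) if not math.isnan(v)]
    for a, b in zip(valid, valid[1:]):
        step = (out[b] - out[a]) / (b - a)
        for i in range(a + 1, b):
            out[i] = out[a] + step * (i - a)
    return out

print(interpolate_inside([float("nan"), 1.0, float("nan"), 3.0, float("nan")]))
# [nan, 1.0, 2.0, 3.0, nan] -- leading/trailing NaNs stay NaN
```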
LuciferYang commented on PR #36515:
URL: https://github.com/apache/spark/pull/36515#issuecomment-1126870123
hmm... @sunchao any other changes needed?
LuciferYang commented on PR #36078:
URL: https://github.com/apache/spark/pull/36078#issuecomment-1126869252
> Yeah, I have the same thought w/ Sean's
Got it ~
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r873115217
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,18 @@ public static byte[] bufferToArray(ByteBuffer buffer) {