xinrong-meng commented on code in PR #39585:
URL: https://github.com/apache/spark/pull/39585#discussion_r1082016616
##
python/pyspark/sql/connect/functions.py:
##
@@ -2350,8 +2356,21 @@ def unwrap_udt(col: "ColumnOrName") -> Column:
unwrap_udt.__doc__ = pysparkfuncs.unwrap_udt.
dongjoon-hyun closed pull request #38180: [SPARK-40719][SQL] `CTAS` should
respect `TBLPROPERTIES` during execution
URL: https://github.com/apache/spark/pull/38180
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
zhengruifeng commented on PR #39636:
URL: https://github.com/apache/spark/pull/39636#issuecomment-1397792505
thank you @dongjoon-hyun @cloud-fan @HyukjinKwon
vinodkc commented on code in PR #39577:
URL: https://github.com/apache/spark/pull/39577#discussion_r1082013586
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/maskExpressions.scala:
##
@@ -17,27 +17,29 @@
package org.apache.spark.sql.catalyst.expressi
tedyu commented on PR #39654:
URL: https://github.com/apache/spark/pull/39654#issuecomment-1397785390
From https://github.com/tedyu/spark/actions/runs/3961295903/jobs/6786585143
```
Finished test(pypy3): pyspark.sql.tests.test_utils (10s)
Starting test(python3.9): pyspark.mllib.tests
dtenedor opened a new pull request, #39657:
URL: https://github.com/apache/spark/pull/39657
### What changes were proposed in this pull request?
Include column default values in DESCRIBE output.
### Why are the changes needed?
This helps users work with tables and check t
github-actions[bot] commented on PR #38053:
URL: https://github.com/apache/spark/pull/38053#issuecomment-1397774457
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #37360:
URL: https://github.com/apache/spark/pull/37360#issuecomment-1397774479
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #38159: [SPARK-40594][SQL] Eagerly
release hashed relation in ShuffledHashJoin
URL: https://github.com/apache/spark/pull/38159
github-actions[bot] commented on PR #38180:
URL: https://github.com/apache/spark/pull/38180#issuecomment-1397774428
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
zhenlineo commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081981382
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/util/Cleaner.scala:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Fo
dongjoon-hyun commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081977091
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/util/Cleaner.scala:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Softwar
zhenlineo commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081970702
##
connector/connect/client/jvm/pom.xml:
##
@@ -47,6 +47,12 @@
+
Review Comment:
This is not a release blocker for Spark. But it is ni
zhenlineo commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081967771
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/util/Cleaner.scala:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Fo
dongjoon-hyun closed pull request #38005: [SPARK-40550][SQL] DataSource V2:
Handle DELETE commands for delta-based sources
URL: https://github.com/apache/spark/pull/38005
dongjoon-hyun commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1397734988
All tests passed. Merged to master for Apache Spark 3.4.0.
![Screen Shot 2023-01-19 at 3 19 10
PM](https://user-images.githubusercontent.com/9700541/213583682-080d7e88-86f9-4e
dongjoon-hyun commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081964708
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/util/Cleaner.scala:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Softwar
dongjoon-hyun commented on PR #39541:
URL: https://github.com/apache/spark/pull/39541#issuecomment-1397726617
Thank you, @zhenlineo !
zhenlineo commented on PR #39541:
URL: https://github.com/apache/spark/pull/39541#issuecomment-1397725614
Yes, I will fix the jar finding for Scala 2.13. I will also see if there is
an easy way to ask sbt/maven to build the server jar before the tests too.
Thanks a lot for reporting the issue.
zhenlineo commented on PR #39541:
URL: https://github.com/apache/spark/pull/39541#issuecomment-1397717645
Hi @dongjoon-hyun Can you give me the command that you used to run a module
test?
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081954269
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge(F
mridulm commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081952766
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge
mridulm commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081952766
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge
allisonwang-db opened a new pull request, #39656:
URL: https://github.com/apache/spark/pull/39656
### What changes were proposed in this pull request?
This PR adds two new built-in table-valued functions in the table function
registry: `inline` and `inline_outer`.
### Why a
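For readers unfamiliar with these generators: `inline` expands an array of structs into one row per struct, and `inline_outer` additionally emits a NULL row when the array is empty or NULL. A pure-Python sketch of those semantics (illustrative only, not Spark's implementation; the two-field NULL row below is an assumption for the example):

```python
def inline(arr):
    """Expand an array of structs (tuples) into one output row per struct."""
    return [tuple(struct) for struct in (arr or [])]

def inline_outer(arr, num_fields=2):
    """Like inline(), but emit a single all-NULL row for an empty/NULL array."""
    rows = inline(arr)
    return rows if rows else [(None,) * num_fields]

rows = inline([(1, "a"), (2, "b")])      # two output rows
dropped = inline(None)                   # inline drops the row entirely
kept = inline_outer([], num_fields=2)    # inline_outer keeps a NULL row
```

Registering them as table-valued functions means they can also appear in a `FROM` clause rather than only in the `SELECT` list.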
dongjoon-hyun commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081947954
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/connect/client/util/RemoteSparkSession.scala:
##
@@ -0,0 +1,198 @@
+/*
+ * Licensed to the Apa
dongjoon-hyun commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081945632
##
connector/connect/client/jvm/pom.xml:
##
@@ -47,6 +47,12 @@
+
Review Comment:
Hi, @hvanhovell .
I believe this is not a
aokolnychyi commented on PR #32921:
URL: https://github.com/apache/spark/pull/32921#issuecomment-1397703601
Hi, @LorenzoMartini! I am not sure how much `SupportsRuntimeFiltering` API
will be helpful for built-in sources because Spark treats them in a special
way. For instance, `PushDownUtil
hvanhovell commented on PR #39541:
URL: https://github.com/apache/spark/pull/39541#issuecomment-1397666200
Merged
hvanhovell closed pull request #39541: [SPARK-42043][CONNECT] Scala Client
Result with E2E Tests
URL: https://github.com/apache/spark/pull/39541
gengliangwang closed pull request #39275: [SPARK-41759][CORE] Use `weakIntern`
on string values in create new objects during deserialization
URL: https://github.com/apache/spark/pull/39275
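For context, a `weakIntern`-style helper (as in Guava's `Interners.newWeakInterner()`) deduplicates equal strings during deserialization without pinning them in memory forever: unused canonical copies can still be garbage-collected. A minimal Python sketch of the idea, not Spark's implementation (all names here are illustrative):

```python
import weakref

class _InternedStr(str):
    """str subclass so instances can hold weak references (plain str cannot)."""

# Canonical strings live here only while something else still references them.
_pool = weakref.WeakValueDictionary()

def weak_intern(s: str) -> str:
    """Return a canonical copy of s, so repeated equal strings share one object."""
    canonical = _pool.get(s)
    if canonical is None:
        canonical = _InternedStr(s)
        _pool[s] = canonical
    return canonical

a = weak_intern("executor-1")
b = weak_intern("executor-1")
# a and b are the same object, so equal strings are stored once.
```

Deserializing many status objects tends to produce huge numbers of duplicate small strings (executor IDs, host names), which is why interning at object-creation time pays off.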
gengliangwang commented on PR #39275:
URL: https://github.com/apache/spark/pull/39275#issuecomment-1397656825
Thanks, merging to master
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081692479
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge(F
huaxingao commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1397613359
+1 LGTM
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081850206
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -274,6 +274,120 @@ case class ReplaceData(
}
}
+/**
+ * Wri
grundprinzip commented on code in PR #39541:
URL: https://github.com/apache/spark/pull/39541#discussion_r1081830731
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (AS
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081828234
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -274,6 +274,120 @@ case class ReplaceData(
}
}
+/**
+ * Wri
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081825442
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -477,6 +507,73 @@ object DataWritingSparkTask extends
rithwik-db commented on code in PR #39299:
URL: https://github.com/apache/spark/pull/39299#discussion_r1081819229
##
python/pyspark/ml/torch/distributor.py:
##
@@ -72,6 +77,19 @@ def get_conf_boolean(sc: SparkContext, key: str,
default_value: str) -> bool:
)
+def get_l
rithwik-db commented on code in PR #39637:
URL: https://github.com/apache/spark/pull/39637#discussion_r1081814743
##
python/pyspark/ml/torch/tests/test_distributor.py:
##
@@ -288,6 +288,13 @@ def test_local_training_succeeds(self) -> None:
if cuda_env_var:
viirya commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081796690
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -477,6 +507,73 @@ object DataWritingSparkTask extends Loggi
chaoqin-li1123 commented on PR #39647:
URL: https://github.com/apache/spark/pull/39647#issuecomment-1397535787
The test failure seems irrelevant
(https://github.com/chaoqin-li1123/spark/actions/runs/3956602101/jobs/6776029863#step:11:1317).
dongjoon-hyun closed pull request #39655: [SPARK-42116][SQL][TESTS] Mark
`ColumnarBatchSuite` as `ExtendedSQLTest`
URL: https://github.com/apache/spark/pull/39655
dongjoon-hyun commented on PR #39655:
URL: https://github.com/apache/spark/pull/39655#issuecomment-1397531305
Merged to master for Apache Spark 3.4.0.
viirya commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1081781360
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -274,6 +274,120 @@ case class ReplaceData(
}
}
+/**
+ * Writes a
dongjoon-hyun commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081772125
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffl
rithwik-db commented on code in PR #39637:
URL: https://github.com/apache/spark/pull/39637#discussion_r1081766800
##
python/pyspark/ml/torch/tests/test_distributor.py:
##
@@ -286,6 +326,23 @@ def tearDown(self) -> None:
os.unlink(self.tempFile.name)
self.spark.
gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1081743950
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1081742355
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
gengliangwang commented on code in PR #39508:
URL: https://github.com/apache/spark/pull/39508#discussion_r1081741822
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveReferencesInAggregate.scala:
##
@@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Soft
jchen5 commented on PR #39375:
URL: https://github.com/apache/spark/pull/39375#issuecomment-1397459943
Definitely, I added some more tests. This is the set of factors I tested:
- Subquery type:
- Eligible for DecorrelateInnerQuery: Scalar, lateral join
- Not supported: EXISTS (new
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081692479
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffleMerge(F
dongjoon-hyun commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081684212
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -815,7 +815,7 @@ public MergeStatuses
finalizeShuffl
srielau commented on code in PR #38419:
URL: https://github.com/apache/spark/pull/38419#discussion_r1081675876
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala:
##
@@ -1432,6 +1681,53 @@ case class Logarithm(left: Expression, right:
srielau commented on code in PR #38419:
URL: https://github.com/apache/spark/pull/38419#discussion_r1081669031
##
sql/core/src/test/resources/sql-tests/inputs/trunc.sql:
##
@@ -0,0 +1,136 @@
+-- trunc decimal
Review Comment:
Can you add some tests for the result type, specia
srielau commented on PR #38419:
URL: https://github.com/apache/spark/pull/38419#issuecomment-1397428887
What is the result type? Does it match the input?
tedyu commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081661106
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -848,13 +848,13 @@ public void registerExecutor(String appId,
dongjoon-hyun commented on PR #39655:
URL: https://github.com/apache/spark/pull/39655#issuecomment-1397423585
Thank you, @huaxingao !
dongjoon-hyun commented on code in PR #39654:
URL: https://github.com/apache/spark/pull/39654#discussion_r1081658172
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -848,13 +848,13 @@ public void registerExecutor(Strin
dongjoon-hyun commented on PR #39655:
URL: https://github.com/apache/spark/pull/39655#issuecomment-1397414001
Could you review this, @huaxingao ?
dongjoon-hyun opened a new pull request, #39655:
URL: https://github.com/apache/spark/pull/39655
### What changes were proposed in this pull request?
This PR aims to mark `ColumnarBatchSuite` as `ExtendedSQLTest`
### Why are the changes needed?
### Does this PR introd
antonipp commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1073834555
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala:
##
@@ -168,27 +168,27 @@ private[spark] class Ba
dtenedor commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081597693
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala:
##
@@ -227,7 +227,7 @@ object LogicalPlanIntegrity {
* this method ch
aokolnychyi commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1397327118
I've updated this PR and its description so it is ready for another look.
antonipp commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1081559422
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala:
##
@@ -353,3 +381,16 @@ class BasicDriverFe
dongjoon-hyun closed pull request #39651: [SPARK-42113][PS][INFRA] Upgrade
pandas to 1.5.3
URL: https://github.com/apache/spark/pull/39651
cloud-fan commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081550033
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/rules/RuleExecutor.scala:
##
@@ -151,12 +152,15 @@ abstract class RuleExecutor[TreeType <: TreeNode[_]]
xkrogen commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081513455
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -304,6 +304,14 @@ object SQLConf {
.stringConf
.createOptional
+ val PLAN
xkrogen commented on code in PR #39592:
URL: https://github.com/apache/spark/pull/39592#discussion_r1081513455
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -304,6 +304,14 @@ object SQLConf {
.stringConf
.createOptional
+ val PLAN
peter-toth commented on PR #37525:
URL: https://github.com/apache/spark/pull/37525#issuecomment-1397205575
> I've rebased the PR on #39652, that is not yet merged, so there is an
extra commit
([59646bb](https://github.com/apache/spark/commit/59646bbc26476ec957fd7bff8cbae317791dc228))
in th
EnricoMi commented on PR #39640:
URL: https://github.com/apache/spark/pull/39640#issuecomment-1397205310
All changes done, all tests green.
peter-toth commented on PR #39652:
URL: https://github.com/apache/spark/pull/39652#issuecomment-1397202187
Thanks for the quick review!
cloud-fan closed pull request #39652: [SPARK-40599][SQL] Relax multiTransform
rule type to allow alternatives to be any kinds of Seq
URL: https://github.com/apache/spark/pull/39652
cloud-fan commented on PR #39652:
URL: https://github.com/apache/spark/pull/39652#issuecomment-1397201326
thanks, merging to master!
peter-toth commented on PR #37525:
URL: https://github.com/apache/spark/pull/37525#issuecomment-1397176778
I've rebased the PR on https://github.com/apache/spark/pull/39652, that is
not yet merged, so there is an extra commit
(https://github.com/apache/spark/pull/37525/commits/59646bbc26476
dongjoon-hyun commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1081452390
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala:
##
@@ -353,3 +381,16 @@ class BasicDri
dongjoon-hyun commented on code in PR #38376:
URL: https://github.com/apache/spark/pull/38376#discussion_r1081451704
##
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala:
##
@@ -353,3 +381,16 @@ class BasicDri
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081450369
##
sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala:
##
@@ -1314,6 +1313,135 @@ class PlannerSuite extends SharedSparkSession with
Adaptive
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081449673
##
sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala:
##
@@ -1314,6 +1313,135 @@ class PlannerSuite extends SharedSparkSession with
Adaptive
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081449349
##
sql/core/src/main/scala/org/apache/spark/sql/execution/AliasAwareOutputExpression.scala:
##
@@ -74,18 +73,4 @@ trait AliasAwareOutputPartitioning extends
AliasAw
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081449075
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081448247
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081448596
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081447485
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
peter-toth commented on code in PR #37525:
URL: https://github.com/apache/spark/pull/37525#discussion_r1081446977
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/AliasAwareOutputExpression.scala:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
dongjoon-hyun commented on PR #39649:
URL: https://github.com/apache/spark/pull/39649#issuecomment-1397128418
Thank you, @HyukjinKwon !
tedyu commented on PR #39654:
URL: https://github.com/apache/spark/pull/39654#issuecomment-1397097053
cc @mridulm
tedyu opened a new pull request, #39654:
URL: https://github.com/apache/spark/pull/39654
### What changes were proposed in this pull request?
This PR adds `ioe` to the warning log of `finalizeShuffleMerge`.
### Why are the changes needed?
With `ioe` logged, user would have more c
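The general pattern is attaching the caught exception to the warning so operators see the cause, not just the fact of failure. A hedged Python analogue of that logging fix (the function and message are hypothetical stand-ins for the Java code):

```python
import logging

log = logging.getLogger("RemoteBlockPushResolver")

def finalize_shuffle_merge(app_id: str) -> None:
    try:
        # Stand-in for the I/O work that can fail during finalization.
        raise IOError("could not write merged shuffle file")
    except IOError as ioe:
        # Without exc_info the log only says *that* finalization failed;
        # attaching the exception preserves *why* (message and traceback).
        log.warning("Exception finalizing shuffle merge for app %s",
                    app_id, exc_info=ioe)

finalize_shuffle_merge("app-20230119")
```

In Java the equivalent is simply passing `ioe` as the last argument of the logger call so the logged record carries the stack trace.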
srowen commented on PR #39190:
URL: https://github.com/apache/spark/pull/39190#issuecomment-1397061880
Or maybe more to the point, do you have a concrete example of how this
arises in Spark?
srowen commented on PR #39190:
URL: https://github.com/apache/spark/pull/39190#issuecomment-1397061274
It makes sense to me. I don't know a lot about this code, so I hesitate to
review it. Does this only affect display metrics? I'm just wondering why it
hadn't caused a problem before. Maybe i
HyukjinKwon closed pull request #39639: [SPARK-42080][PYTHON][DOCS] Add
guideline for PySpark errors
URL: https://github.com/apache/spark/pull/39639
HyukjinKwon closed pull request #39649: [SPARK-42111][SQL][TESTS] Mark
`Orc*FilterSuite/OrcV*SchemaPruningSuite` as `ExtendedSQLTest`
URL: https://github.com/apache/spark/pull/39649
HyukjinKwon commented on PR #39649:
URL: https://github.com/apache/spark/pull/39649#issuecomment-1396979534
Merged to master.
HyukjinKwon commented on code in PR #39585:
URL: https://github.com/apache/spark/pull/39585#discussion_r1081248787
##
python/pyspark/sql/connect/functions.py:
##
@@ -2350,8 +2356,21 @@ def unwrap_udt(col: "ColumnOrName") -> Column:
unwrap_udt.__doc__ = pysparkfuncs.unwrap_udt._
HyukjinKwon commented on code in PR #39653:
URL: https://github.com/apache/spark/pull/39653#discussion_r1081229852
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -1006,7 +1006,7 @@ object CollapseProject extends Rule[LogicalPlan] wi
HyukjinKwon opened a new pull request, #39653:
URL: https://github.com/apache/spark/pull/39653
### What changes were proposed in this pull request?
This PR proposes to enable pushing down the limit through Python UDFs by
disabling `PushProjectionThroughLimit` and `CollapseProject` if
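The payoff of keeping the limit below a projection containing a Python UDF is easiest to see outside Spark: if the limit is not pushed down, the expensive function runs on every input row before rows are discarded. A plain-Python sketch of that cost difference (illustrative only):

```python
calls = 0

def expensive_udf(x: int) -> int:
    """Stand-in for a costly Python UDF; counts how often it is invoked."""
    global calls
    calls += 1
    return x * 2

rows = list(range(1000))

# Limit above the projection: the UDF runs on all 1000 rows, then 990
# results are thrown away.
calls = 0
project_then_limit = [expensive_udf(x) for x in rows][:10]
calls_without_pushdown = calls

# Limit pushed below the projection: the UDF runs on only 10 rows.
calls = 0
limit_then_project = [expensive_udf(x) for x in rows[:10]]
calls_with_pushdown = calls
```

Both orders produce identical results here, which is the precondition for the optimizer being allowed to reorder them.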
panbingkun commented on code in PR #39275:
URL: https://github.com/apache/spark/pull/39275#discussion_r1081187372
##
core/src/main/scala/org/apache/spark/status/protobuf/StageDataWrapperSerializer.scala:
##
@@ -393,10 +393,8 @@ class StageDataWrapperSerializer extends
ProtobufS
EnricoMi commented on code in PR #39640:
URL: https://github.com/apache/spark/pull/39640#discussion_r1081178784
##
sql/core/src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:
##
@@ -387,7 +400,27 @@ public void testGroupBy() {
},
Encoders.STRING());
-
panbingkun commented on code in PR #39275:
URL: https://github.com/apache/spark/pull/39275#discussion_r1081174525
##
core/src/main/scala/org/apache/spark/status/protobuf/PoolDataSerializer.scala:
##
@@ -34,7 +33,7 @@ class PoolDataSerializer extends ProtobufSerDe[PoolData] {
panbingkun commented on PR #39275:
URL: https://github.com/apache/spark/pull/39275#issuecomment-1396852540
> https://user-images.githubusercontent.com/1097932/213345430-088ace51-e8ab-4f2b-9097-0184ab94efb8.png
>
> @panbingkun there are 7 usages in from live entities, while there are
codecov-commenter commented on PR #39647:
URL: https://github.com/apache/spark/pull/39647#issuecomment-1396839380
#
[Codecov](https://codecov.io/gh/apache/spark/pull/39647?src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=The+Apache+Soft