zhengruifeng commented on code in PR #38578:
URL: https://github.com/apache/spark/pull/38578#discussion_r1018862065
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/dsl/package.scala:
##
@@ -227,6 +227,21 @@ package object dsl {
}
}
+implicit
grundprinzip commented on code in PR #38535:
URL: https://github.com/apache/spark/pull/38535#discussion_r1018891531
##
python/pyspark/sql/connect/client.py:
##
@@ -125,13 +126,30 @@ def metadata(self) -> typing.Iterable[typing.Tuple[str,
str]]:
@property
def
bjornjorgensen commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1310115370
Tests are failing because of this.
https://github.com/apache/spark/commit/c2a8e48e70abfb6bd101c99c5a0f6017151fc85e
--
This is an automated message from the Apache Git Service.
cloud-fan commented on code in PR #38582:
URL: https://github.com/apache/spark/pull/38582#discussion_r1019088145
##
core/src/main/scala/org/apache/spark/SparkException.scala:
##
@@ -68,6 +68,17 @@ class SparkException(
}
object SparkException {
+ def internalError(msg:
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019092340
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019117026
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
bjornjorgensen commented on PR #38601:
URL: https://github.com/apache/spark/pull/38601#issuecomment-1310324785
@Yikun
HyukjinKwon commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018837877
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -83,7 +83,6 @@ message Response {
int64 uncompressed_bytes = 2;
Review Comment:
HyukjinKwon commented on code in PR #38535:
URL: https://github.com/apache/spark/pull/38535#discussion_r1018848047
##
python/pyspark/sql/connect/client.py:
##
@@ -235,23 +260,30 @@ def fromProto(cls, pb: typing.Any) -> "AnalyzeResult":
class RemoteSparkSession(object):
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018938395
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018995213
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019036963
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -117,10 +129,91 @@ class
pan3793 commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019048009
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -117,10 +129,91 @@ class
cloud-fan commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019077400
##
sql/core/src/test/scala/org/apache/spark/sql/SubqueryHintPropagationSuite.scala:
##
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019090800
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019090389
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
pan3793 commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019058105
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019118121
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -128,6 +128,92 @@ private[sql] object ArrowConverters extends Logging
bjornjorgensen opened a new pull request, #38601:
URL: https://github.com/apache/spark/pull/38601
### What changes were proposed in this pull request?
Upgrade the Ubuntu version of the GitHub Actions runners from 20.04 to latest
### Why are the changes needed?
###
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019152492
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -128,6 +128,92 @@ private[sql] object ArrowConverters extends Logging {
HyukjinKwon commented on code in PR #38166:
URL: https://github.com/apache/spark/pull/38166#discussion_r1018835131
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectProtoSuite.scala:
##
@@ -81,6 +81,31 @@ class SparkConnectProtoSuite extends
HyukjinKwon commented on code in PR #38535:
URL: https://github.com/apache/spark/pull/38535#discussion_r1018850304
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -195,7 +195,15 @@ def test_invalid_connection_strings(self):
for i in invalid:
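The loop above exercises invalid connection strings; a minimal sketch of such client-side validation (assuming an `sc://host[:port][/;key=value]*` scheme and a 15002 default port; the helper below is hypothetical, not the PR's code):

```python
from urllib.parse import urlparse


def parse_connection_string(conn: str) -> dict:
    """Hypothetical parser for 'sc://host[:port][/;key=value...]' strings.

    Raises ValueError on malformed input, mirroring what a client-side
    validity check might do.
    """
    url = urlparse(conn)
    if url.scheme != "sc":
        raise ValueError(f"URL scheme must be 'sc', got: {url.scheme!r}")
    if not url.hostname:
        raise ValueError("Missing host in connection string")
    params = {}
    # Parameters are ';'-separated key=value pairs carried in the path.
    for part in url.path.split(";"):
        if not part or part == "/":
            continue
        if "=" not in part:
            raise ValueError(f"Malformed parameter: {part!r}")
        key, value = part.split("=", 1)
        params[key] = value
    return {"host": url.hostname, "port": url.port or 15002, "params": params}
```

A test like the one above would then assert that each invalid string raises `ValueError`.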
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018875460
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -83,7 +83,6 @@ message Response {
int64 uncompressed_bytes = 2;
Review Comment:
pan3793 commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018969469
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
pan3793 commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019007963
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
cloud-fan commented on code in PR #38166:
URL: https://github.com/apache/spark/pull/38166#discussion_r1019060189
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectProtoSuite.scala:
##
@@ -81,6 +81,31 @@ class SparkConnectProtoSuite extends
LuciferYang commented on PR #38599:
URL: https://github.com/apache/spark/pull/38599#issuecomment-1310380670
Some GA tasks were killed
zhengruifeng commented on PR #38597:
URL: https://github.com/apache/spark/pull/38597#issuecomment-1309997726
@grundprinzip I don't know, the package versions do not change
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018944392
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018951362
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
cloud-fan commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019079991
##
sql/core/src/test/scala/org/apache/spark/sql/SubqueryHintPropagationSuite.scala:
##
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
LuciferYang commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1310243986
rebased, thanks @bjornjorgensen
zhengruifeng commented on PR #38597:
URL: https://github.com/apache/spark/pull/38597#issuecomment-1309998801
python linter in this PR has passed
HyukjinKwon commented on PR #38597:
URL: https://github.com/apache/spark/pull/38597#issuecomment-130519
Merged to master.
HyukjinKwon closed pull request #38597:
[SPARK-41034][CONNECT][PYTHON][FOLLOW-UP] Fix mypy annotations test
URL: https://github.com/apache/spark/pull/38597
HyukjinKwon opened a new pull request, #38599:
URL: https://github.com/apache/spark/pull/38599
### What changes were proposed in this pull request?
This PR proposes to clean all (except the files in the Git repository) before
running Mima.
### Why are the changes needed?
grundprinzip commented on code in PR #38535:
URL: https://github.com/apache/spark/pull/38535#discussion_r1018869517
##
python/pyspark/sql/connect/client.py:
##
@@ -125,13 +126,30 @@ def metadata(self) -> typing.Iterable[typing.Tuple[str,
str]]:
@property
def
zhengruifeng closed pull request #38578: [SPARK-41064][CONNECT][PYTHON]
Implement `DataFrame.crosstab` and `DataFrame.stat.crosstab`
URL: https://github.com/apache/spark/pull/38578
zhengruifeng commented on PR #38578:
URL: https://github.com/apache/spark/pull/38578#issuecomment-1310060863
Merged into master, thank you all for the reviews
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018928333
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -128,6 +128,97 @@ private[sql] object ArrowConverters extends Logging
AmplabJenkins commented on PR #38574:
URL: https://github.com/apache/spark/pull/38574#issuecomment-1310144467
Can one of the admins verify this patch?
itholic opened a new pull request, #38600:
URL: https://github.com/apache/spark/pull/38600
### What changes were proposed in this pull request?
This PR proposes to rename `GROUP_BY_POS_REFERS_AGG_EXPR` to
`GROUP_BY_POS_AGGREGATE`
### Why are the changes needed?
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019093136
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019093054
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019094064
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019100196
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019100602
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +120,93 @@ class
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019099531
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
LuciferYang commented on code in PR #38569:
URL: https://github.com/apache/spark/pull/38569#discussion_r1018831510
##
core/src/main/resources/error/error-classes.json:
##
@@ -469,6 +469,11 @@
"Grouping sets size cannot be greater than "
]
},
+
fred-db commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1018854115
##
sql/core/src/test/scala/org/apache/spark/sql/SubqueryHintPropagationSuite.scala:
##
@@ -0,0 +1,183 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018939295
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
yabola commented on PR #38560:
URL: https://github.com/apache/spark/pull/38560#issuecomment-1310162375
> I am wondering whether the driver needs to pass the merged reduceId to the
external shuffle service (but now the driver cannot fully record merged info),
or the shuffle service records
hvanhovell commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019032161
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
cloud-fan commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019074653
##
sql/core/src/test/scala/org/apache/spark/sql/SubqueryHintPropagationSuite.scala:
##
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019154009
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -128,6 +128,97 @@ private[sql] object ArrowConverters extends Logging {
zhengruifeng commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019170138
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -128,6 +128,97 @@ private[sql] object ArrowConverters extends Logging
HyukjinKwon commented on code in PR #38535:
URL: https://github.com/apache/spark/pull/38535#discussion_r1018846038
##
python/pyspark/sql/connect/client.py:
##
@@ -125,13 +126,30 @@ def metadata(self) -> typing.Iterable[typing.Tuple[str,
str]]:
@property
def
grundprinzip commented on code in PR #38535:
URL: https://github.com/apache/spark/pull/38535#discussion_r1018868868
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -195,7 +195,15 @@ def test_invalid_connection_strings(self):
for i in invalid:
pan3793 commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1018997361
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
cloud-fan commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019068317
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala:
##
@@ -148,7 +150,7 @@ object RewritePredicateSubquery extends
grundprinzip commented on PR #38468:
URL: https://github.com/apache/spark/pull/38468#issuecomment-1310307256
I would like to close the discussion on ordered vs. unordered results.
1) For simple clients, ordered results are what they expect, and it follows
the precedent of what users
bozhang2820 opened a new pull request, #38602:
URL: https://github.com/apache/spark/pull/38602
### What changes were proposed in this pull request?
Exceptions thrown in `SparkHadoopWriter.write` are wrapped with
`SparkException("Job aborted.")`, which provides little extra
cloud-fan commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019064915
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala:
##
@@ -52,10 +52,12 @@ object RewritePredicateSubquery extends
cloud-fan commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019082930
##
sql/core/src/test/scala/org/apache/spark/sql/SubqueryHintPropagationSuite.scala:
##
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019104867
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -128,6 +128,92 @@ private[sql] object ArrowConverters extends Logging {
cloud-fan commented on code in PR #38468:
URL: https://github.com/apache/spark/pull/38468#discussion_r1019151257
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -114,10 +123,97 @@ class
Yikun commented on PR #38601:
URL: https://github.com/apache/spark/pull/38601#issuecomment-1310345629
I think we should upgrade to `22.04` when the GitHub Actions `ubuntu-latest`
label points to 22.04, rather than using `ubuntu-latest` directly, to reduce
the potential impact of an OS upgrade breaking
MaxGekk commented on PR #38569:
URL: https://github.com/apache/spark/pull/38569#issuecomment-1310561698
+1, LGTM. Merging to master.
Thank you, @itholic and @srielau @LuciferYang for review.
pan3793 commented on PR #38596:
URL: https://github.com/apache/spark/pull/38596#issuecomment-1310574000
cc @srowen
cloud-fan commented on PR #38511:
URL: https://github.com/apache/spark/pull/38511#issuecomment-1310574974
also cc @wangyum @ulysses-you
amaliujia commented on PR #38578:
URL: https://github.com/apache/spark/pull/38578#issuecomment-1310695917
Thanks. Late LGTM
amaliujia commented on PR #38586:
URL: https://github.com/apache/spark/pull/38586#issuecomment-1310739431
@HyukjinKwon yes, the goal will be matching the API shape of the `Column` in
Python/Scala (likely Python first if there is an API difference between Python
and Scala).
This PR is
mridulm commented on PR #38560:
URL: https://github.com/apache/spark/pull/38560#issuecomment-1310753366
This is closely related to https://github.com/apache/spark/pull/37922 by
@wankunde. That PR seems to be having build issues, and so has not made
progress.
srowen closed pull request #38593: [SPARK-41089][YARN][SHUFFLE] Relocate Netty
native arm64 libs
URL: https://github.com/apache/spark/pull/38593
MaxGekk closed pull request #38569: [SPARK-41055][SQL] Rename
`_LEGACY_ERROR_TEMP_2424` to `GROUP_BY_AGGREGATE`
URL: https://github.com/apache/spark/pull/38569
amaliujia commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1019478368
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
if self._plan is None:
return []
mridulm commented on PR #38602:
URL: https://github.com/apache/spark/pull/38602#issuecomment-1310755154
It is wrapped in `SparkException` specifically since we handle
`SparkException` in various codepaths for task failure handling.
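The rationale above, wrapping failures in one well-known exception type so shared failure-handling code can match on it while the original error stays attached as the cause, can be sketched in Python (`JobAbortedError` and `write_partition` are hypothetical stand-ins, not Spark's classes):

```python
class JobAbortedError(RuntimeError):
    """Hypothetical wrapper type that failure-handling paths match on."""


def write_partition(fail: bool) -> None:
    try:
        if fail:
            raise OSError("disk full")  # the underlying failure
    except Exception as exc:
        # Wrap in the well-known type, chaining the cause so the original
        # error and stack trace survive ('raise ... from exc').
        raise JobAbortedError("Job aborted.") from exc


try:
    write_partition(fail=True)
except JobAbortedError as err:                 # handlers key on the wrapper...
    assert isinstance(err.__cause__, OSError)  # ...and can inspect the cause
```

This mirrors the trade-off raised in PR #38602: the wrapper keeps handler code uniform, while the chained cause preserves the informative part of the error.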
LuciferYang commented on PR #38575:
URL: https://github.com/apache/spark/pull/38575#issuecomment-1310402593
Is the SparkR UTs failure related to this one?
https://github.com/itholic/spark/actions/runs/3425639144/jobs/5708796073
```
══ Failed
amaliujia commented on PR #38594:
URL: https://github.com/apache/spark/pull/38594#issuecomment-1310693233
LGTM!
LuciferYang commented on PR #38602:
URL: https://github.com/apache/spark/pull/38602#issuecomment-1310395116
Can you give a comparison of the exception stack before and after this PR in
the PR description?
LuciferYang commented on PR #38575:
URL: https://github.com/apache/spark/pull/38575#issuecomment-1310407470
Looks like we need to refactor this case
https://github.com/apache/spark/blob/c5d27603f29437f1686cac70727594c19410a273/R/pkg/tests/fulltests/test_sparkSQL.R#L3986-L3998
MaxGekk commented on code in PR #38582:
URL: https://github.com/apache/spark/pull/38582#discussion_r1019383970
##
core/src/main/scala/org/apache/spark/SparkException.scala:
##
@@ -68,6 +68,17 @@ class SparkException(
}
object SparkException {
+ def internalError(msg:
mridulm commented on PR #38091:
URL: https://github.com/apache/spark/pull/38091#issuecomment-1310742436
The fix for this was merged recently - did you update to the latest,
@LuciferYang?
LuciferYang commented on PR #38589:
URL: https://github.com/apache/spark/pull/38589#issuecomment-1310450309
Maven tests all passed
maryannxue commented on code in PR #38558:
URL: https://github.com/apache/spark/pull/38558#discussion_r1019286861
##
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala:
##
@@ -205,6 +205,8 @@ case class AdaptiveSparkPlanExec(
def
srowen commented on PR #38596:
URL: https://github.com/apache/spark/pull/38596#issuecomment-1310617398
Sounds ok. How far back should this backport?
AmplabJenkins commented on PR #38568:
URL: https://github.com/apache/spark/pull/38568#issuecomment-1310627176
Can one of the admins verify this patch?
srowen commented on PR #38593:
URL: https://github.com/apache/spark/pull/38593#issuecomment-1310419914
Merged to master/3.3
fred-db commented on code in PR #38497:
URL: https://github.com/apache/spark/pull/38497#discussion_r1019250270
##
sql/core/src/test/scala/org/apache/spark/sql/SubqueryHintPropagationSuite.scala:
##
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
srielau commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1019261584
##
core/src/main/resources/error/error-classes.json:
##
@@ -1277,6 +1277,11 @@
"A correlated outer name reference within a subquery expression body
was not
1 - 100 of 314 matches