HeartSaVioR commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1021175378
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsInPandasWithStateSuite.scala:
##
@@ -240,25 +240,30 @@ class
HeartSaVioR commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1021193999
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends
zhengruifeng opened a new pull request, #38653:
URL: https://github.com/apache/spark/pull/38653
### What changes were proposed in this pull request?
Implement `DataFrame.fillna` and `DataFrame.na.fill`
### Why are the changes needed?
For API coverage
### Does
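For context on the API pair being implemented, `DataFrame.na.fill` and `DataFrame.fillna` replace nulls with either a single scalar (optionally restricted to a column subset) or a per-column dict. A minimal pure-Python sketch of those semantics over rows-as-dicts (illustrative only, not the Spark Connect implementation; the `fillna` helper and its signature are made up for this sketch):

```python
def fillna(rows, value, subset=None):
    """Replace None cells, mirroring DataFrame.fillna semantics:
    `value` is either a scalar applied to all (or `subset`) columns,
    or a dict mapping column name -> replacement (subset is ignored)."""
    filled = []
    for row in rows:
        new_row = dict(row)
        for col, cell in row.items():
            if cell is not None:
                continue
            if isinstance(value, dict):
                if col in value:
                    new_row[col] = value[col]
            elif subset is None or col in subset:
                new_row[col] = value
        filled.append(new_row)
    return filled

rows = [{"age": None, "name": "alice"}, {"age": 3, "name": None}]
print(fillna(rows, 0, subset=["age"]))
# -> [{'age': 0, 'name': 'alice'}, {'age': 3, 'name': None}]
print(fillna(rows, {"age": 0, "name": "unknown"}))
# -> [{'age': 0, 'name': 'alice'}, {'age': 3, 'name': 'unknown'}]
```

With this shape, `na.fill` can simply delegate to `fillna`, which is why the two entry points are expected to accept the same argument types.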
MaxGekk opened a new pull request, #38656:
URL: https://github.com/apache/spark/pull/38656
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was this
itholic commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1313545319
Thanks for fixing this.
Do you know why this suddenly started failing?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to
cloud-fan closed pull request #38626: [SPARK-38959][SQL][FOLLOWUP] Do not
optimize subqueries twice
URL: https://github.com/apache/spark/pull/38626
pan3793 opened a new pull request, #38651:
URL: https://github.com/apache/spark/pull/38651
### What changes were proposed in this pull request?
Shorten graceful shutdown time of `ExecutorPodsSnapshotsStoreImpl#stop` from
30s to 20s to prevent blocking shutdown process
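The pattern being tuned here is a bounded graceful shutdown: signal the snapshot-processing loop to stop, wait at most a grace period for it to drain, then let the caller proceed rather than block the whole shutdown sequence. A rough Python analogue (class name, timeout, and structure are illustrative, not the actual `ExecutorPodsSnapshotsStoreImpl` code):

```python
import threading

class SnapshotsStore:
    """Toy event-processing loop with a bounded graceful stop."""

    def __init__(self):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Stand-in for periodically polling/processing pod snapshots.
        while not self._stop.is_set():
            self._stop.wait(0.01)

    def stop(self, grace_seconds=20.0):
        """Signal the loop to exit and wait at most `grace_seconds`.
        Returns True if the loop drained in time, False if we gave up
        waiting (the caller then continues shutting down anyway)."""
        self._stop.set()
        self._thread.join(timeout=grace_seconds)
        return not self._thread.is_alive()

store = SnapshotsStore()
print(store.stop(grace_seconds=2.0))  # -> True
```

Shortening the grace period (30s to 20s in the PR) trades a little drain time for a faster worst-case shutdown, which matters when `stop` sits on the JVM shutdown path.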
cloud-fan commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021211335
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
-
itholic commented on PR #38652:
URL: https://github.com/apache/spark/pull/38652#issuecomment-1313290570
cc @MaxGekk @srielau
itholic opened a new pull request, #38657:
URL: https://github.com/apache/spark/pull/38657
### What changes were proposed in this pull request?
This PR proposes to improve the error message and test for
`PYTHON_UDF_IN_ON_CLAUSE`
### Why are the changes needed?
The
MaxGekk commented on code in PR #38646:
URL: https://github.com/apache/spark/pull/38646#discussion_r1021194056
##
core/src/main/resources/error/error-classes.json:
##
@@ -1044,7 +1044,7 @@
},
"UNRESOLVED_MAP_KEY" : {
"message" : [
- "Cannot resolve column as a
grundprinzip commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021214764
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
cloud-fan commented on code in PR #38648:
URL: https://github.com/apache/spark/pull/38648#discussion_r1021221885
##
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala:
##
@@ -494,7 +494,8 @@ object QueryExecution {
private[sql] def
EnricoMi commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021351324
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
LuciferYang commented on code in PR #38615:
URL: https://github.com/apache/spark/pull/38615#discussion_r1021384022
##
core/src/main/resources/error/error-classes.json:
##
@@ -630,6 +630,23 @@
"Input schema can only contain STRING as a key type for a
MAP."
]
},
cloud-fan commented on PR #38626:
URL: https://github.com/apache/spark/pull/38626#issuecomment-1313245999
The failed test is known to be flaky:
```
SPARK-37555: spark-sql should pass last unclosed comment to backend ***
FAILED *** (2 minutes, 10 seconds)
[info]
cloud-fan commented on PR #38626:
URL: https://github.com/apache/spark/pull/38626#issuecomment-1313244564
thanks for review, merging to master!
HeartSaVioR commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1021196487
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends
itholic opened a new pull request, #38652:
URL: https://github.com/apache/spark/pull/38652
### What changes were proposed in this pull request?
This PR proposes to rename `LATERAL_JOIN_OF_TYPE` to
`INVALID_LATERAL_JOIN_TYPE`.
Also remove this from the sub-class under
MaxGekk commented on code in PR #38648:
URL: https://github.com/apache/spark/pull/38648#discussion_r1021200448
##
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala:
##
@@ -494,7 +494,8 @@ object QueryExecution {
private[sql] def toInternalError(msg:
cloud-fan commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021212753
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -348,7 +350,16 @@ class SparkConnectPlanner(session:
grundprinzip commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021212933
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -348,7 +350,16 @@ class SparkConnectPlanner(session:
cloud-fan commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021219879
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
-
LuciferYang commented on code in PR #38651:
URL: https://github.com/apache/spark/pull/38651#discussion_r1021225985
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshotsStoreImpl.scala:
##
@@ -94,7 +95,9 @@
MaxGekk commented on code in PR #38531:
URL: https://github.com/apache/spark/pull/38531#discussion_r1021226420
##
core/src/main/resources/error/error-classes.json:
##
@@ -290,6 +290,46 @@
"Null typed values cannot be used as arguments of ."
]
},
+
zhengruifeng commented on PR #38654:
URL: https://github.com/apache/spark/pull/38654#issuecomment-1313363736
cc @cloud-fan
zhengruifeng opened a new pull request, #38655:
URL: https://github.com/apache/spark/pull/38655
### What changes were proposed in this pull request?
Make `DataFrame.na.fill` have the same argument types as `DataFrame.fillna`
### Why are the changes needed?
`DataFrame.na.fill`
itholic commented on PR #38657:
URL: https://github.com/apache/spark/pull/38657#issuecomment-1313489138
cc @srielau @MaxGekk
Yikun commented on PR #38064:
URL: https://github.com/apache/spark/pull/38064#issuecomment-1313530277
The `2GB` should not be the limit of GitHub Actions; there is only [a 7GB
LuciferYang commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1313537233
cc @MaxGekk @dongjoon-hyun
to fix the GA task:
https://github.com/LuciferYang/spark/actions/runs/3459895559/jobs/5775820010
grundprinzip commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021203690
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
MaxGekk commented on code in PR #38650:
URL: https://github.com/apache/spark/pull/38650#discussion_r1021209665
##
core/src/main/resources/error/error-classes.json:
##
@@ -616,6 +616,11 @@
],
"sqlState" : "42000"
},
+ "INVALID_EMPTY_LOCATION" : {
+"message" : [
MaxGekk commented on code in PR #38648:
URL: https://github.com/apache/spark/pull/38648#discussion_r1021233627
##
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala:
##
@@ -494,7 +494,8 @@ object QueryExecution {
private[sql] def toInternalError(msg:
zhengruifeng opened a new pull request, #38654:
URL: https://github.com/apache/spark/pull/38654
### What changes were proposed in this pull request?
Document the reason of sending batch in main thread
### Why are the changes needed?
as per
grundprinzip commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1021263016
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -170,6 +170,8 @@ message Expression {
message Alias {
Expression expr = 1;
ulysses-you commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021331641
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
LuciferYang opened a new pull request, #38658:
URL: https://github.com/apache/spark/pull/38658
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
LuciferYang commented on code in PR #38637:
URL: https://github.com/apache/spark/pull/38637#discussion_r1021393419
##
project/plugins.sbt:
##
@@ -25,7 +25,7 @@ libraryDependencies += "com.puppycrawl.tools" % "checkstyle"
% "9.3"
// checkstyle uses guava 31.0.1-jre.
LuciferYang commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1313551135
Still under investigation. It should have been discovered much earlier
itholic commented on code in PR #38646:
URL: https://github.com/apache/spark/pull/38646#discussion_r1021411839
##
core/src/main/resources/error/error-classes.json:
##
@@ -1044,7 +1044,7 @@
},
"UNRESOLVED_MAP_KEY" : {
"message" : [
- "Cannot resolve column as a
cloud-fan commented on PR #38631:
URL: https://github.com/apache/spark/pull/38631#issuecomment-1313590270
@HyukjinKwon can you review the python side?
zero323 commented on PR #38643:
URL: https://github.com/apache/spark/pull/38643#issuecomment-1313603005
LGTM. Thanks @sunchao!
LuciferYang commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1313667810
> Thanks for fixing this.
>
> Do you know why this suddenly started failing?
@itholic
After https://github.com/apache/spark/pull/38615 was merged, there is a code
dengziming opened a new pull request, #38659:
URL: https://github.com/apache/spark/pull/38659
### What changes were proposed in this pull request?
This PR supports local data for LocalRelation, we have 2 approaches to
represent a row:
1. Use Expression.Literal.Struct
2. Use
AmplabJenkins commented on PR #38649:
URL: https://github.com/apache/spark/pull/38649#issuecomment-1313876520
Can one of the admins verify this patch?
tgravescs commented on PR #38622:
URL: https://github.com/apache/spark/pull/38622#issuecomment-1313919867
can you please add a description to the issue:
https://issues.apache.org/jira/projects/SPARK/issues/SPARK-39601
EnricoMi commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021635684
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
tgravescs commented on code in PR #38622:
URL: https://github.com/apache/spark/pull/38622#discussion_r1021695959
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:
##
@@ -815,6 +815,7 @@ private[spark] class ApplicationMaster(
peter-toth commented on PR #38640:
URL: https://github.com/apache/spark/pull/38640#issuecomment-1313560978
cc @cloud-fan
itholic commented on code in PR #38644:
URL: https://github.com/apache/spark/pull/38644#discussion_r1021424319
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CastWithAnsiOnSuite.scala:
##
@@ -242,9 +242,13 @@ class CastWithAnsiOnSuite extends
dengziming commented on code in PR #38638:
URL: https://github.com/apache/spark/pull/38638#discussion_r1021527911
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -38,6 +38,30 @@ message Plan {
}
}
+// Plan explanation mode.
+enum ExplainMode {
+
LuciferYang commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1313699403
friendly ping @cloud-fan
cloud-fan commented on PR #38640:
URL: https://github.com/apache/spark/pull/38640#issuecomment-1313585893
This is great! Is it using v2 parquet?
cloud-fan commented on code in PR #38654:
URL: https://github.com/apache/spark/pull/38654#discussion_r1021448099
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##
@@ -168,8 +168,12 @@ class
peter-toth commented on PR #38640:
URL: https://github.com/apache/spark/pull/38640#issuecomment-1313612631
> This is great! Is it using v2 parquet?
Yes it is, like other stability tests.
LuciferYang commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1313618482
One interesting thing: I didn't find logs related to the `SparkThrowableSuite`
tests in the two passing-test links. Is that my problem?
LuciferYang commented on PR #38635:
URL: https://github.com/apache/spark/pull/38635#issuecomment-1313712161
cc @MaxGekk FYI, Should `fmt` be checked for null in `checkInputDataTypes`?
cloud-fan commented on code in PR #38356:
URL: https://github.com/apache/spark/pull/38356#discussion_r1021444905
##
sql/core/src/test/scala/org/apache/spark/sql/sources/PartitionedWriteSuite.scala:
##
@@ -220,6 +220,23 @@ class PartitionedWriteSuite extends QueryTest with
AmplabJenkins commented on PR #38651:
URL: https://github.com/apache/spark/pull/38651#issuecomment-1313645025
Can one of the admins verify this patch?
MaxGekk commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1314019272
+1, LGTM. Merging to master. I have checked the test suite locally:
```
[info] SparkThrowableSuite:
...
[info] - prohibit dots in error class names (87 milliseconds)
[info]
MaxGekk closed pull request #38658: [SPARK-41109][CORE][FOLLOWUP] Re-order
error class to fix `SparkThrowableSuite`
URL: https://github.com/apache/spark/pull/38658
sunchao commented on PR #38643:
URL: https://github.com/apache/spark/pull/38643#issuecomment-1314053963
Thanks @dongjoon-hyun @zero323! I think I'm unblocked for the 3.2.3 release now.
pan3793 commented on code in PR #38622:
URL: https://github.com/apache/spark/pull/38622#discussion_r1021822652
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:
##
@@ -815,6 +815,7 @@ private[spark] class ApplicationMaster(
pan3793 commented on code in PR #38651:
URL: https://github.com/apache/spark/pull/38651#discussion_r1021830231
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshotsStoreImpl.scala:
##
@@ -94,7 +95,9 @@ private[spark]
MaxGekk commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1021741191
##
core/src/main/resources/error/error-classes.json:
##
@@ -1277,6 +1277,11 @@
"A correlated outer name reference within a subquery expression body
was not
LuciferYang commented on PR #38658:
URL: https://github.com/apache/spark/pull/38658#issuecomment-1314066195
Thanks @MaxGekk @cloud-fan @itholic
pan3793 commented on PR #38622:
URL: https://github.com/apache/spark/pull/38622#issuecomment-1314081544
> can you please add a description to the issue:
https://issues.apache.org/jira/projects/SPARK/issues/SPARK-39601
Thanks for the reminder, updated.
carlfu-db commented on code in PR #38404:
URL: https://github.com/apache/spark/pull/38404#discussion_r1021850376
##
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -319,6 +319,7 @@ query
insertInto
: INSERT OVERWRITE TABLE?
dongjoon-hyun commented on PR #38643:
URL: https://github.com/apache/spark/pull/38643#issuecomment-1314132688
It's great. I saw the tag. Thank you!
- https://github.com/apache/spark/releases/tag/v3.2.3-rc1
WweiL commented on code in PR #38503:
URL: https://github.com/apache/spark/pull/38503#discussion_r1021870148
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -190,20 +190,25 @@ class StreamingDeduplicationSuite extends
amaliujia commented on PR #38659:
URL: https://github.com/apache/spark/pull/38659#issuecomment-1314224467
Question: can we re-use the Arrow collection work we have already done here?
cc @zhengruifeng
MaxGekk commented on PR #38656:
URL: https://github.com/apache/spark/pull/38656#issuecomment-1314224185
@srielau @cloud-fan @LuciferYang @panbingkun @itholic Please, review this PR.
amaliujia commented on PR #38659:
URL: https://github.com/apache/spark/pull/38659#issuecomment-1314224623
also cc @hvanhovell
sunchao closed pull request #38628: [SPARK-41096][SQL] Support reading parquet
FIXED_LEN_BYTE_ARRAY type
URL: https://github.com/apache/spark/pull/38628
sunchao commented on PR #38628:
URL: https://github.com/apache/spark/pull/38628#issuecomment-1314225366
Committed to master, thanks @kazuyukitanimura !
kazuyukitanimura commented on PR #38628:
URL: https://github.com/apache/spark/pull/38628#issuecomment-1314250536
Thank you @huaxingao @sunchao @LuciferYang
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1022088980
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/v2Commands.scala:
##
@@ -254,6 +254,113 @@ case class ReplaceData(
}
}
+/**
+ *
xkrogen commented on PR #37949:
URL: https://github.com/apache/spark/pull/37949#issuecomment-1314492128
Ah, I see. It seems you're using `spark.yarn.populateHadoopClasspath =
true`. It looks like it's expected that the Hadoop conf from the node overrides
the one from `__hadoop_conf__` in
xkrogen commented on code in PR #38648:
URL: https://github.com/apache/spark/pull/38648#discussion_r1021976624
##
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala:
##
@@ -494,7 +494,8 @@ object QueryExecution {
private[sql] def toInternalError(msg:
viirya commented on PR #38628:
URL: https://github.com/apache/spark/pull/38628#issuecomment-1314270163
Just found a previous PR, #35902. The change is the same, but there is some
Avro test stuff that we could consider adding as a follow-up too.
SandishKumarHN commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1021864869
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
grundprinzip commented on code in PR #38642:
URL: https://github.com/apache/spark/pull/38642#discussion_r1022030795
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -207,6 +208,18 @@ def test_range(self):
.equals(self.spark.range(start=0, end=10,
kazuyukitanimura commented on PR #38628:
URL: https://github.com/apache/spark/pull/38628#issuecomment-1314336519
Thanks @viirya I also realized PR https://github.com/apache/spark/pull/35902
along with https://github.com/apache/spark/pull/20826 and
https://github.com/apache/spark/pull/1737
grundprinzip commented on code in PR #38642:
URL: https://github.com/apache/spark/pull/38642#discussion_r1022031268
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -207,6 +208,18 @@ def test_range(self):
.equals(self.spark.range(start=0, end=10,
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1022079865
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ProjectingInternalRow.scala:
##
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation
amaliujia commented on code in PR #38638:
URL: https://github.com/apache/spark/pull/38638#discussion_r1021980330
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -48,6 +72,9 @@ message Request {
// The logical plan to be executed / analyzed.
Plan
grundprinzip commented on code in PR #38638:
URL: https://github.com/apache/spark/pull/38638#discussion_r1022036015
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -38,16 +38,50 @@ message Plan {
}
}
+// Explains the input plan based on a
aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1022077282
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##
@@ -477,6 +507,73 @@ object DataWritingSparkTask extends
amaliujia commented on code in PR #38642:
URL: https://github.com/apache/spark/pull/38642#discussion_r1022140526
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -207,6 +208,18 @@ def test_range(self):
.equals(self.spark.range(start=0, end=10,
amaliujia commented on code in PR #38609:
URL: https://github.com/apache/spark/pull/38609#discussion_r1021995966
##
project/SparkBuild.scala:
##
@@ -109,6 +109,16 @@ object SparkBuild extends PomBuild {
if (profiles.contains("jdwp-test-debug")) {
amaliujia commented on PR #38609:
URL: https://github.com/apache/spark/pull/38609#issuecomment-1314293906
Looks easy to follow!
amaliujia commented on code in PR #38642:
URL: https://github.com/apache/spark/pull/38642#discussion_r1022037157
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -207,6 +208,18 @@ def test_range(self):
.equals(self.spark.range(start=0, end=10,