gengliangwang commented on code in PR #38567:
URL: https://github.com/apache/spark/pull/38567#discussion_r1028977314
##
core/src/main/scala/org/apache/spark/status/KVUtils.scala:
##
@@ -80,6 +89,44 @@ private[spark] object KVUtils extends Logging {
db
}
+ def
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1028966247
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:
##
@@ -105,6 +106,15 @@ private[hive] class HiveClientImpl(
private class
EnricoMi commented on PR #38676:
URL: https://github.com/apache/spark/pull/38676#issuecomment-1323236232
@wangyum @cloud-fan I am not sure whether this is the right approach to fix
`DeduplicateRelations`. Please advise.
The problem is that `DeduplicateRelations` only considers duplicates
LuciferYang commented on PR #38737:
URL: https://github.com/apache/spark/pull/38737#issuecomment-1323234015
@MaxGekk rebased
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
cloud-fan commented on code in PR #38734:
URL: https://github.com/apache/spark/pull/38734#discussion_r1028961395
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -122,6 +122,18 @@ def withPlan(cls, plan: plan.LogicalPlan, session:
"RemoteSparkSession") -> "Dat
MaxGekk closed pull request #38685: [SPARK-41206][SQL] Rename the error class
`_LEGACY_ERROR_TEMP_1233` to `COLUMN_ALREADY_EXISTS`
URL: https://github.com/apache/spark/pull/38685
MaxGekk commented on PR #38685:
URL: https://github.com/apache/spark/pull/38685#issuecomment-1323221728
Merging to master. Thank you, @srielau and @cloud-fan for review.
HyukjinKwon commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1028944132
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -271,8 +273,12 @@ class SparkConnectPlanner(session:
HyukjinKwon commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1028943446
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -253,16 +253,94 @@ private[sql] object ArrowConverters extends Logging
HyukjinKwon commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1028942760
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -3257,6 +3257,14 @@ private[spark] object Utils extends Logging {
case _ =>
HyukjinKwon commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1028942267
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectProtoSuite.scala:
##
@@ -44,14 +47,18 @@ class SparkConnectProtoSuite extends
grundprinzip commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1028942189
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -220,6 +220,29 @@ def test_create_global_temp_view(self):
with
MaxGekk closed pull request #38744: [SPARK-41217][SQL] Add the error class
`FAILED_FUNCTION_CALL`
URL: https://github.com/apache/spark/pull/38744
LuciferYang commented on PR #38744:
URL: https://github.com/apache/spark/pull/38744#issuecomment-1323206350
OK, I will rebase my PR.
MaxGekk commented on PR #38744:
URL: https://github.com/apache/spark/pull/38744#issuecomment-1323205514
Merging to master. Thank you, @panbingkun @LuciferYang @cloud-fan for review.
HyukjinKwon commented on PR #38715:
URL: https://github.com/apache/spark/pull/38715#issuecomment-1323204958
cc @dongjoon-hyun and @HeartSaVioR FYI
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1028935043
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -736,6 +736,19 @@ def toPandas(self) -> Optional["pandas.DataFrame"]:
query =
LuciferYang commented on PR #38752:
URL: https://github.com/apache/spark/pull/38752#issuecomment-1323197061
late LGTM
HyukjinKwon closed pull request #38697: [SPARK-41118][SQL][3.3]
`to_number`/`try_to_number` should return `null` when format is `null`
URL: https://github.com/apache/spark/pull/38697
LuciferYang closed pull request #38753: [SPARK-40809][CONNECT][PYTHON][TESTS]
Fix pyspark-connect test failed with Scala 2.13
URL: https://github.com/apache/spark/pull/38753
LuciferYang commented on PR #38753:
URL: https://github.com/apache/spark/pull/38753#issuecomment-1323196610
Duplicate of https://github.com/apache/spark/pull/38752; closing this one.
HyukjinKwon commented on PR #38697:
URL: https://github.com/apache/spark/pull/38697#issuecomment-1323196618
Merged to branch-3.3.
HyukjinKwon closed pull request #38752: [SPARK-40809][CONNECT][FOLLOW-UP] Do
not use Buffer to make Scala 2.13 test pass
URL: https://github.com/apache/spark/pull/38752
HyukjinKwon commented on PR #38752:
URL: https://github.com/apache/spark/pull/38752#issuecomment-1323196020
Merged to master.
HyukjinKwon commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1028933832
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -301,6 +301,20 @@ def test_simple_datasource_read(self) -> None:
actualResult =
LuciferYang commented on PR #38704:
URL: https://github.com/apache/spark/pull/38704#issuecomment-1323195610
Thanks @HyukjinKwon @mridulm @liuzqt
HyukjinKwon closed pull request #38686: [SPARK-41169][CONNECT][PYTHON]
Implement `DataFrame.drop`
URL: https://github.com/apache/spark/pull/38686
zhengruifeng commented on PR #38735:
URL: https://github.com/apache/spark/pull/38735#issuecomment-1323195553
@HyukjinKwon thanks for the reviews
HyukjinKwon commented on PR #38686:
URL: https://github.com/apache/spark/pull/38686#issuecomment-1323195361
Merged to master.
HyukjinKwon closed pull request #38704: [SPARK-41193][SQL][TESTS] Ignore
`collect data with single partition larger than 2GB bytes array limit` in
`DatasetLargeResultCollectingSuite`
URL: https://github.com/apache/spark/pull/38704
HyukjinKwon commented on PR #38704:
URL: https://github.com/apache/spark/pull/38704#issuecomment-1323194208
Merged to master.
HyukjinKwon commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1028931691
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -220,6 +220,29 @@ def test_create_global_temp_view(self):
with
HyukjinKwon commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1028931101
##
python/pyspark/sql/connect/column.py:
##
@@ -263,6 +263,22 @@ def __str__(self) -> str:
return f"Column({self._unparsed_identifier})"
+class
HyukjinKwon commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1028930295
##
python/pyspark/sql/connect/column.py:
##
@@ -263,6 +263,22 @@ def __str__(self) -> str:
return f"Column({self._unparsed_identifier})"
+class
HyukjinKwon closed pull request #38731: [SPARK-41209][PYTHON] Improve PySpark
type inference in _merge_type method
URL: https://github.com/apache/spark/pull/38731
HyukjinKwon commented on PR #38731:
URL: https://github.com/apache/spark/pull/38731#issuecomment-1323189415
Merged to master.
HyukjinKwon closed pull request #38735: [SPARK-41213][CONNECT][PYTHON]
Implement `DataFrame.__repr__` and `DataFrame.dtypes`
URL: https://github.com/apache/spark/pull/38735
HyukjinKwon commented on PR #38735:
URL: https://github.com/apache/spark/pull/38735#issuecomment-1323188640
Merged to master.
cloud-fan commented on code in PR #38302:
URL: https://github.com/apache/spark/pull/38302#discussion_r1028926326
##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala:
##
@@ -56,7 +56,10 @@ case class SparkListenerSQLExecutionStart(
}
@DeveloperApi
LuciferYang opened a new pull request, #38753:
URL: https://github.com/apache/spark/pull/38753
### What changes were proposed in this pull request?
This PR simplifies assertions to fix the `pyspark-connect` test failure with
Scala 2.13.
### Why are the changes needed?
Fix
HeartSaVioR closed pull request #38748: [SPARK-41151][SQL][3.3] Keep built-in
file `_metadata` column nullable value consistent
URL: https://github.com/apache/spark/pull/38748
HyukjinKwon commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1028915567
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -736,6 +736,19 @@ def toPandas(self) -> Optional["pandas.DataFrame"]:
query =
HeartSaVioR commented on PR #38748:
URL: https://github.com/apache/spark/pull/38748#issuecomment-1323163883
Thanks! Merging to 3.3.
HeartSaVioR commented on PR #38748:
URL: https://github.com/apache/spark/pull/38748#issuecomment-1323163589
Looks like the GA build failed to pick up the result of the forked GA build.
Here is a successful run from the forked repository.
https://github.com/Yaohua628/spark/runs/9630498867
LuciferYang commented on code in PR #38631:
URL: https://github.com/apache/spark/pull/38631#discussion_r1028908773
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -301,6 +301,20 @@ def test_simple_datasource_read(self) -> None:
actualResult =
HyukjinKwon commented on code in PR #38302:
URL: https://github.com/apache/spark/pull/38302#discussion_r1028906041
##
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala:
##
@@ -56,7 +56,10 @@ case class SparkListenerSQLExecutionStart(
}
HyukjinKwon opened a new pull request, #38752:
URL: https://github.com/apache/spark/pull/38752
### What changes were proposed in this pull request?
This PR is a followup of https://github.com/apache/spark/pull/38631 that
fixes the test to pass in Scala 2.13 by avoiding using `Buffer`
panbingkun commented on PR #38744:
URL: https://github.com/apache/spark/pull/38744#issuecomment-1323142938
+1, LGTM
gaoyajun02 commented on PR #38333:
URL: https://github.com/apache/spark/pull/38333#issuecomment-1323136879
> Thank you, @gaoyajun02 , @mridulm , @otterc .
>
> * Do we need to backport this to branch-3.3?
> * According to the previous failure description, what happens in
branch-3.3
gaoyajun02 commented on PR #38333:
URL: https://github.com/apache/spark/pull/38333#issuecomment-1323119482
> I was in two minds whether to fix this in 3.3 as well ... Yes, 3.3 is
affected by it.
>
> But agree, a backport to branch-3.3 would be helpful. Can you give it a
shot
gaoyajun02 opened a new pull request, #38751:
URL: https://github.com/apache/spark/pull/38751
### What changes were proposed in this pull request?
This is a backport PR of #38333
When push-based shuffle is enabled, a zero-size buf error may occur when
fetching shuffle chunks from bad
toujours33 commented on PR #38711:
URL: https://github.com/apache/spark/pull/38711#issuecomment-1323074280
> How far back should this backport?
I hope it can be backported to 3.3+ (3.3 included), since version 3.3 is
mainly used in our production environment~
toujours33 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028830007
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
LuciferYang commented on PR #38711:
URL: https://github.com/apache/spark/pull/38711#issuecomment-1323065288
How far back should this backport?
LuciferYang commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028823929
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
LuciferYang commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028821337
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
itholic commented on code in PR #38650:
URL: https://github.com/apache/spark/pull/38650#discussion_r1028818844
##
core/src/main/resources/error/error-classes.json:
##
@@ -656,6 +656,11 @@
],
"sqlState" : "42000"
},
+ "INVALID_EMPTY_LOCATION" : {
+"message" : [
dengziming commented on PR #38715:
URL: https://github.com/apache/spark/pull/38715#issuecomment-1323037097
These failures come from
[apache/kafka#12049](https://github.com/apache/kafka/pull/12049) and are
described here: https://kafka.apache.org/documentation/#upgrade_33_notable
The
itholic commented on code in PR #38664:
URL: https://github.com/apache/spark/pull/38664#discussion_r1028813303
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala:
##
@@ -146,7 +147,10 @@ object FunctionRegistryBase {
zhengruifeng commented on code in PR #38734:
URL: https://github.com/apache/spark/pull/38734#discussion_r1028812420
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -122,6 +122,20 @@ def withPlan(cls, plan: plan.LogicalPlan, session:
"RemoteSparkSession") -> "Dat
zhengruifeng commented on code in PR #38686:
URL: https://github.com/apache/spark/pull/38686#discussion_r1028811655
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectProtoSuite.scala:
##
@@ -148,6 +148,23 @@ class SparkConnectProtoSuite
toujours33 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028811606
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
itholic commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1028810170
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala:
##
@@ -1059,10 +1060,16 @@ trait CheckAnalysis extends PredicateHelper with
toujours33 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028807362
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
LuciferYang commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028804426
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
LuciferYang commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028803675
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
LuciferYang commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028802840
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -749,8 +749,10 @@ private[spark] class ExecutorAllocationManager(
cloud-fan commented on code in PR #38703:
URL: https://github.com/apache/spark/pull/38703#discussion_r1028795081
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -355,7 +355,7 @@ case class ListQuery(
plan.canonicalized,
cloud-fan commented on code in PR #38739:
URL: https://github.com/apache/spark/pull/38739#discussion_r1028793298
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -3532,6 +3532,49 @@ class DataFrameSuite extends QueryTest
}.isEmpty)
}
}
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1028639472
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -100,18 +70,135 @@ message AnalyzePlanRequest {
// logging purposes and will not be
amaliujia commented on code in PR #38734:
URL: https://github.com/apache/spark/pull/38734#discussion_r1028724092
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -122,6 +122,20 @@ def withPlan(cls, plan: plan.LogicalPlan, session:
"RemoteSparkSession") -> "Dat
19855134604 commented on code in PR #38743:
URL: https://github.com/apache/spark/pull/38743#discussion_r1028790815
##
connector/protobuf/README.md:
##
@@ -0,0 +1,37 @@
+# Spark Protobuf - Developer Documentation
+
+## Getting Started
+
+### Build
+
+```bash
+./build/mvn -Phive
cloud-fan closed pull request #38738: WIP
URL: https://github.com/apache/spark/pull/38738
zhengruifeng commented on code in PR #38734:
URL: https://github.com/apache/spark/pull/38734#discussion_r1028789393
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -122,6 +122,20 @@ def withPlan(cls, plan: plan.LogicalPlan, session:
"RemoteSparkSession") -> "Dat
toujours33 commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028788976
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -774,17 +776,16 @@ private[spark] class ExecutorAllocationManager(
LuciferYang commented on code in PR #38711:
URL: https://github.com/apache/spark/pull/38711#discussion_r1028783234
##
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala:
##
@@ -774,17 +776,16 @@ private[spark] class ExecutorAllocationManager(
sadikovi commented on PR #38731:
URL: https://github.com/apache/spark/pull/38731#issuecomment-1322965568
@xinrong-meng I have updated the PR description to clarify the user-facing
change.
cloud-fan commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1028755248
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClient.scala:
##
@@ -113,6 +113,9 @@ private[hive] trait HiveClient {
/** Creates a table with the
cloud-fan closed pull request #38746: [SPARK-41017][SQL][FOLLOWUP] Push Filter
with both deterministic and nondeterministic predicates
URL: https://github.com/apache/spark/pull/38746
cloud-fan commented on PR #38746:
URL: https://github.com/apache/spark/pull/38746#issuecomment-1322944376
thanks for review, merging to master!
wankunde commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1028741655
##
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala:
##
@@ -894,12 +895,14 @@ class InsertSuite extends QueryTest with
TestHiveSingleton with
wankunde commented on code in PR #38495:
URL: https://github.com/apache/spark/pull/38495#discussion_r1028741472
##
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala:
##
@@ -721,19 +721,18 @@ private[spark] class HiveExternalCatalog(conf: SparkConf,
cloud-fan commented on code in PR #38747:
URL: https://github.com/apache/spark/pull/38747#discussion_r1028740897
##
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala:
##
@@ -121,7 +121,11 @@ object SQLExecution {
AngersZh commented on code in PR #38622:
URL: https://github.com/apache/spark/pull/38622#discussion_r1028739055
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:
##
@@ -815,6 +815,7 @@ private[spark] class ApplicationMaster(
LuciferYang commented on PR #38743:
URL: https://github.com/apache/spark/pull/38743#issuecomment-1322936602
cc @HyukjinKwon FYI, a similar fix as SPARK-40593
ulysses-you commented on code in PR #38747:
URL: https://github.com/apache/spark/pull/38747#discussion_r1028738643
##
sql/core/src/main/scala/org/apache/spark/sql/execution/SQLExecution.scala:
##
@@ -121,7 +121,11 @@ object SQLExecution {
LuciferYang commented on code in PR #38743:
URL: https://github.com/apache/spark/pull/38743#discussion_r1028737655
##
connector/protobuf/README.md:
##
@@ -0,0 +1,37 @@
+# Spark Protobuf - Developer Documentation
+
+## Getting Started
+
+### Build
+
+```bash
+./build/mvn -Phive
amaliujia commented on PR #38734:
URL: https://github.com/apache/spark/pull/38734#issuecomment-1322927685
LGTM
HyukjinKwon commented on code in PR #38734:
URL: https://github.com/apache/spark/pull/38734#discussion_r1028721327
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -122,6 +122,20 @@ def withPlan(cls, plan: plan.LogicalPlan, session:
"RemoteSparkSession") -> "Dat
amaliujia commented on code in PR #38735:
URL: https://github.com/apache/spark/pull/38735#discussion_r1028720171
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -115,6 +115,9 @@ def __init__(
self._cache: Dict[str, Any] = {}
self._session:
zhengruifeng commented on code in PR #38735:
URL: https://github.com/apache/spark/pull/38735#discussion_r1028719570
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -115,6 +115,9 @@ def __init__(
self._cache: Dict[str, Any] = {}
self._session:
cloud-fan closed pull request #38741: [SPARK-41154][SQL][3.3] Incorrect
relation caching for queries with time travel spec
URL: https://github.com/apache/spark/pull/38741
ulysses-you commented on code in PR #38739:
URL: https://github.com/apache/spark/pull/38739#discussion_r1028718134
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -3532,6 +3532,49 @@ class DataFrameSuite extends QueryTest
}.isEmpty)
}
cloud-fan commented on PR #38741:
URL: https://github.com/apache/spark/pull/38741#issuecomment-1322916287
tests all passed: https://github.com/ulysses-you/spark/runs/9613393804
cloud-fan commented on PR #38741:
URL: https://github.com/apache/spark/pull/38741#issuecomment-1322916371
thanks, merging to 3.3!
cloud-fan commented on PR #38706:
URL: https://github.com/apache/spark/pull/38706#issuecomment-1322913672
late LGTM
xinrong-meng commented on PR #38731:
URL: https://github.com/apache/spark/pull/38731#issuecomment-1322912963
Shall we add an example to elaborate on `Does this PR introduce any
user-facing change?`? The change might be included in the 3.4 release notes.
jerrypeng commented on PR #38517:
URL: https://github.com/apache/spark/pull/38517#issuecomment-1322908492
@HeartSaVioR Please review.