LuciferYang commented on code in PR #38737:
URL: https://github.com/apache/spark/pull/38737#discussion_r1030121284
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:
##
@@ -2620,46 +2620,81 @@ case class ToBinary(
dengziming commented on PR #38659:
URL: https://github.com/apache/spark/pull/38659#issuecomment-1324659424
Thank you @grundprinzip for your review, I fixed the comments and let's wait
for @hvanhovell and @cloud-fan.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
LuciferYang commented on PR #38764:
URL: https://github.com/apache/spark/pull/38764#issuecomment-1324658198
Thanks @HyukjinKwon @MaxGekk
MaxGekk commented on code in PR #38737:
URL: https://github.com/apache/spark/pull/38737#discussion_r1030091988
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:
##
@@ -2620,46 +2620,81 @@ case class ToBinary(
itholic commented on PR #38766:
URL: https://github.com/apache/spark/pull/38766#issuecomment-1324635151
I made this a separate PR from my other tasks, although it's a very small
change, in case this error message affects the tests on the original PR.
itholic opened a new pull request, #38769:
URL: https://github.com/apache/spark/pull/38769
### What changes were proposed in this pull request?
This PR proposes to rename `COLUMN_NOT_IN_GROUP_BY_CLAUSE` to
`MISSING_AGGREGATION`.
Also, improve its error message.
### Why
amaliujia opened a new pull request, #38768:
URL: https://github.com/apache/spark/pull/38768
### What changes were proposed in this pull request?
This PR proposes that Relations (e.g. Aggregate in this PR) should only deal
with `Expression` rather than `str`. `str` could be mapped
MaxGekk commented on code in PR #38707:
URL: https://github.com/apache/spark/pull/38707#discussion_r1030084249
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
##
@@ -637,13 +637,13 @@ private[sql] object QueryCompilationErrors extends
cloud-fan commented on PR #38767:
URL: https://github.com/apache/spark/pull/38767#issuecomment-1324625972
cc @viirya
cloud-fan opened a new pull request, #38767:
URL: https://github.com/apache/spark/pull/38767
### What changes were proposed in this pull request?
Followup of https://github.com/apache/spark/pull/38692. To follow other APIs
in `SparkSessionExtensions`, the name should be
itholic opened a new pull request, #38766:
URL: https://github.com/apache/spark/pull/38766
### What changes were proposed in this pull request?
This PR proposes to correct the minor syntax on error message for
`UNEXPECTED_INPUT_TYPE`,
### Why are the changes needed?
LuciferYang commented on code in PR #38737:
URL: https://github.com/apache/spark/pull/38737#discussion_r1030082083
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:
##
@@ -2620,46 +2620,81 @@ case class ToBinary(
wankunde opened a new pull request, #38765:
URL: https://github.com/apache/spark/pull/38765
### What changes were proposed in this pull request?
Restore dbName and tableName in `HiveShim.getTable()` method.
When we create a hive table, hive will convert the dbName and
MaxGekk commented on PR #38710:
URL: https://github.com/apache/spark/pull/38710#issuecomment-1324619657
@panbingkun Please, resolve conflicts.
cloud-fan commented on PR #38760:
URL: https://github.com/apache/spark/pull/38760#issuecomment-1324619462
cc @srielau @viirya
MaxGekk commented on code in PR #38725:
URL: https://github.com/apache/spark/pull/38725#discussion_r1030079292
##
core/src/main/resources/error/error-classes.json:
##
@@ -656,6 +656,11 @@
],
"sqlState" : "42000"
},
+ "INVALID_EXTRACT_FIELD" : {
+"message" : [
MaxGekk commented on PR #38730:
URL: https://github.com/apache/spark/pull/38730#issuecomment-1324618149
@panbingkun Could you resolve conflicts, please?
cloud-fan commented on PR #38760:
URL: https://github.com/apache/spark/pull/38760#issuecomment-1324618007
It seems reasonable to say that 0 is the only valid value for `decimal(0,
0)`. Forbidding `decimal(0, 0)` also seems reasonable but is riskier.
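The reasoning in the comment above can be sketched with a small checker (a hypothetical helper for illustration, not Spark's implementation): a decimal(p, s) value carries at most p significant digits, s of them after the point, so with p = 0 the only representable value is 0.

```python
from decimal import Decimal

def fits_decimal(value: Decimal, precision: int, scale: int) -> bool:
    """Return True if `value` fits in decimal(precision, scale) under the
    usual SQL definition: at most `precision` digits in total, `scale`
    of them after the decimal point."""
    # Shift the point right by `scale`; the result must be an integer,
    # otherwise the value needs more fractional digits than allowed.
    shifted = value.scaleb(scale)
    if shifted != shifted.to_integral_value():
        return False
    # Zero needs no digits at all, so it fits any precision, including 0.
    if int(shifted) == 0:
        return True
    return len(str(abs(int(shifted)))) <= precision

print(fits_decimal(Decimal("0"), 0, 0))     # True: the sole decimal(0, 0) value
print(fits_decimal(Decimal("1"), 0, 0))     # False: one digit, zero precision
print(fits_decimal(Decimal("12.3"), 3, 1))  # True
```

Under this reading, allowing `decimal(0, 0)` with 0 as its only value is self-consistent, which matches the first option in the comment.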
AngersZh commented on PR #35799:
URL: https://github.com/apache/spark/pull/35799#issuecomment-1324617674
gentle ping @dongjoon-hyun
MaxGekk commented on code in PR #38737:
URL: https://github.com/apache/spark/pull/38737#discussion_r1030077573
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:
##
@@ -2620,46 +2620,81 @@ case class ToBinary(
MaxGekk closed pull request #38764: [SPARK-41206][SQL][FOLLOWUP] Make result of
`checkColumnNameDuplication` stable to fix `COLUMN_ALREADY_EXISTS` check failed
with Scala 2.13
URL: https://github.com/apache/spark/pull/38764
MaxGekk commented on PR #38764:
URL: https://github.com/apache/spark/pull/38764#issuecomment-1324599102
+1, LGTM. Merging to master. All GAs passed.
Thank you, @LuciferYang and @HyukjinKwon for review.
AmplabJenkins commented on PR #38750:
URL: https://github.com/apache/spark/pull/38750#issuecomment-1324584432
Can one of the admins verify this patch?
AmplabJenkins commented on PR #38751:
URL: https://github.com/apache/spark/pull/38751#issuecomment-1324584414
Can one of the admins verify this patch?
MaxGekk commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1030039032
##
sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala:
##
@@ -964,17 +964,14 @@ class SubquerySuite extends QueryTest
| WHERE
MaxGekk closed pull request #38575: [SPARK-40948][SQL][FOLLOWUP] Restore
PATH_NOT_FOUND
URL: https://github.com/apache/spark/pull/38575
MaxGekk commented on PR #38575:
URL: https://github.com/apache/spark/pull/38575#issuecomment-1324575522
+1, LGTM. Merging to master.
Thank you, @itholic and @HyukjinKwon @cloud-fan @LuciferYang for review.
MaxGekk commented on code in PR #25004:
URL: https://github.com/apache/spark/pull/25004#discussion_r1030034252
##
sql/core/src/test/scala/org/apache/spark/sql/sources/v2/FileDataSourceV2FallBackSuite.scala:
##
@@ -170,4 +174,46 @@ class FileDataSourceV2FallBackSuite extends
zhengruifeng commented on code in PR #38757:
URL: https://github.com/apache/spark/pull/38757#discussion_r1030022747
##
python/pyspark/sql/connect/column.py:
##
@@ -15,14 +15,15 @@
# limitations under the License.
#
import uuid
-from typing import cast, get_args,
HyukjinKwon commented on code in PR #38757:
URL: https://github.com/apache/spark/pull/38757#discussion_r1030022065
##
python/pyspark/sql/connect/column.py:
##
@@ -15,14 +15,15 @@
# limitations under the License.
#
import uuid
-from typing import cast, get_args,
zhengruifeng commented on code in PR #38757:
URL: https://github.com/apache/spark/pull/38757#discussion_r1030020312
##
python/pyspark/sql/connect/column.py:
##
@@ -15,14 +15,15 @@
# limitations under the License.
#
import uuid
-from typing import cast, get_args,
HyukjinKwon commented on code in PR #38575:
URL: https://github.com/apache/spark/pull/38575#discussion_r1030018762
##
R/pkg/tests/fulltests/test_sparkSQL.R:
##
@@ -3990,12 +3990,16 @@ test_that("Call DataFrameWriter.load() API in Java
without path and check argume
ulysses-you commented on code in PR #38760:
URL: https://github.com/apache/spark/pull/38760#discussion_r1030015079
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -3537,6 +3537,12 @@ class DataFrameSuite extends QueryTest
}.isEmpty)
}
amaliujia commented on code in PR #38762:
URL: https://github.com/apache/spark/pull/38762#discussion_r1030011493
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -951,6 +951,39 @@ def createOrReplaceGlobalTempView(self, name: str) -> None:
amaliujia commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1030009940
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -271,8 +273,12 @@ class SparkConnectPlanner(session:
LuciferYang commented on code in PR #38685:
URL: https://github.com/apache/spark/pull/38685#discussion_r1030009517
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -1759,24 +1763,25 @@ class DataFrameSuite extends QueryTest
test("SPARK-8072:
LuciferYang commented on PR #38764:
URL: https://github.com/apache/spark/pull/38764#issuecomment-1324532345
cc @HyukjinKwon, trying to fix
https://github.com/apache/spark/pull/38685#discussion_r1029966254
ahshahid commented on code in PR #38714:
URL: https://github.com/apache/spark/pull/38714#discussion_r1030005359
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -208,20 +208,33 @@ object SubExprUtils extends PredicateHelper {
*/
ahshahid commented on code in PR #38714:
URL: https://github.com/apache/spark/pull/38714#discussion_r1030005062
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -208,20 +208,33 @@ object SubExprUtils extends PredicateHelper {
*/
ahshahid commented on code in PR #38714:
URL: https://github.com/apache/spark/pull/38714#discussion_r1030004911
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/ResolveSubquerySuite.scala:
##
@@ -17,13 +17,20 @@
package
cloud-fan commented on code in PR #38760:
URL: https://github.com/apache/spark/pull/38760#discussion_r1030003237
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -3537,6 +3537,12 @@ class DataFrameSuite extends QueryTest
}.isEmpty)
}
}
wankunde commented on code in PR #38560:
URL: https://github.com/apache/spark/pull/38560#discussion_r1029992768
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -452,22 +489,69 @@ void
LuciferYang opened a new pull request, #38764:
URL: https://github.com/apache/spark/pull/38764
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
itholic commented on code in PR #38646:
URL: https://github.com/apache/spark/pull/38646#discussion_r102592
##
core/src/main/resources/error/error-classes.json:
##
@@ -1044,7 +1044,7 @@
},
"UNRESOLVED_MAP_KEY" : {
"message" : [
- "Cannot resolve column as a
cloud-fan commented on code in PR #38575:
URL: https://github.com/apache/spark/pull/38575#discussion_r1029997623
##
R/pkg/tests/fulltests/test_sparkSQL.R:
##
@@ -3990,12 +3990,16 @@ test_that("Call DataFrameWriter.load() API in Java
without path and check argume
zhengruifeng commented on PR #38763:
URL: https://github.com/apache/spark/pull/38763#issuecomment-1324508234
merged into master
zhengruifeng closed pull request #38763:
[SPARK-41201][CONNECT][PYTHON][TEST][FOLLOWUP] Reenable test_fill_na
URL: https://github.com/apache/spark/pull/38763
itholic commented on code in PR #38575:
URL: https://github.com/apache/spark/pull/38575#discussion_r1029988770
##
R/pkg/tests/fulltests/test_sparkSQL.R:
##
@@ -3990,12 +3990,16 @@ test_that("Call DataFrameWriter.load() API in Java
without path and check argume
zhengruifeng commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1029986139
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -271,8 +273,12 @@ class SparkConnectPlanner(session:
xinrong-meng commented on PR #38731:
URL: https://github.com/apache/spark/pull/38731#issuecomment-1324490281
Thanks @sadikovi !
zhengruifeng commented on code in PR #38762:
URL: https://github.com/apache/spark/pull/38762#discussion_r1029983662
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -951,6 +951,39 @@ def createOrReplaceGlobalTempView(self, name: str) -> None:
xiuzhu9527 commented on PR #38674:
URL: https://github.com/apache/spark/pull/38674#issuecomment-1324484471
@tgravescs
1. Yes, Jersey 1 and Jersey 2 are two different packages, one is
com.sun.jersey and one is org.glassfish.jersey
2. I will try to use maven-shade-plugin to change the
yabola commented on code in PR #38560:
URL: https://github.com/apache/spark/pull/38560#discussion_r1023816561
##
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java:
##
@@ -654,8 +731,7 @@ public MergeStatuses
LuciferYang commented on code in PR #38685:
URL: https://github.com/apache/spark/pull/38685#discussion_r1029979036
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -1759,24 +1763,25 @@ class DataFrameSuite extends QueryTest
test("SPARK-8072:
pan3793 closed pull request #38205: [SPARK-40747][CORE] Support setting driver
log url using env vars on other resource managers
URL: https://github.com/apache/spark/pull/38205
pan3793 commented on PR #38205:
URL: https://github.com/apache/spark/pull/38205#issuecomment-1324476298
Closed in favor of https://github.com/apache/spark/pull/38357
amaliujia commented on PR #38763:
URL: https://github.com/apache/spark/pull/38763#issuecomment-1324471937
@zhengruifeng thanks for the clarification!
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1029975064
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -100,18 +70,138 @@ message AnalyzePlanRequest {
// logging purposes and will not be
zhengruifeng commented on code in PR #38762:
URL: https://github.com/apache/spark/pull/38762#discussion_r1029974156
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -951,6 +951,39 @@ def createOrReplaceGlobalTempView(self, name: str) -> None:
beliefer commented on PR #38745:
URL: https://github.com/apache/spark/pull/38745#issuecomment-1324464202
ping @zhengruifeng cc @cloud-fan
zhengruifeng commented on PR #38763:
URL: https://github.com/apache/spark/pull/38763#issuecomment-1324459180
@amaliujia that is on purpose: `sdf.x` will just throw an exception since
`sdf` doesn't contain an `x` column, but in the connect df `cdf`, `cdf.x` will
not throw an exception since it
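The behavioral difference described above can be sketched with a toy model (the class names `EagerDF` and `LazyDF` are hypothetical stand-ins, not the actual PySpark classes): a classic DataFrame knows its schema locally and rejects an unknown column at attribute-access time, while a connect-style DataFrame defers column resolution to the server.

```python
class EagerDF:
    """Toy stand-in for a classic DataFrame whose schema is known locally."""
    def __init__(self, columns):
        self._columns = set(columns)

    def __getattr__(self, name):
        # Unknown columns fail immediately, at attribute-access time.
        if name not in self._columns:
            raise AttributeError(f"DataFrame has no column {name!r}")
        return f"Column<{name}>"


class LazyDF:
    """Toy stand-in for a connect-style DataFrame with no local schema."""
    def __getattr__(self, name):
        # Any name yields an unresolved column; a real server would only
        # raise later, when the plan is analyzed or executed.
        return f"UnresolvedColumn<{name}>"


sdf = EagerDF(["a", "b"])
cdf = LazyDF()
print(cdf.x)  # no error yet: column resolution is deferred
try:
    sdf.x
except AttributeError as e:
    print(f"eager access failed: {e}")
```

This is why a client-side test cannot rely on `cdf.x` raising: the connect client has no schema to check against until it asks the server.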
HyukjinKwon commented on code in PR #38685:
URL: https://github.com/apache/spark/pull/38685#discussion_r1029966254
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -1759,24 +1763,25 @@ class DataFrameSuite extends QueryTest
test("SPARK-8072:
bersprockets commented on PR #38727:
URL: https://github.com/apache/spark/pull/38727#issuecomment-1324453979
Tested PR https://github.com/apache/spark/pull/38737. That PR incidentally
seems to fix this issue:
```
SELECT try_to_binary(col1, col2) from values ('abc', 'utf-8') as
bersprockets closed pull request #38727: [SPARK-41205][SQL] Check that format
is foldable in `TryToBinary`
URL: https://github.com/apache/spark/pull/38727
ulysses-you commented on PR #38760:
URL: https://github.com/apache/spark/pull/38760#issuecomment-1324450892
cc @cloud-fan @revans2 @gengliangwang
ulysses-you commented on code in PR #38760:
URL: https://github.com/apache/spark/pull/38760#discussion_r1029962313
##
sql/catalyst/src/test/scala/org/apache/spark/sql/types/DecimalSuite.scala:
##
@@ -384,4 +384,11 @@ class DecimalSuite extends SparkFunSuite with
ulysses-you commented on code in PR #38739:
URL: https://github.com/apache/spark/pull/38739#discussion_r1029960652
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecisionSuite.scala:
##
@@ -276,9 +276,9 @@ class DecimalPrecisionSuite extends
amaliujia commented on code in PR #38686:
URL: https://github.com/apache/spark/pull/38686#discussion_r1029946709
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -255,10 +255,21 @@ def distinct(self) -> "DataFrame":
)
def drop(self, *cols: "ColumnOrString") ->
amaliujia commented on PR #38686:
URL: https://github.com/apache/spark/pull/38686#issuecomment-1324436991
LGTM
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1029953393
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -100,18 +70,138 @@ message AnalyzePlanRequest {
// logging purposes and will not be
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1029953075
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -100,18 +70,138 @@ message AnalyzePlanRequest {
// logging purposes and will not be
amaliujia commented on PR #38763:
URL: https://github.com/apache/spark/pull/38763#issuecomment-1324432409
LGTM
If you are interested, can you BTW follow up in this PR on
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1029952200
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -100,18 +70,138 @@ message AnalyzePlanRequest {
// logging purposes and will not be
zhengruifeng commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029951842
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -302,6 +301,31 @@ def test_to_pandas(self):
self.spark.sql(query).toPandas(),
zhengruifeng opened a new pull request, #38763:
URL: https://github.com/apache/spark/pull/38763
### What changes were proposed in this pull request?
Reenable test_fill_na
### Why are the changes needed?
`test_fill_na` was disabled by mistake in
amaliujia commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029951479
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -302,6 +301,31 @@ def test_to_pandas(self):
self.spark.sql(query).toPandas(),
amaliujia commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029951172
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -302,6 +301,31 @@ def test_to_pandas(self):
self.spark.sql(query).toPandas(),
zhengruifeng commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029950542
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -302,6 +301,31 @@ def test_to_pandas(self):
self.spark.sql(query).toPandas(),
HyukjinKwon commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029950210
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -302,6 +301,31 @@ def test_to_pandas(self):
self.spark.sql(query).toPandas(),
zhengruifeng commented on code in PR #38742:
URL: https://github.com/apache/spark/pull/38742#discussion_r1029949868
##
connector/connect/src/main/protobuf/spark/connect/base.proto:
##
@@ -100,18 +70,138 @@ message AnalyzePlanRequest {
// logging purposes and will not be
zhengruifeng commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029949413
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -302,6 +301,31 @@ def test_to_pandas(self):
self.spark.sql(query).toPandas(),
HyukjinKwon commented on PR #38751:
URL: https://github.com/apache/spark/pull/38751#issuecomment-1324425695
Seems like the test failure is unrelated. I don't mind merging it as is.
Feel free to retrigger https://github.com/gaoyajun02/spark/runs/9633118279
@gaoyajun02
HyukjinKwon closed pull request #38723: [SPARK-41201][CONNECT][PYTHON]
Implement `DataFrame.SelectExpr` in Python client
URL: https://github.com/apache/spark/pull/38723
HyukjinKwon commented on PR #38723:
URL: https://github.com/apache/spark/pull/38723#issuecomment-1324424425
Merged to master.
amaliujia commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029932646
##
python/pyspark/sql/connect/column.py:
##
@@ -263,6 +263,22 @@ def __str__(self) -> str:
return f"Column({self._unparsed_identifier})"
+class
amaliujia commented on code in PR #38723:
URL: https://github.com/apache/spark/pull/38723#discussion_r1029932471
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -220,6 +220,29 @@ def test_create_global_temp_view(self):
with
HyukjinKwon commented on code in PR #38013:
URL: https://github.com/apache/spark/pull/38013#discussion_r1029932042
##
examples/src/main/python/sql/streaming/structured_network_wordcount_session_window.py:
##
@@ -0,0 +1,139 @@
+#
+# Licensed to the Apache Software Foundation
mridulm commented on code in PR #36165:
URL: https://github.com/apache/spark/pull/36165#discussion_r1029927503
##
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala:
##
@@ -282,6 +280,17 @@ final class ShuffleBlockFetcherIterator(
}
}
+
grundprinzip commented on PR #38762:
URL: https://github.com/apache/spark/pull/38762#issuecomment-1324244517
@HyukjinKwon @hvanhovell
grundprinzip opened a new pull request, #38762:
URL: https://github.com/apache/spark/pull/38762
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
ahshahid commented on code in PR #38714:
URL: https://github.com/apache/spark/pull/38714#discussion_r1029822940
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -208,20 +208,33 @@ object SubExprUtils extends PredicateHelper {
*/
grundprinzip commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1029721265
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -213,58 +214,115 @@ private[sql] object ArrowConverters extends
ahshahid commented on code in PR #38714:
URL: https://github.com/apache/spark/pull/38714#discussion_r1029763899
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -208,20 +208,33 @@ object SubExprUtils extends PredicateHelper {
*/
MaxGekk commented on code in PR #38575:
URL: https://github.com/apache/spark/pull/38575#discussion_r1029760391
##
R/pkg/tests/fulltests/test_sparkSQL.R:
##
@@ -3990,12 +3990,16 @@ test_that("Call DataFrameWriter.load() API in Java
without path and check argume