amaliujia commented on code in PR #40135:
URL: https://github.com/apache/spark/pull/40135#discussion_r1115332180
##
python/pyspark/sql/tests/test_dataframe.py:
##
@@ -144,6 +144,17 @@ def test_drop_duplicates(self):
message_parameters={"arg_name": "subset",
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115328475
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
Or do you have any suggestions
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115315752
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
And when I revert to add
HyukjinKwon commented on code in PR #40135:
URL: https://github.com/apache/spark/pull/40135#discussion_r1115324235
##
python/pyspark/sql/tests/test_dataframe.py:
##
@@ -144,6 +144,17 @@ def test_drop_duplicates(self):
message_parameters={"arg_name": "subset",
amaliujia closed pull request #38588: [SPARK-41086][SQL] Consolidate
SecondArgumentXXX error to INVALID_PARAMETER_VALUE
URL: https://github.com/apache/spark/pull/38588
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115318339
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
So I think `parquet-hadoop` is
cloud-fan commented on PR #40135:
URL: https://github.com/apache/spark/pull/40135#issuecomment-1441301397
which commit caused the regression?
cloud-fan commented on code in PR #40138:
URL: https://github.com/apache/spark/pull/40138#discussion_r1115306480
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFramesSuite.scala:
##
@@ -474,4 +474,33 @@ class DataFrameWindowFramesSuite extends QueryTest with
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115305918
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
Did an experiment
huaxingao commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115304895
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/DB2IntegrationSuite.scala:
##
@@ -217,4 +217,25 @@ class DB2IntegrationSuite extends
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115303902
##
sql/core/src/test/scala/org/apache/spark/sql/sources/InsertSuite.scala:
##
@@ -2316,6 +2316,18 @@ class InsertSuite extends DataSourceTest with
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115300391
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Strategy.scala:
##
@@ -178,6 +178,15 @@ class DataSourceV2Strategy(session:
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115298759
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
##
@@ -3405,6 +3405,17 @@ private[sql] object QueryCompilationErrors extends
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115297770
##
sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/CatalogV2Util.scala:
##
@@ -471,43 +473,63 @@ private[sql] object CatalogV2Util {
/**
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115296490
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115295605
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115294387
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115293788
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
ulysses-you commented on PR #40138:
URL: https://github.com/apache/spark/pull/40138#issuecomment-1441283164
cc @cloud-fan @tgravescs
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115293400
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115292891
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115290604
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115290414
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/GeneratedColumn.scala:
##
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation
gengliangwang commented on code in PR #40140:
URL: https://github.com/apache/spark/pull/40140#discussion_r1115289271
##
sql/core/src/test/scala/org/apache/spark/sql/sources/InsertSuite.scala:
##
@@ -1106,6 +1106,16 @@ class InsertSuite extends DataSourceTest with
gengliangwang commented on code in PR #40140:
URL: https://github.com/apache/spark/pull/40140#discussion_r1115289060
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala:
##
@@ -2357,21 +2357,43 @@ case class UpCast(child: Expression, target:
cloud-fan commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115287430
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/Column.java:
##
@@ -52,7 +58,17 @@ static Column create(
String comment,
RunyaoChen commented on PR #39855:
URL: https://github.com/apache/spark/pull/39855#issuecomment-1441274484
> @RunyaoChen Could you backport this fix to branch-3.3 to fix
[SPARK-42473](https://issues.apache.org/jira/browse/SPARK-42473)?
Sure, here's the cherry-pick to branch-3.3:
alkis commented on code in PR #40121:
URL: https://github.com/apache/spark/pull/40121#discussion_r1115274769
##
core/src/main/scala/org/apache/spark/util/collection/PercentileHeap.scala:
##
@@ -20,97 +20,55 @@ package org.apache.spark.util.collection
import
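For context on the `PercentileHeap` under review: the classic way to track a running percentile is with two heaps, a max-heap for the values at or below the percentile point and a min-heap for the values above it, rebalanced on each insert so the percentile is readable in O(1). A minimal generic sketch of that technique (illustrative only, not Spark's `PercentileHeap` implementation):

```python
import heapq

class PercentileHeap:
    """Two-heap running-percentile tracker (generic sketch, not the
    Spark class under review). `_lo` is a max-heap (stored negated)
    holding the smallest ceil(percentage * n) values; the percentile
    is then its top element."""

    def __init__(self, percentage=0.5):
        self.percentage = percentage
        self._lo = []  # negated max-heap: values at or below the percentile
        self._hi = []  # min-heap: values above the percentile

    def insert(self, x):
        if self._lo and x > -self._lo[0]:
            heapq.heappush(self._hi, x)
        else:
            heapq.heappush(self._lo, -x)
        # Rebalance so _lo holds exactly `target` smallest values.
        n = len(self._lo) + len(self._hi)
        target = max(1, int(self.percentage * n + 0.5))
        while len(self._lo) > target:
            heapq.heappush(self._hi, -heapq.heappop(self._lo))
        while len(self._lo) < target:
            heapq.heappush(self._lo, -heapq.heappop(self._hi))

    def percentile(self):
        return -self._lo[0]  # O(1) read of the tracked percentile
```

Inserts are O(log n) and the percentile read is O(1), which is presumably the property the scheduler-time profiling discussed in this PR exercises.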
wangyum commented on PR #40140:
URL: https://github.com/apache/spark/pull/40140#issuecomment-1441269735
cc @gengliangwang
alkis commented on PR #40121:
URL: https://github.com/apache/spark/pull/40121#issuecomment-1441267376
> Can we minimize diffs to this file? A large fraction is whitespace
changes and due to the renames ... will take a look at the changes as well.
Can you treat it as a new
alkis commented on PR #40121:
URL: https://github.com/apache/spark/pull/40121#issuecomment-1441264762
> Also given this is an optimization change - include benchmark to quantify
the impact ?
I did benchmarking live in a cluster. Profiles before show ~1% of scheduler
time in
RunyaoChen opened a new pull request, #40140:
URL: https://github.com/apache/spark/pull/40140
### What changes were proposed in this pull request?
This PR fixes the internal error `Child is not Cast or ExpressionProxy of
Cast` for valid `CaseWhen` expr with `Cast`
alkis commented on code in PR #40121:
URL: https://github.com/apache/spark/pull/40121#discussion_r1115277253
##
core/src/main/scala/org/apache/spark/util/collection/PercentileHeap.scala:
##
@@ -20,97 +20,55 @@ package org.apache.spark.util.collection
import
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115273797
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
Even if I make `parquet-hadoop`
cloud-fan closed pull request #40073: [SPARK-42484] [SQL] UnsafeRowUtils better
error message
URL: https://github.com/apache/spark/pull/40073
cloud-fan commented on PR #40073:
URL: https://github.com/apache/spark/pull/40073#issuecomment-1441255439
thanks, merging to master/3.4 (error message improvement)
huaxingao opened a new pull request, #40139:
URL: https://github.com/apache/spark/pull/40139
### What changes were proposed in this pull request?
get ColStats in `DescribeColumnExec` when `isExtended` is true
### Why are the changes needed?
To make code cleaner
xinrong-meng commented on PR #40135:
URL: https://github.com/apache/spark/pull/40135#issuecomment-1441240940
Shall we add an example to **Does this PR introduce any user-facing
change?** in the PR description? Like
```py
>>> df3.show()
+---++--++
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115256951
##
connector/connect/server/pom.xml:
##
@@ -199,6 +199,11 @@
${tomcat.annotations.api.version}
provided
+
+ org.apache.parquet
+
LuciferYang commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115253586
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
Good question, when I add this
allisonport-db commented on code in PR #38823:
URL: https://github.com/apache/spark/pull/38823#discussion_r1115242394
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/Column.java:
##
@@ -82,6 +98,15 @@ static Column create(
@Nullable
ColumnDefaultValue
WeichenXu123 commented on code in PR #40097:
URL: https://github.com/apache/spark/pull/40097#discussion_r1115240143
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/ml/classification/Classifier.scala:
##
@@ -0,0 +1,187 @@
+/*
+ * Licensed to the Apache Software
sadikovi commented on PR #40134:
URL: https://github.com/apache/spark/pull/40134#issuecomment-1441208481
@dongjoon-hyun I have addressed the comment, could you review again please?
Thank you.
Also, do you know whom I can ping on this PR with regard to DB2 SQL
semantics?
sadikovi commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115227618
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/DB2IntegrationSuite.scala:
##
@@ -217,4 +217,26 @@ class DB2IntegrationSuite extends
sadikovi commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115227694
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/DB2IntegrationSuite.scala:
##
@@ -217,4 +217,26 @@ class DB2IntegrationSuite extends
dongjoon-hyun commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115217759
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/DB2IntegrationSuite.scala:
##
@@ -217,4 +217,26 @@ class DB2IntegrationSuite
dongjoon-hyun commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115217372
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/DB2Dialect.scala:
##
@@ -160,4 +160,8 @@ private object DB2Dialect extends JdbcDialect {
s"DROP
dongjoon-hyun commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115217088
##
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/DB2IntegrationSuite.scala:
##
@@ -217,4 +217,26 @@ class DB2IntegrationSuite
hvanhovell commented on code in PR #40133:
URL: https://github.com/apache/spark/pull/40133#discussion_r1115216626
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkConnectClient.scala:
##
@@ -189,9 +245,54 @@ object SparkConnectClient {
mridulm commented on PR #40064:
URL: https://github.com/apache/spark/pull/40064#issuecomment-1441190260
@Yikf Agree - we only specify two parts for the `JobID` - the `String
jtIdentifier` and `int id`.
We can persist those in the class - and make jobId a `transient lazy val`
which
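The `@transient lazy val` suggestion above is Scala; as a rough Python analogue (class and field names are hypothetical, not Spark's actual classes), the derived job id is dropped from the serialized state and rebuilt deterministically from the two persisted parts on first access:

```python
import pickle

class HadoopJobRef:
    """Python analogue of the suggested fix: persist only the two
    parts of the job id and rebuild the derived string lazily, like a
    Scala `@transient lazy val`, so deserialization yields the same id.
    (Names are illustrative, not Spark's actual classes.)"""

    def __init__(self, jt_identifier, job_num):
        self.jt_identifier = jt_identifier  # the String jtIdentifier part
        self.job_num = job_num              # the int id part
        self._job_id = None                 # derived; never serialized

    @property
    def job_id(self):
        if self._job_id is None:  # lazy: built on first access
            self._job_id = f"job_{self.jt_identifier}_{self.job_num:04d}"
        return self._job_id

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_job_id"] = None  # the 'transient' part: drop the derived field
        return state
```

Because the derived field is rebuilt from the persisted parts, a deserialized copy reports the same job id as the original, which addresses the "different `jobId` each time the class is deserialized" concern.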
hvanhovell commented on code in PR #40133:
URL: https://github.com/apache/spark/pull/40133#discussion_r1115216249
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkConnectClient.scala:
##
@@ -158,13 +214,14 @@ object SparkConnectClient {
hvanhovell commented on code in PR #40133:
URL: https://github.com/apache/spark/pull/40133#discussion_r1115215682
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkConnectClient.scala:
##
@@ -117,6 +126,53 @@ object SparkConnectClient {
ulysses-you opened a new pull request, #40138:
URL: https://github.com/apache/spark/pull/40138
### What changes were proposed in this pull request?
Use `DecimalAddNoOverflowCheck` instead of `Add` to create bound ordering
for window range frame
### Why are the
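As a semantic sketch of what the bound ordering for a RANGE frame computes (plain Python with `Decimal`, not Spark's implementation): for each row's order value v, the frame covers rows whose value falls in [v - preceding, v + following]; the PR only changes which add expression builds that v ± offset bound for decimal inputs.

```python
from decimal import Decimal

def range_frame(values, preceding, following):
    """Plain-Python illustration of RANGE window-frame semantics
    (a sketch, not Spark's implementation): for each row's order
    value v, the frame holds rows whose value lies in
    [v - preceding, v + following]."""
    frames = []
    for v in values:
        lower, upper = v - preceding, v + following  # the frame bounds
        frames.append([w for w in values if lower <= w <= upper])
    return frames

vals = [Decimal("1.0"), Decimal("2.5"), Decimal("3.0")]
frames = range_frame(vals, Decimal("1.0"), Decimal("0.5"))
```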
hvanhovell closed pull request #40129: [SPARK-42529][CONNECT] Support Cube and
Rollup in Scala client
URL: https://github.com/apache/spark/pull/40129
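The Cube and Rollup support merged above follows standard SQL grouping-set semantics; a hedged plain-Python sketch of which grouping sets each produces (semantics only, not the Scala client API):

```python
from itertools import combinations

def rollup_sets(cols):
    # rollup(a, b) aggregates by (a, b), then (a), then the grand total ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube_sets(cols):
    # cube(a, b) aggregates by every subset of the grouping columns
    return [s for r in range(len(cols), -1, -1) for s in combinations(cols, r)]
```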
hvanhovell commented on PR #40129:
URL: https://github.com/apache/spark/pull/40129#issuecomment-1441187112
merging
cloud-fan commented on PR #40137:
URL: https://github.com/apache/spark/pull/40137#issuecomment-1441186992
cc @peter-toth
cloud-fan opened a new pull request, #40137:
URL: https://github.com/apache/spark/pull/40137
### What changes were proposed in this pull request?
This is a follow-up of https://github.com/apache/spark/pull/37525 . When the
project list has aliases, we go to the
hvanhovell commented on code in PR #40136:
URL: https://github.com/apache/spark/pull/40136#discussion_r1115214435
##
connector/connect/client/jvm/pom.xml:
##
@@ -125,6 +125,11 @@
${mima.version}
test
+
Review Comment:
I have trouble understanding how
LuciferYang commented on PR #40136:
URL: https://github.com/apache/spark/pull/40136#issuecomment-1441185226
cc @hvanhovell
Yikf commented on PR #40064:
URL: https://github.com/apache/spark/pull/40064#issuecomment-1441179787
@mridulm Thanks for your review, this is a nice question for me; `JobId` may
be different each time the class is deserialized.
How about this idea that
sadikovi commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115208845
##
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala:
##
@@ -1028,6 +1028,19 @@ class JDBCSuite extends QueryTest with
SharedSparkSession {
mridulm commented on PR #40064:
URL: https://github.com/apache/spark/pull/40064#issuecomment-1441168996
I have not followed the changes in this part of the code too much in a while
- but this specific PR will result in a different `jobId` each time the class
is deserialized - I would
LuciferYang opened a new pull request, #40136:
URL: https://github.com/apache/spark/pull/40136
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
cloud-fan commented on code in PR #40121:
URL: https://github.com/apache/spark/pull/40121#discussion_r1115200882
##
core/src/main/scala/org/apache/spark/util/collection/PercentileHeap.scala:
##
@@ -20,97 +20,55 @@ package org.apache.spark.util.collection
import
cloud-fan commented on code in PR #40134:
URL: https://github.com/apache/spark/pull/40134#discussion_r1115199857
##
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala:
##
@@ -1028,6 +1028,19 @@ class JDBCSuite extends QueryTest with
SharedSparkSession {
zhengruifeng commented on PR #40135:
URL: https://github.com/apache/spark/pull/40135#issuecomment-1441159415
cc @HyukjinKwon @xinrong-meng
zhengruifeng opened a new pull request, #40135:
URL: https://github.com/apache/spark/pull/40135
### What changes were proposed in this pull request?
The existing implementation always converts inputs (either a column or a
column name) to columns; this causes an `AMBIGUOUS_REFERENCE` issue since there
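As a plain-Python illustration of what `dropDuplicates(subset)` computes (semantics only; the PySpark fix itself is about keeping subset entries as names instead of eagerly resolving them to Column objects):

```python
def drop_duplicates(rows, subset=None):
    """Keep the first row seen for each distinct value of the subset
    columns; with no subset, all columns form the key. Mirrors
    dropDuplicates(subset) semantics on a list of dicts; not the
    PySpark API or the fix in this PR."""
    seen, kept = set(), []
    for row in rows:
        key = tuple(row[k] for k in (subset or sorted(row)))
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept
```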
WeichenXu123 commented on PR #40097:
URL: https://github.com/apache/spark/pull/40097#issuecomment-1441155203
> This PR also copies the following test suites to spark-mllib-common:
> 1. org.apache.spark.ml.attribute.*
> 2. org.apache.spark.ml.linalg.* except:
WeichenXu123 commented on code in PR #40097:
URL: https://github.com/apache/spark/pull/40097#discussion_r1115193417
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/ml/Pipeline.scala:
##
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
WeichenXu123 commented on code in PR #40097:
URL: https://github.com/apache/spark/pull/40097#discussion_r1115192771
##
mllib-common/src/test/scala/org/apache/spark/ml/attribute/AttributeSuite.scala:
##
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
beliefer commented on code in PR #39954:
URL: https://github.com/apache/spark/pull/39954#discussion_r1115191715
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCScanBuilder.scala:
##
@@ -126,24 +126,23 @@ case class JDBCScanBuilder(
sadikovi commented on PR #40134:
URL: https://github.com/apache/spark/pull/40134#issuecomment-1441145746
cc @dongjoon-hyun @cloud-fan
wangyum commented on PR #40115:
URL: https://github.com/apache/spark/pull/40115#issuecomment-1441144948
@zml1206 Could you update the PR title to `[SPARK-42525][SQL] Collapse ...`?
wangyum commented on code in PR #40115:
URL: https://github.com/apache/spark/pull/40115#discussion_r1115183539
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala:
##
@@ -532,10 +532,15 @@ class DataFrameWindowFunctionsSuite extends QueryTest
amaliujia commented on code in PR #40129:
URL: https://github.com/apache/spark/pull/40129#discussion_r1115180270
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala:
##
@@ -37,16 +37,25 @@ import org.apache.spark.connect.proto
*/
Yikf commented on PR #40064:
URL: https://github.com/apache/spark/pull/40064#issuecomment-1441134396
kindly ping @cloud-fan , @boneanxs Any suggestions?
LuciferYang commented on PR #40120:
URL: https://github.com/apache/spark/pull/40120#issuecomment-1441131900
Thanks @hvanhovell
LuciferYang commented on code in PR #40120:
URL: https://github.com/apache/spark/pull/40120#discussion_r1115177855
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/functions.scala:
##
@@ -129,7 +132,7 @@ object functions {
case v: Array[Byte] =>
sadikovi opened a new pull request, #40134:
URL: https://github.com/apache/spark/pull/40134
### What changes were proposed in this pull request?
The PR fixes the DB2 LIMIT clause syntax. Although DB2 supports the LIMIT
keyword, it seems that this support varies across databases
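DB2's long-standing portable row-limiting form is `FETCH FIRST n ROWS ONLY` rather than `LIMIT n`. A minimal sketch of a dialect helper rendering it (illustrative only, not the actual `DB2Dialect` method in the PR):

```python
def db2_limit_clause(limit):
    """Render a DB2-friendly row limit. DB2's portable form is
    FETCH FIRST n ROWS ONLY rather than LIMIT n. (Illustrative
    helper only, not the actual DB2Dialect code.)"""
    return f"FETCH FIRST {limit} ROWS ONLY" if limit > 0 else ""
```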
hvanhovell commented on code in PR #40129:
URL: https://github.com/apache/spark/pull/40129#discussion_r1115163152
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala:
##
@@ -225,3 +234,28 @@ class RelationalGroupedDataset
hvanhovell commented on code in PR #40129:
URL: https://github.com/apache/spark/pull/40129#discussion_r1115162635
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala:
##
@@ -37,16 +37,25 @@ import org.apache.spark.connect.proto
dongjoon-hyun closed pull request #40127: [SPARK-42530][PYSPARK][DOCS] Remove
Hadoop 2 from PySpark installation guide
URL: https://github.com/apache/spark/pull/40127
dongjoon-hyun commented on PR #40127:
URL: https://github.com/apache/spark/pull/40127#issuecomment-1441097534
Thank you, @HyukjinKwon . Merged to master/3.4.
HyukjinKwon commented on PR #39995:
URL: https://github.com/apache/spark/pull/39995#issuecomment-1441085969
cc @ueshin
github-actions[bot] commented on PR #37634:
URL: https://github.com/apache/spark/pull/37634#issuecomment-1441040591
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
zhenlineo commented on PR #40133:
URL: https://github.com/apache/spark/pull/40133#issuecomment-1441018465
cc @grundprinzip
zhenlineo opened a new pull request, #40133:
URL: https://github.com/apache/spark/pull/40133
### What changes were proposed in this pull request?
Adding SSL encryption and access token support for Scala client
### Why are the changes needed?
To support basic client side
wangyum commented on code in PR #40115:
URL: https://github.com/apache/spark/pull/40115#discussion_r1115112991
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -3592,6 +3592,34 @@ class DataFrameSuite extends QueryTest
val df =
wangyum commented on code in PR #40115:
URL: https://github.com/apache/spark/pull/40115#discussion_r1115111006
##
sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala:
##
@@ -3592,6 +3592,34 @@ class DataFrameSuite extends QueryTest
val df =
dongjoon-hyun closed pull request #40132: [SPARK-42532][K8S][DOCS] Update
YuniKorn docs with v1.2
URL: https://github.com/apache/spark/pull/40132
dongjoon-hyun commented on PR #40132:
URL: https://github.com/apache/spark/pull/40132#issuecomment-1440969626
Thank you so much, @viirya . Merged to master/3.4.