HeartSaVioR commented on PR #40892:
URL: https://github.com/apache/spark/pull/40892#issuecomment-1550888639
I'll just help merge this one, as it has been open for multiple weeks and
we don't want to require this PR to be rebased again.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
HeartSaVioR closed pull request #40892: [SPARK-43128][CONNECT][SS] Make
`recentProgress` and `lastProgress` return `StreamingQueryProgress` consistent
with the native Scala Api
URL: https://github.com/apache/spark/pull/40892
HeartSaVioR commented on PR #40892:
URL: https://github.com/apache/spark/pull/40892#issuecomment-1550890958
Thanks @LuciferYang, I merged this to master.
olaky commented on PR #39608:
URL: https://github.com/apache/spark/pull/39608#issuecomment-1550895739
@dongjoon-hyun I created the ticket and made the requested changes
LuciferYang commented on PR #40892:
URL: https://github.com/apache/spark/pull/40892#issuecomment-1550894768
Thanks @HeartSaVioR @HyukjinKwon @rangadi ~
I have already tested my new permissions in another PR :)
LuciferYang commented on PR #40654:
URL: https://github.com/apache/spark/pull/40654#issuecomment-1550897944
hmm... `SparkConnectPlanner` has had another conflict... I will fix it and
merge this PR as soon as possible
wangyum opened a new pull request, #41195:
URL: https://github.com/apache/spark/pull/41195
### What changes were proposed in this pull request?
This PR adds `log4j-1.2-api` and `log4j-slf4j2-impl` to the classpath when the
`hadoop-provided` profile is activated.
### Why are the changes needed?
MaxGekk commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1196089093
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3187,24 @@ class AstBuilder extends
panbingkun commented on PR #41184:
URL: https://github.com/apache/spark/pull/41184#issuecomment-1550941249
By debugging the scalastyle plugin, I found that the logic of the
ImportOrderChecker rule is as follows. For example, in the xxx file, the
following import:
panbingkun commented on PR #41184:
URL: https://github.com/apache/spark/pull/41184#issuecomment-1550945373
friendly ping @HyukjinKwon @dongjoon-hyun @srowen @LuciferYang
panbingkun commented on code in PR #41169:
URL: https://github.com/apache/spark/pull/41169#discussion_r1196113962
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:
##
@@ -2134,30 +2134,145 @@ case class OctetLength(child:
turboFei commented on PR #41181:
URL: https://github.com/apache/spark/pull/41181#issuecomment-1550791604
thanks for comments, I will check it
advancedxy commented on PR #41181:
URL: https://github.com/apache/spark/pull/41181#issuecomment-1550797386
> Thank you for making a PR, @turboFei .
>
> However, this PR might cause an outage because the number of ConfigMaps is
controlled by quota.
>
> ```
> $ kubectl describe
HyukjinKwon closed pull request #41026: [SPARK-43132] [SS] [CONNECT] Python
Client DataStreamWriter foreach() API
URL: https://github.com/apache/spark/pull/41026
MaxGekk commented on PR #40970:
URL: https://github.com/apache/spark/pull/40970#issuecomment-1550789731
> This change adds support for optional IV and AAD fields to aes_encrypt and
aes_decrypt
@sweisdb Looking at the constructors of the `AesEncrypt` and `AesDecrypt`
expressions,
MaxGekk commented on PR #41155:
URL: https://github.com/apache/spark/pull/41155#issuecomment-1550794232
@johanl-db Are you still working on this PR? Could you please address the
comments above?
MaxGekk commented on code in PR #41020:
URL: https://github.com/apache/spark/pull/41020#discussion_r1195983690
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryCompilationErrors.scala:
##
@@ -2115,7 +2115,7 @@ private[sql] object QueryCompilationErrors extends
SandishKumarHN commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1196002786
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/functions.scala:
##
@@ -148,8 +212,38 @@ object functions {
messageName: String,
gatorsmile commented on code in PR #40658:
URL: https://github.com/apache/spark/pull/40658#discussion_r1196006534
##
python/pyspark/pandas/frame.py:
##
@@ -8844,91 +8836,6 @@ def combine_first(self, other: "DataFrame") ->
"DataFrame":
)
return
Hisoka-X commented on code in PR #41187:
URL: https://github.com/apache/spark/pull/41187#discussion_r1196009514
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala:
##
@@ -371,7 +371,8 @@ object CreateStruct {
// We should
Hisoka-X commented on code in PR #41187:
URL: https://github.com/apache/spark/pull/41187#discussion_r1196009862
##
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala:
##
@@ -280,6 +280,20 @@ class SQLQuerySuite extends QueryTest with
SharedSparkSession with
LuciferYang commented on PR #40925:
URL: https://github.com/apache/spark/pull/40925#issuecomment-1550820753
```
Error: Exception in thread "main" java.lang.IllegalArgumentException:
Unsupported class file major version 61
    at org.objectweb.asm.ClassReader.<init>(ClassReader.java:195)
```
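For context on the error above: since Java 1.2, a class file's major version is the Java release number plus 44, so major version 61 corresponds to Java 17 bytecode, which an older ASM release cannot parse. A minimal sketch of the mapping (the helper name is mine, not Spark's or ASM's):

```python
def java_release(class_file_major: int) -> int:
    """Map a class-file major version to its Java release number.

    Since Java 1.2 the class-file major version is the Java release
    plus 44 (e.g. 52 -> Java 8, 61 -> Java 17).
    """
    return class_file_major - 44

# "Unsupported class file major version 61" therefore means the bytecode
# was compiled for Java 17, newer than what the bundled ASM understands.
print(java_release(61))  # 17
print(java_release(52))  # 8
```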
HyukjinKwon commented on PR #41026:
URL: https://github.com/apache/spark/pull/41026#issuecomment-1550823334
Merged to master.
LuciferYang commented on PR #41194:
URL: https://github.com/apache/spark/pull/41194#issuecomment-1550822915
late LGTM
SandishKumarHN commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1196003247
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/functions.scala:
##
@@ -148,8 +212,38 @@ object functions {
messageName: String,
MaxGekk commented on code in PR #41169:
URL: https://github.com/apache/spark/pull/41169#discussion_r1196017647
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala:
##
@@ -2134,30 +2134,145 @@ case class OctetLength(child: Expression)
HyukjinKwon commented on PR #41013:
URL: https://github.com/apache/spark/pull/41013#issuecomment-1550847776
Merged to master.
HyukjinKwon closed pull request #41013: [SPARK-43509][CONNECT] Support Creating
multiple Spark Connect sessions
URL: https://github.com/apache/spark/pull/41013
dongjoon-hyun commented on PR #41201:
URL: https://github.com/apache/spark/pull/41201#issuecomment-1551615074
cc @pralabhkumar and @holdenk from #37417
dtenedor commented on code in PR #40996:
URL: https://github.com/apache/spark/pull/40996#discussion_r1196822211
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/SupportsCustomSchemaWrite.java:
##
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software
zhenlineo commented on code in PR #41129:
URL: https://github.com/apache/spark/pull/41129#discussion_r1196867820
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -216,6 +216,7 @@ message WriteStreamOperationStart {
message
ericm-db opened a new pull request, #41205:
URL: https://github.com/apache/spark/pull/41205
### What changes were proposed in this pull request?
We are migrating to a new error framework in order to surface errors in a
friendlier way to customers. This PR defines a new error
dtenedor commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1197029651
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3189,37 @@ class AstBuilder extends
dtenedor commented on code in PR #41062:
URL: https://github.com/apache/spark/pull/41062#discussion_r1196820360
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/SupportsCustomSchemaWrite.java:
##
@@ -27,12 +28,12 @@
* @since 3.4.1
*/
@Evolving
-public
dtenedor commented on code in PR #41062:
URL: https://github.com/apache/spark/pull/41062#discussion_r1196819921
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/SupportsCustomSchemaWrite.java:
##
@@ -27,12 +28,12 @@
* @since 3.4.1
*/
@Evolving
-public
rangadi commented on code in PR #41129:
URL: https://github.com/apache/spark/pull/41129#discussion_r1196819312
##
connector/connect/common/src/main/scala/org/apache/spark/sql/connect/common/foreachWriterPacket.scala:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software
srowen commented on PR #41198:
URL: https://github.com/apache/spark/pull/41198#issuecomment-1551953229
OK yeah it was fine, false alarm. Oops.
dongjoon-hyun commented on PR #41198:
URL: https://github.com/apache/spark/pull/41198#issuecomment-1551729535
No worries, @srowen ~ I'll monitor it together.
dongjoon-hyun commented on code in PR #41202:
URL: https://github.com/apache/spark/pull/41202#discussion_r1196850103
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -372,7 +372,8 @@ case class ListQuery(
// ListQuery can't be
pan3793 commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1196873001
##
core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:
##
@@ -425,7 +428,7 @@ private[spark] class SparkSubmit extends Logging {
case
xinrong-meng commented on code in PR #41147:
URL: https://github.com/apache/spark/pull/41147#discussion_r1196924307
##
python/pyspark/sql/pandas/serializers.py:
##
@@ -317,66 +320,6 @@ def arrow_to_pandas(self, arrow_column):
s =
dtenedor commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1196805611
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3187,24 @@ class AstBuilder extends
dtenedor commented on PR #41203:
URL: https://github.com/apache/spark/pull/41203#issuecomment-1551745541
@RyanBerti thanks for the update!
dongjoon-hyun commented on code in PR #41202:
URL: https://github.com/apache/spark/pull/41202#discussion_r1196870894
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -372,7 +372,8 @@ case class ListQuery(
// ListQuery can't be
MaxGekk commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1196979589
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3189,37 @@ class AstBuilder extends
RyanBerti commented on PR #41203:
URL: https://github.com/apache/spark/pull/41203#issuecomment-1551953337
@dtenedor I just pushed a commit that tries to generalize the foldable
check, as I'm seeing duplicate code in the datasketches functions as well as
others (see
dtenedor commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1196991836
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3189,37 @@ class AstBuilder extends
dtenedor commented on code in PR #41203:
URL: https://github.com/apache/spark/pull/41203#discussion_r1196808900
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/datasketchesAggregates.scala:
##
@@ -265,6 +288,26 @@ case class HllUnionAgg(
holdenk commented on PR #41201:
URL: https://github.com/apache/spark/pull/41201#issuecomment-1551827663
+1, looks reasonable modulo the existing suggestions (clean up the logging +
tighten the test). Thanks for making this PR :)
dtenedor commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1196972501
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3187,24 @@ class AstBuilder extends
jchen5 opened a new pull request, #41202:
URL: https://github.com/apache/spark/pull/41202
### What changes were proposed in this pull request?
In case the assert for the call to ListQuery.nullable is hit, mention in the
assert error message the conf flag that can be used to disable the
jchen5 commented on code in PR #41094:
URL: https://github.com/apache/spark/pull/41094#discussion_r1196750077
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -4199,6 +4199,16 @@ object SQLConf {
.booleanConf
RyanBerti commented on PR #41203:
URL: https://github.com/apache/spark/pull/41203#issuecomment-1551726374
@bersprockets here are the changes to handle non-foldable input args, based
on our conversation in https://github.com/apache/spark/pull/40615. cc @dtenedor
@mkaravel
RyanBerti commented on code in PR #41203:
URL: https://github.com/apache/spark/pull/41203#discussion_r1196857575
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/datasketchesAggregates.scala:
##
@@ -265,6 +288,26 @@ case class HllUnionAgg(
jchen5 commented on PR #41202:
URL: https://github.com/apache/spark/pull/41202#issuecomment-1551806149
Thanks for comments, updated
gengliangwang commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1196885982
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3187,24 @@ class AstBuilder extends
LuciferYang commented on PR #41198:
URL: https://github.com/apache/spark/pull/41198#issuecomment-1551749676
If there are any issues, please revert and I will resubmit one :)
MaxGekk opened a new pull request, #41204:
URL: https://github.com/apache/spark/pull/41204
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
sweisdb commented on PR #40970:
URL: https://github.com/apache/spark/pull/40970#issuecomment-1551795385
@MaxGekk I am planning to do the user-facing SQL expression changes in a
follow-up to keep each change simpler. I want to land this first.
dongjoon-hyun commented on code in PR #41202:
URL: https://github.com/apache/spark/pull/41202#discussion_r1196848013
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala:
##
@@ -372,7 +372,8 @@ case class ListQuery(
// ListQuery can't be
dtenedor commented on PR #41203:
URL: https://github.com/apache/spark/pull/41203#issuecomment-1551964588
The new trait looks good. In the future we can think about reusing it.
dongjoon-hyun closed pull request #41122: [SPARK-43436][BUILD] Upgrade
rocksdbjni to 8.1.1.1
URL: https://github.com/apache/spark/pull/41122
dongjoon-hyun commented on PR #41122:
URL: https://github.com/apache/spark/pull/41122#issuecomment-1552036061
Merged to master for Apache Spark 3.5.0.
otterc commented on code in PR #41071:
URL: https://github.com/apache/spark/pull/41071#discussion_r1197131313
##
common/network-common/src/main/java/org/apache/spark/network/server/TransportChannelHandler.java:
##
@@ -163,14 +163,11 @@ public void
HyukjinKwon opened a new pull request, #41206:
URL: https://github.com/apache/spark/pull/41206
### What changes were proposed in this pull request?
This PR is a followup of https://github.com/apache/spark/pull/41013 that
sets `SPARK_CONNECT_MODE_ENABLED` when running PySpark shell
HyukjinKwon commented on PR #41013:
URL: https://github.com/apache/spark/pull/41013#issuecomment-1552233690
https://github.com/apache/spark/pull/41206
rangadi commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1197260139
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/functions.scala:
##
@@ -148,8 +212,38 @@ object functions {
messageName: String,
grundprinzip commented on PR #41206:
URL: https://github.com/apache/spark/pull/41206#issuecomment-1552363238
Thanks!
robreeves commented on PR #40812:
URL: https://github.com/apache/spark/pull/40812#issuecomment-1552124399
@cloud-fan Cloning the cachedPlan is also problematic because it contains
state (accumulators in private fields) when it includes a `CollectMetricsExec`
operator.
warrenzhu25 commented on PR #41083:
URL: https://github.com/apache/spark/pull/41083#issuecomment-1552141121
> These look like things which can be handled by appropriate configuration
tuning? The PR itself requires a bit more work if that is not a feasible
direction (efficient cleanup,
HyukjinKwon commented on code in PR #41206:
URL: https://github.com/apache/spark/pull/41206#discussion_r1197200251
##
python/pyspark/shell.py:
##
@@ -100,10 +100,9 @@
% (platform.python_version(), platform.python_build()[0],
platform.python_build()[1])
)
if is_remote():
Kimahriman commented on PR #41195:
URL: https://github.com/apache/spark/pull/41195#issuecomment-1552245794
Maybe for a similar reason I made https://github.com/apache/spark/pull/37694 a
while ago? Basically Spark's logging setup assumes Log4j 2, but with
`hadoop-provided` you get 1.x from Hadoop. So
HyukjinKwon commented on code in PR #41206:
URL: https://github.com/apache/spark/pull/41206#discussion_r1197216790
##
python/pyspark/shell.py:
##
@@ -100,10 +100,11 @@
% (platform.python_version(), platform.python_build()[0],
platform.python_build()[1])
)
if
wzhfy commented on PR #41162:
URL: https://github.com/apache/spark/pull/41162#issuecomment-1552291841
I also think the different results between `0 in ('00')` and `0 = '00'` are
confusing, and it seems Hive has already fixed this problem.
Could you also take a look? @cloud-fan @MaxGekk
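For context, the inconsistency comes down to type coercion: a binary comparison such as `0 = '00'` widens both sides to a numeric type, while `In` historically promoted the operands to a common string type. A plain-Python sketch of the two strategies (the helper names are mine; this is an illustration, not Spark's implementation):

```python
def equals_with_numeric_coercion(left, right):
    # BinaryComparison-style coercion: widen both operands to a numeric
    # type before comparing, so 0 = '00' evaluates to true (0.0 == 0.0).
    return float(left) == float(right)

def in_with_string_coercion(value, candidates):
    # Older In-style coercion: promote everything to string, so
    # 0 IN ('00') compares '0' with '00' and evaluates to false.
    return str(value) in [str(c) for c in candidates]

print(equals_with_numeric_coercion(0, '00'))  # True
print(in_with_string_coercion(0, ['00']))     # False
```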
itholic opened a new pull request, #41208:
URL: https://github.com/apache/spark/pull/41208
### What changes were proposed in this pull request?
This PR proposes to fix [Supported pandas
gerashegalov commented on code in PR #41203:
URL: https://github.com/apache/spark/pull/41203#discussion_r1197205234
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ExpectsInputTypes.scala:
##
@@ -74,3 +74,44 @@ object ExpectsInputTypes extends
turboFei commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1197261326
##
core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:
##
@@ -425,7 +428,7 @@ private[spark] class SparkSubmit extends Logging {
case
LuciferYang commented on PR #41209:
URL: https://github.com/apache/spark/pull/41209#issuecomment-1552334558
cc @attilapiros @viirya @sunchao @pan3793 FYI
panbingkun commented on PR #41209:
URL: https://github.com/apache/spark/pull/41209#issuecomment-1552341297
https://github.com/apache/spark/assets/15246973/6da74b5d-4e71-440e-bb47-d17ba7f7de1e
rangadi commented on code in PR #41129:
URL: https://github.com/apache/spark/pull/41129#discussion_r1197276153
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -2386,10 +2393,26 @@ class SparkConnectPlanner(val
wzhfy commented on code in PR #41162:
URL: https://github.com/apache/spark/pull/41162#discussion_r1197253055
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala:
##
@@ -509,16 +509,25 @@ case class In(value: Expression, list:
liukuijian8040 commented on code in PR #41162:
URL: https://github.com/apache/spark/pull/41162#discussion_r1197258991
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala:
##
@@ -509,16 +509,25 @@ case class In(value: Expression, list:
ueshin commented on PR #41013:
URL: https://github.com/apache/spark/pull/41013#issuecomment-1552174756
Hi, `./bin/pyspark --remote local` shows the following error after this
commit.
```py
% ./bin/pyspark --remote local
...
Traceback (most recent call last):
File
HyukjinKwon commented on PR #41013:
URL: https://github.com/apache/spark/pull/41013#issuecomment-1552230867
creating a followup now
itholic opened a new pull request, #41207:
URL: https://github.com/apache/spark/pull/41207
### What changes were proposed in this pull request?
This is a follow-up for https://github.com/apache/spark/pull/40459 to fix the
incorrect information and elaborate on the changes in more detail.
LuciferYang commented on PR #40654:
URL: https://github.com/apache/spark/pull/40654#issuecomment-1552308089
Merged to master. Thanks @hvanhovell @HyukjinKwon @rangadi
advancedxy commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1197287073
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/CatalystDataToProtobuf.scala:
##
@@ -26,14 +26,14 @@ import
panbingkun opened a new pull request, #41209:
URL: https://github.com/apache/spark/pull/41209
### What changes were proposed in this pull request?
This PR aims to remove the workaround for HADOOP-16255.
### Why are the changes needed?
- Because HADOOP-16255 has been fixed after hadoop
pralabhkumar commented on PR #41201:
URL: https://github.com/apache/spark/pull/41201#issuecomment-1552336382
LGTM.
HyukjinKwon closed pull request #41208: [3.4][SPARK-43547][PS][DOCS] Update
"Supported Pandas API" page to point out the proper pandas docs
URL: https://github.com/apache/spark/pull/41208
HyukjinKwon commented on PR #41208:
URL: https://github.com/apache/spark/pull/41208#issuecomment-1552340648
Merged to branch-3.4.
warrenzhu25 commented on code in PR #41071:
URL: https://github.com/apache/spark/pull/41071#discussion_r1197093878
##
common/network-common/src/main/java/org/apache/spark/network/server/TransportChannelHandler.java:
##
@@ -163,14 +163,11 @@ public void
turboFei commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1197147035
##
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala:
##
@@ -1618,6 +1618,24 @@ class SparkSubmitSuite
conf.get(k) should be (v)
}
}
+
turboFei commented on code in PR #41201:
URL: https://github.com/apache/spark/pull/41201#discussion_r1197146855
##
core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala:
##
@@ -425,7 +428,7 @@ private[spark] class SparkSubmit extends Logging {
case