grundprinzip commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1138163227
##
python/pyspark/sql/connect/client.py:
##
@@ -122,6 +122,8 @@ class ChannelBuilder:
PARAM_TOKEN = "token"
PARAM_USER_ID = "user_id"
navinvishy commented on PR #38947:
URL: https://github.com/apache/spark/pull/38947#issuecomment-1471352117
> @navinvishy would you mind addressing wenchen's comments? we can merge it
then.
I've addressed them. Thanks for checking, @zhengruifeng !
--
This is an automated message
navinvishy commented on code in PR #38947:
URL: https://github.com/apache/spark/pull/38947#discussion_r1138152130
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -1399,6 +1399,151 @@ case class ArrayContains(left:
navinvishy commented on code in PR #38947:
URL: https://github.com/apache/spark/pull/38947#discussion_r1138150738
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -1399,6 +1399,151 @@ case class ArrayContains(left:
zhouyejoe commented on PR #40412:
URL: https://github.com/apache/spark/pull/40412#issuecomment-1471342162
Thanks for creating the PR. Will review ASAP @Stove-hust
dongjoon-hyun closed pull request #40453: [SPARK-42820][BUILD] Update ORC to
1.8.3
URL: https://github.com/apache/spark/pull/40453
dongjoon-hyun commented on PR #40453:
URL: https://github.com/apache/spark/pull/40453#issuecomment-1471315962
Let me merge this~ Merged to master/3.4.
dongjoon-hyun commented on PR #40453:
URL: https://github.com/apache/spark/pull/40453#issuecomment-1471314521
It seems that there is some GitHub Actions setting issue on William's side.
Actually, I was the release manager of Apache ORC 1.8.3 and tested this here
in my repo.
ulysses-you commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138120937
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -864,6 +864,14 @@ object SQLConf {
.checkValue(_ >= 0, "The maximum must
zhengruifeng commented on PR #40432:
URL: https://github.com/apache/spark/pull/40432#issuecomment-1471307277
@WeichenXu123 not ready.
`sql slow` failed with message related to `mllib-common`:
```
[error]
ulysses-you commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138102904
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -128,10 +131,23 @@ class EquivalentExpressions {
//
WeichenXu123 commented on PR #40432:
URL: https://github.com/apache/spark/pull/40432#issuecomment-1471290106
Is it ready to merge?
shuwang21 commented on PR #40448:
URL: https://github.com/apache/spark/pull/40448#issuecomment-1471288187
LGTM. thanks!
viirya commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138097640
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -128,10 +131,23 @@ class EquivalentExpressions {
// There
viirya commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138095296
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -864,6 +864,14 @@ object SQLConf {
.checkValue(_ >= 0, "The maximum must not be
LuciferYang commented on PR #40452:
URL: https://github.com/apache/spark/pull/40452#issuecomment-1471278961
Thanks @dongjoon-hyun @yaooqinn
dongjoon-hyun closed pull request #40452: [MINOR] Add comments of `xercesImpl`
upgrade precautions in `pom.xml`
URL: https://github.com/apache/spark/pull/40452
LuciferYang commented on code in PR #40452:
URL: https://github.com/apache/spark/pull/40452#discussion_r1138092261
##
pom.xml:
##
@@ -1426,6 +1426,7 @@
test
+
Review Comment:
done
LuciferYang commented on PR #40389:
URL: https://github.com/apache/spark/pull/40389#issuecomment-1471275097
Thanks @HyukjinKwon and @Hisoka-X
yaooqinn commented on PR #40453:
URL: https://github.com/apache/spark/pull/40453#issuecomment-1471274601
LGTM, thanks @williamhyun @dongjoon-hyun
HyukjinKwon closed pull request #40389: [SPARK-42767][CONNECT][TESTS] Add a
precondition to start connect server fallback with `in-memory` and auto ignored
some tests strongly depend on hive
URL: https://github.com/apache/spark/pull/40389
HyukjinKwon commented on PR #40389:
URL: https://github.com/apache/spark/pull/40389#issuecomment-1471270622
Merged to master and branch-3.4.
dongjoon-hyun commented on PR #40448:
URL: https://github.com/apache/spark/pull/40448#issuecomment-1471270199
cc @mridulm
HyukjinKwon commented on PR #39239:
URL: https://github.com/apache/spark/pull/39239#issuecomment-1471268820
Yup, I meant that most cases work except a few that can happen
because of timezone. We're on the same page.
dongjoon-hyun commented on code in PR #40448:
URL: https://github.com/apache/spark/pull/40448#discussion_r1138090342
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:
##
@@ -498,7 +498,10 @@ private[spark] class ApplicationMaster(
dongjoon-hyun commented on code in PR #40452:
URL: https://github.com/apache/spark/pull/40452#discussion_r1138089863
##
pom.xml:
##
@@ -1426,6 +1426,7 @@
test
+
Review Comment:
Thank you, @LuciferYang . Could you split this into two lines?
zhengruifeng commented on PR #40432:
URL: https://github.com/apache/spark/pull/40432#issuecomment-1471248695
`sql - slow` failed, not sure whether it is related, let me investigate it
first
ulysses-you commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138066754
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -130,7 +133,19 @@ class EquivalentExpressions {
//
cloud-fan commented on code in PR #40116:
URL: https://github.com/apache/spark/pull/40116#discussion_r1138054456
##
sql/core/src/main/scala/org/apache/spark/sql/SQLImplicits.scala:
##
@@ -40,12 +40,15 @@ abstract class SQLImplicits extends LowPrioritySQLImplicits
{
*/
cloud-fan commented on PR #40116:
URL: https://github.com/apache/spark/pull/40116#issuecomment-1471235372
The single quote indicates that the expression is unresolved; I think it
doesn't matter here.
zhengruifeng commented on PR #40451:
URL: https://github.com/apache/spark/pull/40451#issuecomment-1471228184
merged to master/branch-3.4
zhengruifeng closed pull request #40451:
[SPARK-42818][CONNECT][PYTHON][FOLLOWUP] Add versionchanged
URL: https://github.com/apache/spark/pull/40451
beliefer commented on PR #40396:
URL: https://github.com/apache/spark/pull/40396#issuecomment-1471226525
@dongjoon-hyun @cloud-fan @huaxingao @sadikovi Thank you.
cloud-fan commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138040356
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -130,7 +133,19 @@ class EquivalentExpressions {
//
panbingkun opened a new pull request, #40454:
URL: https://github.com/apache/spark/pull/40454
### What changes were proposed in this pull request?
This PR aims to remove unused parameters in the PartitionedFileUtil.splitFiles
methods.
### Why are the changes needed?
###
zhengruifeng commented on PR #38947:
URL: https://github.com/apache/spark/pull/38947#issuecomment-1471205562
@navinvishy would you mind addressing wenchen's comments? we can merge it
then.
zhengruifeng commented on PR #40450:
URL: https://github.com/apache/spark/pull/40450#issuecomment-1471199868
Late LGTM
williamhyun opened a new pull request, #40453:
URL: https://github.com/apache/spark/pull/40453
### What changes were proposed in this pull request?
This PR aims to update ORC to 1.8.3.
### Why are the changes needed?
This will bring the following bug fixes.
-
ulysses-you commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138020573
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -130,7 +133,19 @@ class EquivalentExpressions {
//
zhengruifeng commented on code in PR #40432:
URL: https://github.com/apache/spark/pull/40432#discussion_r1138017400
##
dev/sparktestsupport/modules.py:
##
@@ -781,6 +740,57 @@ def __hash__(self):
],
)
+
+pyspark_connect = Module(
+name="pyspark-connect",
+
cloud-fan commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138016464
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -130,7 +133,19 @@ class EquivalentExpressions {
//
zhengruifeng commented on code in PR #40432:
URL: https://github.com/apache/spark/pull/40432#discussion_r1138015354
##
python/pyspark/ml/connect/functions.py:
##
@@ -0,0 +1,76 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
LuciferYang commented on PR #40431:
URL: https://github.com/apache/spark/pull/40431#issuecomment-1471187025
https://github.com/apache/spark/pull/40452/files : I added a comment in
`pom.xml` to prevent us from forgetting this
zhengruifeng commented on code in PR #40432:
URL: https://github.com/apache/spark/pull/40432#discussion_r1138014700
##
dev/sparktestsupport/modules.py:
##
@@ -655,6 +655,7 @@ def __hash__(self):
"pyspark.ml.tests.test_wrapper",
LuciferYang opened a new pull request, #40452:
URL: https://github.com/apache/spark/pull/40452
### What changes were proposed in this pull request?
This PR just adds comments about `xercesImpl` upgrade precautions in `pom.xml`.
### Why are the changes needed?
Add comments to remind
cloud-fan commented on code in PR #40446:
URL: https://github.com/apache/spark/pull/40446#discussion_r1138011206
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -864,6 +864,15 @@ object SQLConf {
.checkValue(_ >= 0, "The maximum must not
amaliujia commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1138008293
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##
@@ -47,6 +47,13 @@ object Connect {
.bytesConf(ByteUnit.MiB)
amaliujia commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1138007108
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##
@@ -47,6 +47,13 @@ object Connect {
.bytesConf(ByteUnit.MiB)
ueshin opened a new pull request, #40451:
URL: https://github.com/apache/spark/pull/40451
### What changes were proposed in this pull request?
Follow-up of #40450.
Adds `versionchanged` to the docstring.
### Why are the changes needed?
The `versionchanged` is
HyukjinKwon closed pull request #40450: [SPARK-42818][CONNECT][PYTHON]
Implement DataFrameReader/Writer.jdbc
URL: https://github.com/apache/spark/pull/40450
HyukjinKwon commented on PR #40450:
URL: https://github.com/apache/spark/pull/40450#issuecomment-1471147451
Merged to master and branch-3.4.
ulysses-you commented on PR #40446:
URL: https://github.com/apache/spark/pull/40446#issuecomment-1471123749
cc @viirya @cloud-fan thank you
cloud-fan closed pull request #40396: [SPARK-42772][SQL] Change the default
value of JDBC options about push down to true
URL: https://github.com/apache/spark/pull/40396
cloud-fan commented on PR #40396:
URL: https://github.com/apache/spark/pull/40396#issuecomment-1471116456
thanks, merging to master!
dongjoon-hyun commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1137915076
##
python/pyspark/sql/connect/client.py:
##
@@ -122,6 +122,8 @@ class ChannelBuilder:
PARAM_TOKEN = "token"
PARAM_USER_ID = "user_id"
dongjoon-hyun commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1137914674
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##
@@ -47,6 +47,13 @@ object Connect {
github-actions[bot] commented on PR #37725:
URL: https://github.com/apache/spark/pull/37725#issuecomment-1471019944
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
HyukjinKwon commented on PR #40448:
URL: https://github.com/apache/spark/pull/40448#issuecomment-1471014495
AppVeyor failure (`continuous-integration/appveyor/pr`) should be fine to
ignore for now.
dongjoon-hyun commented on code in PR #40444:
URL: https://github.com/apache/spark/pull/40444#discussion_r1137906960
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala:
##
@@ -95,8 +95,8 @@ private[k8s] class
HyukjinKwon commented on code in PR #40432:
URL: https://github.com/apache/spark/pull/40432#discussion_r1137906670
##
python/pyspark/ml/tests/connect/test_connect_function.py:
##
@@ -0,0 +1,113 @@
+#
Review Comment:
I think you can remove this test - I believe the doctests
HyukjinKwon commented on code in PR #40432:
URL: https://github.com/apache/spark/pull/40432#discussion_r1137906453
##
dev/sparktestsupport/modules.py:
##
@@ -655,6 +655,7 @@ def __hash__(self):
"pyspark.ml.tests.test_wrapper",
HyukjinKwon commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1137904888
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##
@@ -47,6 +47,13 @@ object Connect {
HyukjinKwon closed pull request #40442: [SPARK-42809][BUILD] Upgrade
scala-maven-plugin from 4.8.0 to 4.8.1
URL: https://github.com/apache/spark/pull/40442
HyukjinKwon commented on PR #40442:
URL: https://github.com/apache/spark/pull/40442#issuecomment-1471000337
Merged to master.
dtenedor commented on PR #40449:
URL: https://github.com/apache/spark/pull/40449#issuecomment-1470971482
@gengliangwang alright I made this change, please look again when you are
ready.
ueshin opened a new pull request, #40450:
URL: https://github.com/apache/spark/pull/40450
### What changes were proposed in this pull request?
Implements `DataFrameReader/Writer.jdbc`.
### Why are the changes needed?
Missing API.
### Does this PR introduce _any_
gengliangwang commented on PR #40449:
URL: https://github.com/apache/spark/pull/40449#issuecomment-1470835260
> I will put the analyzer results in separate files.
Sounds great! Thanks for the work!
dtenedor commented on PR #40449:
URL: https://github.com/apache/spark/pull/40449#issuecomment-1470824639
@gengliangwang from past experience we will want to keep the query plans
separate from the SQL results, otherwise the SQL results become hard to read. I
will put the analyzer results in
dtenedor commented on PR #40449:
URL: https://github.com/apache/spark/pull/40449#issuecomment-1470823193
@gengliangwang Sure, I was thinking about this too. We can reuse the same
input SQL query files if we want, and just generate and test against different
analyzer test output files. Let
rithwik-db closed pull request #40423: [SPARK-41775][PYTHON][FOLLOW-UP] Torch
distributor multiple gpus per task
URL: https://github.com/apache/spark/pull/40423
rithwik-db commented on PR #40423:
URL: https://github.com/apache/spark/pull/40423#issuecomment-1470815851
This ticket will be closed for now; related changes may be in a V2 of the
TorchDistributor
steveloughran commented on PR #39124:
URL: https://github.com/apache/spark/pull/39124#issuecomment-1470813750
got a new RC up to play with...hopefully RC3 will ship. main changes are
fixes to some HDFS cases which can trigger NPEs
bjornjorgensen commented on PR #40431:
URL: https://github.com/apache/spark/pull/40431#issuecomment-1470803805
CC @panbingkun
so you too are aware of this and hopefully don't make the same mistake I did.
gengliangwang commented on PR #40449:
URL: https://github.com/apache/spark/pull/40449#issuecomment-1470801546
@dtenedor Since we already have `SQLQueryTestSuite` which has good basic
Spark SQL features coverage, shall we combine both? E.g. let
`SQLQueryTestSuite` show analyzed
dtenedor commented on PR #40449:
URL: https://github.com/apache/spark/pull/40449#issuecomment-1470712114
Hi @gengliangwang this should be ready for a first look!
MaxGekk commented on PR #39239:
URL: https://github.com/apache/spark/pull/39239#issuecomment-1470569694
> So the problem here would be implementation detail.
@HyukjinKwon I think this is not an implementation detail but a fundamental
problem of `datetime`, especially in the corner case of
NarekDW commented on PR #40422:
URL: https://github.com/apache/spark/pull/40422#issuecomment-1470557942
> @NarekDW Are there any more similar cases?
>
> cc @srowen FYI
@LuciferYang no, these are all
MaxGekk commented on code in PR #39239:
URL: https://github.com/apache/spark/pull/39239#discussion_r1137571293
##
python/pyspark/sql/types.py:
##
@@ -276,7 +276,18 @@ def toInternal(self, dt: datetime.datetime) -> int:
def fromInternal(self, ts: int) -> datetime.datetime:
dtenedor opened a new pull request, #40449:
URL: https://github.com/apache/spark/pull/40449
### What changes were proposed in this pull request?
This PR creates a new `SQLAnalyzerTestSuite` that consumes input SQL queries
from files and then performs analysis and generates the string
otterc commented on PR #40393:
URL: https://github.com/apache/spark/pull/40393#issuecomment-1470501033
@akpatnam25 @shuwang21
otterc commented on PR #40448:
URL: https://github.com/apache/spark/pull/40448#issuecomment-1470498081
@mridulm @xkrogen @akpatnam25 @shuwang21 Please help review.
otterc opened a new pull request, #40448:
URL: https://github.com/apache/spark/pull/40448
### What changes were proposed in this pull request?
Removed the logging of the shuffle service name multiple times in the driver
log. It gets logged every time a new executor is allocated.
###
LuciferYang commented on PR #40395:
URL: https://github.com/apache/spark/pull/40395#issuecomment-1470433831
https://github.com/apache/spark/actions/runs/4420600519
https://user-images.githubusercontent.com/1475305/225388240-9f85593f-f6d6-47dd-be07-9ab906bf53a8.png
The latest
pan3793 commented on code in PR #40447:
URL: https://github.com/apache/spark/pull/40447#discussion_r1137446054
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##
@@ -47,6 +47,13 @@ object Connect {
.bytesConf(ByteUnit.MiB)
jdferreira commented on PR #40398:
URL: https://github.com/apache/spark/pull/40398#issuecomment-1470420003
@srowen I have enabled it, but now I don't know how to progress. Is there a
"re-run" button to re-trigger the build? Or do I push an empty commit into this
branch?
pan3793 commented on code in PR #40444:
URL: https://github.com/apache/spark/pull/40444#discussion_r1137429300
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala:
##
@@ -95,8 +95,8 @@ private[k8s] class
grundprinzip opened a new pull request, #40447:
URL: https://github.com/apache/spark/pull/40447
### What changes were proposed in this pull request?
This change lifts the default message size of 4MB to 128MB and makes it
configurable. While 128MB is a "random number" it supports
ritikam2 commented on PR #40116:
URL: https://github.com/apache/spark/pull/40116#issuecomment-1470374550
Can anyone tell me how I am getting this single quote in count expression.
Attaching the picture. This can potentially cause problems down the line where
tree nodes are compared in the
dongjoon-hyun commented on code in PR #40396:
URL: https://github.com/apache/spark/pull/40396#discussion_r1137367307
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -583,12 +583,16 @@ abstract class JdbcDialect extends Serializable with
Logging {
dongjoon-hyun commented on PR #40410:
URL: https://github.com/apache/spark/pull/40410#issuecomment-1470311818
Thank you, @beliefer and @cloud-fan .
dongjoon-hyun commented on code in PR #40358:
URL: https://github.com/apache/spark/pull/40358#discussion_r1137339742
##
connector/connect/client/jvm/src/test/scala/org/apache/spark/sql/ClientE2ETestSuite.scala:
##
@@ -175,6 +176,26 @@ class ClientE2ETestSuite extends
dongjoon-hyun commented on code in PR #40444:
URL: https://github.com/apache/spark/pull/40444#discussion_r1137331510
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala:
##
@@ -95,8 +95,8 @@ private[k8s] class
ulysses-you opened a new pull request, #40446:
URL: https://github.com/apache/spark/pull/40446
### What changes were proposed in this pull request?
Add a new config to shortcut subexpression elimination for conditional
expression.
The subexpression in conditional
LuciferYang opened a new pull request, #40445:
URL: https://github.com/apache/spark/pull/40445
### What changes were proposed in this pull request?
This PR aims to upgrade the following Maven plugins:
- maven-enforcer-plugin 3.0.0-M2 -> 3.2.1
- build-helper-maven-plugin 3.2.0 ->
pan3793 commented on PR #40444:
URL: https://github.com/apache/spark/pull/40444#issuecomment-1470098995
cc @slothspot @dongjoon-hyun @yaooqinn, please take a look when you get
time, thanks.