yabola commented on PR #38882:
URL: https://github.com/apache/spark/pull/38882#issuecomment-1340316700
@gengliangwang Yes, but the URI will be processed by the YARN proxy (encoded
twice). I collect the URIInfo in the interface.
`uriInfo.getRequestUri` :
LuciferYang commented on code in PR #38954:
URL: https://github.com/apache/spark/pull/38954#discussion_r1041710409
##
sql/core/src/test/resources/sql-tests/results/literals.sql.out:
##
@@ -353,10 +353,12 @@ pattern% no-pattern\%pattern\% pattern\\%
select '\'', '"',
amaliujia commented on PR #38938:
URL: https://github.com/apache/spark/pull/38938#issuecomment-1340315896
Not sure why the newlines were dropped in the suggestion, but we need them;
otherwise this PR won't pass the lint check...
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340314428
@dongjoon-hyun @viirya
I have modified the subject and description of this PR.
Please take another look.
LuciferYang commented on code in PR #38954:
URL: https://github.com/apache/spark/pull/38954#discussion_r1041709594
##
sql/core/src/test/resources/sql-tests/results/literals.sql.out:
##
@@ -353,10 +353,12 @@ pattern% no-pattern\%pattern\% pattern\\%
select '\'', '"',
dengziming commented on code in PR #38899:
URL: https://github.com/apache/spark/pull/38899#discussion_r1041709267
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/LiteralValueProtoConverter.scala:
##
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache
amaliujia commented on code in PR #38938:
URL: https://github.com/apache/spark/pull/38938#discussion_r1041708865
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -1239,6 +1239,16 @@ def summary(self, *statistics: str) -> "DataFrame":
session=self._session,
amaliujia commented on PR #38953:
URL: https://github.com/apache/spark/pull/38953#issuecomment-1340312415
LGTM thanks!
viirya commented on code in PR #38949:
URL: https://github.com/apache/spark/pull/38949#discussion_r1041706563
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##
@@ -455,7 +455,6 @@ class
panbingkun opened a new pull request, #38955:
URL: https://github.com/apache/spark/pull/38955
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch
tedyu commented on code in PR #38948:
URL: https://github.com/apache/spark/pull/38948#discussion_r1041703880
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##
@@ -455,8 +457,13 @@ class
pan3793 commented on code in PR #38901:
URL: https://github.com/apache/spark/pull/38901#discussion_r1041703045
##
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala:
##
@@ -85,7 +85,19 @@ private[spark] class CoarseGrainedExecutorBackend(
LuciferYang opened a new pull request, #38954:
URL: https://github.com/apache/spark/pull/38954
### What changes were proposed in this pull request?
This PR aims to rename `_LEGACY_ERROR_TEMP_0019` to
`CANNOT_PARSE_VALUE_TO_DATATYPE`
### Why are the changes needed?
Use a proper error class name.
zhengruifeng commented on PR #38953:
URL: https://github.com/apache/spark/pull/38953#issuecomment-1340303448
LGTM + 1
dongjoon-hyun commented on code in PR #38948:
URL: https://github.com/apache/spark/pull/38948#discussion_r1041700263
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##
@@ -455,8 +457,13 @@ class
HyukjinKwon closed pull request #38953: [SPARK-41369][CONNECT] Add connect
common to servers' shaded jar
URL: https://github.com/apache/spark/pull/38953
HyukjinKwon commented on PR #38953:
URL: https://github.com/apache/spark/pull/38953#issuecomment-1340302157
Merged to master.
tedyu commented on code in PR #38948:
URL: https://github.com/apache/spark/pull/38948#discussion_r1041696221
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##
@@ -455,8 +457,12 @@ class
hvanhovell opened a new pull request, #38953:
URL: https://github.com/apache/spark/pull/38953
### What changes were proposed in this pull request?
This adds the connect common jar to the servers' shaded assembly jar. This
was missed in the previous PR.
### Why are the changes
gengliangwang opened a new pull request, #38952:
URL: https://github.com/apache/spark/pull/38952
### What changes were proposed in this pull request?
This is a minor follow-up of https://github.com/apache/spark/pull/37283. It
moves the related methods for checking table
amaliujia commented on code in PR #38938:
URL: https://github.com/apache/spark/pull/38938#discussion_r1041691534
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -1239,6 +1239,16 @@ def summary(self, *statistics: str) -> "DataFrame":
session=self._session,
dongjoon-hyun commented on code in PR #38948:
URL: https://github.com/apache/spark/pull/38948#discussion_r1041691297
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##
@@ -455,8 +457,12 @@ class
HyukjinKwon commented on code in PR #38947:
URL: https://github.com/apache/spark/pull/38947#discussion_r1041690900
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -119,21 +117,24 @@ case class Size(child: Expression,
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340291989
@tedyu . It seems that you forgot
`spark.kubernetes.driver.ownPersistentVolumeClaim=true`. Pod deletion doesn't
clean up PVCs; they are owned by the driver pod. This is not your bug. That
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340290773
If possible, can you elaborate a bit?
If an exception happens at `newlyCreatedExecutors(newExecutorId) =` (or later
in the try block), the pod would be deleted.
Why shouldn't
wankunde opened a new pull request, #38951:
URL: https://github.com/apache/spark/pull/38951
### What changes were proposed in this pull request?
Transforms a self-join that produces duplicate rows used for an IN predicate
into an aggregation.
For an IN predicate, duplicate rows do
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340288819
It's totally fine because
`spark.kubernetes.driver.reusePersistentVolumeClaim=true`. We can reuse that
PVC later, @tedyu .
> e.g. the test can produce exception when the
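For readers following the thread: the two driver-side PVC settings mentioned
here are usually set together. A minimal `spark-defaults.conf` sketch (values
illustrative; defaults may differ by Spark version):

```
# Driver owns the PVCs it creates, so deleting the driver cleans them up
spark.kubernetes.driver.ownPersistentVolumeClaim    true
# Allow the driver to reuse an existing PVC when re-creating an executor
spark.kubernetes.driver.reusePersistentVolumeClaim  true
```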
HeartSaVioR commented on code in PR #38880:
URL: https://github.com/apache/spark/pull/38880#discussion_r1041686643
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala:
##
@@ -116,7 +116,9 @@ class RocksDBSuite extends SparkFunSuite {
zhengruifeng commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041682838
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed
dongjoon-hyun commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340280345
Here is a PR including test case to address the comment.
- https://github.com/apache/spark/pull/38949
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340279777
e.g.
the test can produce exception when the following is called:
```
newlyCreatedExecutors(newExecutorId) = (resourceProfileId,
clock.getTimeMillis())
```
beliefer commented on PR #38799:
URL: https://github.com/apache/spark/pull/38799#issuecomment-1340278339
@zhengruifeng @cloud-fan Do you have any other suggestions?
dongjoon-hyun commented on PR #38949:
URL: https://github.com/apache/spark/pull/38949#issuecomment-1340278166
I already suggested using my test code to verify your PR, @tedyu.
- https://github.com/apache/spark/pull/38948#issuecomment-1340234190
hvanhovell closed pull request #38883: [SPARK-41366][CONNECT] DF.groupby.agg()
should be compatible
URL: https://github.com/apache/spark/pull/38883
hvanhovell commented on PR #38883:
URL: https://github.com/apache/spark/pull/38883#issuecomment-1340276233
merging
beliefer commented on code in PR #38938:
URL: https://github.com/apache/spark/pull/38938#discussion_r1041676903
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -404,6 +405,18 @@ message StatSummary {
repeated string statistics = 2;
}
+//
hvanhovell commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041675179
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340272860
@dongjoon-hyun
May I borrow your new test case to show that my PR covers that failure
scenario?
beliefer commented on code in PR #38899:
URL: https://github.com/apache/spark/pull/38899#discussion_r1041674643
##
connector/connect/src/main/scala/org/apache/spark/sql/connect/planner/LiteralValueProtoConverter.scala:
##
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache
zhengruifeng commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041674284
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed
dongjoon-hyun commented on PR #38949:
URL: https://github.com/apache/spark/pull/38949#issuecomment-1340271046
I'll test this PR more in the cluster.
cloud-fan commented on code in PR #38924:
URL: https://github.com/apache/spark/pull/38924#discussion_r1041671515
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/BatchScanExec.scala:
##
@@ -81,18 +81,21 @@ case class BatchScanExec(
val
wineternity commented on code in PR #38702:
URL: https://github.com/apache/spark/pull/38702#discussion_r1041671057
##
core/src/main/scala/org/apache/spark/status/AppStatusListener.scala:
##
@@ -645,8 +645,11 @@ private[spark] class AppStatusListener(
}
override def
dongjoon-hyun commented on PR #38949:
URL: https://github.com/apache/spark/pull/38949#issuecomment-1340265839
For reviewers, the following test case is added.
```
test("SPARK-41410: An exception during PVC creation should not increase PVC
counter")
```
infoankitp commented on code in PR #38865:
URL: https://github.com/apache/spark/pull/38865#discussion_r1041666229
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,133 @@ case class ArrayExcept(left:
LuciferYang commented on PR #38918:
URL: https://github.com/apache/spark/pull/38918#issuecomment-1340262581
Thanks @dongjoon-hyun
infoankitp commented on code in PR #38865:
URL: https://github.com/apache/spark/pull/38865#discussion_r1041665895
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,133 @@ case class ArrayExcept(left:
beliefer commented on code in PR #38938:
URL: https://github.com/apache/spark/pull/38938#discussion_r1041665659
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -404,6 +405,18 @@ message StatSummary {
repeated string statistics = 2;
}
+//
infoankitp commented on code in PR #38874:
URL: https://github.com/apache/spark/pull/38874#discussion_r1037911293
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,51 @@ case class ArrayExcept(left:
infoankitp commented on code in PR #38865:
URL: https://github.com/apache/spark/pull/38865#discussion_r1041664885
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala:
##
@@ -4600,3 +4600,133 @@ case class ArrayExcept(left:
zhengruifeng commented on PR #38944:
URL: https://github.com/apache/spark/pull/38944#issuecomment-1340259901
@vicennial @hvanhovell do we need to update the commands in
https://github.com/apache/spark/blob/master/connector/connect/README.md ?
sunchao opened a new pull request, #38950:
URL: https://github.com/apache/spark/pull/38950
### What changes were proposed in this pull request?
This enhances Storage Partitioned Join by handling mismatched partition keys
from both sides of the join and skipping the shuffle in
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340255065
e.g.
the test can produce exception when the following is called in
`addOwnerReference`
```
originalMetadata.setOwnerReferences(Collections.singletonList(reference))
HeartSaVioR commented on code in PR #38880:
URL: https://github.com/apache/spark/pull/38880#discussion_r1041654592
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBSuite.scala:
##
@@ -116,7 +116,9 @@ class RocksDBSuite extends SparkFunSuite {
MrDLontheway commented on PR #38893:
URL: https://github.com/apache/spark/pull/38893#issuecomment-1340248439
@wangyum
pls help review.
HeartSaVioR commented on code in PR #38911:
URL: https://github.com/apache/spark/pull/38911#discussion_r1041651140
##
connector/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaMicroBatchStream.scala:
##
@@ -316,6 +320,54 @@ private[kafka010] class
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340242732
My point is: when an exception happens, it may not come from this
call:
```
dongjoon-hyun commented on PR #38949:
URL: https://github.com/apache/spark/pull/38949#issuecomment-1340239932
cc @tedyu
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340238163
This is handled properly by removing `decrement` line.
> the counter shouldn't be decremented.
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340236767
Please make a valid test case for your claim.
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340235685
If an exception happens before we reach the following line:
```
kubernetesClient.persistentVolumeClaims().inNamespace(namespace).resource(pvc).create()
```
the counter
HyukjinKwon commented on PR #38915:
URL: https://github.com/apache/spark/pull/38915#issuecomment-1340235289
@zhengruifeng mind fixing the conflicts? Otherwise should be good to go.
SandishKumarHN commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340234549
> > file that corresponds to the source dataframe.
>
>
>
> They might have used from_protobuf() to get that schema, which supports
recursive fields. They should be
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340234190
BTW, we need to add a test case to validate the ideas. I'll try to add one to
my PR. You may reuse it.
dongjoon-hyun opened a new pull request, #38949:
URL: https://github.com/apache/spark/pull/38949
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
rangadi commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340232193
> file that corresponds to the source dataframe.
They might have used from_protobuf() to get that schema, which supports
recursive fields. They should be able to do to_protobuf()
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340232151
Okay. Since we don't agree, I will make my PR too. We can compare
side-by-side, @tedyu . :)
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340231423
That's not the right way :-)
See https://github.com/apache/spark/pull/38943#issuecomment-1340229735
dongjoon-hyun commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-1340231350
In other words, please revert all changes and remove one line,
`PVC_COUNTER.decrementAndGet()`.
SandishKumarHN commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340230717
> > The source dataframe struct field should match the protobuf recursion
message for "to protobuf." It will convert until the recursion level is
matched. like struct within a
dongjoon-hyun commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340230713
I commented on your PR.
xinrong-meng commented on code in PR #38921:
URL: https://github.com/apache/spark/pull/38921#discussion_r1041642310
##
python/pyspark/sql/tests/connect/test_connect_function.py:
##
@@ -410,6 +410,67 @@ def test_aggregation_functions(self):
tedyu commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340229735
The catch block handles errors beyond PVC creation failure.
```
case NonFatal(e) =>
```
Execution may not reach the `resource(pvc).create()` call.
So we would know the
rangadi commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340225428
> The source dataframe struct field should match the protobuf recursion
message for "to protobuf." It will convert until the recursion level is
matched. like struct within a struct to
dongjoon-hyun commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340224815
In case of creation failure, `PVC_COUNTER.incrementAndGet()` is not invoked.
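The counting discipline being argued for here can be sketched outside Spark.
The following Python sketch (class and function names are hypothetical
stand-ins, not Spark's actual `ExecutorPodsAllocator` API) increments the
outstanding-PVC counter only after a successful create, so the error path
needs no compensating decrement:

```python
class PvcCounter:
    """Hypothetical stand-in for Spark's PVC_COUNTER (an AtomicInteger)."""

    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> None:
        self.value += 1


class FakeK8sClient:
    """Hypothetical client whose create_pvc either succeeds or raises."""

    def __init__(self, fail: bool = False) -> None:
        self.fail = fail
        self.created: list[str] = []

    def create_pvc(self, name: str) -> None:
        if self.fail:
            raise RuntimeError("PVC creation failed")
        self.created.append(name)


def request_pvc(client: FakeK8sClient, counter: PvcCounter) -> bool:
    try:
        client.create_pvc("spark-exec-pvc-1")
        counter.increment()  # count only PVCs that actually exist
        return True
    except RuntimeError:
        # No decrement needed: the counter was never incremented
        # for a PVC whose creation failed.
        return False
```

With this ordering, an exception raised anywhere before the increment,
including inside the create call itself, leaves the counter accurate, which
is the effect of dropping the decrement in the error path as discussed above.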
SandishKumarHN commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340222834
> > Added selectable recursion depth option to from_protobuf.
>
> Do we need to this for 'to_protobuf()' too? What would happen in that case?
@rangadi
The source
tedyu commented on PR #38948:
URL: https://github.com/apache/spark/pull/38948#issuecomment-134002
@dongjoon-hyun
Please take a look.
I am trying to figure out how to add a test.
tedyu commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340221769
Yeah, the `delete` in the catch block may fail.
There could be other errors, say prior to the creation of the PVC.
tedyu opened a new pull request, #38948:
URL: https://github.com/apache/spark/pull/38948
### What changes were proposed in this pull request?
This is follow-up to commit cc55de33420335bd715720e1d9190bd5e8e2e9fc where
`PVC_COUNTER` was introduced to track outstanding number of PVCs.
dongjoon-hyun commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340220604
Do you mean that `.delete()` can fail, @tedyu ?
cloud-fan commented on code in PR #38776:
URL: https://github.com/apache/spark/pull/38776#discussion_r1041629600
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAlias.scala:
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software
dongjoon-hyun commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340215003
Thank you for the review, @tedyu.
Could you make a PR with a valid test case for your claim?
BTW, technically, a single pod can have multiple PVCs, so `success == 2` is
incorrect
rangadi commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340212474
> Added selectable recursion depth option to from_protobuf.
Do we need to do this for `to_protobuf()` too? What would happen in that case?
tedyu commented on PR #38943:
URL: https://github.com/apache/spark/pull/38943#issuecomment-1340205543
I think the `PVC_COUNTER` should only be decremented when pod deletion
happens (in response to an error).
@dongjoon-hyun
What do you think of the following change?
```
diff
hvanhovell commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041616815
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as
amaliujia commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041613292
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as a
github-actions[bot] closed pull request #37670: [SPARK-40227][SQL] Data Source
V2: Support creating table with the duplicate transform with different arguments
URL: https://github.com/apache/spark/pull/37670
github-actions[bot] closed pull request #37613: [SPARK-37944][SQL] Use error
classes in the execution errors of casting
URL: https://github.com/apache/spark/pull/37613
AmplabJenkins commented on PR #38941:
URL: https://github.com/apache/spark/pull/38941#issuecomment-1340191577
Can one of the admins verify this patch?
hvanhovell commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041609765
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as
SandishKumarHN commented on PR #38922:
URL: https://github.com/apache/spark/pull/38922#issuecomment-1340188322
https://github.com/apache/spark/pull/38922#discussion_r1041470191
@rangadi made the below changes.
- Added selectable recursion depth option to from_protobuf.
- Added
amaliujia commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041608123
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as a
hvanhovell commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041599792
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as
gengliangwang commented on code in PR #38776:
URL: https://github.com/apache/spark/pull/38776#discussion_r1041599676
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveLateralColumnAlias.scala:
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache
hvanhovell commented on code in PR #38935:
URL: https://github.com/apache/spark/pull/38935#discussion_r1041599340
##
connector/connect/src/main/protobuf/spark/connect/expressions.proto:
##
@@ -173,4 +174,18 @@ message Expression {
// (Optional) Alias metadata expressed as