mridulm commented on code in PR #38333:
URL: https://github.com/apache/spark/pull/38333#discussion_r1024857218
##
core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala:
##
@@ -794,7 +794,18 @@ final class ShuffleBlockFetcherIterator(
//
mridulm commented on PR #38333:
URL: https://github.com/apache/spark/pull/38333#issuecomment-1318228713
The test failure looks unrelated; can you retrigger the tests, @gaoyajun02 ...
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
mridulm commented on PR #38674:
URL: https://github.com/apache/spark/pull/38674#issuecomment-1318223807
Would be better for @tgravescs to take a look.
mridulm commented on PR #38668:
URL: https://github.com/apache/spark/pull/38668#issuecomment-1318223332
+CC @dongjoon-hyun, @holdenk
mridulm commented on PR #38441:
URL: https://github.com/apache/spark/pull/38441#issuecomment-1318221059
Merged to master.
Thanks for working on this @warrenzhu25 !
Thanks for the reviews @dongjoon-hyun, @Ngone51 :-)
asfgit closed pull request #38441: [SPARK-40979][CORE] Keep removed executor
info due to decommission
URL: https://github.com/apache/spark/pull/38441
MaxGekk opened a new pull request, #38685:
URL: https://github.com/apache/spark/pull/38685
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
cloud-fan commented on PR #38684:
URL: https://github.com/apache/spark/pull/38684#issuecomment-1318184753
cc @viirya @wangyum @gengliangwang
cloud-fan opened a new pull request, #38684:
URL: https://github.com/apache/spark/pull/38684
### What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/38511 to fix a
mistake: we should respect the original `Filter`
MaxGekk commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1024817626
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala:
##
@@ -1059,8 +1059,8 @@ trait CheckAnalysis extends PredicateHelper with
MaxGekk commented on code in PR #38664:
URL: https://github.com/apache/spark/pull/38664#discussion_r1024815444
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala:
##
@@ -146,7 +146,7 @@ object FunctionRegistryBase {
MaxGekk commented on code in PR #38650:
URL: https://github.com/apache/spark/pull/38650#discussion_r1024814862
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Strategy.scala:
##
@@ -369,7 +369,7 @@ class DataSourceV2Strategy(session:
wangyum commented on PR #38511:
URL: https://github.com/apache/spark/pull/38511#issuecomment-1318162616
Merged to master.
wangyum closed pull request #38511: [SPARK-41017][SQL] Support column pruning
with multiple nondeterministic Filters
URL: https://github.com/apache/spark/pull/38511
wankunde commented on PR #38682:
URL: https://github.com/apache/spark/pull/38682#issuecomment-1318136683
> Seems like it became slower after your PR (?)
Sorry for the mistake, I have updated the benchmark result; after this PR,
`Query with LikeAny simplification` should be the same
Yaohua628 closed pull request #38663: [SPARK-41143][SQL] Add named argument
syntax support for table-valued function
URL: https://github.com/apache/spark/pull/38663
Yaohua628 commented on PR #38683:
URL: https://github.com/apache/spark/pull/38683#issuecomment-1318133748
cc: @HeartSaVioR
HyukjinKwon commented on code in PR #38681:
URL: https://github.com/apache/spark/pull/38681#discussion_r1024787984
##
connector/connect/src/test/scala/org/apache/spark/sql/connect/planner/SparkConnectServiceSuite.scala:
##
@@ -55,4 +65,38 @@ class SparkConnectServiceSuite
Yaohua628 opened a new pull request, #38683:
URL: https://github.com/apache/spark/pull/38683
### What changes were proposed in this pull request?
In FileSourceStrategy, we add an Alias node to wrap the file metadata fields
(e.g. file_name, file_size) in a NamedStruct
HyukjinKwon commented on PR #38682:
URL: https://github.com/apache/spark/pull/38682#issuecomment-1318132557
Seems like it became slower after your PR (?)
HyukjinKwon closed pull request #38673: [SPARK-41149][PYTHON] Fix
`SparkSession.builder.config` to support bool
URL: https://github.com/apache/spark/pull/38673
HyukjinKwon commented on PR #38673:
URL: https://github.com/apache/spark/pull/38673#issuecomment-1318128005
Merged to master.
pan3793 commented on code in PR #38651:
URL: https://github.com/apache/spark/pull/38651#discussion_r1024775551
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshotsStoreImpl.scala:
##
@@ -57,6 +60,7 @@ import
amaliujia commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1024759018
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -213,7 +213,7 @@ message Deduplicate {
message LocalRelation {
repeated
zhengruifeng commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1024753970
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -213,7 +213,7 @@ message Deduplicate {
message LocalRelation {
repeated
amaliujia commented on PR #38659:
URL: https://github.com/apache/spark/pull/38659#issuecomment-1318074158
You can also run the scala lint locally `./dev/lint-scala`
amaliujia commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1024749559
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -213,7 +213,7 @@ message Deduplicate {
message LocalRelation {
repeated
amaliujia commented on PR #38609:
URL: https://github.com/apache/spark/pull/38609#issuecomment-1318054981
Checking with @grundprinzip to see if there are more comments?
wankunde opened a new pull request, #38682:
URL: https://github.com/apache/spark/pull/38682
### What changes were proposed in this pull request?
We can improve multi-LIKE evaluation by reordering the match expressions.
### Why are the changes needed?
Local benchmark
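The reordering idea can be sketched in plain Python (a hedged illustration of the concept only; the function names and the length-based cost heuristic are assumptions, not Spark's actual simplification code):

```python
import re

def like_to_regex(pattern):
    # Translate SQL LIKE wildcards (% = any run, _ = any single char)
    # into an anchored regular expression; escape everything else.
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts) + r"\Z")

def like_any(value, patterns):
    # LIKE ANY is true if at least one pattern matches; checking the
    # cheaper (here: shorter) patterns first lets `any` short-circuit
    # sooner on rows that do match.
    return any(like_to_regex(p).match(value) for p in sorted(patterns, key=len))
```

For example, `like_any("hello", ["he%", "x_"])` tries the shorter pattern first and still returns True via `"he%"`.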
zhengruifeng commented on PR #38681:
URL: https://github.com/apache/spark/pull/38681#issuecomment-1318050418
also cc @cloud-fan @amaliujia
hvanhovell opened a new pull request, #38681:
URL: https://github.com/apache/spark/pull/38681
### What changes were proposed in this pull request?
Two changes:
1. Make sure connect's arrow result path properly deals with errors, and
avoids hangs.
2. Fix a common source of
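The hang-avoidance part of the first change can be illustrated with a small producer/consumer sketch (plain Python with assumed names, not connect's actual arrow code): if the producing side fails silently, a consumer blocked on the queue waits forever, so failures are pushed through the same queue.

```python
import queue
import threading

SENTINEL = object()  # marks normal end of the stream

def produce(q, items, fail_at=None):
    try:
        for i, item in enumerate(items):
            if fail_at is not None and i == fail_at:
                raise RuntimeError("producer failed")
            q.put(item)
        q.put(SENTINEL)
    except Exception as e:
        # Propagate the error through the queue instead of leaving the
        # consumer blocked on a get() that would never return.
        q.put(e)

def consume(q):
    results = []
    while True:
        item = q.get()
        if item is SENTINEL:
            return results
        if isinstance(item, Exception):
            raise item
        results.append(item)
```

The design point is that every exit path of the producer enqueues *something*, so the consumer always makes progress.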
zhengruifeng commented on PR #38659:
URL: https://github.com/apache/spark/pull/38659#issuecomment-1318017962
you may reformat the scala code by
`./build/mvn -Pscala-2.12 scalafmt:format -Dscalafmt.skip=false
-Dscalafmt.validateOnly=false -Dscalafmt.changedOnly=false -pl
LuciferYang commented on code in PR #38567:
URL: https://github.com/apache/spark/pull/38567#discussion_r1024721724
##
core/src/main/scala/org/apache/spark/status/KVUtils.scala:
##
@@ -80,6 +89,44 @@ private[spark] object KVUtils extends Logging {
db
}
+ def
LuciferYang commented on code in PR #38567:
URL: https://github.com/apache/spark/pull/38567#discussion_r1024721571
##
core/src/main/scala/org/apache/spark/status/KVUtils.scala:
##
@@ -80,6 +89,44 @@ private[spark] object KVUtils extends Logging {
db
}
+ def
zhengruifeng commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1024720949
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -213,7 +213,7 @@ message Deduplicate {
message LocalRelation {
repeated
cloud-fan closed pull request #38558: [SPARK-41048][SQL] Improve output
partitioning and ordering with AQE cache
URL: https://github.com/apache/spark/pull/38558
cloud-fan commented on PR #38558:
URL: https://github.com/apache/spark/pull/38558#issuecomment-1318015274
thanks, merging to master!
zhengruifeng commented on code in PR #38659:
URL: https://github.com/apache/spark/pull/38659#discussion_r1024719459
##
connector/connect/src/main/protobuf/spark/connect/relations.proto:
##
@@ -213,7 +213,7 @@ message Deduplicate {
message LocalRelation {
repeated
aokolnychyi commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1318009704
@cloud-fan, sounds good. Will do by the end of this week.
itholic commented on code in PR #38576:
URL: https://github.com/apache/spark/pull/38576#discussion_r1024711225
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala:
##
@@ -1059,8 +1059,8 @@ trait CheckAnalysis extends PredicateHelper with
rangadi commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1024692244
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
huskysun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317967794
Ah I see. Thanks for the clarification!
rangadi commented on PR #38680:
URL: https://github.com/apache/spark/pull/38680#issuecomment-1317967322
@HeartSaVioR PTAL.
rangadi opened a new pull request, #38680:
URL: https://github.com/apache/spark/pull/38680
### What changes were proposed in this pull request?
This is a follow-up to address a couple of comments in #38384.
Fixes a comment and adds explanation about why we don't use
LuciferYang commented on PR #38609:
URL: https://github.com/apache/spark/pull/38609#issuecomment-1317964560
Any other changes? @HyukjinKwon @grundprinzip @amaliujia Thanks ~
LuciferYang commented on PR #38610:
URL: https://github.com/apache/spark/pull/38610#issuecomment-1317963555
GA passed
LuciferYang commented on PR #38671:
URL: https://github.com/apache/spark/pull/38671#issuecomment-1317963020
Thanks @MaxGekk
dongjoon-hyun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317950780
During the Holiday Season (Thanksgiving + Christmas), we cannot make a new
release. I guess the ongoing 3.2.3 RC0 vote will be the last release of this
year (if there is no urgent
dongjoon-hyun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317949502
Oh, what I meant was that the commit addressing my comment was a minor
documentation fix. Your PR is not trivial at all. :)
huskysun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317947106
Oh thanks @dongjoon-hyun. It added a new Spark config as a code change, but
yeah, that's rather trivial. Thanks for the quick merge! So this change will be
in `3.4.0`, right? Do you know
HyukjinKwon commented on code in PR #38673:
URL: https://github.com/apache/spark/pull/38673#discussion_r1024678857
##
python/pyspark/sql/session.py:
##
@@ -256,8 +256,12 @@ def config(
self._options[k] = v
elif map is not None:
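The bool issue in the `session.py` snippet above comes down to string conversion: Python's `str(True)` is `"True"`, while Spark conventionally expects the lowercase `"true"`/`"false"` form. A minimal stand-in for the fix (hedged illustration with an assumed helper name, not the actual change):

```python
def to_conf_value(value):
    # Lower booleans explicitly; a plain str() would produce "True",
    # not the conventional lowercase Spark conf value "true".
    if isinstance(value, bool):
        return "true" if value else "false"
    return str(value)
```

Note the `bool` check must come before any generic `str()` fallback, since `True` would otherwise round-trip as `"True"`.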
dongjoon-hyun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317944729
Since it's a minor documentation change, I updated your PR and merged to
master.
Thank you, @huskysun .
dongjoon-hyun closed pull request #38679: [SPARK-40671][K8S] Support driver
service labels
URL: https://github.com/apache/spark/pull/38679
dongjoon-hyun commented on code in PR #38679:
URL: https://github.com/apache/spark/pull/38679#discussion_r1024675718
##
docs/running-on-kubernetes.md:
##
@@ -856,6 +856,17 @@ See the [configuration page](configuration.html) for
information on Spark config
2.3.0
+
+
dongjoon-hyun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317939397
Hi, @huskysun . Your PR already passed the test here.
- https://github.com/huskysun/spark/runs/9537511421
HyukjinKwon closed pull request #38677: [SPARK-41150][PYTHON][DOCS] Document
debugging with PySpark memory profiler
URL: https://github.com/apache/spark/pull/38677
HyukjinKwon commented on PR #38677:
URL: https://github.com/apache/spark/pull/38677#issuecomment-1317938038
Merged to master.
huskysun commented on PR #38679:
URL: https://github.com/apache/spark/pull/38679#issuecomment-1317937460
@dongjoon-hyun Hi Dongjoon, could you please take a look at this and give it
an "ok to test" (I'm following the steps
cloud-fan closed pull request #38595: [SPARK-41090][SQL] Throw Exception for
`db_name.view_name` when creating temp view by Dataset API
URL: https://github.com/apache/spark/pull/38595
cloud-fan commented on PR #38595:
URL: https://github.com/apache/spark/pull/38595#issuecomment-1317932109
thanks, merging to master!
cloud-fan commented on PR #38005:
URL: https://github.com/apache/spark/pull/38005#issuecomment-1317931495
We should probably enrich the PR description to talk about the general
approach, e.g., we add a virtual column to indicate the operation (delete,
update, insert).
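The virtual-column idea mentioned above can be sketched with a plain-Python stand-in (illustrative only; names are assumptions and this is not Spark's row-level operation framework): each changed row carries an operation tag that drives how it is applied to the target table.

```python
def apply_changes(table, changes):
    # table: mapping key -> row; changes: list of (op, key, row) tuples,
    # where op plays the role of the virtual operation column.
    for op, key, row in changes:
        if op == "delete":
            table.pop(key, None)
        elif op in ("insert", "update"):
            table[key] = row
    return table
```

A single pass over tagged rows then expresses delete, update, and insert uniformly.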
HyukjinKwon commented on PR #38635:
URL: https://github.com/apache/spark/pull/38635#issuecomment-1317897684
@bersprockets it has a conflict with branch-3.3. Feel free to create a
backport PR if you feel this is needed.
HyukjinKwon commented on PR #38635:
URL: https://github.com/apache/spark/pull/38635#issuecomment-1317897259
Merged to master.
HyukjinKwon closed pull request #38635: [SPARK-41118][SQL]
`to_number`/`try_to_number` should return `null` when format is `null`
URL: https://github.com/apache/spark/pull/38635
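The fixed semantics of SPARK-41118 above can be mimicked with a tiny stand-in (hedged illustration only; the real `to_number` format parsing is far richer than plain digits): a null format argument yields a null result instead of an error.

```python
def try_to_number(value, fmt):
    # Null in either argument propagates to a null result; only when both
    # are present do we attempt parsing (simplified to plain digits here).
    if value is None or fmt is None:
        return None
    return int(value) if value.isdigit() else None
```

This matches SQL's usual null-propagation convention for scalar functions.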
amaliujia commented on PR #38666:
URL: https://github.com/apache/spark/pull/38666#issuecomment-1317894265
@zhengruifeng I confirmed that for a small doc change there is no need for a
JIRA (that is why I didn't create one).
zhengruifeng commented on PR #38666:
URL: https://github.com/apache/spark/pull/38666#issuecomment-1317892984
oh, I forgot to mention that we should have a SPARK-X title
zhengruifeng commented on PR #38666:
URL: https://github.com/apache/spark/pull/38666#issuecomment-1317892003
merged into master
zhengruifeng closed pull request #38666: [CONENCT][PYTHON][DOC] Document how to
run the module of tests for Spark Connect Python tests
URL: https://github.com/apache/spark/pull/38666
github-actions[bot] closed pull request #34367: [SPARK-37099][SQL] Introduce a
rank-based filter to optimize top-k computation
URL: https://github.com/apache/spark/pull/34367
HyukjinKwon closed pull request #38630: [SPARK-41115][CONNECT] Add ClientType
to proto to indicate which client sends a request
URL: https://github.com/apache/spark/pull/38630
HyukjinKwon commented on PR #38630:
URL: https://github.com/apache/spark/pull/38630#issuecomment-1317843773
Merged to master.
rangadi commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1024614312
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
HyukjinKwon commented on code in PR #38618:
URL: https://github.com/apache/spark/pull/38618#discussion_r1024604820
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -71,158 +71,146 @@ private[sql] class ArrowBatchStreamWriter(
}
HyukjinKwon commented on code in PR #38618:
URL: https://github.com/apache/spark/pull/38618#discussion_r1024602372
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -71,158 +71,146 @@ private[sql] class ArrowBatchStreamWriter(
}
huskysun opened a new pull request, #38679:
URL: https://github.com/apache/spark/pull/38679
### What changes were proposed in this pull request?
This PR adds configurability to customize driver service object labels
when running Spark on k8s. The new config is
dongjoon-hyun commented on code in PR #38441:
URL: https://github.com/apache/spark/pull/38441#discussion_r1024561615
##
core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala:
##
@@ -2017,6 +2019,38 @@ class TaskSchedulerImplSuite extends SparkFunSuite with
dongjoon-hyun commented on code in PR #38441:
URL: https://github.com/apache/spark/pull/38441#discussion_r1024560957
##
core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala:
##
@@ -2017,6 +2019,38 @@ class TaskSchedulerImplSuite extends SparkFunSuite with
amaliujia commented on code in PR #38638:
URL: https://github.com/apache/spark/pull/38638#discussion_r1024541016
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -667,12 +668,70 @@ def schema(self) -> StructType:
else:
return self._schema
-def
xinrong-meng commented on PR #38677:
URL: https://github.com/apache/spark/pull/38677#issuecomment-1317723627
```
FAIL [2.213s]: test_termination_sigterm
(pyspark.tests.test_daemon.DaemonTests)
Ensure that daemon and workers terminate on SIGTERM.
HeartSaVioR commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1024522973
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
rangadi commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1024518159
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
rangadi commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1024513027
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
HeartSaVioR commented on code in PR #38384:
URL: https://github.com/apache/spark/pull/38384#discussion_r1024506647
##
connector/protobuf/src/main/scala/org/apache/spark/sql/protobuf/utils/ProtobufUtils.scala:
##
@@ -155,21 +155,52 @@ private[sql] object ProtobufUtils extends
HeartSaVioR commented on PR #38384:
URL: https://github.com/apache/spark/pull/38384#issuecomment-1317663388
(I've just realized that this PR is a follow-up with an already-resolved JIRA
ticket. Please add the prefix `[FOLLOWUP]` for such a case. If the change is
non-trivial, we advise to create
HeartSaVioR closed pull request #38384: [SPARK-40657][PROTOBUF] Require shading
for Java class jar, improve error handling
URL: https://github.com/apache/spark/pull/38384
HeartSaVioR commented on PR #38384:
URL: https://github.com/apache/spark/pull/38384#issuecomment-1317660304
Thanks! Merging to master.
HeartSaVioR closed pull request #38503: [SPARK-40940] Remove Multi-stateful
operator checkers for streaming queries.
URL: https://github.com/apache/spark/pull/38503
HeartSaVioR commented on PR #38503:
URL: https://github.com/apache/spark/pull/38503#issuecomment-1317644092
Thanks! Merging to master.
AmplabJenkins commented on PR #38666:
URL: https://github.com/apache/spark/pull/38666#issuecomment-1317642241
Can one of the admins verify this patch?
AmplabJenkins commented on PR #38668:
URL: https://github.com/apache/spark/pull/38668#issuecomment-1317642173
Can one of the admins verify this patch?
viirya commented on PR #38669:
URL: https://github.com/apache/spark/pull/38669#issuecomment-1317629457
Merged. Thanks.
viirya closed pull request #38669: [SPARK-41155][SQL] Add error message to
SchemaColumnConvertNotSupportedException
URL: https://github.com/apache/spark/pull/38669
amaliujia commented on PR #38678:
URL: https://github.com/apache/spark/pull/38678#issuecomment-1317617606
@cloud-fan @grundprinzip @zhengruifeng
cc @HyukjinKwon
amaliujia opened a new pull request, #38678:
URL: https://github.com/apache/spark/pull/38678
### What changes were proposed in this pull request?
As we have a guidance for Connect proto ([adding proto
hvanhovell commented on code in PR #38618:
URL: https://github.com/apache/spark/pull/38618#discussion_r1024455483
##
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##
@@ -71,158 +71,146 @@ private[sql] class ArrowBatchStreamWriter(
}