chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199562650
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -134,6 +137,27 @@ class RocksDBFileManager(
private
panbingkun opened a new pull request, #41241:
URL: https://github.com/apache/spark/pull/41241
### What changes were proposed in this pull request?
The PR aims to assign a name to the error class _LEGACY_ERROR_TEMP_0017.
### Why are the changes needed?
The changes improve the
wangyum commented on PR #41195:
URL: https://github.com/apache/spark/pull/41195#issuecomment-1555428658
@Kimahriman Do you have a way to reproduce?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199516457
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -164,9 +194,34 @@ class RocksDB(
loadedVersion = -1 //
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199516321
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/RocksDBStateStoreSuite.scala:
##
@@ -177,6 +185,33 @@ class RocksDBStateStoreSuite
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199516278
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -280,34 +342,34 @@ class RocksDBFileManager(
val
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199515952
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -134,6 +137,27 @@ class RocksDBFileManager(
private
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199515925
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -205,19 +229,39 @@ class RocksDBFileManager(
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199515899
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -134,6 +137,27 @@ class RocksDBFileManager(
private
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199515734
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -129,17 +140,36 @@ class RocksDB(
* Note that this will copy
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199515576
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -56,6 +56,15 @@ class RocksDB(
hadoopConf: Configuration =
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199513814
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -280,34 +342,34 @@ class RocksDBFileManager(
val
dtenedor commented on code in PR #41191:
URL: https://github.com/apache/spark/pull/41191#discussion_r1199511465
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala:
##
@@ -3187,7 +3189,42 @@ class AstBuilder extends
github-actions[bot] commented on PR #39825:
URL: https://github.com/apache/spark/pull/39825#issuecomment-1555391404
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #39861: [WIP][SPARK-42291] Enable
dropping of columns for non V2 tables
URL: https://github.com/apache/spark/pull/39861
github-actions[bot] closed pull request #39838: [SPARK-42270][SQL] Improve sort
merge join stability with large stream side
URL: https://github.com/apache/spark/pull/39838
ueshin opened a new pull request, #41240:
URL: https://github.com/apache/spark/pull/41240
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was this
rangadi commented on PR #40959:
URL: https://github.com/apache/spark/pull/40959#issuecomment-1555381692
> Not tested yet, will perform the test when I'm back.
Is this tested yet? Could you update the PR description?
ericm-db commented on PR #41205:
URL: https://github.com/apache/spark/pull/41205#issuecomment-1555324959
Thanks for the review! I've made the changes, and I think it's ready to
merge now @MaxGekk @HeartSaVioR
xinrong-meng commented on PR #41147:
URL: https://github.com/apache/spark/pull/41147#issuecomment-1555303500
Please feel free to leave comments if any, I'll adjust them in follow-ups.
xinrong-meng commented on PR #41147:
URL: https://github.com/apache/spark/pull/41147#issuecomment-1555302733
Merged to master, thank you!
xinrong-meng closed pull request #41147: [SPARK-43543][PYTHON] Fix nested
MapType behavior in Pandas UDF
URL: https://github.com/apache/spark/pull/41147
MaxGekk closed pull request #41200: [SPARK-43539][SQL] Assign a name to the
error class _LEGACY_ERROR_TEMP_0003
URL: https://github.com/apache/spark/pull/41200
dtenedor commented on PR #41007:
URL: https://github.com/apache/spark/pull/41007#issuecomment-1555198846
> @dtenedor
There are still 30 multipartIdentifier usages that do NOT support
IDENTIFIER() notation.
So we would trade mechanical churn in the grammar for code changes in
MaxGekk commented on PR #41200:
URL: https://github.com/apache/spark/pull/41200#issuecomment-1555176941
+1, LGTM. Merging to master.
Thank you, @panbingkun.
Kimahriman commented on PR #41195:
URL: https://github.com/apache/spark/pull/41195#issuecomment-1555080586
Actually hit a new issue related to this after finally being able to test
out 3.4 from the Delta release. Because of the bump to slf4j 2, it seems
`log4j-slf4j2-impl` doesn't get
dongjoon-hyun commented on PR #41232:
URL: https://github.com/apache/spark/pull/41232#issuecomment-1555078951
Thank you, @anigos .
kaiubiferreira closed pull request #41239: Span array function
URL: https://github.com/apache/spark/pull/41239
kaiubiferreira opened a new pull request, #41239:
URL: https://github.com/apache/spark/pull/41239
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
Fokko opened a new pull request, #41238:
URL: https://github.com/apache/spark/pull/41238
### What changes were proposed in this pull request?
Small change to `anyToMicros` to also accept `LocalDateTime` that's being
returned when working with `TIMESTAMP_NTZ`. This simplifies
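The conversion Fokko describes can be sketched in miniature. This is a hedged illustration of the idea only: `any_to_micros` below is a hypothetical Python stand-in for the Scala `anyToMicros`, and it assumes a tz-naive value (the `LocalDateTime`/`TIMESTAMP_NTZ` case) should be read as a UTC wall-clock time:

```python
from datetime import datetime, timezone

def any_to_micros(value):
    """Convert a timestamp-like value to microseconds since the epoch.

    Hedged sketch: a tz-naive datetime (the analogue of LocalDateTime,
    i.e. TIMESTAMP_NTZ) is interpreted as UTC wall-clock time; tz-aware
    values are converted as-is.
    """
    if isinstance(value, datetime):
        if value.tzinfo is None:
            value = value.replace(tzinfo=timezone.utc)
        return int(value.timestamp() * 1_000_000)
    raise TypeError(f"unsupported timestamp value: {value!r}")
```

The tz-aware branch plays the role of an ordinary `TIMESTAMP`; only the naive branch is the new case the PR adds.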
dongjoon-hyun commented on PR #41226:
URL: https://github.com/apache/spark/pull/41226#issuecomment-1555032827
Thank you, @panbingkun , @LuciferYang , @zhenlineo !
Merged to master.
dongjoon-hyun closed pull request #41226: [SPARK-43584][BUILD] Update
`sbt-assembly`, `sbt-revolver`, `sbt-mima-plugin` plugins
URL: https://github.com/apache/spark/pull/41226
shuwang21 commented on PR #41225:
URL: https://github.com/apache/spark/pull/41225#issuecomment-1555014603
> > Do you think when `spark.network.crypto.saslFallback=true` and L95 from
`AuthRpcHandler.java`.
> > ```
> > saslHandler = new SaslRpcHandler(conf, channel, null,
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199195571
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -123,6 +125,7 @@ class RocksDBFileManager(
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1199194346
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -134,6 +137,27 @@ class RocksDBFileManager(
private
otterc commented on PR #41225:
URL: https://github.com/apache/spark/pull/41225#issuecomment-1554906573
> Do you think when `spark.network.crypto.saslFallback=true` and L95 from
`AuthRpcHandler.java`.
>
> ```
> saslHandler = new SaslRpcHandler(conf, channel, null, secretKeyHolder);
tgravescs commented on code in PR #41173:
URL: https://github.com/apache/spark/pull/41173#discussion_r1199162022
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala:
##
@@ -780,7 +771,7 @@ private[yarn] class YarnAllocator(
bjornjorgensen commented on code in PR #41211:
URL: https://github.com/apache/spark/pull/41211#discussion_r1198937746
##
python/pyspark/pandas/tests/data_type_ops/test_date_ops.py:
##
@@ -61,6 +63,10 @@ def test_add(self):
for psser in self.pssers:
anigos commented on PR #41232:
URL: https://github.com/apache/spark/pull/41232#issuecomment-1554814355
This was small but much needed as it confuses developers. Thanks
@dongjoon-hyun .
zhenlineo commented on PR #41226:
URL: https://github.com/apache/spark/pull/41226#issuecomment-1554804591
I checked locally. MiMa 1.1.2 can find errors about missing private classes
e.g. `private[sql] object Dataset`
```
object org.apache.spark.sql.Dataset does not have a
dongjoon-hyun closed pull request #41234: [SPARK-43589][SQL][3.3] Fix
`cannotBroadcastTableOverMaxTableBytesError` to use `bytesToString`
URL: https://github.com/apache/spark/pull/41234
dongjoon-hyun commented on PR #41234:
URL: https://github.com/apache/spark/pull/41234#issuecomment-1554761464
Thank you again, @LuciferYang ! Merged to branch-3.3.
srowen closed pull request #40398: [MINOR][DOCS] Update `translate` docblock
URL: https://github.com/apache/spark/pull/40398
wankunde opened a new pull request, #41237:
URL: https://github.com/apache/spark/pull/41237
### What changes were proposed in this pull request?
If there are few distinct values in the RangePartitioner, there will be very
few partitions that could be very large. We can
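The effect wankunde describes can be illustrated with a small sketch of how range boundaries come from a sorted sample. This is a simplified, hypothetical stand-in for `RangePartitioner`'s bounds computation, not Spark's actual code: equal quantile candidates collapse into one boundary, so low key cardinality caps the partition count however many partitions were requested.

```python
def range_bounds(sampled_keys, target_partitions):
    """Pick up to target_partitions - 1 ascending split points from a sample.

    Simplified sketch of range partitioning: quantile candidates that equal
    the previous boundary are dropped, so few distinct keys yield few
    (and therefore large) partitions.
    """
    ordered = sorted(sampled_keys)
    step = len(ordered) / target_partitions
    bounds = []
    for i in range(1, target_partitions):
        candidate = ordered[min(int(i * step), len(ordered) - 1)]
        if not bounds or candidate > bounds[-1]:
            bounds.append(candidate)
    return bounds  # number of partitions = len(bounds) + 1
```

With 100 sampled keys but only two distinct values, requesting 10 partitions still yields bounds `[1, 2]`, i.e. at most 3 partitions, which is the skew described above.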
srowen commented on PR #41195:
URL: https://github.com/apache/spark/pull/41195#issuecomment-1554550499
Seems reasonable then. Let's just get the tests to run again.
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1198915674
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/higherOrderFunctions.scala:
##
@@ -130,6 +134,23 @@ case class LambdaFunction(
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1198914316
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/higherOrderFunctions.scala:
##
@@ -130,6 +134,23 @@ case class LambdaFunction(
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1198880920
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/EquivalentExpressions.scala:
##
@@ -149,9 +149,13 @@ class EquivalentExpressions(
//
panbingkun commented on code in PR #41214:
URL: https://github.com/apache/spark/pull/41214#discussion_r1198690104
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala:
##
@@ -407,8 +407,8 @@ private[sql] object QueryParsingErrors extends
Kimahriman commented on code in PR #34558:
URL: https://github.com/apache/spark/pull/34558#discussion_r1198873238
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/higherOrderFunctions.scala:
##
@@ -235,6 +256,53 @@ trait HigherOrderFunction extends
panbingkun opened a new pull request, #41236:
URL: https://github.com/apache/spark/pull/41236
### What changes were proposed in this pull request?
The PR aims to assign a name to the error class _LEGACY_ERROR_TEMP_0013.
### Why are the changes needed?
The changes improve the
pan3793 commented on PR #37483:
URL: https://github.com/apache/spark/pull/37483#issuecomment-1554421469
cc @yaooqinn @cloud-fan
pan3793 commented on PR #37483:
URL: https://github.com/apache/spark/pull/37483#issuecomment-1554409912
IMO we need to partially backport this patch to branch-3.3.
The base64 function behavior changed in SPARK-37820 (3.3.0), which causes some
queries, e.g. `select unbase64("abcs==")`,
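To make the behavior change concrete: a strict decoder in the spirit of `java.util.Base64` rejects an input like `"abcs=="`, whose six characters do not form whole 4-character groups, while older lenient decoders quietly accepted such strings. A hedged Python sketch of that strict check (the helper `strict_unbase64` is illustrative only, not Spark's implementation):

```python
import base64
import re

# Alphabet characters followed by at most two '=' padding chars at the end.
_B64_RE = re.compile(r"[A-Za-z0-9+/]*={0,2}")

def strict_unbase64(s: str) -> bytes:
    """Decode base64 strictly: the input must consist of whole 4-character
    groups with padding only at the end, mirroring how a strict decoder
    rejects malformed endings that a lenient one would ignore."""
    if len(s) % 4 != 0 or not _B64_RE.fullmatch(s):
        raise ValueError(f"illegal base64 input: {s!r}")
    return base64.b64decode(s, validate=True)
```

Under this sketch, `strict_unbase64("YWJjcw==")` decodes to `b"abcs"`, while `strict_unbase64("abcs==")` raises because 6 is not a multiple of 4, matching the kind of query failure described above.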
LuciferYang commented on code in PR #41192:
URL: https://github.com/apache/spark/pull/41192#discussion_r1198789118
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/protobuf/functions.scala:
##
@@ -45,12 +53,36 @@ object functions {
messageName: String,
LuciferYang commented on PR #40925:
URL: https://github.com/apache/spark/pull/40925#issuecomment-1554342893
> Can you make sure we don't exclude too many cases?
Will double check this later
LuciferYang opened a new pull request, #41235:
URL: https://github.com/apache/spark/pull/41235
### What changes were proposed in this pull request?
This pr make `connect-jvm-client-mima-check` to support mima check between
`connect-client-jvm` and `protobuf` module.
### Why
beliefer commented on PR #40782:
URL: https://github.com/apache/spark/pull/40782#issuecomment-1554336985
@ueshin @hvanhovell Recently, https://github.com/apache/spark/pull/41064
added the rowCount statistics to `LocalRelation`. In this PR, @ueshin also
suggested to add the row count as
HeartSaVioR commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1198779061
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -134,6 +137,27 @@ class RocksDBFileManager(
private
HeartSaVioR commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1198581431
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -56,6 +56,15 @@ class RocksDB(
hadoopConf: Configuration = new
LuciferYang commented on PR #41233:
URL: https://github.com/apache/spark/pull/41233#issuecomment-1554318417
cc @HyukjinKwon FYI
dongjoon-hyun opened a new pull request, #41234:
URL: https://github.com/apache/spark/pull/41234
### What changes were proposed in this pull request?
This is a backporting of #41232
This PR aims to fix `cannotBroadcastTableOverMaxTableBytesError` to use
`bytesToString`
LuciferYang opened a new pull request, #41233:
URL: https://github.com/apache/spark/pull/41233
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How was
dongjoon-hyun commented on PR #41232:
URL: https://github.com/apache/spark/pull/41232#issuecomment-1554293573
Merged to master/3.4
dongjoon-hyun closed pull request #41232: [SPARK-43589][SQL] Fix
`cannotBroadcastTableOverMaxTableBytesError` to use `bytesToString`
URL: https://github.com/apache/spark/pull/41232
LuciferYang commented on PR #41231:
URL: https://github.com/apache/spark/pull/41231#issuecomment-1554277884
Thanks @dongjoon-hyun @yaooqinn @panbingkun
dongjoon-hyun commented on PR #41231:
URL: https://github.com/apache/spark/pull/41231#issuecomment-1554275933
Thank you all! Merged to master for Apache Spark 3.5.0.
dongjoon-hyun closed pull request #41231: [SPARK-43588][BUILD] Upgrade ASM to
9.5
URL: https://github.com/apache/spark/pull/41231
beliefer commented on PR #41212:
URL: https://github.com/apache/spark/pull/41212#issuecomment-1554271802
> Do you think you can make this new test environment variable works for
both Maven and SBT, @beliefer ?
AFAIK, `SparkBuilder` only used for SBT.
peter-toth commented on PR #41119:
URL: https://github.com/apache/spark/pull/41119#issuecomment-1554266843
> Hi, @rednaxelafx @peter-toth could you help review this PR? Thanks
Hi @wankunde, thanks for pinging me. I can take a look at this PR sometime
next week...
dongjoon-hyun commented on PR #41226:
URL: https://github.com/apache/spark/pull/41226#issuecomment-1554208491
Oh, +1 for @LuciferYang 's comment.
LuciferYang commented on PR #41226:
URL: https://github.com/apache/spark/pull/41226#issuecomment-1554205847
cc @zhenlineo I remember you mentioned a bug in MiMa 1.1.1: `where the MiMa
will not be able to check the class methods if the object is marked private`,
so Spark has been using
WeichenXu123 commented on code in PR #41176:
URL: https://github.com/apache/spark/pull/41176#discussion_r1198664831
##
python/pyspark/mlv2/feature.py:
##
@@ -0,0 +1,127 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
MaxGekk closed pull request #41020: [SPARK-43345][SPARK-43346][SQL] Rename the
error classes _LEGACY_ERROR_TEMP_[0041|1206]
URL: https://github.com/apache/spark/pull/41020
MaxGekk commented on PR #41020:
URL: https://github.com/apache/spark/pull/41020#issuecomment-1554182276
+1, LGTM. Merging to master.
Thank you, @imback82.
dongjoon-hyun commented on PR #41232:
URL: https://github.com/apache/spark/pull/41232#issuecomment-1554166101
Thank you so much, @LuciferYang !
zhengruifeng commented on PR #41188:
URL: https://github.com/apache/spark/pull/41188#issuecomment-1554162004
merged to master
zhengruifeng closed pull request #41188: [SPARK-43361][PROTOBUF] update
documentation for errors related to enum serialization
URL: https://github.com/apache/spark/pull/41188
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1198625757
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDBFileManager.scala:
##
@@ -362,6 +423,7 @@ class RocksDBFileManager(
}
MaxGekk commented on PR #41205:
URL: https://github.com/apache/spark/pull/41205#issuecomment-1554129823
@ericm-db Could you allow GitHub actions in your fork and re-trigger GAs,
please.
MaxGekk commented on code in PR #41205:
URL: https://github.com/apache/spark/pull/41205#discussion_r1198618103
##
core/src/main/resources/error/error-classes.json:
##
@@ -202,6 +202,13 @@
"Another instance of this query was just started by a concurrent
session."
]
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1198620563
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -334,25 +405,59 @@ class RocksDB(
loadedVersion = -1 //
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1198619715
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -286,44 +343,58 @@ class RocksDB(
*/
def commit(): Long =
chaoqin-li1123 commented on code in PR #41099:
URL: https://github.com/apache/spark/pull/41099#discussion_r1198619375
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/RocksDB.scala:
##
@@ -164,9 +194,34 @@ class RocksDB(
loadedVersion = -1 //
justaparth commented on PR #41188:
URL: https://github.com/apache/spark/pull/41188#issuecomment-1554108357
cc @HyukjinKwon would you mind taking a look and merging this one? thanks
panbingkun commented on PR #41200:
URL: https://github.com/apache/spark/pull/41200#issuecomment-1554099340
> @panbingkun Could you wrap `op` by `toSQLStmt()` at:
>
>
mrmadira commented on PR #39474:
URL: https://github.com/apache/spark/pull/39474#issuecomment-1554098087
Hi - Is it possible to get a backport to Spark 3.3 for this?
Hisoka-X commented on PR #41156:
URL: https://github.com/apache/spark/pull/41156#issuecomment-1554096090
cc @cloud-fan
dongjoon-hyun commented on PR #41232:
URL: https://github.com/apache/spark/pull/41232#issuecomment-1554094150
Could you review this PR, @LuciferYang ?
rangadi commented on PR #41188:
URL: https://github.com/apache/spark/pull/41188#issuecomment-1554081799
Thank you!
dongjoon-hyun opened a new pull request, #41232:
URL: https://github.com/apache/spark/pull/41232
…
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
dongjoon-hyun commented on PR #41229:
URL: https://github.com/apache/spark/pull/41229#issuecomment-1554075901
Merged to master/3.4/3.3
MaxGekk commented on code in PR #41214:
URL: https://github.com/apache/spark/pull/41214#discussion_r1198595578
##
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryParsingErrors.scala:
##
@@ -407,8 +407,8 @@ private[sql] object QueryParsingErrors extends
dongjoon-hyun closed pull request #41229: [SPARK-43587][CORE][TESTS] Run
`HealthTrackerIntegrationSuite` in a dedicated JVM
URL: https://github.com/apache/spark/pull/41229