ulysses-you commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1154046952
##
sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala:
##
@@ -1161,3 +1177,12 @@ object AddLimit extends Rule[LogicalPlan] {
case _
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1154042557
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
juanvisoler commented on code in PR #40608:
URL: https://github.com/apache/spark/pull/40608#discussion_r1154034968
##
python/pyspark/sql/dataframe.py:
##
@@ -706,6 +706,25 @@ def explain(
assert self._sc._jvm is not None
ulysses-you commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1154034136
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala:
##
@@ -111,11 +112,12 @@ class SparkSessionExtensions {
type FunctionDescription =
cloud-fan commented on PR #40258:
URL: https://github.com/apache/spark/pull/40258#issuecomment-1491283077
according to the [code in
2.3](https://github.com/apache/spark/blob/branch-2.3/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala#L190),
I think
cloud-fan commented on PR #40258:
URL: https://github.com/apache/spark/pull/40258#issuecomment-1491280956
> FWIW Both the use cases were working fine in Spark 2.3
Sorry I missed this point. Do you know how it worked in 2.3? Did 2.3 also
call `distinct` before returning the result?
ivoson commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1154005920
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -134,24 +134,41 @@ private[sql] class SparkResult[T](
/**
yaooqinn commented on code in PR #40583:
URL: https://github.com/apache/spark/pull/40583#discussion_r1153995742
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableChange.java:
##
@@ -628,6 +630,16 @@ public int hashCode() {
result = 31 * result +
cloud-fan commented on code in PR #40583:
URL: https://github.com/apache/spark/pull/40583#discussion_r1153994592
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableChange.java:
##
@@ -628,6 +630,16 @@ public int hashCode() {
result = 31 * result +
cloud-fan closed pull request #40116: [SPARK-41391][SQL] The output column name
of groupBy.agg(count_distinct) is incorrect
URL: https://github.com/apache/spark/pull/40116
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
cloud-fan commented on PR #40116:
URL: https://github.com/apache/spark/pull/40116#issuecomment-1491261819
thanks, merging to master!
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153990472
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala:
##
@@ -111,11 +112,12 @@ class SparkSessionExtensions {
type FunctionDescription
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153989800
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala:
##
@@ -111,11 +112,12 @@ class SparkSessionExtensions {
type FunctionDescription
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153988178
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala:
##
@@ -111,11 +112,12 @@ class SparkSessionExtensions {
type FunctionDescription
LuciferYang commented on PR #40605:
URL: https://github.com/apache/spark/pull/40605#issuecomment-1491252507
GA passed
LuciferYang commented on PR #40610:
URL: https://github.com/apache/spark/pull/40610#issuecomment-1491250312
```
2023-03-30T16:09:39.936Z [info] - Dataset result destructive iterator *** FAILED *** (84 milliseconds)
2023-03-30T16:09:39.9382605Z
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153985168
##
sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala:
##
@@ -1161,3 +1177,12 @@ object AddLimit extends Rule[LogicalPlan] {
case
yaooqinn commented on PR #40583:
URL: https://github.com/apache/spark/pull/40583#issuecomment-1491249761
cc @cloud-fan @HyukjinKwon
dongjoon-hyun commented on code in PR #40589:
URL: https://github.com/apache/spark/pull/40589#discussion_r1153984929
##
sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala:
##
@@ -500,6 +500,22 @@ class SparkSessionExtensionSuite extends SparkFunSuite
hvanhovell commented on code in PR #40611:
URL: https://github.com/apache/spark/pull/40611#discussion_r1153984573
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/arrow/ArrowSerializer.scala:
##
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache
cloud-fan commented on PR #32987:
URL: https://github.com/apache/spark/pull/32987#issuecomment-1491247967
After giving it more thought, I think the idea is valid. If a subexpression
will be evaluated at least once, and likely more than once due to conditional
branches, it should be
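The comment above sketches a classic common-subexpression-elimination argument: an expression that is evaluated at least once regardless of which conditional branch is taken can be hoisted out of the branches without ever adding work. A minimal, hypothetical illustration in plain Python (not Spark's actual subexpression-elimination code):

```python
# Hypothetical illustration of the hoisting argument; not Spark's implementation.
calls = {"naive": 0, "hoisted": 0}

def expensive(x, counter_key):
    calls[counter_key] += 1  # count how often the subexpression is evaluated
    return x * x

def naive(x):
    # The subexpression expensive(x) appears in every branch, so it is
    # evaluated at least once no matter which branch is taken.
    if x > 0:
        return expensive(x, "naive") + expensive(x, "naive")
    else:
        return expensive(x, "naive") - 1

def hoisted(x):
    # Hoisting it out of the branches cannot add work: it was going to be
    # evaluated anyway, and now it runs exactly once.
    common = expensive(x, "hoisted")
    return common + common if x > 0 else common - 1

assert naive(3) == hoisted(3)
assert naive(-3) == hoisted(-3)
print(calls)  # {'naive': 3, 'hoisted': 2}
```

The hoisted version does strictly no more evaluations than the naive one on any input, which is the safety condition the comment appeals to.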
cloud-fan commented on code in PR #40602:
URL: https://github.com/apache/spark/pull/40602#discussion_r1153980662
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/DB2Dialect.scala:
##
@@ -113,8 +114,9 @@ private object DB2Dialect extends JdbcDialect {
// scalastyle:off
hvanhovell commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1153980266
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -134,24 +134,41 @@ private[sql] class SparkResult[T](
cloud-fan commented on code in PR #40602:
URL: https://github.com/apache/spark/pull/40602#discussion_r1153979914
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -398,10 +398,24 @@ abstract class JdbcDialect extends Serializable with
Logging {
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153976547
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private
cloud-fan commented on code in PR #40545:
URL: https://github.com/apache/spark/pull/40545#discussion_r1153976307
##
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala:
##
@@ -220,9 +220,20 @@ object FileSourceStrategy extends Strategy
Hisoka-X commented on code in PR #40609:
URL: https://github.com/apache/spark/pull/40609#discussion_r1153975175
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -625,6 +625,20 @@ class QueryExecutionErrorsSuite
}
}
+
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153973985
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
Hisoka-X commented on code in PR #40609:
URL: https://github.com/apache/spark/pull/40609#discussion_r1153973500
##
sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala:
##
@@ -625,6 +625,20 @@ class QueryExecutionErrorsSuite
}
}
+
gengliangwang commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491232046
> My suggestion is don't touch it to keep legacy workloads running. We
should update the SQL queries to not use String so extensively.
+1, totally agree!
wangyum commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491229091
+1 for fail it in ANSI mode.
gengliangwang commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153965249
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
gengliangwang commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153964890
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153926757
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer)
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153958746
##
streaming/src/test/java/test/org/apache/spark/streaming/JavaAPISuite.java:
##
@@ -1476,7 +1476,7 @@ public void testCheckpointMasterRecovery() throws
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153958604
##
core/src/test/java/test/org/apache/spark/JavaAPISuite.java:
##
@@ -93,7 +94,7 @@ public class JavaAPISuite implements Serializable {
@Before
public void
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153958391
##
core/src/test/java/test/org/apache/spark/Java8RDDAPISuite.java:
##
@@ -246,7 +246,7 @@ public void mapPartitions() {
@Test
public void sequenceFile()
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153957872
##
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java:
##
@@ -243,7 +243,9 @@ protected void serviceInit(Configuration
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153957701
##
common/network-shuffle/src/test/java/org/apache/spark/network/shuffle/TestShuffleDataContext.java:
##
@@ -47,8 +47,9 @@ public TestShuffleDataContext(int
hvanhovell commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1153957374
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -134,24 +134,41 @@ private[sql] class SparkResult[T](
hvanhovell commented on code in PR #40610:
URL: https://github.com/apache/spark/pull/40610#discussion_r1153957184
##
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkResult.scala:
##
@@ -45,7 +45,7 @@ private[sql] class SparkResult[T](
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153956628
##
common/network-shuffle/src/test/java/org/apache/spark/network/shuffle/ExternalBlockHandlerSuite.java:
##
@@ -125,7 +125,7 @@ private void checkDiagnosisResult(
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153955764
##
common/network-common/src/test/java/org/apache/spark/network/StreamTestHelper.java:
##
@@ -49,7 +49,7 @@ private static ByteBuffer createBuffer(int bufSize) {
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153951716
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private
yaooqinn commented on PR #40602:
URL: https://github.com/apache/spark/pull/40602#issuecomment-1491192780
cc @cloud-fan @HyukjinKwon thanks
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153936004
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153935792
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153935588
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153934642
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -320,7 +320,28 @@ private[spark] object Utils extends Logging {
* newly created, and is not
LuciferYang closed pull request #40598: [SPARK-42974][CORE] Restore
`Utils#createTempDir` use `ShutdownHookManager#registerShutdownDeleteDir` to
cleanup tempDir
URL: https://github.com/apache/spark/pull/40598
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153932172
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
RyanBerti opened a new pull request, #40615:
URL: https://github.com/apache/spark/pull/40615
### What changes were proposed in this pull request?
This PR adds a new dependency on the datasketches-java project, and provides
3 new functions which utilize Datasketches HllSketch and Union
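For readers unfamiliar with the sketches involved: an HllSketch approximates the number of distinct items in a stream, and a Union merges sketches. The following is a toy, self-contained illustration of the underlying HyperLogLog idea only, under the usual register/rank scheme; it is hypothetical code, not the datasketches-java API:

```python
import hashlib
import math

class TinyHll:
    """Toy HyperLogLog sketch illustrating, conceptually, what an HllSketch
    and Union do; not the datasketches-java implementation."""

    def __init__(self, p: int = 12):
        self.p = p              # 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m

    def _hash(self, value: str) -> int:
        # 64-bit hash derived from SHA-1 (toy choice for determinism here)
        return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

    def update(self, value: str) -> None:
        h = self._hash(value)
        idx = h >> (64 - self.p)                      # top p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)         # remaining 64-p bits
        rank = (64 - self.p) - rest.bit_length() + 1  # leading zeros + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def merge(self, other: "TinyHll") -> None:
        # Register-wise max yields the sketch of the union of both streams.
        self.registers = [max(a, b) for a, b in zip(self.registers, other.registers)]

    def estimate(self) -> float:
        alpha = 0.7213 / (1 + 1.079 / self.m)
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if raw <= 2.5 * self.m and zeros:             # small-range correction
            return self.m * math.log(self.m / zeros)
        return raw

sk = TinyHll()
for i in range(10_000):
    sk.update(f"item-{i}")
print(round(sk.estimate()))  # close to 10000 (~1.6% standard error at p=12)
```

The register-wise-max merge is what makes these sketches attractive for distributed aggregation: partial sketches built per partition can be unioned without rescanning the data.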
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153925550
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer)
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153931344
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153930902
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153930483
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -320,7 +320,28 @@ private[spark] object Utils extends Logging {
* newly created, and is not
lucaspompeun commented on PR #40614:
URL: https://github.com/apache/spark/pull/40614#issuecomment-1491167649
I have corrected the problem that caused the build error in the GitHub workflow
cloud-fan commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491166746
Or we should probably fail it in ANSI mode, cc @gengliangwang
cloud-fan commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1491166217
The change makes sense, but I'd say this is a legacy feature and the
existing behavior doesn't make sense at all. For string +/- interval, the
string can be timestamp, timestamp_ntz
LuciferYang commented on code in PR #40613:
URL: https://github.com/apache/spark/pull/40613#discussion_r1153928232
##
core/src/main/scala/org/apache/spark/util/Utils.scala:
##
@@ -330,7 +351,9 @@ private[spark] object Utils extends Logging {
def createTempDir(
root:
lucaspompeun opened a new pull request, #40614:
URL: https://github.com/apache/spark/pull/40614
### What changes were proposed in this pull request?
Correction of code highlights in SQL protobuf documentation.
old version:
LuciferYang opened a new pull request, #40613:
URL: https://github.com/apache/spark/pull/40613
This reverts commit 5cb5d1fa66ad9d6e94beb17d3fda3a8f220bc371.
### What changes were proposed in this pull request?
### Why are the changes needed?
###
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153920360
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer)
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153917326
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer)
srowen commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153918827
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
srowen commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153918687
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
sadikovi commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r1153918589
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer) {
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153915858
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153915713
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153915130
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
rangadi commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153914621
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153908962
##
python/pyspark/sql/connect/session.py:
##
@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153908437
##
python/pyspark/sql/connect/session.py:
##
@@ -489,10 +495,6 @@ def sparkContext(self) -> Any:
def streams(self) -> Any:
raise
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153906537
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
rangadi commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153904387
##
python/pyspark/sql/connect/readwriter.py:
##
@@ -37,7 +37,7 @@
from pyspark.sql.connect._typing import ColumnOrName, OptionalPrimitiveType
from
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153895088
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153904044
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
github-actions[bot] closed pull request #38732: [SPARK-41210][K8S] Window based
executor failure tracking mechanism
URL: https://github.com/apache/spark/pull/38732
github-actions[bot] commented on PR #39130:
URL: https://github.com/apache/spark/pull/39130#issuecomment-1491126263
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] closed pull request #39102: [SPARK-41555][SQL] Multi
sparkSession should share single SQLAppStatusStore
URL: https://github.com/apache/spark/pull/39102
LuciferYang commented on code in PR #36529:
URL: https://github.com/apache/spark/pull/36529#discussion_r115394
##
common/network-common/src/main/java/org/apache/spark/network/util/JavaUtils.java:
##
@@ -362,6 +364,60 @@ public static byte[] bufferToArray(ByteBuffer buffer)
WweiL commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1153899173
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -2120,7 +2130,6 @@ class SparkConnectPlanner(val
zhengruifeng commented on PR #40612:
URL: https://github.com/apache/spark/pull/40612#issuecomment-1491119721
LGTM
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153895796
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153897632
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153896951
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153896257
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1153896106
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationWithinWatermarkSuite.scala:
##
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache
itholic commented on PR #39937:
URL: https://github.com/apache/spark/pull/39937#issuecomment-1491110788
Test passed. @MaxGekk could you take a look when you find some time?
HyukjinKwon commented on code in PR #40591:
URL: https://github.com/apache/spark/pull/40591#discussion_r1153892668
##
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala:
##
@@ -289,7 +289,8 @@ case class SparkListenerApplicationStart(
driverAttributes:
HeartSaVioR commented on PR #40561:
URL: https://github.com/apache/spark/pull/40561#issuecomment-1491105348
> What is the decision about batch support?
I just added batch support in the latest commit. It needs more test
coverage for batch query support, so that's why we have new
1 - 100 of 218 matches