dongjoon-hyun commented on code in PR #10:
URL: https://github.com/apache/spark-kubernetes-operator/pull/10#discussion_r1600381515
##
gradle.properties:
##
@@ -18,17 +18,23 @@
group=org.apache.spark.k8s.operator
version=0.1.0
+# Caution: fabric8 version should be aligned
sahnib commented on code in PR #44323:
URL: https://github.com/apache/spark/pull/44323#discussion_r1600366874
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingSymmetricHashJoinHelper.scala:
##
@@ -219,10 +222,41 @@ object
dongjoon-hyun commented on PR #10:
URL: https://github.com/apache/spark-kubernetes-operator/pull/10#issuecomment-2110695911
Thank you for updating.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
GideonPotok commented on code in PR #46526:
URL: https://github.com/apache/spark/pull/46526#discussion_r1600041876
##
sql/core/benchmarks/CollationBenchmark-jdk21-results.txt:
##
Review Comment:
0. Note, by the way, that because we are relying on supportsBinaryEquality,
mkaravel commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1600292083
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -183,6 +204,19 @@ public static int findInSet(final UTF8String
SubhamSinghal commented on PR #46541:
URL: https://github.com/apache/spark/pull/46541#issuecomment-2110596696
@hvanhovell Coalesce does not enforce uniform data distribution across
partitions. We would like to pass a custom size-based coalescer to get more
uniform data distribution. This
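The snippet above motivates a "size-based" coalescer: plain coalesce merges partitions by position, ignoring their sizes. As a minimal illustrative sketch (not Spark's API; the function name and greedy strategy are assumptions for illustration only), adjacent partitions can be grouped so each output partition stays near a target size:

```python
# Hypothetical sketch, NOT Spark's PartitionCoalescer API: a greedy
# size-based grouping of adjacent partitions, illustrating why a custom
# coalescer can give more uniform output sizes than positional coalesce.

def size_based_coalesce(partition_sizes, target_size):
    """Group adjacent partition indices so each group's total size does
    not exceed target_size (a single oversized partition still forms
    its own group)."""
    groups, current, current_size = [], [], 0
    for idx, size in enumerate(partition_sizes):
        # Start a new group once adding this partition would overshoot.
        if current and current_size + size > target_size:
            groups.append(current)
            current, current_size = [], 0
        current.append(idx)
        current_size += size
    if current:
        groups.append(current)
    return groups

if __name__ == "__main__":
    # Skewed input sizes: positional coalesce(3) would pair partitions
    # blindly; the greedy grouping balances the totals instead.
    sizes = [100, 5, 5, 90, 10, 80]
    print(size_based_coalesce(sizes, target_size=100))
    # → [[0], [1, 2, 3], [4, 5]]
```

This is only a sketch of the idea under discussion; the actual PR would plug a coalescer into Spark's partition-coalescing machinery rather than work on size lists.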
cloud-fan closed pull request #46574: [SPARK-48263] Collate function support
for non UTF8_BINARY strings
URL: https://github.com/apache/spark/pull/46574
cloud-fan commented on PR #46574:
URL: https://github.com/apache/spark/pull/46574#issuecomment-2110557374
thanks, merging to master!
cloud-fan closed pull request #46577: [SPARK-47301][SQL][TESTS][FOLLOWUP]
Remove workaround for ParquetIOSuite
URL: https://github.com/apache/spark/pull/46577
cloud-fan commented on PR #46577:
URL: https://github.com/apache/spark/pull/46577#issuecomment-2110553464
thanks, merging to master!
cloud-fan commented on PR #46437:
URL: https://github.com/apache/spark/pull/46437#issuecomment-2110549856
merged to master/3.5/3.4!
cloud-fan closed pull request #46437: [SPARK-48172][SQL] Fix escaping issues in
JDBC Dialects
URL: https://github.com/apache/spark/pull/46437
jose-torres commented on code in PR #46247:
URL: https://github.com/apache/spark/pull/46247#discussion_r1600227231
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala:
##
@@ -1364,6 +1364,35 @@ class StreamingQuerySuite extends StreamTest with
dbatomic commented on PR #46247:
URL: https://github.com/apache/spark/pull/46247#issuecomment-2110406204
> Let's make clear the scope of tests we are adding here. I see the PR title
is about "stateless" but you are also aware that deduplication is "stateful".
While I agree that we probably
dbatomic commented on code in PR #46247:
URL: https://github.com/apache/spark/pull/46247#discussion_r1600128613
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala:
##
@@ -1364,6 +1364,35 @@ class StreamingQuerySuite extends StreamTest with
dbatomic commented on code in PR #46247:
URL: https://github.com/apache/spark/pull/46247#discussion_r1600126986
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -484,6 +486,52 @@ class StreamingDeduplicationSuite extends
dbatomic commented on code in PR #46247:
URL: https://github.com/apache/spark/pull/46247#discussion_r1600124892
##
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala:
##
@@ -484,6 +486,52 @@ class StreamingDeduplicationSuite extends
hvanhovell commented on PR #46541:
URL: https://github.com/apache/spark/pull/46541#issuecomment-2110340780
Can you walk me through the actual use case for this? Coalesce -
historically - is incredibly hard to use for most end users, so before adding
this I'd like to understand why.
hvanhovell commented on code in PR #46575:
URL: https://github.com/apache/spark/pull/46575#discussion_r1600102948
##
sql/api/src/main/scala/org/apache/spark/sql/catalyst/encoders/AgnosticEncoder.scala:
##
@@ -209,7 +209,8 @@ object AgnosticEncoders {
// Nullable leaf
nikolamand-db opened a new pull request, #46580:
URL: https://github.com/apache/spark/pull/46580
### What changes were proposed in this pull request?
`PlanWithUnresolvedIdentifier` is rewritten later in analysis, which causes
rules like
`SubstituteUnresolvedOrdinals`
panbingkun commented on code in PR #46288:
URL: https://github.com/apache/spark/pull/46288#discussion_r1600020913
##
project/SparkBuild.scala:
##
@@ -266,7 +266,7 @@ object SparkBuild extends PomBuild {
.orElse(sys.props.get("java.home").map { p => new
panbingkun commented on code in PR #46288:
URL: https://github.com/apache/spark/pull/46288#discussion_r1600020913
##
project/SparkBuild.scala:
##
@@ -266,7 +266,7 @@ object SparkBuild extends PomBuild {
.orElse(sys.props.get("java.home").map { p => new
panbingkun opened a new pull request, #46579:
URL: https://github.com/apache/spark/pull/46579
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
dbatomic opened a new pull request, #46578:
URL: https://github.com/apache/spark/pull/46578
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599971309
##
python/pyspark/sql/tests/connect/test_connect_basic.py:
##
@@ -1358,6 +1359,37 @@ def test_verify_col_name(self):
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599969786
##
python/pyspark/sql/tests/test_dataframe.py:
##
@@ -844,6 +844,11 @@ def test_union_classmethod_usage(self):
def test_isinstance_dataframe(self):
zeotuan commented on code in PR #46494:
URL: https://github.com/apache/spark/pull/46494#discussion_r1599968517
##
core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala:
##
@@ -117,10 +117,11 @@ private[deploy] class
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599965345
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -484,3 +485,9 @@ message CreateResourceProfileCommandResult {
// (Required)
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599960560
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectService.scala:
##
@@ -315,6 +315,12 @@ object SparkConnectService
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599956202
##
connector/connect/common/src/main/protobuf/spark/connect/base.proto:
##
@@ -199,6 +200,17 @@ message AnalyzePlanRequest {
// (Required) The logical plan to
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599953711
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SessionHolder.scala:
##
@@ -106,7 +106,7 @@ case class SessionHolder(userId: String,
hvanhovell commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599948324
##
connector/connect/common/src/main/protobuf/spark/connect/base.proto:
##
@@ -199,6 +200,17 @@ message AnalyzePlanRequest {
// (Required) The logical plan to
LuciferYang commented on code in PR #46288:
URL: https://github.com/apache/spark/pull/46288#discussion_r1599902760
##
project/SparkBuild.scala:
##
@@ -266,7 +266,7 @@ object SparkBuild extends PomBuild {
.orElse(sys.props.get("java.home").map { p => new
panbingkun commented on code in PR #46551:
URL: https://github.com/apache/spark/pull/46551#discussion_r1599874698
##
project/SparkBuild.scala:
##
@@ -257,6 +257,7 @@ object SparkBuild extends PomBuild {
val noLintOnCompile = sys.env.contains("NOLINT_ON_COMPILE") &&
uros-db commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599873181
##
common/unsafe/src/test/java/org/apache/spark/unsafe/types/CollationSupportSuite.java:
##
@@ -102,20 +102,30 @@ public void testContains() throws SparkException {
uros-db commented on PR #46574:
URL: https://github.com/apache/spark/pull/46574#issuecomment-2109979646
btw, I'd say that this PR _does_ introduce some user-facing changes - so I'd
update the PR description to reflect this with more details
panbingkun commented on PR #46577:
URL: https://github.com/apache/spark/pull/46577#issuecomment-2109940241
cc @cloud-fan @yaooqinn
zhengruifeng commented on code in PR #46576:
URL: https://github.com/apache/spark/pull/46576#discussion_r1599831662
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##
@@ -1823,6 +1823,11 @@ class SparkConnectPlanner(
zhengruifeng opened a new pull request, #46576:
URL: https://github.com/apache/spark/pull/46576
### What changes were proposed in this pull request?
Add function `timestamp_diff`, by reusing existing proto
panbingkun commented on PR #45403:
URL: https://github.com/apache/spark/pull/45403#issuecomment-2109888017
> +1 @cloud-fan
>
> Since the LOCs have been moved to
`ParquetIOWithoutOutputCommitCoordinationSuite`, we need a followup for
reverting
Let me do it.
uros-db commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599772157
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -183,6 +204,19 @@ public static int findInSet(final UTF8String
yaooqinn commented on PR #46575:
URL: https://github.com/apache/spark/pull/46575#issuecomment-2109814895
> Not really. RowEncoder is a private API.
I guess it allows char and varchar in UDF APIs
uros-db commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599736471
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -183,6 +204,19 @@ public static int findInSet(final UTF8String
uros-db commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599689228
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,27 @@
* Utility class for collation-aware UTF8String
nebojsa-db commented on PR #46574:
URL: https://github.com/apache/spark/pull/46574#issuecomment-2109753354
Please take a look :) @uros-db @stefankandic @nikolamand-db
AngersZh commented on PR #44767:
URL: https://github.com/apache/spark/pull/44767#issuecomment-2109748573
@cloud-fan Changed following the discussion
nebojsa-db commented on code in PR #46574:
URL: https://github.com/apache/spark/pull/46574#discussion_r1599714895
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -71,6 +71,13 @@ class CollationSuite extends DatasourceV2SQLBase with
nebojsa-db commented on code in PR #46574:
URL: https://github.com/apache/spark/pull/46574#discussion_r1599705816
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collationExpressions.scala:
##
@@ -57,14 +57,14 @@ object CollateExpressionBuilder extends
uros-db commented on code in PR #46561:
URL: https://github.com/apache/spark/pull/46561#discussion_r1599701817
##
sql/core/src/test/scala/org/apache/spark/sql/CollationStringExpressionsSuite.scala:
##
@@ -959,6 +959,37 @@ class CollationStringExpressionsSuite
cloud-fan closed pull request #46523: [SPARK-48155][SQL]
AQEPropagateEmptyRelation for join should check if remain child is just
BroadcastQueryStageExec
URL: https://github.com/apache/spark/pull/46523
cloud-fan commented on PR #46523:
URL: https://github.com/apache/spark/pull/46523#issuecomment-2109727034
thanks, merging to master!
uros-db commented on code in PR #46561:
URL: https://github.com/apache/spark/pull/46561#discussion_r1599697850
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSQLExpressionsSuite.scala:
##
@@ -1275,6 +1275,38 @@ class CollationSQLExpressionsSuite
})
}
+
uros-db commented on code in PR #46561:
URL: https://github.com/apache/spark/pull/46561#discussion_r1599697850
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSQLExpressionsSuite.scala:
##
@@ -1275,6 +1275,38 @@ class CollationSQLExpressionsSuite
})
}
+
nebojsa-db commented on code in PR #46561:
URL: https://github.com/apache/spark/pull/46561#discussion_r1599695088
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSQLExpressionsSuite.scala:
##
@@ -1275,6 +1275,38 @@ class CollationSQLExpressionsSuite
})
}
+
cloud-fan commented on PR #46575:
URL: https://github.com/apache/spark/pull/46575#issuecomment-2109719920
cc @gengliangwang @HyukjinKwon
cloud-fan opened a new pull request, #46575:
URL: https://github.com/apache/spark/pull/46575
### What changes were proposed in this pull request?
Today we can't create `RowEncoder` with char/varchar data type, because we
believe this can't happen. Spark will turn char/varchar
uros-db commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599693830
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,27 @@
* Utility class for collation-aware UTF8String
uros-db commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599689228
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,27 @@
* Utility class for collation-aware UTF8String
panbingkun commented on PR #46288:
URL: https://github.com/apache/spark/pull/46288#issuecomment-2109705338
> https://github.com/com-lihaoyi/Ammonite/releases/tag/3.0.0-M2
>
> 3.0.0-M2 released ~ @panbingkun
Thanks~ ❤️
Updated.
LuciferYang commented on PR #46288:
URL: https://github.com/apache/spark/pull/46288#issuecomment-2109686654
https://github.com/com-lihaoyi/Ammonite/releases/tag/3.0.0-M2
3.0.0-M2 released ~ @panbingkun
mihailom-db commented on code in PR #46574:
URL: https://github.com/apache/spark/pull/46574#discussion_r1599646376
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -71,6 +71,13 @@ class CollationSuite extends DatasourceV2SQLBase with
nebojsa-db commented on code in PR #45963:
URL: https://github.com/apache/spark/pull/45963#discussion_r1599656074
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -1509,12 +1515,62 @@ public boolean semanticEquals(final UTF8String other,
int
mihailom-db commented on code in PR #46574:
URL: https://github.com/apache/spark/pull/46574#discussion_r1599651700
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collationExpressions.scala:
##
@@ -57,14 +57,14 @@ object CollateExpressionBuilder extends
mihailom-db commented on code in PR #46574:
URL: https://github.com/apache/spark/pull/46574#discussion_r1599646376
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -71,6 +71,13 @@ class CollationSuite extends DatasourceV2SQLBase with
nebojsa-db opened a new pull request, #46574:
URL: https://github.com/apache/spark/pull/46574
### What changes were proposed in this pull request?
collate("xx", "") does not work when a default-collation config is set that
configures a non-UTF8_BINARY collation as the default.
HyukjinKwon commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599621151
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -137,6 +137,14 @@ def __init__(
# by __repr__ and _repr_html_ while eager evaluation opens.
LuciferYang commented on code in PR #46551:
URL: https://github.com/apache/spark/pull/46551#discussion_r1599618314
##
project/SparkBuild.scala:
##
@@ -257,6 +257,7 @@ object SparkBuild extends PomBuild {
val noLintOnCompile = sys.env.contains("NOLINT_ON_COMPILE") &&
zhengruifeng commented on code in PR #46570:
URL: https://github.com/apache/spark/pull/46570#discussion_r1599610542
##
python/pyspark/sql/connect/dataframe.py:
##
@@ -137,6 +137,14 @@ def __init__(
# by __repr__ and _repr_html_ while eager evaluation opens.
panbingkun commented on PR #46502:
URL: https://github.com/apache/spark/pull/46502#issuecomment-2109554457
Ready for it, @gengliangwang @dongjoon-hyun @LuciferYang
panbingkun commented on PR #46502:
URL: https://github.com/apache/spark/pull/46502#issuecomment-2109554338
> Or how about having these modules depend on the `common/utils` module?
`common/utils` doesn't seem to be a heavyweight module; this way, the
existing cases can be fixed.
panbingkun commented on PR #46502:
URL: https://github.com/apache/spark/pull/46502#issuecomment-2109543568
sh dev/lint-java
```
Using `mvn` from path: /Users/panbingkun/Developer/infra/maven/maven/bin/mvn
-e Checkstyle checks failed at following occurrences:
[ERROR]
zml1206 commented on PR #44975:
URL: https://github.com/apache/spark/pull/44975#issuecomment-2109527145
Thank you all for review. @cloud-fan @kelvinjian-db @beliefer
cloud-fan closed pull request #44975: [SPARK-46707][SQL][FOLLOWUP] Push down
throwable predicate through aggregates
URL: https://github.com/apache/spark/pull/44975
cloud-fan commented on PR #44975:
URL: https://github.com/apache/spark/pull/44975#issuecomment-2109516986
thanks, merging to master!
yaooqinn opened a new pull request, #46572:
URL: https://github.com/apache/spark/pull/46572
### What changes were proposed in this pull request?
Document Mapping Spark SQL Data Types from DB2 and add tests
### Why are the changes needed?
improvement for docs
panbingkun commented on PR #46527:
URL: https://github.com/apache/spark/pull/46527#issuecomment-2109471092
> Let's continue on this one :)
Ready for it!
mihailom-db commented on code in PR #46474:
URL: https://github.com/apache/spark/pull/46474#discussion_r1599507122
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -851,6 +852,30 @@ class CollationSuite extends DatasourceV2SQLBase with
uros-db commented on code in PR #46474:
URL: https://github.com/apache/spark/pull/46474#discussion_r1599500946
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -851,6 +852,30 @@ class CollationSuite extends DatasourceV2SQLBase with
mridulm commented on code in PR #46571:
URL: https://github.com/apache/spark/pull/46571#discussion_r1599499818
##
core/src/main/scala/org/apache/spark/SparkContext.scala:
##
@@ -373,7 +373,7 @@ class SparkContext(config: SparkConf) extends Logging {
private[spark] def
panbingkun commented on code in PR #46527:
URL: https://github.com/apache/spark/pull/46527#discussion_r1599491825
##
mllib/src/main/scala/org/apache/spark/ml/feature/StopWordsRemover.scala:
##
@@ -129,9 +130,9 @@ class StopWordsRemover @Since("1.5.0") (@Since("1.5.0")
override
uros-db commented on code in PR #46561:
URL: https://github.com/apache/spark/pull/46561#discussion_r1599471162
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSQLExpressionsSuite.scala:
##
@@ -1275,6 +1275,38 @@ class CollationSQLExpressionsSuite
})
}
+
uros-db commented on code in PR #46561:
URL: https://github.com/apache/spark/pull/46561#discussion_r1599470164
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSQLExpressionsSuite.scala:
##
@@ -1275,6 +1275,38 @@ class CollationSQLExpressionsSuite
})
}
+
mkaravel commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599455730
##
common/unsafe/src/main/java/org/apache/spark/sql/catalyst/util/CollationAwareUTF8String.java:
##
@@ -34,6 +34,27 @@
* Utility class for collation-aware
HeartSaVioR commented on PR #46569:
URL: https://github.com/apache/spark/pull/46569#issuecomment-2109409434
Also cherry-picked to 3.5, as it's a clean cherry-pick.
HeartSaVioR closed pull request #46569: [SPARK-48267][SS] Regression e2e test
with SPARK-47305
URL: https://github.com/apache/spark/pull/46569
panbingkun commented on code in PR #46527:
URL: https://github.com/apache/spark/pull/46527#discussion_r1599456251
##
mllib/src/main/scala/org/apache/spark/ml/classification/LinearSVC.scala:
##
@@ -179,8 +179,8 @@ class LinearSVC @Since("2.2.0") (
maxBlockSizeInMB)
mkaravel commented on code in PR #46511:
URL: https://github.com/apache/spark/pull/46511#discussion_r1599447728
##
common/unsafe/src/test/java/org/apache/spark/unsafe/types/CollationSupportSuite.java:
##
@@ -102,20 +102,30 @@ public void testContains() throws SparkException {
HeartSaVioR commented on PR #46569:
URL: https://github.com/apache/spark/pull/46569#issuecomment-2109390060
Thanks @viirya for quick reviewing! Merging to master.
panbingkun commented on code in PR #46527:
URL: https://github.com/apache/spark/pull/46527#discussion_r1599449547
##
mllib/src/main/scala/org/apache/spark/ml/clustering/KMeans.scala:
##
@@ -451,8 +451,8 @@ class KMeans @Since("1.5.0") (
private def trainWithBlock(dataset:
cloud-fan closed pull request #46504: [SPARK-48157][SQL] Add collation support
for CSV expressions
URL: https://github.com/apache/spark/pull/46504
cloud-fan commented on PR #46504:
URL: https://github.com/apache/spark/pull/46504#issuecomment-2109363247
thanks, merging to master!
cloud-fan closed pull request #46503: [SPARK-48229][SQL] Add collation support
for inputFile expressions
URL: https://github.com/apache/spark/pull/46503
cloud-fan commented on PR #46503:
URL: https://github.com/apache/spark/pull/46503#issuecomment-2109358482
thanks, merging to master!
beliefer commented on PR #46568:
URL: https://github.com/apache/spark/pull/46568#issuecomment-2109342800
LGTM later.
panbingkun commented on code in PR #46551:
URL: https://github.com/apache/spark/pull/46551#discussion_r1599416657
##
project/SparkBuild.scala:
##
@@ -273,10 +273,9 @@ object SparkBuild extends PomBuild {
// Google Mirror of Maven Central, placed first so that it's used
zml1206 opened a new pull request, #44975:
URL: https://github.com/apache/spark/pull/44975
### What changes were proposed in this pull request?
Push down throwable predicate through aggregates and add ut for "can't push
down nondeterministic filter through aggregate".
### Why are
LuciferYang commented on PR #46567:
URL: https://github.com/apache/spark/pull/46567#issuecomment-2109338837
Thanks @HyukjinKwon @zhengruifeng @amaliujia
cloud-fan closed pull request #46568: [SPARK-48265][SQL] Infer window group
limit batch should do constant folding
URL: https://github.com/apache/spark/pull/46568
cloud-fan commented on PR #46568:
URL: https://github.com/apache/spark/pull/46568#issuecomment-2109331575
thanks, merging to master/3.5!