chaoqin-li1123 commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1517356903
##
sql/core/src/main/scala/org/apache/spark/sql/execution/python/PythonStreamingSourceRunner.scala:
##
@@ -0,0 +1,208 @@
+/*
+ * Licensed to the Apache Software
chaoqin-li1123 commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1517356072
##
python/pyspark/sql/datasource.py:
##
@@ -298,6 +320,133 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
...
+class
HyukjinKwon commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1517351498
##
sql/core/src/main/scala/org/apache/spark/sql/execution/python/PythonStreamingSourceRunner.scala:
##
@@ -0,0 +1,208 @@
+/*
+ * Licensed to the Apache Software Fou
HyukjinKwon commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1517348253
##
python/pyspark/sql/datasource.py:
##
@@ -298,6 +320,133 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
...
+class Dat
anishshri-db opened a new pull request, #45432:
URL: https://github.com/apache/spark/pull/45432
### What changes were proposed in this pull request?
Fix the `foreachBatch` persist issue for stateful queries
### Why are the changes needed?
This allows us to prevent stateful
HyukjinKwon commented on code in PR #45290:
URL: https://github.com/apache/spark/pull/45290#discussion_r1517309692
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -183,6 +185,57 @@ class CollationSuite extends DatasourceV2SQLBase {
}
}
+ te
HyukjinKwon commented on PR #45431:
URL: https://github.com/apache/spark/pull/45431#issuecomment-1985167614
cc @cloud-fan
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
T
zhengruifeng opened a new pull request, #45431:
URL: https://github.com/apache/spark/pull/45431
### What changes were proposed in this pull request?
Make `withColumnsRenamed` duplicated column name handling consistent with
`withColumnRenamed`
### Why are the changes needed?
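The consistency goal summarized above can be sketched outside Spark. This is a minimal pure-Python illustration (not Spark's actual implementation) of the rule being proposed: a batch rename (`withColumnsRenamed`) should treat duplicated column names the same way a single rename (`withColumnRenamed`) does, i.e. rename every occurrence:

```python
# Illustrative sketch only; function names mirror the Spark APIs but
# operate on plain lists of column-name strings, not DataFrames.

def rename_column(columns, existing, new):
    """Single rename: every column matching `existing` is renamed."""
    return [new if c == existing else c for c in columns]

def rename_columns(columns, mapping):
    """Batch rename: applying the same per-column rule keeps the two
    code paths consistent, including for duplicated names."""
    return [mapping.get(c, c) for c in columns]
```

Under this rule, renaming `a` to `x` in `["a", "a", "b"]` yields `["x", "x", "b"]` through either path.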
HeartSaVioR commented on PR #45428:
URL: https://github.com/apache/spark/pull/45428#issuecomment-1985161811
FOLLOWUP tag should be OK. Thanks for handling this.
HyukjinKwon closed pull request #45429: [SPARK-47079][PYTHON][DOCS][FOLLOWUP]
Add `VariantType` to API references
URL: https://github.com/apache/spark/pull/45429
HyukjinKwon commented on PR #45429:
URL: https://github.com/apache/spark/pull/45429#issuecomment-1985159258
Merged to master.
HeartSaVioR commented on PR #45360:
URL: https://github.com/apache/spark/pull/45360#issuecomment-1985157244
Will review sooner rather than later. Maybe by today.
HeartSaVioR commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1517217945
##
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/StatefulProcessorHandleSuite.scala:
##
@@ -0,0 +1,299 @@
+/*
+ * Licensed to the Apache So
LuciferYang commented on code in PR #45290:
URL: https://github.com/apache/spark/pull/45290#discussion_r1517243024
##
sql/core/src/test/scala/org/apache/spark/sql/CollationSuite.scala:
##
@@ -183,6 +185,57 @@ class CollationSuite extends DatasourceV2SQLBase {
}
}
+ te
uros-db commented on code in PR #45422:
URL: https://github.com/apache/spark/pull/45422#discussion_r1517236308
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CollationUtils.scala:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (A
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1517229022
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -410,7 +412,9 @@ public boolean endsWith(final UTF8String suffix, int
collationId)
zhengruifeng opened a new pull request, #45429:
URL: https://github.com/apache/spark/pull/45429
### What changes were proposed in this pull request?
Add `VariantType` to API references
### Why are the changes needed?
`VariantType` has been added in `__all__` in `types`
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1517226969
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -396,7 +396,9 @@ public boolean startsWith(final UTF8String prefix, int
collationId
panbingkun commented on PR #44665:
URL: https://github.com/apache/spark/pull/44665#issuecomment-1985106638
Friendly ping @HyukjinKwon: when you are not busy, could you please continue to help review this PR?
AngersZh closed pull request #44496: [SPARK-46510][CORE] Spark shell log
filter should be applied to all AbstractAppender
URL: https://github.com/apache/spark/pull/44496
HeartSaVioR commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1517209325
##
python/pyspark/sql/datasource.py:
##
@@ -298,6 +320,133 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
...
+class Dat
LuciferYang commented on PR #45428:
URL: https://github.com/apache/spark/pull/45428#issuecomment-1985100729
Thanks @HyukjinKwon
LuciferYang commented on PR #45428:
URL: https://github.com/apache/spark/pull/45428#issuecomment-1985099843
also cc @dongjoon-hyun
LuciferYang commented on PR #45428:
URL: https://github.com/apache/spark/pull/45428#issuecomment-1985094500
This is my first time handling such a situation. Is it better to create a
new Jira, or is it better as a FOLLOWUP of SPARK-47305?
cc @HyukjinKwon @HeartSaVioR @zhengruifeng
LuciferYang opened a new pull request, #45428:
URL: https://github.com/apache/spark/pull/45428
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
HeartSaVioR commented on code in PR #45051:
URL: https://github.com/apache/spark/pull/45051#discussion_r1517119556
##
sql/api/src/main/scala/org/apache/spark/sql/streaming/ExpiredTimerInfo.scala:
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
yaooqinn commented on PR #45424:
URL: https://github.com/apache/spark/pull/45424#issuecomment-1985053630
Thanks, merged to master
yaooqinn closed pull request #45424: [SPARK-47319][SQL] Improve missingInput
calculation
URL: https://github.com/apache/spark/pull/45424
yaooqinn commented on code in PR #45418:
URL: https://github.com/apache/spark/pull/45418#discussion_r1517168161
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -87,17 +87,26 @@ abstract class JdbcDialect extends Serializable with
Logging {
*/
yaooqinn commented on code in PR #45418:
URL: https://github.com/apache/spark/pull/45418#discussion_r1517155629
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -87,17 +87,26 @@ abstract class JdbcDialect extends Serializable with
Logging {
*/
HyukjinKwon commented on PR #45360:
URL: https://github.com/apache/spark/pull/45360#issuecomment-1984994353
Is this good to go? @HeartSaVioR @rangadi
LuciferYang commented on code in PR #45368:
URL: https://github.com/apache/spark/pull/45368#discussion_r1517123366
##
sql/catalyst/src/test/scala/org/apache/spark/sql/connector/catalog/InMemoryTableCatalog.scala:
##
@@ -84,28 +85,28 @@ class BasicInMemoryTableCatalog extends Tab
cloud-fan commented on code in PR #45368:
URL: https://github.com/apache/spark/pull/45368#discussion_r1517121687
##
sql/catalyst/src/test/scala/org/apache/spark/sql/connector/catalog/InMemoryTableCatalog.scala:
##
@@ -84,28 +85,28 @@ class BasicInMemoryTableCatalog extends Table
cloud-fan commented on code in PR #45418:
URL: https://github.com/apache/spark/pull/45418#discussion_r1517121075
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -87,17 +87,26 @@ abstract class JdbcDialect extends Serializable with
Logging {
*/
cloud-fan commented on code in PR #45424:
URL: https://github.com/apache/spark/pull/45424#discussion_r1517119767
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/AttributeSet.scala:
##
@@ -104,13 +104,19 @@ class AttributeSet private (private val baseSet:
wbo4958 commented on code in PR #45232:
URL: https://github.com/apache/spark/pull/45232#discussion_r1517085186
##
python/pyspark/resource/tests/test_connect_resources.py:
##
@@ -0,0 +1,46 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
wbo4958 commented on code in PR #45232:
URL: https://github.com/apache/spark/pull/45232#discussion_r1517085041
##
python/pyspark/resource/tests/test_connect_resources.py:
##
@@ -0,0 +1,46 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
wbo4958 commented on code in PR #45232:
URL: https://github.com/apache/spark/pull/45232#discussion_r1517084819
##
python/pyspark/resource/profile.py:
##
@@ -114,14 +122,23 @@ def id(self) -> int:
int
A unique id of this :class:`ResourceProfile`
"""
wbo4958 commented on code in PR #45232:
URL: https://github.com/apache/spark/pull/45232#discussion_r1517084721
##
python/pyspark/resource/profile.py:
##
@@ -114,14 +122,23 @@ def id(self) -> int:
int
A unique id of this :class:`ResourceProfile`
"""
yaooqinn commented on PR #45408:
URL: https://github.com/apache/spark/pull/45408#issuecomment-1984930529
As the Spark community didn't receive any issue reports during the v3.3.0 - v3.5.1
releases, I think this is a corner case. Maybe we can make the config internal.
dongjoon-hyun commented on PR #45408:
URL: https://github.com/apache/spark/pull/45408#issuecomment-1984929745
+1 for the direction if we need to support both.
yaooqinn commented on PR #45408:
URL: https://github.com/apache/spark/pull/45408#issuecomment-1984926315
Thank you @dongjoon-hyun.
In such circumstances, I guess we can add a configuration for base64 classes
to avoid breaking things again. AFAIK, Apache Hive also uses the JDK version
yaooqinn commented on PR #45415:
URL: https://github.com/apache/spark/pull/45415#issuecomment-1984919818
Thanks @zwangsheng, merged to master
yaooqinn closed pull request #45415: [SPARK-47314][DOC] Remove the wrong
comment line of `ExternalSorter#writePartitionedMapOutput` method
URL: https://github.com/apache/spark/pull/45415
yaooqinn commented on PR #45427:
URL: https://github.com/apache/spark/pull/45427#issuecomment-1984918192
Late +1
HyukjinKwon closed pull request #45427: [MINOR][INFRA] Make "y/n" consistent
within merge script
URL: https://github.com/apache/spark/pull/45427
HyukjinKwon commented on PR #45427:
URL: https://github.com/apache/spark/pull/45427#issuecomment-1984911418
Merged to master.
zwangsheng commented on code in PR #45415:
URL: https://github.com/apache/spark/pull/45415#discussion_r1517066704
##
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala:
##
@@ -690,7 +690,7 @@ private[spark] class ExternalSorter[K, V, C](
* Write all th
HyukjinKwon opened a new pull request, #45427:
URL: https://github.com/apache/spark/pull/45427
### What changes were proposed in this pull request?
This PR makes the y/n message and condition consistent within the merge
script.
### Why are the changes needed?
For consist
doki23 commented on PR #45181:
URL: https://github.com/apache/spark/pull/45181#issuecomment-1984850287
> All children have to be considered for changes of their persistence state.
Currently it only checks the first found child. For clarity there is a test
which fails: [doki23#1](https://gith
HyukjinKwon closed pull request #45426: [SPARK-47309][SQL][XML] Fix schema
inference issues in XML
URL: https://github.com/apache/spark/pull/45426
HyukjinKwon commented on PR #45426:
URL: https://github.com/apache/spark/pull/45426#issuecomment-1984850009
Merged to master.
HyukjinKwon closed pull request #45269: [SPARK-47078][DOCS][PYTHON]
Documentation for SparkSession-based Profilers
URL: https://github.com/apache/spark/pull/45269
HyukjinKwon commented on PR #45269:
URL: https://github.com/apache/spark/pull/45269#issuecomment-1984848824
Merged to master.
HyukjinKwon commented on code in PR #45422:
URL: https://github.com/apache/spark/pull/45422#discussion_r1517022572
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CollationUtils.scala:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundatio
HyukjinKwon commented on PR #45423:
URL: https://github.com/apache/spark/pull/45423#issuecomment-1984835207
See also https://spark.apache.org/contributing.html
HyukjinKwon commented on PR #45423:
URL: https://github.com/apache/spark/pull/45423#issuecomment-1984834978
Mind filing a JIRA and linking it to the PR title please?
github-actions[bot] closed pull request #42398: [SPARK-42746][SQL] Add the
LISTAGG() aggregate function
URL: https://github.com/apache/spark/pull/42398
github-actions[bot] commented on PR #43936:
URL: https://github.com/apache/spark/pull/43936#issuecomment-1984826472
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #43979:
URL: https://github.com/apache/spark/pull/43979#issuecomment-1984826451
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
shujingyang-db opened a new pull request, #45426:
URL: https://github.com/apache/spark/pull/45426
### What changes were proposed in this pull request?
This PR fixes XML schema inference issues:
1. when there's an empty tag
2. when merging schema for NullType
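The second fix above concerns merging an inferred `NullType` (e.g. from an empty tag) with a concrete type. The sketch below is a hypothetical pure-Python illustration of that kind of merge rule, using plain strings for type names rather than Spark's `DataType` objects; it is not the actual code from this PR:

```python
# Assumed merge rule, for illustration: NullType (from an empty tag)
# should defer to the other side's type, and conflicting concrete
# types fall back to string rather than failing the inference.

def merge_types(a, b):
    if a == "null":
        return b
    if b == "null":
        return a
    if a == b:
        return a
    return "string"  # conflicting concrete types: widen to string
```

So a column seen as an empty tag in one record and as `long` in another would infer as `long` rather than erroring out.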
xinrong-meng commented on PR #45378:
URL: https://github.com/apache/spark/pull/45378#issuecomment-1984523232
Merged to master, thank you all!
xinrong-meng closed pull request #45378: [SPARK-47276][PYTHON][CONNECT]
Introduce `spark.profile.clear` for SparkSession-based profiling
URL: https://github.com/apache/spark/pull/45378
dongjoon-hyun commented on PR #45408:
URL: https://github.com/apache/spark/pull/45408#issuecomment-1984433848
Thank you for the confirmation, @ted-jenks . Well, in this case, it's too
late to change the behavior again. Apache Spark 3.3 has already been in EOL status
since last year and I don't t
xinrong-meng commented on PR #45414:
URL: https://github.com/apache/spark/pull/45414#issuecomment-1984375769
Looks nice, thank you!
agubichev commented on code in PR #45125:
URL: https://github.com/apache/spark/pull/45125#discussion_r1516770647
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteWithExpression.scala:
##
@@ -34,7 +34,7 @@ import
org.apache.spark.sql.catalyst.trees.T
xinrong-meng commented on code in PR #45378:
URL: https://github.com/apache/spark/pull/45378#discussion_r1516752307
##
python/pyspark/sql/tests/test_session.py:
##
@@ -531,6 +531,33 @@ def test_dump_invalid_type(self):
},
)
+def test_clear_memory_type
ueshin commented on code in PR #45378:
URL: https://github.com/apache/spark/pull/45378#discussion_r1516750441
##
python/pyspark/sql/profiler.py:
##
@@ -224,6 +224,54 @@ def dump(id: int) -> None:
for id in sorted(code_map.keys()):
dump(id)
+de
sweisdb opened a new pull request, #45425:
URL: https://github.com/apache/spark/pull/45425
### What changes were proposed in this pull request?
This change adds an additional pass through a key derivation function (KDF)
to the key exchange protocol in `AuthEngine`. Currently, it uses
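The description above is truncated, so the exact construction `AuthEngine` uses is not shown here. As a hedged illustration of what "an additional pass through a key derivation function" typically means, below is a minimal HKDF (RFC 5869, extract-then-expand) sketch in pure Python; the parameters and the choice of SHA-256 are assumptions, not details from the PR:

```python
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """Minimal HKDF-SHA256 sketch (RFC 5869); illustrative only."""
    hash_len = hashlib.sha256().digest_size
    # Extract: concentrate the input keying material into a fixed-size PRK.
    prk = hmac.new(salt or b"\x00" * hash_len, ikm, hashlib.sha256).digest()
    # Expand: stretch the PRK into `length` bytes of output keying material.
    okm, block = b"", b""
    for i in range((length + hash_len - 1) // hash_len):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]
```

Running the shared secret through such a KDF, rather than using it directly, is the standard way to harden a key-exchange output before it becomes a session key.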
attilapiros commented on code in PR #45424:
URL: https://github.com/apache/spark/pull/45424#discussion_r1516669562
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/AttributeSet.scala:
##
@@ -104,13 +104,19 @@ class AttributeSet private (private val baseSe
attilapiros commented on code in PR #45424:
URL: https://github.com/apache/spark/pull/45424#discussion_r1516651884
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/AttributeSet.scala:
##
@@ -104,13 +104,19 @@ class AttributeSet private (private val baseSe
peter-toth commented on PR #45424:
URL: https://github.com/apache/spark/pull/45424#issuecomment-1984153122
@cloud-fan can you please take a look?
attilapiros commented on PR #45424:
URL: https://github.com/apache/spark/pull/45424#issuecomment-1984150861
LGTM
I talked to @peter-toth offline and the improvement comes from not
calculating the `inputSet` at all when references is empty
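The optimization described here, skipping the `inputSet` computation entirely when `references` is empty, can be sketched as a simple short-circuit. This Python sketch is an illustration of the idea, not Spark's Scala code; `missingInput` is conceptually `references` minus `inputSet`:

```python
# Illustrative short-circuit: when there are no references, the
# (potentially expensive) inputSet never needs to be materialized.

def missing_input(references, compute_input_set):
    if not references:            # short-circuit: skip inputSet entirely
        return set()
    return references - compute_input_set()
```

For plans with many children (e.g. wide `Union` nodes), avoiding the `inputSet` union on the common empty-references path is where the speedup comes from.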
allisonwang-db commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1516609093
##
sql/core/src/main/scala/org/apache/spark/sql/execution/python/PythonStreamingSourceRunner.scala:
##
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software
peter-toth commented on PR #35684:
URL: https://github.com/apache/spark/pull/35684#issuecomment-1984107426
@martinf-moodys,
[SPARK-47319](https://issues.apache.org/jira/browse/SPARK-47319) /
https://github.com/apache/spark/pull/45424 might help, especially if you have
many `Union` nodes
peter-toth opened a new pull request, #45424:
URL: https://github.com/apache/spark/pull/45424
### What changes were proposed in this pull request?
This PR speeds up `QueryPlan.missingInput()` calculation.
### Why are the changes needed?
This seems to be the root cause of `Ded
stefankandic commented on code in PR #45405:
URL: https://github.com/apache/spark/pull/45405#discussion_r1516535896
##
sql/api/src/main/scala/org/apache/spark/sql/types/DataType.scala:
##
@@ -117,7 +117,7 @@ object DataType {
private val FIXED_DECIMAL = """decimal\(\s*(\d+)\s
sahnib commented on code in PR #45023:
URL: https://github.com/apache/spark/pull/45023#discussion_r1516471532
##
python/pyspark/sql/datasource.py:
##
@@ -298,6 +320,133 @@ def read(self, partition: InputPartition) ->
Iterator[Union[Tuple, Row]]:
...
+class DataSour
miland-db closed pull request #45423: Miland db/miland legacy error class
URL: https://github.com/apache/spark/pull/45423
miland-db opened a new pull request, #45423:
URL: https://github.com/apache/spark/pull/45423
### What changes were proposed in this pull request?
In the PR, I propose to assign the proper names to the legacy error classes
_LEGACY_ERROR_TEMP_324[7-9], and modify tests in testing suites to
dbatomic commented on code in PR #45422:
URL: https://github.com/apache/spark/pull/45422#discussion_r1516510742
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CollationUtils.scala:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (
jchen5 commented on code in PR #45125:
URL: https://github.com/apache/spark/pull/45125#discussion_r1516503722
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteWithExpression.scala:
##
@@ -34,7 +34,7 @@ import
org.apache.spark.sql.catalyst.trees.Tree
cloud-fan closed pull request #45409: [SPARK-45827][SQL] Move data type checks
to CreatableRelationProvider
URL: https://github.com/apache/spark/pull/45409
cloud-fan commented on PR #45409:
URL: https://github.com/apache/spark/pull/45409#issuecomment-1983973892
thanks, merging to master!
uros-db opened a new pull request, #45422:
URL: https://github.com/apache/spark/pull/45422
### What changes were proposed in this pull request?
### Why are the changes needed?
Currently, all `StringType` arguments passed to built-in string functions in
Spark SQL get treated
MaxGekk commented on code in PR #45405:
URL: https://github.com/apache/spark/pull/45405#discussion_r1516378011
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -1096,7 +1096,7 @@ colPosition
;
collateClause
-: COLLATE collationN
MaxGekk commented on code in PR #45405:
URL: https://github.com/apache/spark/pull/45405#discussion_r1516415830
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -1096,7 +1096,7 @@ colPosition
;
collateClause
-: COLLATE collationN
stefankandic commented on code in PR #45405:
URL: https://github.com/apache/spark/pull/45405#discussion_r1516396458
##
sql/api/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -1096,7 +1096,7 @@ colPosition
;
collateClause
-: COLLATE colla
yaooqinn commented on code in PR #45418:
URL: https://github.com/apache/spark/pull/45418#discussion_r1516398411
##
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##
@@ -87,17 +87,26 @@ abstract class JdbcDialect extends Serializable with
Logging {
*/
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1516389365
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -384,27 +387,47 @@ public boolean startsWith(final UTF8String prefix) {
}
pu
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1516391257
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -31,6 +32,8 @@
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftw
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1516381847
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -384,27 +387,47 @@ public boolean startsWith(final UTF8String prefix) {
}
pu
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1516380909
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -384,27 +387,47 @@ public boolean startsWith(final UTF8String prefix) {
}
pu
uros-db commented on code in PR #45421:
URL: https://github.com/apache/spark/pull/45421#discussion_r1516379232
##
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java:
##
@@ -31,6 +32,8 @@
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftw