cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891878020
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala:
##
@@ -965,6 +965,10 @@ class SessionCatalog(
Borjianamin98 commented on code in PR #36781:
URL: https://github.com/apache/spark/pull/36781#discussion_r891878580
##
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala:
##
@@ -1316,6 +1317,34 @@ abstract class
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891879819
##
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala:
##
@@ -117,14 +131,45 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
cloud-fan commented on code in PR #36586:
URL: https://github.com/apache/spark/pull/36586#discussion_r891881553
##
sql/core/src/test/scala/org/apache/spark/sql/internal/CatalogSuite.scala:
##
@@ -553,4 +571,103 @@ class CatalogSuite extends SharedSparkSession with AnalysisTest
LuciferYang opened a new pull request, #36800:
URL: https://github.com/apache/spark/pull/36800
### What changes were proposed in this pull request?
This PR aims to upgrade scala-maven-plugin to 4.6.2.
### Why are the changes needed?
This version brings some bug fixes related to
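For reference, a version bump like this is a one-line change in the build definition. A minimal sketch of what the plugin declaration in `pom.xml` looks like after the upgrade (the surrounding configuration is omitted; only the `groupId`/`artifactId` are the plugin's real coordinates, the rest of the plugin block here is illustrative):

```xml
<!-- Sketch: bump scala-maven-plugin to 4.6.2 in pom.xml -->
<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <version>4.6.2</version>
</plugin>
```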
AngersZh commented on PR #36786:
URL: https://github.com/apache/spark/pull/36786#issuecomment-1149434288
> Could we add a test?
This needs to be tested after a build; it seems hard to cover with a unit test...
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
AmplabJenkins commented on PR #36787:
URL: https://github.com/apache/spark/pull/36787#issuecomment-1149442173
Can one of the admins verify this patch?
LuciferYang commented on PR #36781:
URL: https://github.com/apache/spark/pull/36781#issuecomment-1149455635
I think this PR should be backported to previous Spark versions, because when I run `SPARK-39393: Do not push down predicate filters for repeated primitive fields` without this PR, I
Yaohua628 opened a new pull request, #36801:
URL: https://github.com/apache/spark/pull/36801
### What changes were proposed in this pull request?
We added support for querying the `_metadata` column with a file-based streaming source: https://github.com/apache/spark/pull/35676.
HyukjinKwon closed pull request #36797: [SPARK-39394][DOCS][SS][3.3] Improve
PySpark Structured Streaming page more readable
URL: https://github.com/apache/spark/pull/36797
HyukjinKwon commented on PR #36797:
URL: https://github.com/apache/spark/pull/36797#issuecomment-1149483125
Merged to branch-3.3.
HyukjinKwon opened a new pull request, #36802:
URL: https://github.com/apache/spark/pull/36802
### What changes were proposed in this pull request?
This PR fixes the test so that `CastWithAnsiOffSuite` properly respects `ansiEnabled` in the `cast string to date #2` test by using
HyukjinKwon commented on PR #36802:
URL: https://github.com/apache/spark/pull/36802#issuecomment-1149489864
cc @cloud-fan
HyukjinKwon commented on PR #36800:
URL: https://github.com/apache/spark/pull/36800#issuecomment-1149490024
Merged to master.
HyukjinKwon closed pull request #36800: [SPARK-39409][BUILD] Upgrade
scala-maven-plugin to 4.6.2
URL: https://github.com/apache/spark/pull/36800
HyukjinKwon commented on code in PR #36683:
URL: https://github.com/apache/spark/pull/36683#discussion_r891944792
##
python/pyspark/sql/pandas/conversion.py:
##
@@ -596,7 +596,7 @@ def _create_from_pandas_with_arrow(
]
# Slice the DataFrame to be batched
HyukjinKwon commented on PR #36683:
URL: https://github.com/apache/spark/pull/36683#issuecomment-1149492294
cc @mengxr and @WeichenXu123 in case you guys have some comments.
HyukjinKwon commented on code in PR #36789:
URL: https://github.com/apache/spark/pull/36789#discussion_r891946395
##
conf/spark-env.sh.template:
##
@@ -79,3 +80,6 @@
# Options for beeline
# - SPARK_BEELINE_OPTS, to set config properties only for the beeline cli (e.g.