commits
Messages by Thread
(spark-connect-swift) branch main updated: [SPARK-54893] Support `listTables` and `getTable` in `Catalog`
dongjoon
(spark) branch master updated: [SPARK-54610][SDP][DOCS] Improve SDP Programming Guide
sandy
(spark-kubernetes-operator) branch main updated: [SPARK-54813] Remove `cores` configs from `tests/e2e/watched-namespaces/spark-example.yaml`
dongjoon
(spark-connect-swift) branch main updated: [SPARK-54892] Support `list` data type
dongjoon
(spark-kubernetes-operator) branch main updated: [SPARK-54789] Support label selector based filter on resources to reconcile
dongjoon
(spark) branch master updated (735eda8a5b6b -> ea2617af1ca4)
ruifengz
(spark) branch master updated: [SPARK-54874][TESTS][INFRA] Avoid interleave failed test logs with test outputs
ruifengz
(spark) branch master updated (03537905f144 -> 1fc6ff235345)
ruifengz
(spark) branch master updated: [SPARK-54871][SQL] Trim aliases from grouping and aggregate expressions before handling grouping analytics
wenchen
(spark) branch master updated: [SPARK-54885][PYTHON] Remove unreachable code after upgrading pyarrow minimum version to 18.0.0
ruifengz
(spark) branch master updated: [SPARK-54867][SS] Introduce NamedStreamingRelation wrapper for source identification during analysis
ashrigondekar
(spark) branch master updated: [SPARK-54390][SS] Fix JSON Deserialize in StreamingQueryListenerBus
ashrigondekar
svn commit: r81598 - in dev/spark/v4.1.1-rc2-docs: . _site _site/api _site/api/R _site/api/R/articles _site/api/R/articles/sparkr-vignettes_files _site/api/R/articles/sparkr-vignettes_files/accessible-code-block-0.0.1 _site/api/R/deps _site/api/R/deps/...
gurwls223
svn commit: r81596 - dev/spark/v4.1.1-rc2-bin
gurwls223
(spark) branch branch-4.1 updated (31ecc1061655 -> b300b17d8c2d)
gurwls223
(spark) 01/02: Revert "Removing test jars and class files"
gurwls223
(spark) 02/02: Preparing development version 4.1.2-SNAPSHOT
gurwls223
(spark) tag v4.1.1-rc2 created (now c0690c763baf)
gurwls223
(spark) 02/02: Preparing Spark release v4.1.1-rc2
gurwls223
(spark) 01/02: Removing test jars and class files
gurwls223
svn commit: r81595 - dev/spark/v4.1.1-rc2-bin
gurwls223
(spark) tag v4.1.1-rc2 deleted (was f85e4074c2a0)
gurwls223
(spark) branch master updated: [SPARK-54882][PYTHON] Remove legacy PYARROW_IGNORE_TIMEZONE
ruifengz
svn commit: r81594 - dev/spark/v4.1.1-rc2-bin
gurwls223
(spark) branch branch-4.1 updated (6d1113b20f4c -> 31ecc1061655)
gurwls223
(spark) 02/02: Preparing development version 4.1.2-SNAPSHOT
gurwls223
(spark) 01/02: Revert "Removing test jars and class files"
gurwls223
(spark) tag v4.1.1-rc2 created (now f85e4074c2a0)
gurwls223
(spark) 02/02: Preparing Spark release v4.1.1-rc2
gurwls223
(spark) 01/02: Removing test jars and class files
gurwls223
(spark) branch master updated: Revert "[SPARK-54182][SQL][PYTHON] Optimize non-arrow conversion of df.toPandas`"
gurwls223
(spark) branch master updated: [SPARK-54877][UI] Make display stacktrace on UI error page configurable
sarutak
(spark) branch master updated (6695a6911e78 -> eaed471c8f4e)
yangjie01
(spark) branch master updated: [SPARK-54701][PYTHON] Improve the runnerConf chain for Python workers
gurwls223
(spark) branch branch-4.0 updated (2fc65e1c98ed -> 4d30961a92bc)
gurwls223
(spark) branch branch-4.1 updated (bfc94ab3104e -> 6d1113b20f4c)
gurwls223
(spark) branch master updated: [SPARK-54868][PYTHON][INFRA][FOLLOW-UP] Also enable `faulthandler` in classic tests
ruifengz
(spark) branch master updated: [SPARK-54753][SQL] fix memory leak of ArtifactManager
gurwls223
(spark) branch master updated: [SPARK-54859][PYTHON] Arrow by default PySpark UDF API reference doc
ruifengz
(spark) branch master updated: [SPARK-54868][PYTHON][INFRA][FOLLOWUP] Increase test timeout to 450s
ruifengz
(spark) branch master updated (f80b919d19ce -> faa4d19a1cd9)
ruifengz
(spark-website) branch asf-site updated: Remove redundant forward slash from Spark URL (#656)
yao
[PR] Remove redundant forward slash from Spark URL [spark-website]
via GitHub
Re: [PR] Remove redundant forward slash from Spark URL [spark-website]
via GitHub
(spark) branch master updated: [SPARK-54849][PYTHON] Upgrade the minimum version of `pyarrow` to 18.0.0
ruifengz
(spark) branch master updated: [SPARK-54868][PYTHON][INFRA] Fail hanging tests and log the tracebacks
ruifengz
(spark) branch master updated: [SPARK-54862][DSTREAM] Remove unused private object `RawTextHelper`
yangjie01
(spark) branch master updated (ea8e9412b1ce -> 2c5b4b09868d)
wuyi
(spark) branch master updated: Revert "[SPARK-54830][CORE] Enable checksum based indeterminate shuffle retry by default"
yao
(spark) branch master updated: [SPARK-54869][PYTHON][TESTS] Apply the standard import of pyarrow compute in `test_arrow_udf_scalar`
ruifengz
(spark) branch master updated: [SPARK-54860][INFRA] Add JIRA Ticket Validating in GHA
yao
svn commit: r81560 - in dev/spark/v4.1.1-rc1-docs: . _site _site/api _site/api/R _site/api/R/articles _site/api/R/articles/sparkr-vignettes_files _site/api/R/articles/sparkr-vignettes_files/accessible-code-block-0.0.1 _site/api/R/deps _site/api/R/deps/...
gurwls223
svn commit: r81558 - dev/spark/v4.1.1-rc1-bin
gurwls223
(spark) branch branch-4.1 updated (1cc2478a1f37 -> bfc94ab3104e)
gurwls223
(spark) 02/02: Preparing development version 4.1.2-SNAPSHOT
gurwls223
(spark) 01/02: Revert "Removing test jars and class files"
gurwls223
(spark) tag v4.1.1-rc1 created (now 639216180c74)
gurwls223
(spark) 01/02: Removing test jars and class files
gurwls223
(spark) 02/02: Preparing Spark release v4.1.1-rc1
gurwls223
(spark) branch branch-4.1 updated: [SPARK-54850][SQL] Improve `extractShuffleIds` to find `AdaptiveSparkPlanExec` anywhere in plan tree
yao
(spark) branch master updated: [SPARK-54850][SQL] Improve `extractShuffleIds` to find `AdaptiveSparkPlanExec` anywhere in plan tree
yao
(spark) branch master updated: [SPARK-54830][CORE] Enable checksum based indeterminate shuffle retry by default
wenchen
(spark) branch master updated (df604f04f103 -> 8cc2e4677d59)
ruifengz
(spark-website) branch asf-site updated: Introduce Cursor/VS Code support (#655)
wenchen
(spark) branch master updated: [SPARK-54857][WEBUI][TESTS] Add test ensuring user name and app name in historypage get escaped
sarutak
(spark) branch branch-4.1 updated: [SPARK-54837][DOCS] Document all built-in JDBC datasource providers
yangjie01
(spark) branch master updated: [SPARK-54837][DOCS] Document all built-in JDBC datasource providers
yangjie01
(spark) branch master updated: [SPARK-54858][INFRA] Fix syntax error in `test_report.yml` and recover GA workflow
sarutak
(spark) branch master updated: [SPARK-54853][SQL] Always check `hive.exec.max.dynamic.partitions` on the spark side
wenchen
[PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
Re: [PR] Introduce Cursor/VS Code support [spark-website]
via GitHub
(spark) branch branch-4.1 updated: Revert "[SPARK-54760][SQL] DelegatingCatalogExtension as session catalog supports both V1 and V2 functions"
wenchen
(spark) branch branch-4.1 updated: [SPARK-54851][BUILD] Support generating bloop files via sbt
wenchen
(spark) branch master updated (174b62d01212 -> eec092c9f9d1)
wenchen
(spark) branch branch-4.1 updated: [SPARK-54847][BUILD] unify the proto/antlr output folder between sbt and maven
wenchen
(spark) branch master updated: [SPARK-54847][BUILD] unify the proto/antlr output folder between sbt and maven
wenchen
(spark) branch master updated: [SPARK-54843][SQL] Try_to_number expression not working for empty string input
wenchen
(spark) branch branch-4.1 updated: [SPARK-54843][SQL] Try_to_number expression not working for empty string input
wenchen
(spark) branch master updated: [SPARK-54840][SQL] OrcList Pre-allocation
yao
(spark) branch master updated (24207e2e9cd1 -> f63ccce6647f)
wenchen
(spark) branch master updated: [SPARK-54834][SQL] Add new interfaces SimpleProcedure and SimpleFunction
ruifengz
(spark) branch master updated: [SPARK-54835][SQL] Avoid unnecessary temp QueryExecution for nested command execution
ruifengz
(spark-website) branch asf-site updated: Improve document for IDE support and simplify doc build (#654)
yao
(spark) branch master updated: [SPARK-54844][BUILD] Upgrade maven.version to 3.9.12
yangjie01
(spark) branch master updated: [SPARK-54842][PYTHON][TESTS] Fix `test_arrow_udf_chained_iii` in Python-Only MacOS26
ruifengz
[PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
Re: [PR] Improve document for IDE support and simplify doc build [spark-website]
via GitHub
(spark) branch master updated: Revert "[SPARK-54839][PYTHON] Upgrade the minimum version of `numpy` to 2.0.0"
ruifengz
(spark) branch master updated: [SPARK-54839][PYTHON] Upgrade the minimum version of `numpy` to 2.0.0
ruifengz
(spark) branch master updated: [MINOR] Add Python 3.14 no GIL workflow in README
yangjie01
(spark) branch branch-4.0 updated: [SPARK-54820][PYTHON][4.0][4.1] Make pandas_on_spark_type compatible with numpy 2.4.0
ruifengz
(spark) branch branch-4.1 updated (51042d66d8e8 -> eec9f8fd4c7c)
ruifengz
(spark) branch master updated: [SPARK-54829][PYTHON][INFRA][FOLLOW-UP] Also refresh pypy images
ruifengz
(spark) branch master updated: [SPARK-54826][INFRA][FOLLOW-UP] Delete removed path from build image caches
ruifengz
(spark) branch master updated: [SPARK-54829][PYTHON][INFRA] Refresh docker images for numpy 2.4
ruifengz
(spark) branch master updated: [SPARK-54826][INFRA] Delete scheduled job for numpy 2.1.3
ruifengz
(spark) branch master updated: [SPARK-54827][SQL] Add helper function `TreeNode.containsTag`
ruifengz
(spark) branch master updated (e316d28be78d -> f25b381dfc0c)
ruifengz
(spark) branch master updated: [SPARK-54817][SQL] Refactor `Unpivot` resolution logic to `UnpivotTransformer`
wenchen
(spark) branch master updated: [SPARK-54816][SQL] Make SQL configs not lazy
ruifengz
(spark) branch master updated (316322cbcb55 -> 72238e7e62b1)
wenchen
(spark) branch master updated (512099bfb77c -> 316322cbcb55)
dongjoon
(spark) branch branch-4.1 updated: [SPARK-54760][SQL] DelegatingCatalogExtension as session catalog supports both V1 and V2 functions
wenchen
(spark) branch master updated: [SPARK-54760][SQL] DelegatingCatalogExtension as session catalog supports both V1 and V2 functions
wenchen
(spark) branch branch-4.1 updated: [SPARK-54815][CONNECT] Do not close the class loader of the session state if session is still in use
wenchen
(spark) branch master updated: [SPARK-54815][CONNECT] Do not close the class loader of the session state if session is still in use
wenchen
(spark-kubernetes-operator) branch main updated: [SPARK-54814] Set `parallel` to 1 for K8s integration tests
dongjoon
(spark) branch branch-4.1 updated: [SPARK-46741][SQL] Cache Table with CTE should work when CTE in plan expression subquery
wenchen
(spark) branch master updated: [SPARK-46741][SQL] Cache Table with CTE should work when CTE in plan expression subquery
wenchen
(spark) branch master updated (a76e4fa54b38 -> 1a1ab19617b1)
wenchen
(spark) branch master updated: Revert [SPARK-54758][SQL] Fix generator resolution order in Project
wenchen
(spark) branch master updated: [SPARK-54802][SQL][DOCS] Fix typos in OrderedFilters.scala
gurwls223
(spark) branch master updated: [SPARK-54787][PS] Use list comprehension instead of for loops in pandas
gurwls223
(spark) branch branch-4.1 updated: [SPARK-54801][SQL] Mark a few new 4.1 configs as internal
gurwls223
(spark) branch master updated: [SPARK-54801][SQL] Mark a few new 4.1 configs as internal
gurwls223
(spark-connect-swift) branch main updated: [SPARK-54811] Use Spark 4.1.0 to regenerate `Spark Connect`-based `Swift` source code
dongjoon
(spark) branch master updated: [SPARK-54137][SQL][CONNECT] Remove redundant observed-metrics responses
gurwls223
(spark) branch master updated (5e41971f2734 -> 52df31ae71dc)
gurwls223
(spark) branch master updated (aad5b6e19c6e -> 5e41971f2734)
ashrigondekar
(spark) branch master updated: [SPARK-54800] Changed default implementation for isObjectNotFoundException
wenchen
(spark) branch branch-4.1 updated: [SPARK-54800] Changed default implementation for isObjectNotFoundException
wenchen
(spark) branch master updated (62cc5a72881a -> dfb3a5105f58)
sarutak
(spark) branch master updated (e886428cdfd3 -> 62cc5a72881a)
yangjie01
(spark) branch master updated: [SPARK-54735][SQL] Properly preserve column comments in view with SCHEMA EVOLUTION
wenchen
(spark) branch branch-4.1 updated: [SPARK-52407][SQL][FOLLOWUP] Expression description fix for ThetaIntersectionAgg
wenchen
(spark) branch master updated (e4b99932d02c -> e4b75089034b)
wenchen
(spark) branch branch-4.1 updated: [SPARK-53991][SQL][TEST][FOLLOWUP] Make KLL quantile golden file tests deterministic
wenchen
(spark) branch master updated: [SPARK-53991][SQL][TEST][FOLLOWUP] Make KLL quantile golden file tests deterministic
wenchen
(spark) branch master updated: [SPARK-54597][BUILD][FOLLOW-UP] Explicitly exclude `org.lz4:lz4-java` in SBT build
dongjoon
(spark) branch branch-4.1 updated: [SPARK-54794][CORE] Suppress verbose `FsHistoryProvider.checkForLogs` scanning logs
sarutak
(spark) branch master updated: [SPARK-54794][CORE] Suppress verbose `FsHistoryProvider.checkForLogs` scanning logs
sarutak
(spark) branch master updated: [SPARK-45720][BUILD][DSTREAM][KINESIS] Upgrade KCL to 2.7.2 and remove AWS SDK for Java 1.x dependency
sarutak
(spark-kubernetes-operator) branch gh-pages updated: Sync examples with main branch for Apache Spark 4.1.0
dongjoon
(spark-kubernetes-operator) branch main updated: [SPARK-54795] Suppress Hadoop warnings in History Server example
dongjoon
(spark) branch branch-4.1 updated: [SPARK-54782][SQL] Correct the config versions
ruifengz
(spark) branch master updated (20af8bdfb907 -> d0cbad56a105)
ruifengz
(spark) branch master updated: [SPARK-54745][PYTHON] Fix PySpark import error caused by missing UnixStreamServer on Windows
gurwls223
(spark) branch master updated: [SPARK-54787][PS] Use list comprehension in pandas _bool_column_labels
gurwls223
(spark) branch master updated: [SPARK-46166][PS] Implementation of pandas.DataFrame.any with axis=None
gurwls223
(spark) branch branch-4.1 updated: [SPARK-54745][PYTHON] Fix PySpark import error caused by missing UnixStreamServer on Windows
gurwls223
(spark) branch master updated: [SPARK-54790][PYTHON][TESTS] Add UDTF analyze coverage tests
gurwls223
(spark) branch master updated (588fab4afc2d -> f8f9f137ddef)
gurwls223
(spark) branch master updated (a3934273c553 -> 588fab4afc2d)
gurwls223
(spark) branch master updated (d3633f18f83e -> a3934273c553)
gurwls223
(spark) branch master updated (00163b828b33 -> d3633f18f83e)
gurwls223
(spark) branch master updated (4b90a265c623 -> 00163b828b33)
yao
(spark-kubernetes-operator) branch main updated: [SPARK-54792] Update `setup-minikube` to v0.0.21
dongjoon
(spark-kubernetes-operator) branch main updated: [SPARK-54791] Increase `setup-minikube` resources to 3 cores and 10240m
dongjoon
(spark) branch dependabot/maven/org.apache.logging.log4j-log4j-core-2.25.3 deleted (was a105deae91d5)
yao
(spark) branch dependabot/maven/org.apache.logging.log4j-log4j-core-2.25.3 created (now a105deae91d5)
yao
(spark-kubernetes-operator) branch main updated: [SPARK-54788] Upgrade the minimum K8s version to v1.33
dongjoon
(spark-connect-swift) branch main updated: [SPARK-54786] Upgrade `gRPC Swift Protobuf` to 2.1.2
dongjoon
(spark) branch branch-4.1 updated: [SPARK-54761][SQL] Throw unsupportedTableOperation for constraint operation on DSv1/HMS table
gengliang
(spark) branch master updated (1e6d743c7ae2 -> 4b90a265c623)
gengliang
(spark-connect-swift) branch main updated: [SPARK-54779] Upgrade `gRPC Swift NIO Transport` to 2.4.0
dongjoon
(spark-connect-swift) branch main updated: [SPARK-54778] Upgrade `grpc-swift-2` to 2.2.1
dongjoon
(spark-connect-swift) branch main updated: [SPARK-54773] Make `README.md` and `Examples` up-to-date with 4.1.0
dongjoon
(spark) branch branch-3.5 updated: [SPARK-54750][SQL] Fix ROUND returning NULL for Decimal values with l…
yao
(spark) branch branch-4.0 updated: [SPARK-54750][SQL] Fix ROUND returning NULL for Decimal values with l…
yao
(spark) branch branch-4.1 updated: [SPARK-54750][SQL] Fix ROUND returning NULL for Decimal values with l…
yao
(spark) branch master updated: [SPARK-54750][SQL] Fix ROUND returning NULL for Decimal values with l…
yao
(spark) branch branch-4.1 updated: [SPARK-54556][CORE] Rollback succeeding shuffle map stages when shuffle checksum mismatch detected
wenchen
(spark) branch master updated (09a2cadc1fb4 -> 0da9e0505b59)
wenchen
(spark) branch branch-4.1 updated: [SPARK-54696][CONNECT] Clean-up Arrow Buffers - follow-up
wenchen
(spark) branch master updated: [SPARK-54696][CONNECT] Clean-up Arrow Buffers - follow-up
wenchen
(spark) branch master updated (bcbf7d6abf40 -> 9dfd84ee3361)
ruifengz
(spark) branch master updated (59977a84257e -> bcbf7d6abf40)
yangjie01
(spark) branch master updated: [SPARK-54720][SQL] Add SparkSession.emptyDataFrame with a schema
hvanhovell
(spark) branch branch-4.0 updated (50659fcccb0b -> 6786afcbf4fa)
yangjie01
(spark) branch master updated (80b4486a6fdc -> 1aa4b15eec3d)
gurwls223
(spark) branch master updated: [SPARK-54769][PYTHON] Remove dead code in conversion.py
gurwls223
(spark) branch master updated: [SPARK-54767][K8S][INFRA] Update K8s IT CI to use K8s 1.35
dongjoon
(spark-kubernetes-operator) branch main updated: [SPARK-54766] Update CI to test K8s 1.35
dongjoon
(spark-kubernetes-operator) branch main updated: [SPARK-54765] Make `README.md` and `examples` up-to-date with 4.1.0
dongjoon
(spark) branch master updated: [SPARK-54744][PYTHON] Only invalidate cache when necessary
ruifengz
(spark) branch branch-3.5 updated (e348c2999496 -> d1790ee73cb4)
sarutak
(spark) branch branch-3.5 updated (d9a370702c4a -> e348c2999496)
sarutak
(spark) branch master updated (1d8c11120fde -> bc47877693ac)
wenchen
(spark) branch branch-4.1 updated: [SPARK-54652][SQL] Complete conversion of IDENTIFIER()
wenchen
(spark) branch master updated: [SPARK-54652][SQL] Complete conversion of IDENTIFIER()
wenchen
(spark-kubernetes-operator) branch main updated: [SPARK-54764] Use 4.1.0 instead of 4.0.1 for tests and benchmark
dongjoon