MaxGekk commented on PR #43695:
URL: https://github.com/apache/spark/pull/43695#issuecomment-1826715119
@cloud-fan Quite the same:
```scala
scala> val divCol = lit(1) / lit(0)
val divCol: org.apache.spark.sql.Column = `/`(1, 0)
scala> spark.range(1).select(divCol).collect()
```
cloud-fan commented on PR #43695:
URL: https://github.com/apache/spark/pull/43695#issuecomment-1826686951
@MaxGekk not quite related to this PR, but what if the expression creation
is different from the df creation? like
```
val divCol = lit(1) / lit(0)
```
itholic commented on PR #44015:
URL: https://github.com/apache/spark/pull/44015#issuecomment-1826538856
I believe this is the last refactoring for the current code base of
`CategoricalOps`.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
itholic opened a new pull request, #44015:
URL: https://github.com/apache/spark/pull/44015
### What changes were proposed in this pull request?
This PR follows up on https://github.com/apache/spark/pull/43993 with
further refactoring of `CategoricalOps`.
### Why are
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405234673
##
python/docs/source/conf.py:
##
@@ -194,7 +194,11 @@
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
-
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405239729
##
dev/requirements.txt:
##
@@ -31,12 +31,12 @@ pandas-stubs<1.2.0.54
mkdocs
# Documentation (Python)
-pydata_sphinx_theme
+pydata_sphinx_theme==0.13
ipython
LuciferYang commented on PR #44014:
URL: https://github.com/apache/spark/pull/44014#issuecomment-1826508000
https://github.com/LuciferYang/spark/actions/runs/6993380440/job/19026004116
LuciferYang opened a new pull request, #44014:
URL: https://github.com/apache/spark/pull/44014
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
itholic commented on PR #44012:
URL: https://github.com/apache/spark/pull/44012#issuecomment-1826501628
Documentation build passed:
https://github.com/itholic/spark/actions/runs/6991575840/job/19023836477
LuciferYang commented on PR #44008:
URL: https://github.com/apache/spark/pull/44008#issuecomment-1826480761
https://github.com/panbingkun/spark/actions/runs/6758634955/job/18370477191
![image](https://github.com/apache/spark/assets/1475305/938771ba-65b0-4d97-8e85-d249a600e6a6)
beliefer commented on PR #44011:
URL: https://github.com/apache/spark/pull/44011#issuecomment-1826478614
@srowen @LuciferYang Thank you!
ulysses-you opened a new pull request, #44013:
URL: https://github.com/apache/spark/pull/44013
### What changes were proposed in this pull request?
This pr introduces `case class AdaptiveRuleContext(isSubquery: Boolean,
isFinalStage: Boolean)` which can be used inside
wangyum commented on code in PR #44009:
URL: https://github.com/apache/spark/pull/44009#discussion_r140529
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -771,6 +771,17 @@ object LimitPushDown extends Rule[LogicalPlan] {
HyukjinKwon commented on PR #44012:
URL: https://github.com/apache/spark/pull/44012#issuecomment-1826463424
This is a nice improvement!
Hisoka-X commented on code in PR #42398:
URL: https://github.com/apache/spark/pull/42398#discussion_r1405287835
##
sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala:
##
@@ -158,6 +158,69 @@ class SQLQuerySuite extends QueryTest with
SharedSparkSession with
github-actions[bot] commented on PR #42071:
URL: https://github.com/apache/spark/pull/42071#issuecomment-1826450649
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
github-actions[bot] commented on PR #42538:
URL: https://github.com/apache/spark/pull/42538#issuecomment-1826450643
We're closing this PR because it hasn't been updated in a while. This isn't
a judgement on the merit of the PR in any way. It's just a way of keeping the
PR queue manageable.
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405240579
##
dev/requirements.txt:
##
@@ -31,12 +31,12 @@ pandas-stubs<1.2.0.54
mkdocs
# Documentation (Python)
-pydata_sphinx_theme
+pydata_sphinx_theme==0.13
Review
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405240335
##
dev/requirements.txt:
##
@@ -31,12 +31,12 @@ pandas-stubs<1.2.0.54
mkdocs
# Documentation (Python)
-pydata_sphinx_theme
+pydata_sphinx_theme==0.13
Review
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405238199
##
python/docs/source/reference/pyspark.pandas/frame.rst:
##
@@ -319,8 +320,8 @@ specific plotting methods of the form
``DataFrame.plot.``.
.. autosummary::
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405236858
##
python/docs/source/_templates/autosummary/class_with_docs.rst:
##
@@ -47,7 +47,9 @@
.. autosummary::
{% for item in attributes %}
- ~{{ name }}.{{
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405236561
##
python/docs/source/_templates/autosummary/class_with_docs.rst:
##
@@ -47,7 +47,9 @@
.. autosummary::
{% for item in attributes %}
- ~{{ name }}.{{
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405233405
##
python/docs/source/reference/pyspark.pandas/frame.rst:
##
@@ -299,6 +299,7 @@ in Spark. These can be accessed by
``DataFrame.spark.``.
.. autosummary::
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405232095
##
python/docs/source/reference/pyspark.pandas/frame.rst:
##
@@ -299,6 +299,7 @@ in Spark. These can be accessed by
``DataFrame.spark.``.
.. autosummary::
itholic commented on code in PR #44012:
URL: https://github.com/apache/spark/pull/44012#discussion_r1405231062
##
python/docs/source/reference/pyspark.pandas/frame.rst:
##
@@ -299,6 +299,7 @@ in Spark. These can be accessed by
``DataFrame.spark.``.
.. autosummary::
srowen commented on PR #44011:
URL: https://github.com/apache/spark/pull/44011#issuecomment-1826415496
Merged to master
srowen closed pull request #44011: [SPARK-46100][CORE][PYTHON] Reduce stack
depth by replace (string|array).size with (string|array).length
URL: https://github.com/apache/spark/pull/44011
dongjoon-hyun commented on PR #44008:
URL: https://github.com/apache/spark/pull/44008#issuecomment-1826410068
Is that a MIMA issue?
https://github.com/apache/spark/assets/9700541/ef55b97a-c13a-49eb-ad7b-987cd74e93b3
From GitHub Log, only `mima` complains while the others are
rangadi commented on code in PR #43985:
URL: https://github.com/apache/spark/pull/43985#discussion_r1405191450
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SessionHolder.scala:
##
@@ -47,9 +47,19 @@ case class SessionKey(userId: String,
MaxGekk commented on PR #43695:
URL: https://github.com/apache/spark/pull/43695#issuecomment-1826388775
> Can we show the impact to the real error message
@cloud-fan I added an example, please, take a look at the PR.
cloud-fan commented on PR #43867:
URL: https://github.com/apache/spark/pull/43867#issuecomment-1826351872
> This may also affect other Rules. Since HiveTableRelation is not resolved,
the Project.projectList of the parent plan will not be resolved.
Good point. Why is
cloud-fan commented on code in PR #43958:
URL: https://github.com/apache/spark/pull/43958#discussion_r1405122383
##
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowFunctionFrame.scala:
##
@@ -175,6 +178,23 @@ abstract class OffsetWindowFunctionFrameBase(
cloud-fan commented on code in PR #43978:
URL: https://github.com/apache/spark/pull/43978#discussion_r1405081903
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/RewriteWithExpression.scala:
##
@@ -35,56 +35,82 @@ object RewriteWithExpression extends
cloud-fan commented on code in PR #44004:
URL: https://github.com/apache/spark/pull/44004#discussion_r1405045604
##
sql/core/src/test/resources/sql-tests/inputs/misc-functions.sql:
##
@@ -21,21 +21,30 @@ CREATE TEMPORARY VIEW tbl_misc AS SELECT * FROM (VALUES
(1), (8), (2)) AS
cloud-fan commented on PR #44004:
URL: https://github.com/apache/spark/pull/44004#issuecomment-1826332136
cc @srielau
cloud-fan commented on code in PR #44004:
URL: https://github.com/apache/spark/pull/44004#discussion_r1405037748
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala:
##
@@ -139,6 +128,31 @@ object RaiseError {
new RaiseError(errorClass,
turboFei commented on PR #44005:
URL: https://github.com/apache/spark/pull/44005#issuecomment-1826300772
gentle ping @dongjoon-hyun
beliefer commented on code in PR #44004:
URL: https://github.com/apache/spark/pull/44004#discussion_r1404849612
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala:
##
@@ -139,6 +128,31 @@ object RaiseError {
new RaiseError(errorClass,
beliefer commented on PR #44011:
URL: https://github.com/apache/spark/pull/44011#issuecomment-1826279786
> Is this all the similar cases in the project? I think we can fix all cases
in one PR :)
It's very hard. Fixing the cases in just the core module took me two hours.
beliefer commented on code in PR #44009:
URL: https://github.com/apache/spark/pull/44009#discussion_r1404844214
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/LimitPushdownSuite.scala:
##
@@ -352,4 +352,21 @@ class LimitPushdownSuite extends PlanTest {
LuciferYang commented on PR #44011:
URL: https://github.com/apache/spark/pull/44011#issuecomment-1826278609
Is this all the similar cases in the project? I think we can fix all cases
in one PR :)
LuciferYang commented on PR #44008:
URL: https://github.com/apache/spark/pull/44008#issuecomment-1826278081
IIRC, didn't @panbingkun encounter some issues that were not resolved
when he tried to upgrade this version before?
beliefer commented on code in PR #43841:
URL: https://github.com/apache/spark/pull/43841#discussion_r1404840863
##
sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantShufflesSuite.scala:
##
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation
itholic opened a new pull request, #44010:
URL: https://github.com/apache/spark/pull/44010
### What changes were proposed in this pull request?
This PR proposes to refactor the script used to generate the [Supported
pandas