This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 969d2d4f627 [SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at
969d2d4f627 is described below
commit 969d2d4f62759f75125c0ee7e2324bda8927527c
Author: Hyukjin Kwon <[email protected]>
AuthorDate: Thu Sep 22 11:44:05 2022 +0900
[SPARK-40142][PYTHON][DOCS][FOLLOW-UP] Remove non-ANSI compliant example in element_at
### What changes were proposed in this pull request?
This PR is a follow-up of https://github.com/apache/spark/pull/37850 that removes a non-ANSI-compliant example from `element_at`.
### Why are the changes needed?
The ANSI build fails to run the example:
https://github.com/apache/spark/actions/runs/3094607589/jobs/5008176959
```
Caused by: org.apache.spark.SparkArrayIndexOutOfBoundsException:
[INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index -4 is out of bounds. The array
has 3 elements. Use `try_element_at` to tolerate accessing element at invalid
index and return NULL instead. If necessary set "spark.sql.ansi.enabled" to
"false" to bypass this error.
at
org.apache.spark.sql.errors.QueryExecutionErrors$.invalidElementAtIndexError(QueryExecutionErrors.scala:264)
...
/usr/local/pypy/pypy3.7/lib-python/3/runpy.py:125: RuntimeWarning:
'pyspark.sql.functions' found in sys.modules after import of package
'pyspark.sql', but prior to execution of 'pyspark.sql.functions'; this may
result in unpredictable behaviour
warn(RuntimeWarning(msg))
/__w/spark/spark/python/pyspark/context.py:310: FutureWarning: Python 3.7
support is deprecated in Spark 3.4.
warnings.warn("Python 3.7 support is deprecated in Spark 3.4.",
FutureWarning)
**********************************************************************
1 of 6 in pyspark.sql.functions.element_at
```
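For context, the removed doctest relied on the legacy (non-ANSI) behavior: with `spark.sql.ansi.enabled` set, `element_at` raises on an out-of-bounds index, whereas `try_element_at` tolerates it and returns NULL. The following is a minimal pure-Python sketch of those semantics for illustration only; it is not the actual Spark implementation, and the function names mirror the SQL functions purely for readability.

```python
def element_at(arr, index, ansi_enabled=True):
    # Sketch of SQL element_at semantics: 1-based indexing,
    # negative indices count from the end of the array.
    if index == 0:
        raise ValueError("SQL array indices start at 1")
    pos = index - 1 if index > 0 else len(arr) + index
    if 0 <= pos < len(arr):
        return arr[pos]
    if ansi_enabled:
        # ANSI mode: invalid index is an error, like
        # INVALID_ARRAY_INDEX_IN_ELEMENT_AT in the log above.
        raise IndexError(
            f"The index {index} is out of bounds. "
            f"The array has {len(arr)} elements."
        )
    return None  # legacy behavior the removed doctest depended on


def try_element_at(arr, index):
    # try_element_at tolerates an invalid index and returns NULL (None),
    # which is the replacement the error message recommends.
    try:
        return element_at(arr, index, ansi_enabled=True)
    except IndexError:
        return None
```

With a three-element array, index `-1` returns the last element, while index `-4` raises under ANSI semantics and yields `None` from `try_element_at`.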
### Does this PR introduce _any_ user-facing change?
No. The example, added in the previous PR, has not been exposed to end users yet.
### How was this patch tested?
Manually tested with the ANSI configuration (`spark.sql.ansi.enabled`) enabled.
Closes #37959 from HyukjinKwon/SPARK-40142-followup.
Authored-by: Hyukjin Kwon <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
---
python/pyspark/sql/functions.py | 5 -----
1 file changed, 5 deletions(-)
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 7885229fc5c..fe114e07c88 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -6595,11 +6595,6 @@ def element_at(col: "ColumnOrName", extraction: Any) -> Column:
>>> df.select(element_at(df.data, -1)).collect()
[Row(element_at(data, -1)='c')]
- Returns `None` if there is no value corresponding to the given `extraction`.
-
- >>> df.select(element_at(df.data, -4)).collect()
- [Row(element_at(data, -4)=None)]
-
>>> df = spark.createDataFrame([({"a": 1.0, "b": 2.0},)], ['data'])
>>> df.select(element_at(df.data, lit("a"))).collect()
[Row(element_at(data, a)=1.0)]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]