HyukjinKwon commented on a change in pull request #28327:
URL: https://github.com/apache/spark/pull/28327#discussion_r415204996
##########
File path: python/pyspark/sql/column.py
##########
@@ -296,12 +299,17 @@ def getItem(self, key):
+----+------+
| 1| value|
+----+------+
-
- .. versionchanged:: 3.0
-     If `key` is a `Column` object, the indexing operator should be used instead.
-     For example, `map_col.getItem(col('id'))` should be replaced with `map_col[col('id')]`.
"""
- return _bin_op("getItem")(self, key)
+ if isinstance(key, Column) and not is_instance_of(
+ SparkContext._gateway,
+ key._jc.expr(),
+ "org.apache.spark.sql.catalyst.expressions.Literal"):
Review comment:
That was my current take. It was because `getField` supports only a literal
at the moment as well (whereas the Scala side supports only `String`). I took
a look at `functions.py`, and it seems more consistent to just let it take a
string only, and no `Column` instance. Thanks for pointing this out.
I will update this PR accordingly later tomorrow.
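The direction discussed above, stricter argument checking so `getItem` accepts only plain literal keys and rejects `Column` instances, can be sketched in plain Python. This is a hypothetical illustration, not the actual PySpark implementation: the `Column` class here is a stand-in for `pyspark.sql.Column`, and `get_item` stands in for the method under review.

```python
class Column:
    """Stand-in for pyspark.sql.Column, for illustration only."""
    pass


def get_item(self_col, key):
    # Hypothetical sketch of the stricter check discussed above:
    # accept only plain literals (e.g. str/int) and reject Column
    # instances, steering users to the indexing operator instead.
    if isinstance(key, Column):
        raise TypeError(
            "Unsupported key type: Column. "
            "Use the indexing operator, e.g. map_col[col('id')], instead.")
    # The real implementation would delegate to the JVM-side getItem here;
    # we return a tuple just to make the sketch testable.
    return ("getItem", key)
```

With this check, `map_col.getItem("id")` keeps working, while `map_col.getItem(col("id"))` fails fast with a clear message pointing at `map_col[col("id")]`.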
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]