[
https://issues.apache.org/jira/browse/SPARK-30421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022835#comment-17022835
]
Dongjoon Hyun commented on SPARK-30421:
---------------------------------------
Nope, your example is different; mine illustrates exactly what I meant:
"Pandas supports filtering with *the original column's index* on the dropped
data frame."
That's my point. I intentionally didn't declare `df2` or `df2["bar"]`.
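For reference, the pandas behavior described above can be sketched as follows (a minimal illustration, not code from the issue; the data values are borrowed from the Spark example quoted below, and a recent pandas version is assumed):

```python
import pandas as pd

# Mirror of the Spark example: two columns, "foo" and "bar"
df = pd.DataFrame({"foo": [0, 1], "bar": ["a", "b"]})

# Boolean index built from the ORIGINAL column, before the drop
mask = df["bar"] == "a"

# "bar" no longer exists in df2 ...
df2 = df.drop(columns=["bar"])

# ... yet pandas happily filters df2 with the original column's index
filtered = df2[mask]
print(filtered)
```

Here no `df2["bar"]` is ever referenced; the filter reuses the boolean Series derived from the original `df["bar"]`, which is the intentional pandas idiom the comment contrasts with Spark's string/column-name resolution.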
> Dropped columns still available for filtering
> ---------------------------------------------
>
> Key: SPARK-30421
> URL: https://issues.apache.org/jira/browse/SPARK-30421
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.4.4
> Reporter: Tobias Hermann
> Priority: Minor
>
> The following minimal example:
> {quote}val df = Seq((0, "a"), (1, "b")).toDF("foo", "bar")
> df.select("foo").where($"bar" === "a").show
> df.drop("bar").where($"bar" === "a").show
> {quote}
> should result in an error like the following:
> {quote}org.apache.spark.sql.AnalysisException: cannot resolve '`bar`' given
> input columns: [foo];
> {quote}
> However, it does not; instead it runs without error, as if the column "bar"
> still existed.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]