[ https://issues.apache.org/jira/browse/SPARK-30421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011422#comment-17011422 ]

Tobias Hermann commented on SPARK-30421:
----------------------------------------

[~cloud_fan] I think it is an issue because it means one cannot simply look at 
the schema of a DataFrame to determine whether an operation is valid. Instead, 
one has to consider the whole history of how the DataFrame was created or 
derived. As a consequence, refactorings, e.g., changing the way a DataFrame is 
created, can break one's code even though the refactoring should be perfectly 
fine, because it results in exactly the same schema.
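To make that concrete, here is a minimal sketch (assuming spark-shell on 2.4.4, 
i.e., a SparkSession named spark with spark.implicits._ in scope): two 
DataFrames with identical schemas, where the same filter succeeds on one and 
fails on the other purely because of how each was derived.

{code:scala}
// Minimal sketch, assuming spark-shell (SparkSession `spark`, implicits in scope).
val df = Seq((0, "a"), (1, "b")).toDF("foo", "bar")

val viaDrop = df.drop("bar")        // derived by dropping "bar"
val direct  = Seq(0, 1).toDF("foo") // created without "bar" to begin with

viaDrop.schema == direct.schema     // true: the schemas are identical

viaDrop.where($"bar" === "a").show() // succeeds on 2.4.4: filters on the dropped column
direct.where($"bar" === "a").show()  // AnalysisException: cannot resolve '`bar`'
{code}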

> Dropped columns still available for filtering
> ---------------------------------------------
>
>                 Key: SPARK-30421
>                 URL: https://issues.apache.org/jira/browse/SPARK-30421
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.4
>            Reporter: Tobias Hermann
>            Priority: Minor
>
> The following minimal example:
> {quote}val df = Seq((0, "a"), (1, "b")).toDF("foo", "bar")
> df.select("foo").where($"bar" === "a").show
> df.drop("bar").where($"bar" === "a").show
> {quote}
> should result in an error like the following:
> {quote}org.apache.spark.sql.AnalysisException: cannot resolve '`bar`' given 
> input columns: [foo];
> {quote}
> However, it does not; instead, it works without error, as if the column "bar" 
> still existed.
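As an aside, a hedged sketch (again assuming spark-shell; the use of 
createDataFrame here is an illustration, not something from the original 
report): rebuilding the DataFrame from its RDD and schema discards the 
logical-plan lineage, and the same filter then fails with the expected 
AnalysisException.

{code:scala}
// Sketch only: cutting the lineage by rebuilding from RDD + schema.
val df = Seq((0, "a"), (1, "b")).toDF("foo", "bar")
val dropped = df.drop("bar")

// Backed by a plain RDD[Row] plus an explicit schema, with no reference to "bar".
val rebuilt = spark.createDataFrame(dropped.rdd, dropped.schema)

rebuilt.where($"bar" === "a").show()
// expected: org.apache.spark.sql.AnalysisException: cannot resolve '`bar`'
//           given input columns: [foo]
{code}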


