Github user maropu commented on the issue:
https://github.com/apache/spark/pull/21745
@gatorsmile It seems the AnalysisBarrier commit caused this error, so v2.2 does not have this issue:
```
scala> df.select(df("name")).filter(df("id") === 0).explain(true)
== Parsed Logical Plan ==
!Filter (id#26 = 0)
+- Project [name#25]
   +- Project [_1#22 AS name#25, _2#23 AS id#26]
      +- LocalRelation [_1#22, _2#23]

== Analyzed Logical Plan ==
name: string
Project [name#25]
+- Filter (id#26 = 0)
   +- Project [name#25, id#26]
      +- Project [_1#22 AS name#25, _2#23 AS id#26]
         +- LocalRelation [_1#22, _2#23]

== Optimized Logical Plan ==
Project [_1#22 AS name#25]
+- Filter (_2#23 = 0)
   +- LocalRelation [_1#22, _2#23]

== Physical Plan ==
*Project [_1#22 AS name#25]
+- *Filter (_2#23 = 0)
   +- LocalTableScan [_1#22, _2#23]
=== Applying Rule org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveMissingReferences ===
!Filter (id#26 = 0)                                   Project [name#25]
!+- Project [name#25]                                 +- Filter (id#26 = 0)
!   +- Project [_1#22 AS name#25, _2#23 AS id#26]        +- Project [name#25, id#26]
!      +- LocalRelation [_1#22, _2#23]                      +- Project [_1#22 AS name#25, _2#23 AS id#26]
!                                                              +- LocalRelation [_1#22, _2#23]
```
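For reference, a minimal repro sketch; the original definition of `df` isn't shown in this thread, but a local `Seq` of tuples is consistent with the `LocalRelation [_1#22, _2#23]` in the plan above:
```
// Hypothetical repro in spark-shell (the actual df definition isn't shown
// in this thread); a Seq of tuples matches the LocalRelation [_1, _2] columns.
val df = Seq(("a", 0), ("b", 1)).toDF("name", "id")

// Filtering on a column that the preceding select dropped triggers
// ResolveMissingReferences, which produces the rewritten plan traced above.
df.select(df("name")).filter(df("id") === 0).explain(true)
```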