Github user skambha commented on a diff in the pull request:
https://github.com/apache/spark/pull/17185#discussion_r207726267
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/package.scala ---
@@ -169,25 +181,50 @@ package object expressions {
})
}
- // Find matches for the given name assuming that the 1st part is a qualifier (i.e. table name,
- // alias, or subquery alias) and the 2nd part is the actual name. This returns a tuple of
+ // Find matches for the given name assuming that the 1st two parts are qualifier
+ // (i.e. database name and table name) and the 3rd part is the actual column name.
+ //
+ // For example, consider an example where "db1" is the database name, "a" is the table name
+ // and "b" is the column name and "c" is the struct field name.
+ // If the name parts is db1.a.b.c, then Attribute will match
--- End diff ---
Actually, it won't fail without this patch either: col.innerField1.innerField2 will
resolve correctly to the struct field if there is a table named col and a column named
innerField1 that has an innerField2 field.
Please see an example of such a scenario in this test output on master Spark:
https://github.com/apache/spark/blob/master/sql/core/src/test/resources/sql-tests/results/columnresolution.sql.out#L360
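For a more self-contained illustration, here is a minimal sketch of that scenario. It is
only a sketch: it assumes a local Spark session, and the table name col, struct column
innerField1, and nested field innerField2 simply mirror the names used above.

```scala
import org.apache.spark.sql.SparkSession

object StructFieldResolutionSketch {
  def main(args: Array[String]): Unit = {
    // Local session just for this sketch.
    val spark = SparkSession.builder()
      .appName("struct-field-resolution-sketch")
      .master("local[*]")
      .getOrCreate()

    // A table literally named "col" whose column "innerField1" is a struct
    // containing a nested field "innerField2".
    spark.sql("CREATE TABLE col (innerField1 STRUCT<innerField2: INT>) USING parquet")
    spark.sql("INSERT INTO col VALUES (named_struct('innerField2', 1))")

    // col.innerField1.innerField2 resolves to the struct field: the 1st part
    // matches the table name, the 2nd the column name, the 3rd the nested field.
    spark.sql("SELECT col.innerField1.innerField2 FROM col").show()

    spark.stop()
  }
}
```

If resolution behaves as described, the final query returns the nested field value
rather than failing, matching the behavior shown in the linked test output.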
------
Let me look into the SQL standard and get back on that.
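To make the db1.a.b.c example from the quoted comment concrete, here is a similar
sketch. It assumes a SparkSession named spark (e.g. spark-shell, or the session from the
sketch above); the names db1, a, b, and c are the illustrative ones from the comment, and
the last query relies on the database-qualified resolution this patch describes, so it is
not guaranteed to work on Spark versions without that support.

```scala
// Assumes an existing SparkSession named `spark` (e.g. in spark-shell).

// "db1" is the database, "a" the table, "b" a struct column with a nested field "c",
// mirroring the db1.a.b.c example in the quoted comment.
spark.sql("CREATE DATABASE IF NOT EXISTS db1")
spark.sql("CREATE TABLE db1.a (b STRUCT<c: INT>) USING parquet")
spark.sql("INSERT INTO db1.a VALUES (named_struct('c', 1))")

// Under the resolution described in the quoted comment, db1.a is treated as the
// qualifier, b as the column, and c as the struct field, so this should return 1.
spark.sql("SELECT db1.a.b.c FROM db1.a").show()
```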
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]