Kimahriman commented on pull request #32972:
URL: https://github.com/apache/spark/pull/32972#issuecomment-864554473


   Yeah, I'm saying it only properly handles one level of a nested struct, not
recursively like it should. I got your code running to show an example:
   
   ```
   >>> df1 = spark.createDataFrame([Row(a=Row(aa=Row(aaa=1)))])
   >>> df2 = spark.createDataFrame([Row(a=Row(aa=Row(aab=1)))])
   >>> df1
   DataFrame[a: struct<aa:struct<aaa:bigint>>]
   >>> df2
   DataFrame[a: struct<aa:struct<aab:bigint>>]
   >>> df1.unionByName(df2)
   DataFrame[a: struct<aa:struct<aaa:bigint,aab:bigint>>]
   >>> df1.unionByName(df2).explain()
   == Physical Plan ==
   Union
   :- *(1) Project [if (isnull(a#21)) null else named_struct(aa, if (isnull(a#21.aa)) null else named_struct(aaa, a#21.aa.aaa, aab, null)) AS a#37]
   :  +- *(1) Scan ExistingRDD[a#21]
   +- *(2) Project [if (isnull(a#23)) null else named_struct(aa, if (isnull(a#23.aa)) null else named_struct(aaa, null, aab, a#23.aa.aab)) AS a#34]
      +- *(2) Scan ExistingRDD[a#23]
   ```
   
   The inner struct gets merged, with the missing columns added and padded with
nulls, even though `allowMissingColumns` is false.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


