Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6146#discussion_r32817406
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
    @@ -332,47 +334,60 @@ private[hive] object HadoopTableReader extends HiveInspectors with Logging {
     
         logDebug(soi.toString)
     
    +    val allStructFieldNames = soi.getAllStructFieldRefs().toList
    +      .map(fieldRef => fieldRef.getFieldName())
    +
         val (fieldRefs, fieldOrdinals) = nonPartitionKeyAttrs.map { case (attr, ordinal) =>
    -      soi.getStructFieldRef(attr.name) -> ordinal
    +      // Whether the partition contains this attribute
    --- End diff ---
    
When a non-partition-key attribute doesn't exist in the partition (as
determined by the partition's `StructObjectInspector`), we should
produce a null field ref. Previously, we didn't check for this and
called `getStructFieldRef` directly to look up the field ref, which
caused the error reported in the JIRA.
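
For illustration, the guarded lookup could look roughly like the sketch
below. This is a minimal standalone sketch, not the exact patch: the
method name `resolveFieldRefs` and the simplified `(String, Int)`
attribute pairs are hypothetical stand-ins for the real
`nonPartitionKeyAttrs` handling in `TableReader.scala`.

    import scala.collection.JavaConverters._
    import org.apache.hadoop.hive.serde2.objectinspector.{StructField, StructObjectInspector}

    // Sketch only: resolve field refs against a partition's inspector,
    // yielding a null ref for any attribute the partition lacks.
    def resolveFieldRefs(
        soi: StructObjectInspector,
        nonPartitionKeyAttrs: Seq[(String, Int)]): Seq[(StructField, Int)] = {
      // Field names the partition's StructObjectInspector actually exposes.
      val allStructFieldNames =
        soi.getAllStructFieldRefs.asScala.map(_.getFieldName).toSet

      nonPartitionKeyAttrs.map { case (name, ordinal) =>
        // Guard the lookup: calling getStructFieldRef on a missing field
        // is what caused the reported error, so fall back to a null ref
        // and let the row filler emit null for that column.
        if (allStructFieldNames.contains(name)) {
          soi.getStructFieldRef(name) -> ordinal
        } else {
          (null: StructField) -> ordinal
        }
      }
    }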

