Github user HeartSaVioR commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22282#discussion_r214618743
  
    --- Diff: 
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaRecordToUnsafeRowConverter.scala
 ---
    @@ -44,6 +44,11 @@ private[kafka010] class KafkaRecordToUnsafeRowConverter {
           5,
           DateTimeUtils.fromJavaTimestamp(new 
java.sql.Timestamp(record.timestamp)))
         rowWriter.write(6, record.timestampType.id)
    +    val keys = record.headers.toArray.map(_.key())
    --- End diff ---
    
    Might be better to define a new local value for `record.headers.toArray`, 
because each call creates a new array when `headers` is not empty. It also 
guarantees a consistent view when extracting keys and values, though we know 
`headers` is unlikely to be modified during this.
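
    The suggestion can be sketched as below. `Header` here is a minimal 
stand-in for Kafka's `org.apache.kafka.common.header.Header` (which exposes 
`key()` and `value()` accessors), since the real converter depends on Spark 
and Kafka internals:

```scala
// Minimal stand-in for org.apache.kafka.common.header.Header; the real
// interface comes from kafka-clients and exposes key() and value().
case class Header(k: String, v: Array[Byte]) {
  def key(): String = k
  def value(): Array[Byte] = v
}

object HeadersExample {
  def main(args: Array[String]): Unit = {
    val headers: Iterable[Header] =
      Seq(Header("h1", Array[Byte](1)), Header("h2", Array[Byte](2)))

    // Before: each toArray call allocates a fresh array, and the two
    // traversals could in principle observe different contents.
    //   val keys   = headers.toArray.map(_.key())
    //   val values = headers.toArray.map(_.value())

    // After: materialize once, then extract keys and values from the
    // same snapshot -- one allocation, one consistent view.
    val headerArray = headers.toArray
    val keys = headerArray.map(_.key())
    val values = headerArray.map(_.value())

    println(keys.mkString(","))  // h1,h2
  }
}
```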


---
