[ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=397796&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-397796
 ]

ASF GitHub Bot logged work on HIVE-21218:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 04/Mar/20 19:17
            Start Date: 04/Mar/20 19:17
    Worklog Time Spent: 10m 
      Work Description: davidov541 commented on pull request #933: HIVE-21218: 
Adding support for Confluent Kafka Avro message format
URL: https://github.com/apache/hive/pull/933#discussion_r387878744
 
 

 ##########
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##########
 @@ -133,12 +134,40 @@
      Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema is empty Can not go further");
       Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
       LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-      bytesConverter = new AvroBytesConverter(schema);
+      bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
     } else {
       bytesConverter = new BytesWritableConverter();
     }
   }
 
+  enum BytesConverterType {
+    CONFLUENT,
+    SKIP,
+    NONE;
+
+    static BytesConverterType fromString(String value) {
+      try {
+        return BytesConverterType.valueOf(value.trim().toUpperCase());
+      } catch (Exception e) {
+        return NONE;
+      }
+    }
+  }
+
+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties tbl) {
+    String avroBytesConverterProperty = tbl.getProperty(
+        AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE.getPropName(),
+        BytesConverterType.NONE.toString());
+    BytesConverterType avroByteConverterType =
+        BytesConverterType.fromString(avroBytesConverterProperty);
+    Integer avroSkipBytes =
+        Integer.getInteger(tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_SKIP_BYTES
 
 Review comment:
   I think I'm confused about what you're asking for, then. The initialize function takes in a java.util.Properties object containing the properties set for the SerDe in the table's DDL. It reads a few values from that object and then passes it to getByteConverterForAvroDelegate, where it is also used in the code added here. The usage of the Properties object here matches what is being done in initialize, and matches what I would expect. These calls aren't pulling JVM system properties, or at least are not necessarily doing so; they read from the Properties object passed to us.
   
   Does that make sense, or am I way off base?
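 
   For readers following the system-property question above, a small standalone sketch of how the two JDK lookups differ may help. The property key below is illustrative, not taken from the patch: java.util.Properties.getProperty reads only from that Properties instance, Integer.parseInt parses a string, while Integer.getInteger treats its argument as the *name* of a JVM system property.
 
 ```java
 import java.util.Properties;
 
 public class PropertyLookupDemo {
     public static void main(String[] args) {
         Properties tbl = new Properties();
         // Illustrative key, standing in for a SerDe table property.
         tbl.setProperty("avro.serde.skip.bytes", "5");
 
         // Properties.getProperty reads from this Properties object only.
         String raw = tbl.getProperty("avro.serde.skip.bytes");  // "5"
 
         // Integer.parseInt parses the string value directly.
         int skipBytes = Integer.parseInt(raw);                  // 5
 
         // Integer.getInteger treats its argument as the NAME of a JVM
         // system property, so "5" is looked up via System.getProperty
         // and yields null unless a -D5=... flag was passed to the JVM.
         Integer fromSystem = Integer.getInteger(raw);
 
         System.out.println(raw + " " + skipBytes + " " + fromSystem);
     }
 }
 ```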
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 397796)
    Time Spent: 7h 10m  (was: 7h)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> -----------------------------------------------------------------------
>
>                 Key: HIVE-21218
>                 URL: https://issues.apache.org/jira/browse/HIVE-21218
>             Project: Hive
>          Issue Type: Bug
>          Components: kafka integration, Serializers/Deserializers
>    Affects Versions: 3.1.1
>            Reporter: Milan Baran
>            Assignee: David McGinnis
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, 
> HIVE-21218.4.patch, HIVE-21218.5.patch, HIVE-21218.patch
>
>          Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A],
> the Confluent Avro serializer uses a proprietary format for the Kafka value: 
> <magic_byte 0x00><4 bytes of schema ID><regular Avro bytes for an object that 
> conforms to the schema>.
> This format causes no problem for the Confluent Kafka deserializer, which 
> respects the format. For the Hive Kafka handler, however, it is a bit of a 
> problem to correctly deserialize the Kafka value, because Hive uses a custom 
> deserializer from bytes to objects and ignores the Kafka consumer ser/deser 
> classes provided via table properties.
> It would be nice to support the Confluent format with the magic byte.
> It would also be great to support Schema Registry.
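
The framing described above can be sketched as a small header reader. This is an illustration of the wire format only, assuming a 1-byte magic value of 0x00 followed by a 4-byte big-endian schema ID; the class and method names are hypothetical, not part of the Hive patch:

```java
import java.nio.ByteBuffer;

public class ConfluentWireFormat {
    // Reads the schema ID from a Confluent-framed Kafka value:
    // <magic byte 0x00><4-byte big-endian schema ID><Avro payload>.
    static int readSchemaId(byte[] kafkaValue) {
        ByteBuffer buf = ByteBuffer.wrap(kafkaValue);
        byte magic = buf.get();
        if (magic != 0x00) {
            throw new IllegalArgumentException("Unknown magic byte: " + magic);
        }
        // ByteBuffer defaults to big-endian, matching the wire format.
        return buf.getInt();
    }

    public static void main(String[] args) {
        // A value framed with schema ID 42, followed by (fake) Avro bytes.
        byte[] value = {0x00, 0, 0, 0, 42, 0x02, 0x06};
        System.out.println(readSchemaId(value));  // prints 42
    }
}
```

The remaining bytes after the 5-byte header would then be handed to a regular Avro datum reader using the schema fetched for that ID.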



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
