[ 
https://issues.apache.org/jira/browse/AVRO-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14488033#comment-14488033
 ] 

Ryan Blue commented on AVRO-1661:
---------------------------------

If you decide to store the schema in the message body, please don't create a 
one-off format to do it! Use a fixed Avro schema for the payload and headers, 
as Flume does. Something like this:

{code:javascript}
{ "name": "KafkaMessage",
  "type": "record",
  "fields": [
    { "name": "headers", "type": { "type": "map", "values": "string" } },
    { "name": "payload", "type": "bytes" }
  ]
}
{code}
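To illustrate the envelope idea, here is a minimal sketch in Java of wrapping a payload and its headers with that `KafkaMessage` schema and reading them back. The class and method names, and the `writer.schema.fingerprint` header key, are hypothetical; the consumer would use whatever headers it stores to locate the payload's writer schema.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class KafkaMessageEnvelope {
    // The fixed envelope schema: a header map plus raw payload bytes.
    static final Schema ENVELOPE = new Schema.Parser().parse(
        "{ \"name\": \"KafkaMessage\", \"type\": \"record\", \"fields\": ["
      + "  { \"name\": \"headers\", \"type\": { \"type\": \"map\", \"values\": \"string\" } },"
      + "  { \"name\": \"payload\", \"type\": \"bytes\" } ] }");

    // Serialize headers and payload into envelope bytes.
    static byte[] wrap(Map<String, String> headers, byte[] payload) throws Exception {
        GenericRecord msg = new GenericData.Record(ENVELOPE);
        msg.put("headers", headers);
        msg.put("payload", ByteBuffer.wrap(payload));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(ENVELOPE).write(msg, enc);
        enc.flush();
        return out.toByteArray();
    }

    // Deserialize the envelope; the headers can then tell the consumer
    // which schema was used to write the payload.
    static GenericRecord unwrap(byte[] bytes) throws Exception {
        BinaryDecoder dec = DecoderFactory.get().binaryDecoder(bytes, null);
        return new GenericDatumReader<GenericRecord>(ENVELOPE).read(null, dec);
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> headers = new HashMap<>();
        headers.put("writer.schema.fingerprint", "example-fingerprint"); // hypothetical key
        GenericRecord back = unwrap(wrap(headers, new byte[] {1, 2, 3}));
        System.out.println(back.get("headers"));
    }
}
```

The envelope schema never changes, so every consumer can always decode the outer record; only the payload's schema evolves.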

> Schema Evolution not working 
> -----------------------------
>
>                 Key: AVRO-1661
>                 URL: https://issues.apache.org/jira/browse/AVRO-1661
>             Project: Avro
>          Issue Type: Bug
>          Components: java
>    Affects Versions: 1.7.6, 1.7.7
>         Environment: Ubuntu 14.10
>            Reporter: Nicolas PHUNG
>              Labels: avsc, evolution, schema
>
> This is the Avro Schema (OLD) I was using to write Avro binary data before:
> {noformat}
> {
>     "namespace": "com.hello.world",
>     "type": "record",
>     "name": "Toto",
>     "fields": [
>         {
>             "name": "a",
>             "type": [
>                 "string",
>                 "null"
>             ]
>         },
>         {
>             "name": "b",
>             "type": "string"
>         }
>     ]
> }
> {noformat}
> This is the Avro Schema (NEW) I'm using to read the Avro binary data :
> {noformat}
> {
>     "namespace": "com.hello.world",
>     "type": "record",
>     "name": "Toto",
>     "fields": [
>         {
>             "name": "a",
>             "type": [
>                 "string",
>                 "null"
>             ]
>         },
>         {
>             "name": "b",
>             "type": "string"
>         },
>         {
>             "name": "c",
>             "type": "string",
>             "default": "na"
>         }
>     ]
> }
> {noformat}
> However, I can't read the old data with the new schema. I get the 
> following error:
> {noformat}
> 15/04/08 17:32:22 ERROR executor.Executor: Exception in task 0.0 in stage 3.0 
> (TID 3)
> java.io.EOFException
>       at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:473)
>       at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:128)
>       at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:259)
>       at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:272)
>       at 
> org.apache.avro.io.ValidatingDecoder.readString(ValidatingDecoder.java:113)
>       at 
> org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:353)
>       at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:157)
>       at 
> org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193)
>       at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183)
>       at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151)
>       at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
>       at com.miguno.kafka.avro.AvroDecoder.fromBytes(AvroDecoder.scala:31)
> {noformat}
> From my understanding, I should be able to read the old data with the new 
> schema, since the added field has a default value. But it doesn't seem to 
> work. Am I doing something wrong?
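One detail worth noting: the stack trace goes through `ValidatingDecoder`, not `ResolvingDecoder`, which suggests the consumer decodes with the new schema alone. Avro's schema resolution needs both the writer's schema and the reader's schema. A minimal sketch (class and method names are illustrative) of reading old-schema bytes with the new schema, using the two schemas from the issue:

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class SchemaEvolutionDemo {
    static final Schema OLD = new Schema.Parser().parse(
        "{\"namespace\":\"com.hello.world\",\"type\":\"record\",\"name\":\"Toto\",\"fields\":["
      + "{\"name\":\"a\",\"type\":[\"string\",\"null\"]},"
      + "{\"name\":\"b\",\"type\":\"string\"}]}");
    static final Schema NEW = new Schema.Parser().parse(
        "{\"namespace\":\"com.hello.world\",\"type\":\"record\",\"name\":\"Toto\",\"fields\":["
      + "{\"name\":\"a\",\"type\":[\"string\",\"null\"]},"
      + "{\"name\":\"b\",\"type\":\"string\"},"
      + "{\"name\":\"c\",\"type\":\"string\",\"default\":\"na\"}]}");

    // Serialize a record with the OLD (writer's) schema.
    static byte[] writeOld(String a, String b) throws Exception {
        GenericRecord rec = new GenericData.Record(OLD);
        rec.put("a", a);
        rec.put("b", b);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(OLD).write(rec, enc);
        enc.flush();
        return out.toByteArray();
    }

    // Read with BOTH schemas: writer's (OLD) and reader's (NEW).
    // new GenericDatumReader<>(NEW) alone makes the decoder look for field "c"
    // in bytes that were never written with it, hence the EOFException.
    static GenericRecord readEvolved(byte[] bytes) throws Exception {
        BinaryDecoder dec = DecoderFactory.get().binaryDecoder(bytes, null);
        return new GenericDatumReader<GenericRecord>(OLD, NEW).read(null, dec);
    }

    public static void main(String[] args) throws Exception {
        GenericRecord evolved = readEvolved(writeOld("hello", "world"));
        System.out.println(evolved.get("c")); // the default "na" fills in
    }
}
```

The two-argument `GenericDatumReader(writer, reader)` constructor is what triggers resolution; the reader then supplies `"na"` for the missing field `c` instead of running off the end of the buffer.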



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
