[ 
https://issues.apache.org/jira/browse/AVRO-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16645586#comment-16645586
 ] 

ASF GitHub Bot commented on AVRO-1335:
--------------------------------------

vimota commented on a change in pull request #241: AVRO-1335: Adds C++ support 
for default values in schema serializatio…
URL: https://github.com/apache/avro/pull/241#discussion_r224252562
 
 

 ##########
 File path: lang/c++/api/NodeImpl.hh
 ##########
 @@ -538,6 +557,16 @@ inline NodePtr resolveSymbol(const NodePtr &node)
     return symNode->getNode();
 }
 
+template< typename T >
+inline std::string intToHex( T i )
 
 Review comment:
   Done.
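 
 For reference, a helper with this signature is typically used to render a byte as a zero-padded hex escape when serializing binary default values into JSON. A minimal sketch of such a function (an illustration under that assumption, not necessarily the code merged in the PR):
 
 {code:title=intToHex sketch}
 #include <iomanip>
 #include <sstream>
 #include <string>
 
 // Render an integral value as a zero-padded, lowercase hex string,
 // e.g. 0x0b for a one-byte type becomes "0b". The unary + promotes
 // char-sized types so they print as numbers rather than characters.
 template <typename T>
 inline std::string intToHex(T i) {
     std::stringstream stream;
     stream << std::setfill('0') << std::setw(sizeof(T) * 2)
            << std::hex << +i;
     return stream.str();
 }
 {code}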

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> C++ should support field default values
> ---------------------------------------
>
>                 Key: AVRO-1335
>                 URL: https://issues.apache.org/jira/browse/AVRO-1335
>             Project: Avro
>          Issue Type: Improvement
>          Components: c++
>    Affects Versions: 1.7.4
>            Reporter: Bin Guo
>            Assignee: Victor Mota
>            Priority: Major
>         Attachments: AVRO-1335.patch
>
>
> We found that resolvingDecoder does not provide bidirectional compatibility 
> between different versions of a schema.
> This is especially visible with records, for example:
> {code:title=First schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> {code:title=Second schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     },
>                     {
>                         "name": "Version2",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> Say node A knows only the first schema and node B knows the second schema, 
> which has an additional field.
> Data generated by node B can be resolved against the first schema because the 
> extra field is marked as skipped.
> But data generated by node A cannot be resolved against the second schema; it 
> throws the exception *"Don't know how to handle excess fields for reader."*
> This happens because the data is resolved exactly according to the 
> auto-generated codec_traits, which try to read the excess field.
> We also cannot simply ignore the excess field in the record, since the data 
> after the troublesome record still needs to be resolved.
> This problem has blocked us for a very long time.
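> The improvement tracked here is to carry field defaults through the C++ schema 
> handling so that resolution can fall back on them: once defaults are serialized 
> and honored, the second schema can declare a default for the new field and data 
> written with the first schema resolves cleanly. A minimal sketch of the reader 
> side (field names taken from the schemas above; the resolving-decoder calls are 
> the standard C++ Avro entry points, shown as an illustration rather than the 
> patch itself):
> {code:title=Resolving old data against the new schema (sketch)}
> // Reader-side field declared with a default so that old writers
> // which omit it can still be resolved:
> //   { "name": "Version2", "type": "string", "default": "" }
> #include <avro/Decoder.hh>
> #include <avro/Generic.hh>
> #include <avro/Stream.hh>
> #include <avro/ValidSchema.hh>
>
> avro::GenericDatum readWithDefaults(const avro::ValidSchema &writerSchema,
>                                     const avro::ValidSchema &readerSchema,
>                                     avro::InputStream &in) {
>     // resolvingDecoder applies schema resolution: writer-only fields are
>     // skipped, and reader-only fields are expected to be filled from their
>     // defaults once default-value support is in place.
>     avro::DecoderPtr d = avro::resolvingDecoder(writerSchema, readerSchema,
>                                                 avro::binaryDecoder());
>     d->init(in);
>     avro::GenericDatum datum(readerSchema);
>     avro::GenericReader::read(*d, datum, readerSchema);
>     return datum;
> }
> {code}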



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
