[ https://issues.apache.org/jira/browse/AVRO-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16143358#comment-16143358 ]
Thiruvalluvan M. G. commented on AVRO-1335:
-------------------------------------------

The pull request looks good. However, there are a couple of minor issues:
* Line 565: `setw` should use {{2 * sizeof(T)}}. The `sizeof` operator gives the number of bytes in the type, and each byte takes two hexadecimal digits.
* Style: `int_to_hex` could be renamed to `intToHex` to make it consistent with the rest of the code in the file.
* The following loop can be replaced with `memcpy`:
{code}
for (int j = 0; j < kByteStringSize; j++) {
    s[i * kByteStringSize + j] = hex_string[j];
}
{code}

> C++ should support field default values
> ---------------------------------------
>
>                 Key: AVRO-1335
>                 URL: https://issues.apache.org/jira/browse/AVRO-1335
>             Project: Avro
>          Issue Type: Improvement
>          Components: c++
>    Affects Versions: 1.7.4
>            Reporter: Bin Guo
>      Attachments: AVRO-1335.patch
>
>
> We found that resolvingDecoder could not provide bidirectional compatibility
> between different versions of schemas, especially for records. For example:
> {code:title=First schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> {code:title=Second schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     },
>                     {
>                         "name": "Version2",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> Say node A knows only the first schema and node B knows the second schema,
> which has more fields.
> Any data generated by node B can be resolved with the first schema because the
> additional field is marked as skipped.
> But data generated by node A cannot be resolved with the second schema; it
> throws the exception *"Don't know how to handle excess fields for reader."*
> This happens because the data is resolved exactly according to the
> auto-generated codec_traits, which tries to read the excess field.
> The problem is that we cannot simply ignore the excess field in the record,
> since the data after the troublesome record also needs to be resolved.
> This problem has held us up for a very long time.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)