[ https://issues.apache.org/jira/browse/AVRO-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12999483#comment-12999483 ]

Doug Cutting commented on AVRO-762:
-----------------------------------

I'm getting the following 2 test failures on Ubuntu 10.10:

{code}
TEST /home/cutting/src/avro/trunk/lang/c/tests/schema_tests/fail/record_with_nonarray_fields.../bin/bash: line 5: 25436 Segmentation fault      ${dir}$tst
FAIL: test_avro_schema
{code}
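
For context, test_avro_schema appears to walk the schema_tests/fail inputs and expects each one to be rejected cleanly rather than crash.  A minimal sketch of the kind of call that case exercises (the schema text here is hypothetical; the actual record_with_nonarray_fields file may differ):

{code}
/* Sketch only: a record whose "fields" attribute is not a JSON array
 * should make avro_schema_from_json return an error, not segfault as
 * in the trace above.  The schema string is an assumed example. */
#include <avro.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *json =
	    "{\"type\": \"record\", \"name\": \"r\", \"fields\": \"oops\"}";
	avro_schema_t schema = NULL;
	avro_schema_error_t error;

	int rc = avro_schema_from_json(json, strlen(json), &schema, &error);
	if (rc == 0) {
		fprintf(stderr, "invalid schema unexpectedly accepted\n");
		avro_schema_decref(schema);
		return 1;
	}
	printf("rejected as expected (rc=%d)\n", rc);
	return 0;
}
{code}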

{code}
+ ../libtool execute valgrind --leak-check=full -q test_avro_data
+ grep -E ^==[0-9]+== 
==25529== 1,538 (32 direct, 1,506 indirect) bytes in 1 blocks are definitely lost in loss record 18 of 18
==25529==    at 0x4025BD3: malloc (vg_replace_malloc.c:236)
==25529==    by 0x4025C5D: realloc (vg_replace_malloc.c:525)
==25529==    by 0x8049888: test_allocator (test_avro_data.c:58)
==25529==    by 0x40319CB: avro_schema_record (schema.c:560)
==25529==    by 0x40329C0: avro_schema_from_json_t (schema.c:853)
==25529==    by 0x4032D46: avro_schema_from_json (schema.c:1104)
==25529==    by 0x804A3A4: test_nested_record (test_avro_data.c:402)
==25529==    by 0x804B09D: main (test_avro_data.c:661)
==25529== 
+ [ 0 -eq 0 ]
+ exit 1
FAIL: test_valgrind
{code}
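
For what it's worth, the trace points at a record schema built by avro_schema_from_json in test_nested_record (test_avro_data.c:402) that is never freed.  A minimal sketch, with a hypothetical schema string, of the usual lifetime pairing for a schema parsed from JSON; whether the missing decref is in the test or inside the library's own reference counting isn't clear from the trace alone:

{code}
/* Sketch only (hypothetical schema text): a schema parsed from JSON is
 * reference-counted and should be released with avro_schema_decref once
 * the caller is done with it.  The valgrind trace above shows a nested
 * record schema that never gets released. */
#include <avro.h>
#include <string.h>

int main(void)
{
	const char *json =
	    "{\"type\": \"record\", \"name\": \"outer\", \"fields\": ["
	    "  {\"name\": \"inner\", \"type\":"
	    "   {\"type\": \"record\", \"name\": \"inner_t\", \"fields\":"
	    "    [{\"name\": \"n\", \"type\": \"int\"}]}}]}";
	avro_schema_t schema = NULL;
	avro_schema_error_t error;

	if (avro_schema_from_json(json, strlen(json), &schema, &error)) {
		return 1;
	}

	/* ... use the schema ... */

	avro_schema_decref(schema);  /* releases the record and its fields */
	return 0;
}
{code}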


> Better schema resolution
> ------------------------
>
>                 Key: AVRO-762
>                 URL: https://issues.apache.org/jira/browse/AVRO-762
>             Project: Avro
>          Issue Type: New Feature
>          Components: c
>            Reporter: Douglas Creager
>            Assignee: Douglas Creager
>            Priority: Blocker
>             Fix For: 1.5.0
>
>         Attachments: 0001-Better-schema-resolution.patch, 
> 0002-Promotion-of-values-during-schema-resolution.patch, 
> 0003-Recursive-schema-resolution.patch
>
>
> I've been working on a pretty major patch that changes the way the C library 
> implements schema resolution.  Before, we would compare the writer and reader 
> schemas each time we tried to read a record from an Avro file, which is a fair 
> bit of wasted effort.  The approach I'm taking with the new implementation is 
> to split schema resolution and binary parsing into separate operations.  
> There's a new "consumer" API, which defines a set of callbacks for processing 
> Avro data that conforms to a schema.  The new avro_consume_binary function 
> reads binary-encoded Avro data from a buffer or file and passes that data 
> into a consumer instance.  Each consumer instance is associated with the 
> writer schema of the data that it expects to process.
>
> Schema resolution is now implemented in the new avro_resolver_new function, 
> which returns a consumer instance that knows how to translate from the writer 
> schema to the reader schema.  As the resolver receives data via the consumer 
> API, it fills in the contents of a destination avro_datum_t (which should be 
> an instance of the reader schema).
>
> This work isn't complete yet: I still have to implement promotion (int->long 
> and friends) and add support for recursive schemas (via the AVRO_LINK schema 
> type).  But I wanted to get the patch out there for people to view and test 
> in the meantime.  This patch depends on a few of my other patches that 
> haven't made it into SVN yet; if you want to test the code without applying 
> those patches yourself, I have a tracking branch on 
> [github|https://github.com/dcreager/avro/tree/resolution].
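
A rough usage sketch of how the pieces described above might fit together.  The exact signatures of avro_resolver_new and avro_consume_binary are defined by the attached patches, so the argument lists, the avro_consumer_t type name, the cleanup call, and the error handling below are assumptions for illustration only:

{code}
/* Hypothetical sketch only: the real consumer API lives in the attached
 * patches.  Assumed signatures (not confirmed):
 *
 *   avro_consumer_t *avro_resolver_new(avro_schema_t writer_schema,
 *                                      avro_schema_t reader_schema);
 *   int avro_consume_binary(avro_reader_t in,
 *                           avro_consumer_t *consumer,
 *                           void *user_data);
 */
#include <avro.h>

int read_resolved(avro_reader_t in,
                  avro_schema_t writer_schema,
                  avro_schema_t reader_schema,
                  avro_datum_t dest)
{
	/* Build the resolver once, up front; it knows how to translate
	   writer-schema data into the reader schema. */
	avro_consumer_t *resolver =
	    avro_resolver_new(writer_schema, reader_schema);
	if (resolver == NULL) {
		return -1;
	}

	/* Parse the binary-encoded data and let the resolver fill in the
	   destination datum (an instance of the reader schema).  Passing
	   dest as the user-data argument is an assumption. */
	int rc = avro_consume_binary(in, resolver, dest);

	avro_consumer_free(resolver);  /* hypothetical cleanup call */
	return rc;
}
{code}

The point of the split is that the resolver can be built once per writer/reader schema pair and reused for every record, instead of re-matching the schemas on each read.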

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
