Yes, I think it's inevitable that there is some kind of registry or database of
schemas, since a fingerprint by itself does not contain enough information to
be able to decode a record. https://issues.apache.org/jira/browse/AVRO-1704
calls this a SchemaStore.
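To make that concrete: a SchemaStore can be as simple as a map from fingerprint to schema text. Here is a self-contained sketch; the `SchemaStore` class name echoes AVRO-1704 but its API is just an illustration, and the fingerprint routine follows the CRC-64-AVRO algorithm given in the Avro spec (a real implementation would fingerprint the schema's Parsing Canonical Form rather than the raw JSON):

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Illustration of a fingerprint -> schema registry ("SchemaStore").
// The fingerprint routine is CRC-64-AVRO as described in the Avro spec.
public class SchemaStore {
    private static final long EMPTY = 0xc15d213aa4d7a795L; // CRC-64-AVRO initial value
    private static final long[] FP_TABLE = new long[256];
    static {
        for (int i = 0; i < 256; i++) {
            long fp = i;
            for (int j = 0; j < 8; j++) {
                fp = (fp >>> 1) ^ (EMPTY & -(fp & 1L));
            }
            FP_TABLE[i] = fp;
        }
    }

    /** CRC-64-AVRO fingerprint of the given bytes. */
    public static long fingerprint64(byte[] buf) {
        long fp = EMPTY;
        for (byte b : buf) {
            fp = (fp >>> 8) ^ FP_TABLE[(int) (fp ^ b) & 0xff];
        }
        return fp;
    }

    private final Map<Long, String> schemasByFingerprint = new HashMap<>();

    /** Registers a schema (as JSON text) and returns its fingerprint. */
    public long register(String schemaJson) {
        long fp = fingerprint64(schemaJson.getBytes(StandardCharsets.UTF_8));
        schemasByFingerprint.put(fp, schemaJson);
        return fp;
    }

    /** Looks up the schema for a fingerprint, or null if unknown. */
    public String lookup(long fingerprint) {
        return schemasByFingerprint.get(fingerprint);
    }
}
```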
In some cases, I imagine an
Yes, as a generic cross-platform solution this makes a lot of sense. It's
easy to build, and the consumer can stop consuming messages as soon as the
fingerprint changes.
In my corporate reality I see that a source of messages puts them into
Kafka, then several consumers read and deserialize them in
Including a schema fingerprint at the start
1) reuses stuff we have
2) gives a language independent notion of compatibility
3) doesn't bind how folks get stuff in/out of the single record form.
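As a rough illustration of that framing, a single record could be laid out as an 8-byte fingerprint followed by the serialized payload. The class and method names below are made up for the sketch, and the plain 8-byte prefix just stands in for whatever fingerprint (e.g. Avro's CRC-64) and header the final proposal settles on:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Illustration only: frame a serialized record as [8-byte schema fingerprint][payload].
// The consumer checks the fingerprint before decoding, so it can stop (or look up a
// new schema) as soon as the fingerprint changes.
public class SingleRecordFraming {
    /** Prepends the schema fingerprint to an already-serialized Avro payload. */
    public static byte[] frame(long schemaFingerprint, byte[] payload) {
        return ByteBuffer.allocate(8 + payload.length)
                .putLong(schemaFingerprint)
                .put(payload)
                .array();
    }

    /** Reads the fingerprint prefix; throws if it is not the expected schema. */
    public static byte[] unframe(long expectedFingerprint, byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        long fp = buf.getLong();
        if (fp != expectedFingerprint) {
            throw new IllegalStateException("Unknown schema fingerprint: " + fp);
        }
        return Arrays.copyOfRange(framed, 8, framed.length);
    }
}
```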
--
Sean Busbey
On Dec 22, 2015 06:52, "Niels Basjes" wrote:
> I was not clear
Thanks for pointing this out.
This is exactly what I was working on.
The way I solved the 'does the schema match' question at work is by
requiring that all schemas start with a single text field "schema
classname" containing the full class name of the class that was used to
generate it.
That way we
I was not clear enough in my previous email.
What I meant is to 'wrap' the application schema in a serialization wrapper
schema that has a field indicating the "schema classname".
That (generic setup) combined with some generated code in the schema
classes should yield a solution that supports
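A wrapper schema along those lines might look something like this (the record and field names are made up for illustration; the payload field here holds the application record's serialized bytes, though the application schema could also be nested directly):

```json
{
  "type": "record",
  "name": "SerializationWrapper",
  "fields": [
    {"name": "schemaClassname", "type": "string"},
    {"name": "payload", "type": "bytes"}
  ]
}
```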
Niels,
Having methods like this sounds like a good idea to me. I've had
to write those methods several times!
The idea is also related to AVRO-1704 [1], which is a suggestion to
standardize the encoding that is used for single records. Some projects
have been embedding the schema
Guys? Any opinions on this idea?
On 18 Dec 2015 14:01, "Niels Basjes" wrote:
> Hi,
>
> I'm working on a project where I'm putting Avro records into Kafka and at
> the other end pull them out again.
> For that purpose I wrote two methods 'toBytes' and 'fromBytes' in a
> separate
Hi,
I'm working on a project where I'm putting Avro records into Kafka and at
the other end pull them out again.
For that purpose I wrote two methods 'toBytes' and 'fromBytes' in a
separate class (see below).
I see this as the type of problem many developers run into.
Would it be a good idea to