I note that the enum type appears to be missing the specification of the
default attribute.
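For reference, the enum `default` attribute (added to the specification in Avro 1.9) is used during schema resolution when the reader meets a writer symbol it does not know. A minimal sketch, with a hypothetical `Suit` enum, shown as a Python dict for illustration:

```python
# Illustrative enum schema carrying the optional "default" attribute.
# "Suit" and its symbols are made up for this sketch; the spec only
# requires that the default be one of the declared symbols.
enum_schema = {
    "type": "enum",
    "name": "Suit",
    "symbols": ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"],
    # Fallback symbol applied when the writer's symbol is unknown to the reader:
    "default": "SPADES",
}

# Sanity check the spec's constraint: the default must be a declared symbol.
assert enum_schema["default"] in enum_schema["symbols"]
```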
On Wed, 15 May 2024 at 08:26, Martin Grigorov wrote:
> Hi Clemens,
>
> What is the difference between your document and the specification [1] ?
> I haven't read it completely but it looks very similar to
Hello,
We are building a data processing system that has the following required
properties:
- Data is produced/consumed in JSON format
- These JSON documents must always adhere to a schema
- The schema must be defined in JSON also
- It should be possible to evolve schemas and verify s
There is perhaps a little ambiguity in the spec:
From https://avro.apache.org/docs/current/spec.html#names
Records, enums and fixed are named types. Each has a fullname that is
composed of two parts: a name and a namespace. *Equality of names is
defined on the fullname.*
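To make the fullname rule concrete, here is a simplified sketch (it ignores the spec's extra rules for dotted names and enclosing namespaces): two records with the same name but different namespaces have different fullnames, and are therefore different types.

```python
def fullname(schema: dict) -> str:
    """Simplified fullname of a named type: "namespace.name".

    A sketch only; the real rules also cover names that already
    contain dots and namespaces inherited from an enclosing type.
    """
    name = schema["name"]
    namespace = schema.get("namespace")
    if namespace and "." not in name:
        return f"{namespace}.{name}"
    return name

# Hypothetical schemas: same name, different namespaces -> distinct types.
a = {"type": "record", "name": "Thing", "namespace": "com.example.v1", "fields": []}
b = {"type": "record", "name": "Thing", "namespace": "com.example.v2", "fields": []}

assert fullname(a) == "com.example.v1.Thing"
assert fullname(a) != fullname(b)
```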
>From https://avro.apac
> In other words, clients (or servers) could happily use both classes while
> talking to different other peers (supporting, for instance, protocol
> negotiation).
>
> This is the most important reason for why I think the rule you mention is
> kind of strange.
>
> What do you other
set schema", where all schemas can be read (i.e.
> typically, the "superset schema" is read-compatible with all different
> versions, typically since the superset schema is the latest of these
> schemas).
>
> BR
>
> /Anders
>
>
> On 2016-12-02 17:01, El
Hello,
Today we (Hotels.com) open sourced a library for performing strong,
schema-based conversions between arbitrarily structured 'natural' JSON documents
and Avro. 'Jasvorno' allows the conversion from any JSON document to Avro
and back, while ensuring conformity to a user defined schema. It rem
Hi,
I've been attempting to understand the implementation of Avro schema
compatibility rules and am slightly confused by the structure of the code.
It seems that there are at least two possible entry points:
- org.apache.avro.SchemaCompatibility.checkReaderWriterCompatibility(Schema,
Schema
!= SchemaCompatibilityType.COMPATIBLE) {
throw new SchemaValidationException(readUsing, writtenWith);
}
}
Or am I missing something fundamental?
Thanks,
Elliot.
On 17 February 2017 at 12:27, Elliot West wrote:
> Hi,
>
> I've been attempting to understand the implementation
definitive judgement on the matter, simply stating that 'an
implementation may optionally use aliases'. Should perhaps this be
configurable in the aforementioned implementations so that the user can
decide and also have a chance of obtaining consistent behaviour?
Elliot.
On 22 February 2017
I think what you're seeing is the value type only, which may contain null
and is thus represented as an Avro union of null and string. The key type
of an Avro map is assumed to be string, according to the specification.
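Sketched as an Avro map schema (embedded here as a Python dict for illustration): only the value type is declared, because keys are always strings.

```python
# An Avro map schema declares only its value type; keys are implicitly
# strings. A nullable string value is expressed as a union:
map_schema = {
    "type": "map",
    "values": ["null", "string"],  # union of null and string
}

# The "null"/"string" union is what shows up when inspecting the value type.
assert "null" in map_schema["values"]
assert "string" in map_schema["values"]
```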
On Sat, 4 Mar 2017 at 13:37, Telco Phone wrote:
> I am trying to get to the k
Hello,
We at the Hotels.com Data Platform team have been using the features
provided by AVRO-1933 and AVRO-2003 in production for a while now. Given
that these issues have not yet been merged or released, we rolled the
functionality into a library for our use. We also developed a fluent API to
sim
Try:
{
"TMS_ID" : "asdf"
}
On Wed, 9 Aug 2017 at 19:22, Manish Mehndiratta
wrote:
> Hi Team,
>
> I stripped out my avro schema file and json file to only one element and
> yet it continues to give me the same error.
>
> Exception in thread "main" *org.apache.avro.AvroTypeException: Expected
>
"type" : "record",
"name" : "DataModel",
"fields" : [
{ "name" : "TMS_ID", "type" : "string", "default" : "NONE" }
]
}
]
}
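The likely cause is that Avro field defaults apply at read time (schema resolution), not at write time: the JSON encoder still requires every declared field to be present in the datum. A stdlib-only sketch of that presence check (the `missing_fields` helper is hypothetical, not part of Avro):

```python
# The record schema from the thread, as a Python dict.
schema = {
    "type": "record",
    "name": "DataModel",
    "fields": [{"name": "TMS_ID", "type": "string", "default": "NONE"}],
}

def missing_fields(schema: dict, datum: dict) -> list:
    """Fields the writer must still supply; a default does not exempt them."""
    return [f["name"] for f in schema["fields"] if f["name"] not in datum]

assert missing_fields(schema, {}) == ["TMS_ID"]          # would fail to encode
assert missing_fields(schema, {"TMS_ID": "asdf"}) == []  # encodes fine
```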
On 9 August 2017 at 20:46, Manish Mehndir
Hi,
I understand that Avro supports circular type dependencies, and also schema
composition. However, I cannot seem to create a circular type
reference that spans multiple schema payloads. Consider this example:
*a.avsc*
{
"type": "record", "name": "a", "fields": [{
"name": "X",
me": "c", "fields": [{
> "name": "Z", "type": "a"
> }]
> }
>
> }]
> }
>
> }]
> }
>
>
> However this is not exactly easy to look at...
>
> I recommend using IDL to define/maintain s
A quick question: If the datum is valid in more than one schema, what is
the scenario where knowing the specific schema is necessary? Is it that the
equivalent schemas might evolve in a divergent manner over time or perhaps
that by targeting a specific schema you are wanting to convey some out of
b
A word of caution on the union type. You may find support for unions very
patchy if you are hoping to process records using well-known data
processing engines. We’ve been unable to usefully read union types in both
Apache Spark and Hive, for example. The simple null union construct is the
exception:
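The null union referred to is conventionally written like this (shown as a Python dict for illustration, with a hypothetical field name):

```python
# A field that is either null or a long: the one union shape most engines
# handle well, because it maps onto a plain nullable column. Per the spec,
# the default value must match the first branch of the union, hence
# "null" first and a default of None.
nullable_field = {"name": "count", "type": ["null", "long"], "default": None}

assert nullable_field["type"][0] == "null"
assert nullable_field["default"] is None
```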
The error in question is: READER_FIELD_MISSING_DEFAULT_VALUE,
location:/fields/0/type/fields/0
READER_FIELD_MISSING_DEFAULT_VALUE indicates that the reader requires a
default value on a field.
The field can be identified with the JSON pointer: /fields/0/type/fields/0
Applying the pointer to the re
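To show how such a pointer resolves, here is a minimal RFC 6901-style lookup against a hypothetical reader schema (the `Outer`/`Inner` names and the `added` field are made up for the sketch):

```python
def resolve_pointer(doc, pointer: str):
    """Resolve a JSON pointer (RFC 6901) against a parsed document.

    Minimal sketch: no ~0/~1 escape handling.
    """
    node = doc
    for token in pointer.lstrip("/").split("/"):
        node = node[int(token)] if isinstance(node, list) else node[token]
    return node

# Hypothetical reader schema with a nested record; the inner field has no
# "default", which is what READER_FIELD_MISSING_DEFAULT_VALUE reports.
reader = {
    "type": "record", "name": "Outer", "fields": [
        {"name": "inner", "type": {
            "type": "record", "name": "Inner", "fields": [
                {"name": "added", "type": "string"}  # no "default" attribute
            ],
        }},
    ],
}

field = resolve_pointer(reader, "/fields/0/type/fields/0")
assert field["name"] == "added" and "default" not in field
```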
Forwards compatibility permits us to update the writer schema independently
of the consumer. In particular, with record types we are free to add fields
with no breaking changes with respect to the consumer - a record is an
extensible type.
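The record case can be sketched with a toy resolver (a simplification of Avro's schema resolution, not the library's API): fields the reader does not know are dropped, and fields the writer did not supply take the reader's default.

```python
def resolve_record(reader_fields: list, writer_datum: dict) -> dict:
    """Sketch of record resolution: writer-only fields are ignored;
    reader-only fields take their default, or resolution fails."""
    out = {}
    for f in reader_fields:
        if f["name"] in writer_datum:
            out[f["name"]] = writer_datum[f["name"]]
        elif "default" in f:
            out[f["name"]] = f["default"]
        else:
            raise ValueError(f"no default for reader field {f['name']!r}")
    return out

# The writer added a field the reader does not know: it is simply dropped,
# so the writer evolved without breaking the consumer.
reader_fields = [{"name": "id", "type": "string"}]
datum = {"id": "42", "new_field": "extra"}
assert resolve_record(reader_fields, datum) == {"id": "42"}
```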
Avro includes two additional extensible types: enum and un