Hello!  It looks like the Python mechanism was modelled after the
wordy Java version, and I think the Java version was done that way to
avoid dropping everything into Schema.java.  It's tempting, because so
many operations start with a Schema.

I certainly have no objections to having a more pythonic version that
is closer to what developers expect!  I'm not a very good judge of
what is or isn't pythonic, however.

What sort of type checks do you think wouldn't be possible if the
is_compatible_with method were to be moved into a schema?

All my best, Ryan

On Fri, Apr 9, 2021 at 5:54 AM Subhash Bhushan
<[email protected]> wrote:
>
> The current Python implementation for compatibility is pretty detailed and
> rigorous, but it is structured as a standalone module. It duplicates
> aspects that are part of the `Schema` object and its subclasses.
>
> A compatibility check today looks like this:
>
>     result = ReaderWriterCompatibilityChecker().get_compatibility(
>         reader_schema, writer_schema)
>     assert result.compatibility is SchemaCompatibilityType.incompatible
>     assert message in result.messages
>     assert location in result.locations
>
> It would probably be better (and pythonic) to perform the compatibility
> check as part of the schema object itself. We could either reuse the
> existing `match` method:
>
>     reader_schema.match(writer_schema)
>
> or add a new `is_compatible_with` method:
>
>     reader_schema.is_compatible_with(writer_schema)
>
> The compatibility check can be declared as an abstract method in `schema.py`
> and overridden within each subclass. The implementation would be pythonic and
> would leverage the existing OOP structure.
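>
> A rough sketch of the shape I have in mind (the class and attribute
> names below are only illustrative, not the actual `avro.schema` API):
>
>     from abc import ABC, abstractmethod
>
>     class Schema(ABC):
>         @abstractmethod
>         def is_compatible_with(self, writer_schema):
>             """True if data written with writer_schema can be read with this schema."""
>
>     class PrimitiveSchema(Schema):
>         def __init__(self, type_name):
>             self.type = type_name
>
>         def is_compatible_with(self, writer_schema):
>             # Identical primitive types are compatible, plus the numeric
>             # promotions the spec allows (e.g. int -> long).
>             promotions = {
>                 "long": {"int"},
>                 "float": {"int", "long"},
>                 "double": {"int", "long", "float"},
>             }
>             return (writer_schema.type == self.type
>                     or writer_schema.type in promotions.get(self.type, set()))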
>
> But this may mean that some of the type checks in the current
> implementation would not carry over. I am not 100% sure they add
> value today.
>
> Would this be an acceptable change? If so, I will create a Jira ticket and
> take it up.
>
> Regards,
> Subhash.
