Hi all,

I've been doing some research on the decimal logical type and why the
scale of a decimal must be between zero and the precision of the type,
inclusive. The ticket https://issues.apache.org/jira/browse/AVRO-1402 has
a lot of discussion around the design of the type, but I haven't been able
to find any rationale for these limitations on the scale.

These limitations don't appear to align with existing conventions for
precision and scale in SQL numeric types, the JDBC API, and the Java
standard library's BigDecimal class. In those contexts, the precision must
be a positive number, but the scale can be any value--positive
(representing the number of digits after the decimal point), negative
(representing the number of trailing zeroes before an implicit decimal
point), or zero--and it is not bounded by the precision of the type.
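As a concrete illustration, here's a minimal Java sketch (the class name
and printed values in the comments are just for the example) showing that
BigDecimal accepts both a negative scale and a scale larger than the
precision:

    import java.math.BigDecimal;
    import java.math.BigInteger;

    public class ScaleDemo {
        public static void main(String[] args) {
            // Negative scale: the value is unscaledValue * 10^(-scale),
            // so unscaled value 12 with scale -3 represents 12000.
            BigDecimal negScale = new BigDecimal(BigInteger.valueOf(12), -3);
            System.out.println(negScale.toPlainString()); // 12000
            System.out.println(negScale.scale());         // -3
            System.out.println(negScale.precision());     // 2

            // Scale greater than precision: 0.00012 has precision 2
            // (the significant digits "12") but scale 5.
            BigDecimal small = new BigDecimal("0.00012");
            System.out.println(small.precision()); // 2
            System.out.println(small.scale());     // 5
        }
    }

Both of these values would be unrepresentable as an Avro decimal under the
current spec wording, as far as I can tell.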

The definitions of scale and precision appear to align across these
contexts, including the Avro spec, so I'm curious why the Avro spec alone
declares these limitations on what the scale of a decimal type can be.

Does anyone know why these restrictions exist? If not, would it be okay to
file a ticket to remove them from the spec and begin work on that?

Cheers,

Chris
