scovich commented on code in PR #523:
URL: https://github.com/apache/parquet-format/pull/523#discussion_r2414031848
##########
VariantShredding.md:
##########
@@ -65,12 +65,12 @@ The series of measurements `34, null, "n/a", 100` would be stored as:
Both `value` and `typed_value` are optional fields used together to encode a single value.
Values in the two fields must be interpreted according to the following table:
-| `value`  | `typed_value` | Meaning                                                     |
-|----------|---------------|-------------------------------------------------------------|
-| null     | null          | The value is missing; only valid for shredded object fields |
-| non-null | null          | The value is present and may be any type, including null    |
-| null     | non-null      | The value is present and is the shredded type               |
-| non-null | non-null      | The value is present and is a partially shredded object     |
+| `value`  | `typed_value` | Meaning                                                                                |
+|----------|---------------|----------------------------------------------------------------------------------------|
+| null     | null          | The value is missing; only valid for shredded object fields                            |
+| non-null | null          | The value is present and may be any type (except the shredded type), including null    |
Review Comment:
This is a material change in the spec -- today a NULL `typed_value` column
entry puts _no constraints whatsoever_ on the corresponding `value` column
entry (for primitive types, at least).
Don't get me wrong, it's very reasonable to require this, and adding the
requirement would make primitive shredding match the similar rule for partial
object shredding (a field in typed_value column must never be present in
objects stored in the value column).
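(For illustration, that object-shredding invariant could be checked roughly as below, assuming the residual object is decoded into a dict and the shredded field names are known from the shredding schema; the names here are hypothetical, not spec API:)

```python
def obeys_partial_shredding_rule(value_obj, shredded_field_names):
    """Check the invariant for partially shredded objects: a field that
    has its own typed_value column in the shredding schema must never
    also appear in the residual object stored in the value column.

    value_obj: dict of field name -> variant value (hypothetical decoding)
    shredded_field_names: set of field names with typed_value columns
    """
    return shredded_field_names.isdisjoint(value_obj)

print(obeys_partial_shredding_rule({"tag": "n/a"}, {"measurement"}))        # True
print(obeys_partial_shredding_rule({"measurement": 34}, {"measurement"}))   # False
```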
That said, I don't know if it makes much of a difference for readers
consuming numeric types, because somebody who sees an int32 `typed_value`
column probably still has to read the `value` column to pick up int8 values and
sufficiently narrow int64 values (and possibly even decimal values whose
fractional part is zero). And once they're doing that, it doesn't actually
change the algorithm at all if the `value` column contains values that "should"
have shredded and did not. Depending on the reader's casting semantics, they
might need to pick up decimal and float columns as well. Maybe even string
columns (e.g., Spark's casting semantics would attempt to parse strings into
integers).
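To make that concrete, here is a minimal Python sketch of the row-by-row merge a reader ends up performing (the column representation is hypothetical, not the Parquet reader API): even with an int32 `typed_value` column, the reader still consults `value` on every row.

```python
def reconstruct(typed_value_col, value_col):
    """Merge a shredded typed_value column with its value column.

    Columns are lists of equal length; Python None marks a null column
    entry. Entries in value_col are already-decoded variant values.
    """
    out = []
    for typed, raw in zip(typed_value_col, value_col):
        if typed is not None:
            out.append(typed)   # shredded as the typed column's type
        elif raw is not None:
            out.append(raw)     # fell back to value: int8, narrow int64, ...
        else:
            out.append(None)    # both null: missing (shredded object field)
    return out

# An int32 typed_value column does not spare the reader from value_col:
# here 7 was stored as int8 and "n/a" as a string, both unshredded.
print(reconstruct([34, None, None], [None, 7, "n/a"]))  # [34, 7, 'n/a']
```

Note that the branch structure is identical whether or not the spec forbids "shreddable" values in `value`, which is the point above: the constraint doesn't change the reader's algorithm.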
Going a step further -- will (should?) the shredding spec take a stance on
whether writers are required to perform widening and/or lossless narrowing
conversions and/or lossy narrowing conversions of values whose types are in the
same [equivalence
class](https://github.com/apache/parquet-format/blob/master/VariantEncoding.md#encoding-types)
as the `typed_value` column?
* the variant spec places the scale 9 decimal4 value `0.035000000`, the int8
value `-5` and the int64 value `8589934592` all in the same "exact numeric"
variant equivalence class, which indicates "logical equivalence" for which
"user expressions... should behave the same."
* i.e. does the presence of an int32 `typed_value` column forbid the
existence of int8 entries in the `value` column? Does it forbid "narrow" int64
entries such as `10` that convert losslessly to int32? What about
decimal4(scale=5) values such as `15.0000` that allow a "lossless" (**)
conversion to int32 but require an expensive integer `%` operation to detect?
What about decimal4(scale=5) values such as `1.23000` that could be rounded to
int32?
(**) I use scare quotes for "lossless" conversion of 1.00000 to 1 because
some would argue that the presence of those five trailing decimal zeros is
information that would be lost by conversion to int (the input had six
significant figures and the output has only one).
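The narrowing checks in question can be sketched as follows, assuming a decimal is held as an unscaled integer plus a scale (as in the variant encoding); the `%` test is the fractional-part check called out above, and the function name is illustrative only:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def decimal_to_int32(unscaled, scale):
    """Return the int32 equal to unscaled * 10**-scale, or None if the
    conversion would lose information (nonzero fraction or overflow).
    """
    factor = 10 ** scale
    if unscaled % factor != 0:
        return None               # nonzero fractional part -> lossy
    n = unscaled // factor
    if not (INT32_MIN <= n <= INT32_MAX):
        return None               # out of int32 range
    return n

# decimal4(scale=5) 15.00000 -> 15: "lossless", modulo the significant-
# figures caveat in the footnote above.
print(decimal_to_int32(1500000, 5))  # 15
# decimal4(scale=5) 1.23000 has a nonzero fractional part -> not lossless.
print(decimal_to_int32(123000, 5))   # None
```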
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]