gene-db commented on code in PR #461:
URL: https://github.com/apache/parquet-format/pull/461#discussion_r1857160147
##########
VariantEncoding.md:
##########
@@ -39,13 +39,41 @@ Another motivation for the representation is that (aside
from metadata) each nes
For example, in a Variant containing an Array of Variant values, the
representation of an inner Variant value, when paired with the metadata of the
full variant, is itself a valid Variant.
This document describes the Variant Binary Encoding scheme.
-[VariantShredding.md](VariantShredding.md) describes the details of the
Variant shredding scheme.
+The [Variant Shredding specification](VariantShredding.md) describes the
details of shredding Variant values as typed Parquet columns.
+
+## Variant in Parquet
-# Variant in Parquet
A Variant value in Parquet is represented by a group with 2 fields, named
`value` and `metadata`.
-Both fields `value` and `metadata` are of type `binary`, and cannot be `null`.
-# Metadata encoding
+* The Variant group must be annotated with the `VARIANT` logical type.
Review Comment:
Those limits are in the engine, Spark in this example. Those limits should
not be in the encoding spec itself.
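
For reference, the unshredded two-field group described in the hunk above could be written as the following Parquet schema fragment (the column name `event` is illustrative, not from the spec):

```
required group event (VARIANT) {
  required binary metadata;
  required binary value;
}
```

Both fields are `required` here because, per the encoding text, neither `value` nor `metadata` may be `null` when the Variant is not shredded.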
##########
VariantShredding.md:
##########
@@ -25,290 +25,316 @@
The Variant type is designed to store and process semi-structured data
efficiently, even with heterogeneous values.
Query engines encode each Variant value in a self-describing format, and store
it as a group containing `value` and `metadata` binary fields in Parquet.
Since data is often partially homogenous, it can be beneficial to extract
certain fields into separate Parquet columns to further improve performance.
-We refer to this process as **shredding**.
-Each Parquet file remains fully self-describing, with no additional metadata
required to read or fully reconstruct the Variant data from the file.
-Combining shredding with a binary residual provides the flexibility to
represent complex, evolving data with an unbounded number of unique fields
while limiting the size of file schemas, and retaining the performance benefits
of a columnar format.
+This process is **shredding**.
-This document focuses on the shredding semantics, Parquet representation,
implications for readers and writers, as well as the Variant reconstruction.
-For now, it does not discuss which fields to shred, user-facing API changes,
or any engine-specific considerations like how to use shredded columns.
-The approach builds upon the [Variant Binary Encoding](VariantEncoding.md),
and leverages the existing Parquet specification.
+Shredding enables the use of Parquet's columnar representation for more
compact data encoding, column statistics for data skipping, and partial
projections.
-At a high level, we replace the `value` field of the Variant Parquet group
with one or more fields called `object`, `array`, `typed_value`, and
`variant_value`.
-These represent a fixed schema suitable for constructing the full Variant
value for each row.
+For example, the query `SELECT variant_get(event, '$.event_ts', 'timestamp')
FROM tbl` only needs to load field `event_ts`, and if that column is shredded,
it can be read by columnar projection without reading or deserializing the rest
of the `event` Variant.
+Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event,
'$.event_type', 'string') = 'signup'`, the `event_type` shredded column
metadata can be used for skipping and to lazily load the rest of the Variant.
-Shredding allows a query engine to reap the full benefits of Parquet's
columnar representation, such as more compact data encoding, min/max statistics
for data skipping, and I/O and CPU savings from pruning unnecessary fields not
accessed by a query (including the non-shredded Variant binary data).
-Without shredding, any query that accesses a Variant column must fetch all
bytes of the full binary buffer.
-With shredding, we can get nearly equivalent performance as in a relational
(scalar) data model.
+## Variant Metadata
-For example, `select variant_get(variant_col, ‘$.field1.inner_field2’,
‘string’) from tbl` only needs to access `inner_field2`, and the file scan
could avoid fetching the rest of the Variant value if this field was shredded
into a separate column in the Parquet schema.
-Similarly, for the query `select * from tbl where variant_get(variant_col,
‘$.id’, ‘integer’) = 123`, the scan could first decode the shredded `id`
column, and only fetch/decode the full Variant value for rows that pass the
filter.
+Variant metadata is stored in the top-level Variant group in a binary
`metadata` column regardless of whether the Variant value is shredded.
-# Parquet Example
+All `value` columns within the Variant must use the same `metadata`.
+All field names of a Variant, whether shredded or not, must be present in the
metadata.
-Consider the following Parquet schema together with how Variant values might
be mapped to it.
-Notice that we represent each shredded field in `object` as a group of two
fields, `typed_value` and `variant_value`.
-We extract all homogenous data items of a certain path into `typed_value`, and
set aside incompatible data items in `variant_value`.
-Intuitively, incompatibilities within the same path may occur because we store
the shredding schema per Parquet file, and each file can contain several row
groups.
-Selecting a type for each field that is acceptable for all rows would be
impractical because it would require buffering the contents of an entire file
before writing.
+## Value Shredding
-Typically, the expectation is that `variant_value` exists at every level as an
option, along with one of `object`, `array` or `typed_value`.
-If the actual Variant value contains a type that does not match the provided
schema, it is stored in `variant_value`.
-An `variant_value` may also be populated if an object can be partially
represented: any fields that are present in the schema must be written to those
fields, and any missing fields are written to `variant_value`.
-
-The `metadata` column is unchanged from its unshredded representation, and may
be referenced in `variant_value` fields in the shredded data.
+Variant values are stored in Parquet fields named `value`.
+Each `value` field may have an associated shredded field named `typed_value`
that stores the value when it matches a specific type.
+When `typed_value` is present, readers **must** reconstruct shredded values
according to this specification.
+For example, a Variant field, `measurement`, may be shredded as long values by adding `typed_value` with type `int64`:
```
-optional group variant_col {
- required binary metadata;
- optional binary variant_value;
- optional group object {
- optional group a {
- optional binary variant_value;
- optional int64 typed_value;
- }
- optional group b {
- optional binary variant_value;
- optional group object {
- optional group c {
- optional binary variant_value;
- optional binary typed_value (STRING);
- }
- }
- }
- }
+required group measurement (VARIANT) {
+ required binary metadata;
+ optional binary value;
+ optional int64 typed_value;
}
```
-| Variant Value | Top-level variant_value | b.variant_value | a.typed_value | a.variant_value | b.object.c.typed_value | b.object.c.variant_value | Notes |
-|---------------|-------------------------|-----------------|---------------|-----------------|------------------------|--------------------------|-------|
-| {a: 123, b: {c: “hello”}} | null | null | 123 | null | hello | null | All values shredded |
-| {a: 1.23, b: {c: “123”}} | null | null | null | 1.23 | 123 | null | a is not an integer |
-| {a: 123, b: {c: null}} | null | null | null | 123 | null | null | b.object.c set to non-null to indicate VariantNull |
-| {a: 123, b: {} | null | null | null | 123 | null | null | b.object.c set to null, to indicate that c is missing |
-| {a: 123, d: 456} | {d: 456} | null | 123 | null | null | null | Extra field d is stored as variant_value |
-| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null | null | null | Not an object |
+The series of measurements `34, null, "n/a", 100` would be stored as:
-# Parquet Layout
+| Value | `metadata` | `value` | `typed_value` |
+|---------|------------------|-----------------------|---------------|
+| 34 | `01 00` v1/empty | null | `34` |
+| null | `01 00` v1/empty | `00` (null) | null |
+| "n/a" | `01 00` v1/empty | `13 6E 2F 61` (`n/a`) | null |
+| 100 | `01 00` v1/empty | null | `100` |
-The `array` and `object` fields represent Variant array and object types,
respectively.
-Arrays must use the three-level list structure described in
[LogicalTypes.md](LogicalTypes.md).
+Both `value` and `typed_value` are optional fields used together to encode a
single value.
+Values in the two fields must be interpreted according to the following table:
-An `object` field must be a group.
-Each field name of this inner group corresponds to the Variant value's object
field name.
-Each inner field's type is a recursively shredded variant value: that is, the
fields of each object field must be one or more of `object`, `array`,
`typed_value` or `variant_value`.
+| `value`  | `typed_value` | Meaning                                                     |
+|----------|---------------|-------------------------------------------------------------|
+| null     | null          | The value is missing; only valid for shredded object fields |
+| non-null | null          | The value is present and may be any type, including null    |
+| null     | non-null      | The value is present and is the shredded type               |
+| non-null | non-null      | The value is present and is a partially shredded object     |
-Similarly the elements of an `array` must be a group containing one or more of
`object`, `array`, `typed_value` or `variant_value`.
+An object is _partially shredded_ when the `value` is an object and the
`typed_value` is a shredded object.
-Each leaf in the schema can store an arbitrary Variant value.
-It contains an `variant_value` binary field and a `typed_value` field.
-If non-null, `variant_value` represents the value stored as a Variant binary.
-The `typed_value` field may be any type that has a corresponding Variant type.
-For each value in the data, at most one of the `typed_value` and
`variant_value` may be non-null.
-A writer may omit either field, which is equivalent to all rows being null.
+If both fields are non-null and either is not an object, the value is invalid.
Readers must either fail or return the `typed_value`.
-Dictionary IDs in a `variant_value` field refer to entries in the top-level
`metadata` field.
+If a Variant is missing in a context where a value is required, readers must
either fail or return a Variant null: basic type 0 (primitive) and physical
type 0 (null).
+For example, if a Variant is required (like `measurement` above) and both
`value` and `typed_value` are null, the returned `value` must be `00` (Variant
null).
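
The four `value`/`typed_value` cases above, together with the required-context rule, can be sketched for a scalar (non-object) shredded column; `reconstruct` is a hypothetical helper, not part of the spec:

```python
# Variant null: basic type 0 (primitive), physical type 0 (null).
VARIANT_NULL = b"\x00"

def reconstruct(value, typed_value, required=False):
    """Interpret one (value, typed_value) pair for a scalar shredded column."""
    if value is None and typed_value is None:
        # Missing: only valid for shredded object fields. In a required
        # context, readers must fail or return Variant null (`00`).
        return VARIANT_NULL if required else None
    if value is None:
        return typed_value   # present, of the shredded type
    if typed_value is None:
        return value         # present as Variant binary, may be any type
    # Both non-null is only valid when both encode an object (partial shredding);
    # for a scalar column a reader must fail or return typed_value.
    raise ValueError("non-null value and typed_value are invalid for a scalar")
```

For the `measurement` example, the row `"n/a"` would pass its Variant binary (`13 6E 2F 61`) through unchanged, while a fully-null row in the required group yields `00`.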
-For an `object`, a null field means that the field does not exist in the
reconstructed Variant object.
-All elements of an `array` must be non-null, since array elements cannote be
missing.
+### Shredded Value Types
-| typed_value | variant_value | Meaning |
-|-------------|----------------|---------|
-| null | null | Field is Variant Null (not missing) in the reconstructed
Variant. |
-| null | non-null | Field may be any type in the reconstructed Variant. |
-| non-null | null | Field has this column’s type in the reconstructed Variant.
|
-| non-null | non-null | Invalid |
+Shredded values must use the following Parquet types:
-The `typed_value` may be absent from the Parquet schema for any field, which
is equivalent to its value being always null (in which case the shredded field
is always stored as a Variant binary).
-By the same token, `variant_value` may be absent, which is equivalent to their
value being always null (in which case the field will always have the value
Null or have the type of the `typed_value` column).
+| Variant Type | Equivalent Parquet Type |
+|-----------------------------|------------------------------|
+| boolean | BOOLEAN |
Review Comment:
I think the state of this is that the storage will not do any conversions at
all. However, the engine itself is allowed to "normalize" variants to optimize.
In this case, engines will probably normalize within "exact numerics" in order
to make shredding more effective.
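
The engine-side normalization this comment describes (not required by the spec) might look like the following sketch, which folds exact numerics to a canonical form the way Spark reportedly turns `1.00` into `1`, so that more rows share one shredded `typed_value` type:

```python
from decimal import Decimal

def normalize_exact(d: Decimal) -> Decimal:
    """Drop a trailing fractional zero scale while keeping plain notation."""
    n = d.normalize()
    # Decimal.normalize() may produce exponent form (e.g. 1E+2);
    # re-parse the plain-format string to keep a non-exponent value.
    return Decimal(format(n, "f")) if n.as_tuple().exponent > 0 else n
```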
##########
VariantShredding.md:
##########
@@ -25,290 +25,316 @@
The Variant type is designed to store and process semi-structured data
efficiently, even with heterogeneous values.
Query engines encode each Variant value in a self-describing format, and store
it as a group containing `value` and `metadata` binary fields in Parquet.
Since data is often partially homogenous, it can be beneficial to extract
certain fields into separate Parquet columns to further improve performance.
-We refer to this process as **shredding**.
-Each Parquet file remains fully self-describing, with no additional metadata
required to read or fully reconstruct the Variant data from the file.
-Combining shredding with a binary residual provides the flexibility to
represent complex, evolving data with an unbounded number of unique fields
while limiting the size of file schemas, and retaining the performance benefits
of a columnar format.
+This process is **shredding**.
-This document focuses on the shredding semantics, Parquet representation,
implications for readers and writers, as well as the Variant reconstruction.
-For now, it does not discuss which fields to shred, user-facing API changes,
or any engine-specific considerations like how to use shredded columns.
-The approach builds upon the [Variant Binary Encoding](VariantEncoding.md),
and leverages the existing Parquet specification.
+Shredding enables the use of Parquet's columnar representation for more
compact data encoding, column statistics for data skipping, and partial
projections.
-At a high level, we replace the `value` field of the Variant Parquet group
with one or more fields called `object`, `array`, `typed_value`, and
`variant_value`.
-These represent a fixed schema suitable for constructing the full Variant
value for each row.
+For example, the query `SELECT variant_get(event, '$.event_ts', 'timestamp')
FROM tbl` only needs to load field `event_ts`, and if that column is shredded,
it can be read by columnar projection without reading or deserializing the rest
of the `event` Variant.
+Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event,
'$.event_type', 'string') = 'signup'`, the `event_type` shredded column
metadata can be used for skipping and to lazily load the rest of the Variant.
-Shredding allows a query engine to reap the full benefits of Parquet's
columnar representation, such as more compact data encoding, min/max statistics
for data skipping, and I/O and CPU savings from pruning unnecessary fields not
accessed by a query (including the non-shredded Variant binary data).
-Without shredding, any query that accesses a Variant column must fetch all
bytes of the full binary buffer.
-With shredding, we can get nearly equivalent performance as in a relational
(scalar) data model.
+## Variant Metadata
-For example, `select variant_get(variant_col, ‘$.field1.inner_field2’,
‘string’) from tbl` only needs to access `inner_field2`, and the file scan
could avoid fetching the rest of the Variant value if this field was shredded
into a separate column in the Parquet schema.
-Similarly, for the query `select * from tbl where variant_get(variant_col,
‘$.id’, ‘integer’) = 123`, the scan could first decode the shredded `id`
column, and only fetch/decode the full Variant value for rows that pass the
filter.
+Variant metadata is stored in the top-level Variant group in a binary
`metadata` column regardless of whether the Variant value is shredded.
-# Parquet Example
+All `value` columns within the Variant must use the same `metadata`.
+All field names of a Variant, whether shredded or not, must be present in the
metadata.
-Consider the following Parquet schema together with how Variant values might
be mapped to it.
-Notice that we represent each shredded field in `object` as a group of two
fields, `typed_value` and `variant_value`.
-We extract all homogenous data items of a certain path into `typed_value`, and
set aside incompatible data items in `variant_value`.
-Intuitively, incompatibilities within the same path may occur because we store
the shredding schema per Parquet file, and each file can contain several row
groups.
-Selecting a type for each field that is acceptable for all rows would be
impractical because it would require buffering the contents of an entire file
before writing.
+## Value Shredding
-Typically, the expectation is that `variant_value` exists at every level as an
option, along with one of `object`, `array` or `typed_value`.
-If the actual Variant value contains a type that does not match the provided
schema, it is stored in `variant_value`.
-An `variant_value` may also be populated if an object can be partially
represented: any fields that are present in the schema must be written to those
fields, and any missing fields are written to `variant_value`.
-
-The `metadata` column is unchanged from its unshredded representation, and may
be referenced in `variant_value` fields in the shredded data.
+Variant values are stored in Parquet fields named `value`.
+Each `value` field may have an associated shredded field named `typed_value`
that stores the value when it matches a specific type.
+When `typed_value` is present, readers **must** reconstruct shredded values
according to this specification.
+For example, a Variant field, `measurement`, may be shredded as long values by adding `typed_value` with type `int64`:
```
-optional group variant_col {
- required binary metadata;
- optional binary variant_value;
- optional group object {
- optional group a {
- optional binary variant_value;
- optional int64 typed_value;
- }
- optional group b {
- optional binary variant_value;
- optional group object {
- optional group c {
- optional binary variant_value;
- optional binary typed_value (STRING);
- }
- }
- }
- }
+required group measurement (VARIANT) {
+ required binary metadata;
+ optional binary value;
+ optional int64 typed_value;
}
```
-| Variant Value | Top-level variant_value | b.variant_value | a.typed_value | a.variant_value | b.object.c.typed_value | b.object.c.variant_value | Notes |
-|---------------|-------------------------|-----------------|---------------|-----------------|------------------------|--------------------------|-------|
-| {a: 123, b: {c: “hello”}} | null | null | 123 | null | hello | null | All values shredded |
-| {a: 1.23, b: {c: “123”}} | null | null | null | 1.23 | 123 | null | a is not an integer |
-| {a: 123, b: {c: null}} | null | null | null | 123 | null | null | b.object.c set to non-null to indicate VariantNull |
-| {a: 123, b: {} | null | null | null | 123 | null | null | b.object.c set to null, to indicate that c is missing |
-| {a: 123, d: 456} | {d: 456} | null | 123 | null | null | null | Extra field d is stored as variant_value |
-| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null | null | null | Not an object |
+The series of measurements `34, null, "n/a", 100` would be stored as:
-# Parquet Layout
+| Value | `metadata` | `value` | `typed_value` |
+|---------|------------------|-----------------------|---------------|
+| 34 | `01 00` v1/empty | null | `34` |
+| null | `01 00` v1/empty | `00` (null) | null |
+| "n/a" | `01 00` v1/empty | `13 6E 2F 61` (`n/a`) | null |
+| 100 | `01 00` v1/empty | null | `100` |
-The `array` and `object` fields represent Variant array and object types,
respectively.
-Arrays must use the three-level list structure described in
[LogicalTypes.md](LogicalTypes.md).
+Both `value` and `typed_value` are optional fields used together to encode a
single value.
+Values in the two fields must be interpreted according to the following table:
-An `object` field must be a group.
-Each field name of this inner group corresponds to the Variant value's object
field name.
-Each inner field's type is a recursively shredded variant value: that is, the
fields of each object field must be one or more of `object`, `array`,
`typed_value` or `variant_value`.
+| `value`  | `typed_value` | Meaning                                                     |
+|----------|---------------|-------------------------------------------------------------|
+| null     | null          | The value is missing; only valid for shredded object fields |
+| non-null | null          | The value is present and may be any type, including null    |
+| null     | non-null      | The value is present and is the shredded type               |
+| non-null | non-null      | The value is present and is a partially shredded object     |
Review Comment:
Should we mention that the shredded field names must not be present in the
variant `value`?
##########
VariantEncoding.md:
##########
@@ -416,14 +444,36 @@ Field names are case-sensitive.
Field names are required to be unique for each object.
It is an error for an object to contain two fields with the same name, whether
or not they have distinct dictionary IDs.
-# Versions and extensions
+## Versions and extensions
An implementation is not expected to parse a Variant value whose metadata
version is higher than the version supported by the implementation.
However, new types may be added to the specification without incrementing the
version ID.
In such a situation, an implementation should be able to read the rest of the
Variant value if desired.
-# Shredding
+## Shredding
A single Variant object may have poor read performance when only a small
subset of fields are needed.
A better approach is to create separate columns for individual fields,
referred to as shredding or subcolumnarization.
[VariantShredding.md](VariantShredding.md) describes the Variant shredding
specification in Parquet.
+
+## Conversion to JSON
+
+Values stored in the Variant encoding are a superset of JSON values.
+For example, a Variant value can be a date that has no equivalent type in JSON.
+To maximize compatibility with readers that can process JSON but not Variant,
the following conversions should be used when producing JSON from a Variant:
+
+| Variant type  | JSON type | Representation requirements                      | Example         |
+|---------------|-----------|--------------------------------------------------|-----------------|
+| Null type     | null      | `null`                                           | `null`          |
+| Boolean       | boolean   | `true` or `false`                                | `true`          |
+| Exact Numeric | number    | Digits in fraction must match scale, no exponent | `34`, `34.00`   |
Review Comment:
I think maybe the wording or presentation of this mapping is a bit confusing.
I think we are all on the same page about allowing engines to "normalize"
the Variant value. For example, I think the Spark implementation already
normalizes `1.00` to `1`. There are also many optimizations and efficiency
aspects with normalization, so we should not disallow that.
Maybe what this chart is trying to show is: "if you want to output a Variant
value as a JSON string, this is the output format you should use". So, for
numbers, the conversion should be like `1` or `1.23` (no quotes), not `"1"`, or
`"1.23"`. If this chart was about the JSON output formatting, would that be
more clear?
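
Reading the chart that way, the "JSON output formatting" of an exact numeric might be sketched as below; `exact_numeric_to_json` is a hypothetical helper illustrating the comment, not an API from the spec:

```python
from decimal import Decimal

def exact_numeric_to_json(d: Decimal) -> str:
    # Emit an unquoted JSON number in plain notation: digits after the
    # point match the value's scale, and no exponent is used, so the
    # result is `34` or `1.23`, never `"34"` or `3.4E+1`.
    return format(d, "f")
```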
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]