This is an automated email from the ASF dual-hosted git repository.
blue pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/parquet-format.git
The following commit(s) were added to refs/heads/master by this push:
new 37b6e8b Simplify Variant shredding and refactor for clarity (#461)
37b6e8b is described below
commit 37b6e8b863fb510314c07649665251f6474b0c11
Author: Ryan Blue <[email protected]>
AuthorDate: Thu Feb 6 12:53:11 2025 -0800
Simplify Variant shredding and refactor for clarity (#461)
---
VariantEncoding.md | 157 +++++++++++------
VariantShredding.md | 495 ++++++++++++++++++++++++++++------------------------
2 files changed, 374 insertions(+), 278 deletions(-)
diff --git a/VariantEncoding.md b/VariantEncoding.md
index 2930c71..28c7cfd 100644
--- a/VariantEncoding.md
+++ b/VariantEncoding.md
@@ -39,13 +39,42 @@ Another motivation for the representation is that (aside from metadata) each nes
For example, in a Variant containing an Array of Variant values, the representation of an inner Variant value, when paired with the metadata of the full variant, is itself a valid Variant.
This document describes the Variant Binary Encoding scheme.
-[VariantShredding.md](VariantShredding.md) describes the details of the Variant shredding scheme.
+Variant fields can also be _shredded_.
+Shredding refers to extracting some elements of the variant into separate columns for more efficient extraction/filter pushdown.
+The [Variant Shredding specification](VariantShredding.md) describes the details of shredding Variant values as typed Parquet columns.
+
+## Variant in Parquet
-# Variant in Parquet
A Variant value in Parquet is represented by a group with 2 fields, named `value` and `metadata`.
-Both fields `value` and `metadata` are of type `binary`, and cannot be `null`.
-# Metadata encoding
+* The Variant group must be annotated with the `VARIANT` logical type.
+* Both fields `value` and `metadata` must be of type `binary` (called `BYTE_ARRAY` in the Parquet thrift definition).
+* The `metadata` field is `required` and must be valid Variant metadata, as defined below.
+* The `value` field must be annotated as `required` for unshredded Variant values, or `optional` if parts of the value are [shredded](VariantShredding.md) as typed Parquet columns.
+* When present, the `value` field must be a valid Variant value, as defined below.
+
+This is the expected unshredded representation in Parquet:
+
+```
+optional group variant_name (VARIANT) {
+ required binary metadata;
+ required binary value;
+}
+```
+
+This is an example representation of a shredded Variant in Parquet:
+```
+optional group shredded_variant_name (VARIANT) {
+ required binary metadata;
+ optional binary value;
+ optional int64 typed_value;
+}
+```
+
+The `VARIANT` annotation places no additional restrictions on the repetition of Variant groups, but repetition may be restricted by containing types (such as `MAP` and `LIST`).
+The Variant group name is the name of the Variant column.
+
+## Metadata encoding
The encoded metadata always starts with a header byte.
```
@@ -95,7 +124,7 @@ The first `offset` value will always be `0`, and the last `offset` value will al
The last part of the metadata is `bytes`, which stores all the string values in the dictionary.
All string values must be UTF-8 encoded strings.
-## Metadata encoding grammar
+### Metadata encoding grammar
The grammar for encoded metadata is as follows
@@ -119,7 +148,7 @@ Notes:
- If `sorted_strings` is set to 1, strings in the dictionary must be unique and sorted in lexicographic order. If the value is set to 0, readers may not make any assumptions about string order or uniqueness.
-# Value encoding
+## Value encoding
The entire encoded Variant value includes the `value_metadata` byte, and then 0 or more bytes for the `val`.
```
@@ -132,16 +161,16 @@ value | value_header | basic_type |
| |
+-------------------------------------------------+
```
-## Basic Type
+### Basic Type
The `basic_type` is a 2-bit value that represents which basic type the Variant value is.
The [basic types table](#encoding-types) shows what each value represents.
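As an illustration of this layout (assuming, per the diagram, that `basic_type` occupies the 2 low-order bits of the leading byte and `value_header` the 6 high-order bits; the helper name is hypothetical), a reader could split the byte like this:

```python
def split_value_metadata(byte: int) -> tuple[int, int]:
    # basic_type: 2 low-order bits; value_header: 6 high-order bits (assumed layout)
    return byte & 0b11, byte >> 2

# 0b000011_01: basic_type 1 (short string) with a 6-bit header of 3
assert split_value_metadata(0b00001101) == (1, 3)
```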
-## Value Header
+### Value Header
The `value_header` is a 6-bit value that contains more information about the type, and the format depends on the `basic_type`.
-### Value Header for Primitive type (`basic_type`=0)
+#### Value Header for Primitive type (`basic_type`=0)
When `basic_type` is `0`, `value_header` is a 6-bit `primitive_header`.
The [primitive types table](#encoding-types) shows what each value represents.
@@ -152,7 +181,7 @@ value_header | primitive_header |
+-----------------------+
```
-### Value Header for Short string (`basic_type`=1)
+#### Value Header for Short string (`basic_type`=1)
When `basic_type` is `1`, `value_header` is a 6-bit `short_string_header`.
```
@@ -163,7 +192,7 @@ value_header | short_string_header |
```
The `short_string_header` value is the length of the string.
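To make the header concrete, here is a small sketch (the function is illustrative and assumes `basic_type` is stored in the 2 low-order bits) that decodes a short string from its leading byte:

```python
def decode_short_string(buf: bytes) -> str:
    header = buf[0]
    if header & 0b11 != 1:            # basic_type 1 = short string
        raise ValueError("not a short string")
    length = header >> 2              # short_string_header is the string length
    return buf[1:1 + length].decode("utf-8")

# (3 << 2) | 1 = 0x0D, followed by the UTF-8 bytes of "n/a"
assert decode_short_string(bytes([0x0D]) + b"n/a") == "n/a"
```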
-### Value Header for Object (`basic_type`=2)
+#### Value Header for Object (`basic_type`=2)
When `basic_type` is `2`, `value_header` is made up of `field_offset_size_minus_one`, `field_id_size_minus_one`, and `is_large`.
```
@@ -181,7 +210,7 @@ The actual number of bytes is computed as `field_offset_size_minus_one + 1` and
`is_large` is a 1-bit value that indicates how many bytes are used to encode the number of elements.
If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are used.
-### Value Header for Array (`basic_type`=3)
+#### Value Header for Array (`basic_type`=3)
When `basic_type` is `3`, `value_header` is made up of `field_offset_size_minus_one`, and `is_large`.
```
@@ -198,21 +227,21 @@ The actual number of bytes is computed as `field_offset_size_minus_one + 1`.
`is_large` is a 1-bit value that indicates how many bytes are used to encode the number of elements.
If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are used.
-## Value Data
+### Value Data
The `value_data` encoding format depends on the type specified by `value_metadata`.
For some types, the `value_data` will be 0-bytes.
-### Value Data for Primitive type (`basic_type`=0)
+#### Value Data for Primitive type (`basic_type`=0)
When `basic_type` is `0`, `value_data` depends on the `primitive_header` value.
The [primitive types table](#encoding-types) shows the encoding format for each primitive type.
-### Value Data for Short string (`basic_type`=1)
+#### Value Data for Short string (`basic_type`=1)
When `basic_type` is `1`, `value_data` is the sequence of UTF-8 encoded bytes that represents the string.
-### Value Data for Object (`basic_type`=2)
+#### Value Data for Object (`basic_type`=2)
When `basic_type` is `2`, `value_data` encodes an object.
The encoding format is shown in the following diagram:
@@ -282,7 +311,7 @@ The `field_id` list must be `[<id for key "a">, <id for key "b">, <id for key "c
The `field_offset` list must be `[<offset for value 1>, <offset for value 2>, <offset for value 3>, <last offset>]`.
The `value` list can be in any order.
-### Value Data for Array (`basic_type`=3)
+#### Value Data for Array (`basic_type`=3)
When `basic_type` is `3`, `value_data` encodes an array. The encoding format is shown in the following diagram:
```
@@ -323,7 +352,7 @@ The `field_offset` list is followed by the `value` list.
There are `num_elements` number of `value` entries and each `value` is an encoded Variant value.
For the i-th array entry, the value is the Variant `value` starting from the i-th `field_offset` byte offset.
-## Value encoding grammar
+### Value encoding grammar
The grammar for an encoded value is:
@@ -364,7 +393,7 @@ It is semantically identical to the "string" primitive type.
The Decimal type contains a scale, but no precision. The implied precision of a decimal value is `floor(log_10(val)) + 1`.
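For a nonzero unscaled value, the implied precision is simply its decimal digit count; a counting sketch (hypothetical helper, treating zero as precision 1) avoids floating-point `log10`:

```python
def implied_precision(unscaled: int) -> int:
    # digit count equals floor(log_10(|val|)) + 1 for nonzero values;
    # len(str(0)) == 1 also gives a sensible result for zero
    return len(str(abs(unscaled)))

assert implied_precision(100) == 3
assert implied_precision(7) == 1
```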
-# Encoding types
+## Encoding types
*Variant basic types*
| Basic Type | ID | Description |
@@ -376,31 +405,31 @@ The Decimal type contains a scale, but no precision. The implied precision of a
*Variant primitive types*
-| Type Equivalence Class | Physical Type | Type ID | Equivalent Parquet Type | Binary format |
-|------------------------|---------------|---------|-------------------------|---------------|
-| NullType | null | `0` | any | none |
-| Boolean | boolean (True) | `1` | BOOLEAN | none |
-| Boolean | boolean (False) | `2` | BOOLEAN | none |
-| Exact Numeric | int8 | `3` | INT(8, signed) | 1 byte |
-| Exact Numeric | int16 | `4` | INT(16, signed) | 2 byte little-endian |
-| Exact Numeric | int32 | `5` | INT(32, signed) | 4 byte little-endian |
-| Exact Numeric | int64 | `6` | INT(64, signed) | 8 byte little-endian |
-| Double | double | `7` | DOUBLE | IEEE little-endian |
-| Exact Numeric | decimal4 | `8` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
-| Exact Numeric | decimal8 | `9` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
-| Exact Numeric | decimal16 | `10` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
-| Date | date | `11` | DATE | 4 byte little-endian |
-| Timestamp | timestamp with time zone | `12` | TIMESTAMP(isAdjustedToUTC=true, MICROS) | 8-byte little-endian |
-| TimestampNTZ | timestamp without time zone | `13` | TIMESTAMP(isAdjustedToUTC=false, MICROS) | 8-byte little-endian |
-| Float | float | `14` | FLOAT | IEEE little-endian |
-| Binary | binary | `15` | BINARY | 4 byte little-endian size, followed by bytes |
-| String | string | `16` | STRING | 4 byte little-endian size, followed by UTF-8 encoded bytes |
-| TimeNTZ | time without time zone | `21` | TIME(isAdjustedToUTC=false, MICROS) | 8-byte little-endian |
-| Timestamp | timestamp with time zone | `22` | TIMESTAMP(isAdjustedToUTC=true, NANOS) | 8-byte little-endian |
-| TimestampNTZ | timestamp without time zone | `23` | TIMESTAMP(isAdjustedToUTC=false, NANOS) | 8-byte little-endian |
-| UUID | uuid | `24` | UUID | 16-byte big-endian |
-
-The *Type Equivalence Class* column indicates logical equivalence of physically encoded types.
+| Equivalence Class | Variant Physical Type | Type ID | Equivalent Parquet Type | Binary format |
+|-------------------|-----------------------|---------|-------------------------|---------------|
+| NullType | null | `0` | UNKNOWN | none |
+| Boolean | boolean (True) | `1` | BOOLEAN | none |
+| Boolean | boolean (False) | `2` | BOOLEAN | none |
+| Exact Numeric | int8 | `3` | INT(8, signed) | 1 byte |
+| Exact Numeric | int16 | `4` | INT(16, signed) | 2 byte little-endian |
+| Exact Numeric | int32 | `5` | INT(32, signed) | 4 byte little-endian |
+| Exact Numeric | int64 | `6` | INT(64, signed) | 8 byte little-endian |
+| Double | double | `7` | DOUBLE | IEEE little-endian |
+| Exact Numeric | decimal4 | `8` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
+| Exact Numeric | decimal8 | `9` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
+| Exact Numeric | decimal16 | `10` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
+| Date | date | `11` | DATE | 4 byte little-endian |
+| Timestamp | timestamp | `12` | TIMESTAMP(isAdjustedToUTC=true, MICROS) | 8-byte little-endian |
+| TimestampNTZ | timestamp without time zone | `13` | TIMESTAMP(isAdjustedToUTC=false, MICROS) | 8-byte little-endian |
+| Float | float | `14` | FLOAT | IEEE little-endian |
+| Binary | binary | `15` | BINARY | 4 byte little-endian size, followed by bytes |
+| String | string | `16` | STRING | 4 byte little-endian size, followed by UTF-8 encoded bytes |
+| TimeNTZ | time without time zone | `21` | TIME(isAdjustedToUTC=false, MICROS) | 8-byte little-endian |
+| Timestamp | timestamp with time zone | `22` | TIMESTAMP(isAdjustedToUTC=true, NANOS) | 8-byte little-endian |
+| TimestampNTZ | timestamp without time zone | `23` | TIMESTAMP(isAdjustedToUTC=false, NANOS) | 8-byte little-endian |
+| UUID | uuid | `24` | UUID | 16-byte big-endian |
+
+The *Equivalence Class* column indicates logical equivalence of physically encoded types.
For example, a user expression operating on a string value containing "hello" should behave the same, whether it is encoded with the short string optimization, or long string encoding.
Similarly, user expressions operating on an *int8* value of 1 should behave the same as a decimal16 with scale 2 and unscaled value 100.
@@ -413,14 +442,14 @@ Similarly, user expressions operating on an *int8* value of 1 should behave the
| 18 <= precision <= 38 | int128 |
| > 38 | Not supported |
-# String values must be UTF-8 encoded
+## String values must be UTF-8 encoded
All strings within the Variant binary format must be UTF-8 encoded.
This includes the dictionary key string values, the "short string" values, and the "long string" values.
-# Object field ID order and uniqueness
+## Object field ID order and uniqueness
-For objects, field IDs and offsets must be listed in the order of the corresponding field names, sorted lexicographically.
+For objects, field IDs and offsets must be listed in the order of the corresponding field names, sorted lexicographically (using unsigned byte ordering for UTF-8).
Note that the field values themselves are not required to follow this order.
As a result, offsets will not necessarily be listed in ascending order.
The field values are not required to be in the same order as the field IDs, to enable flexibility when constructing Variant values.
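In Python, for example, sorting by the UTF-8 encoding of each name gives the required unsigned byte ordering (note that ASCII uppercase letters sort before lowercase, unlike typical collation):

```python
names = ["b", "a", "Z", "É"]
# unsigned byte ordering for UTF-8: compare the encoded bytes, not collation order
ordered = sorted(names, key=lambda s: s.encode("utf-8"))
assert ordered == ["Z", "a", "b", "É"]
```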
@@ -432,14 +461,44 @@ Field names are case-sensitive.
Field names are required to be unique for each object.
It is an error for an object to contain two fields with the same name, whether
or not they have distinct dictionary IDs.
-# Versions and extensions
+## Versions and extensions
An implementation is not expected to parse a Variant value whose metadata version is higher than the version supported by the implementation.
However, new types may be added to the specification without incrementing the version ID.
In such a situation, an implementation should be able to read the rest of the Variant value if desired.
-# Shredding
+## Shredding
A single Variant object may have poor read performance when only a small subset of fields is needed.
A better approach is to create separate columns for individual fields, referred to as shredding or subcolumnarization.
[VariantShredding.md](VariantShredding.md) describes the Variant shredding specification in Parquet.
+
+## Conversion to JSON
+
+Values stored in the Variant encoding are a superset of JSON values.
+For example, a Variant value can be a date that has no equivalent type in JSON.
+To maximize compatibility with readers that can process JSON but not Variant, the following conversions should be used when producing JSON from a Variant:
+
+| Variant type | JSON type | Representation requirements | Example |
+|------------------|-----------|----------------------------------------------------------|------------------------------------------|
+| Null type | null | `null` | `null` |
+| Boolean | boolean | `true` or `false` | `true` |
+| Exact Numeric | number | Digits in fraction must match scale, no exponent | `34`, `34.00` |
+| Float | number | Fraction must be present | `14.20` |
+| Double | number | Fraction must be present | `1.0` |
+| Date | string | ISO-8601 formatted date | `"2017-11-16"` |
+| Time | string | ISO-8601 formatted UTC time | `"22:31:08.000001"` |
+| Timestamp (6) | string | ISO-8601 formatted UTC timestamp including +00:00 offset | `"2017-11-16T22:31:08.000001+00:00"` |
+| Timestamp (9) | string | ISO-8601 formatted UTC timestamp including +00:00 offset | `"2017-11-16T22:31:08.000000001+00:00"` |
+| TimestampNTZ (6) | string | ISO-8601 formatted UTC timestamp with no offset or zone | `"2017-11-16T22:31:08.000001"` |
+| TimestampNTZ (9) | string | ISO-8601 formatted UTC timestamp with no offset or zone | `"2017-11-16T22:31:08.000000001"` |
+| Binary | string | Base64 encoded binary | `"dmFyaWFudAo="` |
+| String | string | | `"variant"` |
+| UUID | string | | `"f79c3e09-677c-4bbd-a479-3f349cb785e7"` |
+| Array | array | | `[34, "abc", "2017-11-16"]` |
+| Object | object | | `{"id": 34, "data": "abc"}` |
+
+Notes:
+
+* For timestamp and timestampntz, values must use microsecond precision and trailing 0s are required.
+* For float and double, infinities and not-a-number values are encoded as strings: `"Infinity"`, `"-Infinity"`, and `"NaN"`.
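As a sketch of the timestamp rule above (the helper name is hypothetical; it assumes a zone-aware input, and always renders six fractional digits and a `+00:00` offset):

```python
from datetime import datetime, timezone

def timestamp_to_json(ts: datetime) -> str:
    # normalize to UTC; %f always emits six digits, keeping the required trailing zeros
    return ts.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f") + "+00:00"

t = datetime(2017, 11, 16, 22, 31, 8, 1, tzinfo=timezone.utc)
assert timestamp_to_json(t) == "2017-11-16T22:31:08.000001+00:00"
```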
diff --git a/VariantShredding.md b/VariantShredding.md
index 54d8272..b3ecd4a 100644
--- a/VariantShredding.md
+++ b/VariantShredding.md
@@ -25,290 +25,327 @@
The Variant type is designed to store and process semi-structured data efficiently, even with heterogeneous values.
Query engines encode each Variant value in a self-describing format, and store it as a group containing `value` and `metadata` binary fields in Parquet.
Since data is often partially homogeneous, it can be beneficial to extract certain fields into separate Parquet columns to further improve performance.
-We refer to this process as **shredding**.
-Each Parquet file remains fully self-describing, with no additional metadata required to read or fully reconstruct the Variant data from the file.
-Combining shredding with a binary residual provides the flexibility to represent complex, evolving data with an unbounded number of unique fields while limiting the size of file schemas, and retaining the performance benefits of a columnar format.
+This process is **shredding**.
+This process is **shredding**.
-This document focuses on the shredding semantics, Parquet representation, implications for readers and writers, as well as the Variant reconstruction.
-For now, it does not discuss which fields to shred, user-facing API changes, or any engine-specific considerations like how to use shredded columns.
-The approach builds upon the [Variant Binary Encoding](VariantEncoding.md), and leverages the existing Parquet specification.
+Shredding enables the use of Parquet's columnar representation for more compact data encoding, column statistics for data skipping, and partial projections.
-At a high level, we replace the `value` field of the Variant Parquet group with one or more fields called `object`, `array`, `typed_value`, and `variant_value`.
-These represent a fixed schema suitable for constructing the full Variant value for each row.
+For example, the query `SELECT variant_get(event, '$.event_ts', 'timestamp') FROM tbl` only needs to load field `event_ts`, and if that column is shredded, it can be read by columnar projection without reading or deserializing the rest of the `event` Variant.
+Similarly, for the query `SELECT * FROM tbl WHERE variant_get(event, '$.event_type', 'string') = 'signup'`, the `event_type` shredded column metadata can be used for skipping and to lazily load the rest of the Variant.
-Shredding allows a query engine to reap the full benefits of Parquet's columnar representation, such as more compact data encoding, min/max statistics for data skipping, and I/O and CPU savings from pruning unnecessary fields not accessed by a query (including the non-shredded Variant binary data).
-Without shredding, any query that accesses a Variant column must fetch all bytes of the full binary buffer.
-With shredding, we can get nearly equivalent performance as in a relational (scalar) data model.
+## Variant Metadata
-For example, `select variant_get(variant_col, ‘$.field1.inner_field2’, ‘string’) from tbl` only needs to access `inner_field2`, and the file scan could avoid fetching the rest of the Variant value if this field was shredded into a separate column in the Parquet schema.
-Similarly, for the query `select * from tbl where variant_get(variant_col, ‘$.id’, ‘integer’) = 123`, the scan could first decode the shredded `id` column, and only fetch/decode the full Variant value for rows that pass the filter.
+Variant metadata is stored in the top-level Variant group in a binary `metadata` column regardless of whether the Variant value is shredded.
-# Parquet Example
+All `value` columns within the Variant must use the same `metadata`.
+All field names of a Variant, whether shredded or not, must be present in the metadata.
-Consider the following Parquet schema together with how Variant values might be mapped to it.
-Notice that we represent each shredded field in `object` as a group of two fields, `typed_value` and `variant_value`.
-We extract all homogenous data items of a certain path into `typed_value`, and set aside incompatible data items in `variant_value`.
-Intuitively, incompatibilities within the same path may occur because we store the shredding schema per Parquet file, and each file can contain several row groups.
-Selecting a type for each field that is acceptable for all rows would be impractical because it would require buffering the contents of an entire file before writing.
+## Value Shredding
-Typically, the expectation is that `variant_value` exists at every level as an option, along with one of `object`, `array` or `typed_value`.
-If the actual Variant value contains a type that does not match the provided schema, it is stored in `variant_value`.
-An `variant_value` may also be populated if an object can be partially represented: any fields that are present in the schema must be written to those fields, and any missing fields are written to `variant_value`.
+Variant values are stored in Parquet fields named `value`.
+Each `value` field may have an associated shredded field named `typed_value` that stores the value when it matches a specific type.
+When `typed_value` is present, readers **must** reconstruct shredded values according to this specification.
-The `metadata` column is unchanged from its unshredded representation, and may be referenced in `variant_value` fields in the shredded data.
+For example, a Variant field, `measurement`, may be shredded as long values by adding `typed_value` with type `int64`:
+```
+required group measurement (VARIANT) {
+ required binary metadata;
+ optional binary value;
+ optional int64 typed_value;
+}
+```
+The Parquet columns used to store variant metadata and values must be accessed by name, not by position.
+
+The series of measurements `34, null, "n/a", 100` would be stored as:
+
+| Value | `metadata` | `value` | `typed_value` |
+|---------|------------------|-----------------------|---------------|
+| 34 | `01 00` v1/empty | null | `34` |
+| null | `01 00` v1/empty | `00` (null) | null |
+| "n/a" | `01 00` v1/empty | `13 6E 2F 61` (`n/a`) | null |
+| 100 | `01 00` v1/empty | null | `100` |
+
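A writer producing the rows above could follow this sketch (the names are hypothetical, and the fallback encoder handles only Variant short strings, which is enough for this example):

```python
VARIANT_NULL = b"\x00"  # basic type 0 (primitive), physical type 0 (null)

def encode_short_string(s: str) -> bytes:
    # minimal stand-in for a full Variant encoder: short string only
    data = s.encode("utf-8")
    return bytes([(len(data) << 2) | 1]) + data

def shred_int64(v):
    """Return the (value, typed_value) cells for an int64-shredded Variant."""
    if v is None:                          # present, but Variant null
        return VARIANT_NULL, None
    if isinstance(v, int):                 # matches the shredded type
        return None, v
    return encode_short_string(v), None    # anything else stays Variant-encoded

assert shred_int64(34) == (None, 34)
assert shred_int64(None) == (b"\x00", None)
```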
+Both `value` and `typed_value` are optional fields used together to encode a single value.
+Values in the two fields must be interpreted according to the following table:
+| `value`  | `typed_value` | Meaning                                                     |
+|----------|---------------|-------------------------------------------------------------|
+| null     | null          | The value is missing; only valid for shredded object fields |
+| non-null | null          | The value is present and may be any type, including null    |
+| null     | non-null      | The value is present and is the shredded type               |
+| non-null | non-null      | The value is present and is a partially shredded object     |
+
+An object is _partially shredded_ when the `value` is an object and the `typed_value` is a shredded object.
+Writers must not produce data where both `value` and `typed_value` are non-null, unless the Variant value is an object.
+
+If a Variant is missing in a context where a value is required, readers must return a Variant null (`00`): basic type 0 (primitive) and physical type 0 (null).
+For example, if a Variant is required (like `measurement` above) and both `value` and `typed_value` are null, the returned `value` must be `00` (Variant null).
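Read back, the cases in the table above reduce to a small decision, sketched here (the `encode` callback stands in for Variant-encoding a typed value; partially shredded objects are out of scope for this sketch):

```python
VARIANT_NULL = b"\x00"  # basic type 0 (primitive), physical type 0 (null)

def reconstruct(value, typed_value, encode, required=True):
    if value is None and typed_value is None:
        # missing: only valid for shredded object fields; a required
        # Variant is returned as Variant null
        return VARIANT_NULL if required else None
    if typed_value is None:
        return value                    # already Variant-encoded binary
    if value is None:
        return encode(typed_value)      # value of the shredded type
    raise NotImplementedError("partially shredded object")

assert reconstruct(None, None, None) == b"\x00"
assert reconstruct(None, 34, lambda v: b"<34>") == b"<34>"
```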
+
+### Shredded Value Types
+
+Shredded values must use the following Parquet types:
+
+| Variant Type    | Parquet Physical Type             | Parquet Logical Type     |
+|-----------------|-----------------------------------|--------------------------|
+| boolean         | BOOLEAN                           |                          |
+| int8            | INT32                             | INT(8, signed=true)      |
+| int16           | INT32                             | INT(16, signed=true)     |
+| int32           | INT32                             |                          |
+| int64           | INT64                             |                          |
+| float           | FLOAT                             |                          |
+| double          | DOUBLE                            |                          |
+| decimal4        | INT32                             | DECIMAL(P, S)            |
+| decimal8        | INT64                             | DECIMAL(P, S)            |
+| decimal16       | BYTE_ARRAY / FIXED_LEN_BYTE_ARRAY | DECIMAL(P, S)            |
+| date            | INT32                             | DATE                     |
+| time            | INT64                             | TIME(false, MICROS)      |
+| timestamptz(6)  | INT64                             | TIMESTAMP(true, MICROS)  |
+| timestamptz(9)  | INT64                             | TIMESTAMP(true, NANOS)   |
+| timestampntz(6) | INT64                             | TIMESTAMP(false, MICROS) |
+| timestampntz(9) | INT64                             | TIMESTAMP(false, NANOS)  |
+| binary          | BINARY                            |                          |
+| string          | BINARY                            | STRING                   |
+| uuid            | FIXED_LEN_BYTE_ARRAY[len=16]      | UUID                     |
+| array           | GROUP; see Arrays below           | LIST                     |
+| object          | GROUP; see Objects below          |                          |
+
+#### Primitive Types
+
+Primitive values can be shredded using the equivalent Parquet primitive type from the table above for `typed_value`.
+
+Unless the value is shredded as an object (see [Objects](#objects)), `typed_value` or `value` (but not both) must be non-null.
+
+#### Arrays
+
+Arrays can be shredded by using a 3-level Parquet list for `typed_value`.
+
+If the value is not an array, `typed_value` must be null.
+If the value is an array, `value` must be null.
+
+The list `element` must be a required group.
+The `element` group can contain `value` and `typed_value` fields.
+The element's `value` field stores the element as Variant-encoded `binary` when the `typed_value` is not present or cannot represent it.
+The `typed_value` field may be omitted when not shredding elements as a specific type.
+When `typed_value` is omitted, `value` must be `required`.
+
+For example, a `tags` Variant may be shredded as a list of strings using the following definition:
```
-optional group variant_col {
- required binary metadata;
- optional binary variant_value;
- optional group object {
- optional group a {
- optional binary variant_value;
- optional int64 typed_value;
- }
- optional group b {
- optional binary variant_value;
- optional group object {
- optional group c {
- optional binary variant_value;
- optional binary typed_value (STRING);
+optional group tags (VARIANT) {
+ required binary metadata;
+ optional binary value;
+ optional group typed_value (LIST) { # must be optional to allow a null list
+ repeated group list {
+ required group element { # shredded element
+ optional binary value;
+ optional binary typed_value (STRING);
+ }
}
- }
}
- }
}
```
-| Variant Value | Top-level variant_value | b.variant_value | a.typed_value | a.variant_value | b.object.c.typed_value | b.object.c.variant_value | Notes |
-|---------------|-------------------------|-----------------|---------------|-----------------|------------------------|--------------------------|-------|
-| {a: 123, b: {c: “hello”}} | null | null | 123 | null | hello | null | All values shredded |
-| {a: 1.23, b: {c: “123”}} | null | null | null | 1.23 | 123 | null | a is not an integer |
-| {a: 123, b: {c: null}} | null | null | 123 | null | null | null | b.object.c set to non-null to indicate VariantNull |
-| {a: 123, b: {}} | null | null | 123 | null | null | null | b.object.c set to null, to indicate that c is missing |
-| {a: 123, d: 456} | {d: 456} | null | 123 | null | null | null | Extra field d is stored as variant_value |
-| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null | null | null | Not an object |
+All elements of an array must be present (not missing) because the `array` Variant encoding does not allow missing elements.
+That is, either `typed_value` or `value` (but not both) must be non-null.
+Null elements must be encoded in `value` as Variant null: basic type 0 (primitive) and physical type 0 (null).
-# Parquet Layout
+The series of `tags` arrays `["comedy", "drama"], ["horror", null], ["comedy", "drama", "romance"], null` would be stored as:
-The `array` and `object` fields represent Variant array and object types, respectively.
-Arrays must use the three-level list structure described in [LogicalTypes.md](LogicalTypes.md).
+| Array                            | `value`     | `typed_value` | `typed_value...value` | `typed_value...typed_value`    |
+|----------------------------------|-------------|---------------|-----------------------|--------------------------------|
+| `["comedy", "drama"]`            | null        | non-null      | [null, null]          | [`comedy`, `drama`]            |
+| `["horror", null]`               | null        | non-null      | [null, `00`]          | [`horror`, null]               |
+| `["comedy", "drama", "romance"]` | null        | non-null      | [null, null, null]    | [`comedy`, `drama`, `romance`] |
+| null                             | `00` (null) | null          |                       |                                |
-An `object` field must be a group.
-Each field name of this inner group corresponds to the Variant value's object field name.
-Each inner field's type is a recursively shredded variant value: that is, the fields of each object field must be one or more of `object`, `array`, `typed_value` or `variant_value`.
+#### Objects
-Similarly the elements of an `array` must be a group containing one or more of `object`, `array`, `typed_value` or `variant_value`.
+Fields of an object can be shredded using a Parquet group for `typed_value` that contains shredded fields.
-Each leaf in the schema can store an arbitrary Variant value.
-It contains an `variant_value` binary field and a `typed_value` field.
-If non-null, `variant_value` represents the value stored as a Variant binary.
-The `typed_value` field may be any type that has a corresponding Variant type.
-For each value in the data, at most one of the `typed_value` and `variant_value` may be non-null.
-A writer may omit either field, which is equivalent to all rows being null.
+If the value is an object, `typed_value` must be non-null.
+If the value is not an object, `typed_value` must be null.
+Readers can assume that a value is not an object if `typed_value` is null and that `typed_value` field values are correct; that is, readers do not need to read the `value` column if `typed_value` fields satisfy the required fields.
-Dictionary IDs in a `variant_value` field refer to entries in the top-level `metadata` field.
+Each shredded field in the `typed_value` group is represented as a required group that contains optional `value` and `typed_value` fields.
+The `value` field stores the value as Variant-encoded `binary` when the `typed_value` cannot represent the field.
+This layout enables readers to skip data based on the field statistics for `value` and `typed_value`.
-For an `object`, a null field means that the field does not exist in the reconstructed Variant object.
-All elements of an `array` must be non-null, since array elements cannote be missing.
+The `value` column of a partially shredded object must never contain fields represented by the Parquet columns in `typed_value` (shredded fields).
+Readers may always assume that data is written correctly and that shredded fields in `typed_value` are not present in `value`.
+As a result, reads when a field is defined in both `value` and a `typed_value` shredded field may be inconsistent.
-| typed_value | variant_value | Meaning |
-|-------------|----------------|---------|
-| null | null | Field is Variant Null (not missing) in the reconstructed Variant. |
-| null | non-null | Field may be any type in the reconstructed Variant. |
-| non-null | null | Field has this column’s type in the reconstructed Variant. |
-| non-null | non-null | Invalid |
-
-The `typed_value` may be absent from the Parquet schema for any field, which is equivalent to its value being always null (in which case the shredded field is always stored as a Variant binary).
-By the same token, `variant_value` may be absent, which is equivalent to their value being always null (in which case the field will always have the value Null or have the type of the `typed_value` column).
-
-# Unshredded values
+For example, a Variant `event` field may shred `event_type` (`string`) and `event_ts` (`timestamp`) columns using the following definition:
+```
+optional group event (VARIANT) {
+  required binary metadata;
+  optional binary value;                 # a variant, expected to be an object
+  optional group typed_value {           # shredded fields for the variant object
+    required group event_type {          # shredded field for event_type
+      optional binary value;
+      optional binary typed_value (STRING);
+    }
+    required group event_ts {            # shredded field for event_ts
+      optional binary value;
+      optional int64 typed_value (TIMESTAMP(true, MICROS));
+    }
+  }
+}
+```
-If all values can be represented at a given level by whichever of `object`, `array`, or `typed_value` is present, `variant_value` is set to null.
+The group for each named field must use repetition level `required`.
-If a value cannot be represented by whichever of `object`, `array`, or `typed_value` is present in the schema, then it is stored in `variant_value`, and the other fields are set to null.
-In the Parquet example above, if field `a` was an object or array, or a non-integer scalar, it would be stored in `variant_value`.
+A field's `value` and `typed_value` are set to null (missing) to indicate that the field does not exist in the variant.
+To encode a field that is present with a null value, the `value` must contain a Variant null: basic type 0 (primitive) and physical type 0 (null).
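The null-versus-missing distinction can be illustrated with a small sketch. This is not part of the spec; it only assembles the one-byte Variant primitive value header (upper 6 bits carry the type-specific header, lower 2 bits the basic type) to show that a present-but-null field carries a real byte while a missing field carries none:

```python
def primitive_header(basic_type: int, type_id: int) -> bytes:
    # Variant primitive value header: (type_id << 2) | basic_type
    return bytes([(type_id << 2) | basic_type])

# Variant null: basic type 0 (primitive), physical type 0 (null) -> b"\x00"
VARIANT_NULL = primitive_header(0, 0)

# A present-but-null field stores VARIANT_NULL in its `value` column;
# a missing field stores null for both `value` and `typed_value`.
present_null_field = VARIANT_NULL  # bytes written to the `value` column
missing_field = None               # both columns are null
```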
-If a value is an object, and the `object` field is present but does not contain all of the fields in the value, then any remaining fields are stored in an object in `variant_value`.
-In the Parquet example above, if field `b` was an object of the form `{"c": 1, "d": 2}"`, then the object `{"d": 2}` would be stored in `variant_value`, and the `c` field would be shredded recursively under `object.c`.
+When both `value` and `typed_value` for a field are non-null, engines should fail.
+If engines choose to read in such cases, then the `typed_value` column must be used.
+Readers may always assume that data is written correctly and that only `value` or `typed_value` is defined.
+As a result, reads when both `value` and `typed_value` are defined may be inconsistent with optimized reads that require only one of the columns.
-Note that an array is always fully shredded if there is an `array` field, so the above consideration for `object` is not relevant for arrays: only one of `array` or `variant_value` may be non-null at a given level.
+The table below shows how the series of objects in the first column would be stored:
-# Using variant_value vs. typed_value
+| Event object | `value` | `typed_value` | `typed_value.event_type.value` | `typed_value.event_type.typed_value` | `typed_value.event_ts.value` | `typed_value.event_ts.typed_value` | Notes |
+|--------------|---------|---------------|--------------------------------|--------------------------------------|------------------------------|------------------------------------|-------|
+| `{"event_type": "noop", "event_ts": 1729794114937}` | null | non-null | null | `noop` | null | 1729794114937 | Fully shredded object |
+| `{"event_type": "login", "event_ts": 1729794146402, "email": "[email protected]"}` | `{"email": "[email protected]"}` | non-null | null | `login` | null | 1729794146402 | Partially shredded object |
+| `{"error_msg": "malformed: ..."}` | `{"error_msg": "malformed: ..."}` | non-null | null | null | null | null | Object with all shredded fields missing |
+| `"malformed: not an object"` | `malformed: not an object` | null | | | | | Not an object (stored as Variant string) |
+| `{"event_ts": 1729794240241, "click": "_button"}` | `{"click": "_button"}` | non-null | null | null | null | 1729794240241 | Field `event_type` is missing |
+| `{"event_type": null, "event_ts": 1729794954163}` | null | non-null | `00` (field exists, is null) | null | null | 1729794954163 | Field `event_type` is present and is null |
+| `{"event_type": "noop", "event_ts": "2024-10-24"}` | null | non-null | null | `noop` | `"2024-10-24"` | null | Field `event_ts` is present but not a timestamp |
+| `{ }` | null | non-null | null | null | null | null | Object is present but empty |
+| null | `00` (null) | null | | | | | Object/value is null |
+| missing | null | null | | | | | Object/value is missing |
+| INVALID | `{"event_type": "login"}` | non-null | null | `login` | null | 1729795057774 | INVALID: Shredded field is present in `value` |
+| INVALID | `"a"` | non-null | null | null | null | null | INVALID: `typed_value` is present for non-object |
+| INVALID | `02 00` (object with 0 fields) | null | | | | | INVALID: `typed_value` is null for object |
-In general, it is desirable to store values in the `typed_value` field rather than the `variant_value` whenever possible.
-This will typically improve encoding efficiency, and allow the use of Parquet statistics to filter at the row group or page level.
-In the best case, the `variant_value` fields are all null and the engine does not need to read them (or it can omit them from the schema on write entirely).
-There are two main motivations for including the `variant_value` column:
+Invalid cases in the table above must not be produced by writers.
+Readers must return an object containing the shredded fields when `typed_value` is non-null.
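The write-side behavior shown in the table can be sketched for this one example schema. This is a hypothetical, simplified shredder, not the spec's algorithm: Python dicts stand in for Variant-encoded binaries, `None` stands for a null Parquet value, and present-null handling is omitted (a real writer would store a Variant null in the field's `value`):

```python
def shred_event(event):
    """Shreds one event into (value, typed_value) for the example schema."""
    if not isinstance(event, dict):
        # not an object: the whole value goes to `value`, typed_value is null
        return {"value": event, "typed_value": None}
    shredded = {"event_type": str, "event_ts": int}  # the two shredded fields
    typed = {}
    residual = {}
    for name, val in event.items():
        if name in shredded and isinstance(val, shredded[name]):
            # field matches the shredded type: store in typed_value only
            typed[name] = {"value": None, "typed_value": val}
        elif name in shredded:
            # present but not the shredded type: fall back to the field's value
            typed[name] = {"value": val, "typed_value": None}
        else:
            # unshredded field goes into the partially shredded object value
            residual[name] = val
    for name in shredded:
        # missing shredded fields have both columns null
        typed.setdefault(name, {"value": None, "typed_value": None})
    return {"value": residual or None, "typed_value": typed}
```

For instance, `shred_event({"event_type": "login", "email": "[email protected]"})` leaves `{"email": "[email protected]"}` in `value` while `event_type` is shredded, matching the partially shredded row of the table.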
-1) In a case where there are rare type mismatches (for example, a numeric field with rare strings like “n/a”), we allow the field to be shredded, which could still be a significant performance benefit compared to fetching and decoding the full value/metadata binary.
-2) Since there is a single schema per file, there would be no easy way to recover from a type mismatch encountered late in a file write. Parquet files can be large, and buffering all file data before starting to write could be expensive. Including a variant column for every field guarantees we can adhere to the requested shredding schema.
+## Nesting
-# Top-level metadata
+The `typed_value` associated with any Variant `value` field can be any shredded type, as shown in the sections above.
-Any values stored in a shredded `variant_value` field may have dictionary IDs referring to the metadata.
-There is one metadata value for the entire Variant record, and that is stored in the top-level `metadata` field.
-This means any `variant_value` values in the shredded representation is only the "value" portion of the [Variant Binary Encoding](VariantEncoding.md).
+For example, the `event` object above may also shred sub-fields as object (`location`) or array (`tags`).
-The metadata is kept at the top-level, instead of shredding the metadata with the shredded variant values because:
-* Simplified shredding scheme and specification. No need for additional struct-of-binary values, or custom concatenated binary scheme for `variant_value`.
-* Simplified and good performance for write shredding. No need to rebuild the metadata, or re-encode IDs for `variant_value`.
-* Simplified and good performance for Variant reconstruction. No need to re-encode IDs for `variant_value`.
+```
+optional group event (VARIANT) {
+ required binary metadata;
+ optional binary value;
+ optional group typed_value {
+ required group event_type {
+ optional binary value;
+ optional binary typed_value (STRING);
+ }
+ required group event_ts {
+ optional binary value;
+ optional int64 typed_value (TIMESTAMP(true, MICROS));
+ }
+ required group location {
+ optional binary value;
+ optional group typed_value {
+ required group latitude {
+ optional binary value;
+ optional double typed_value;
+ }
+ required group longitude {
+ optional binary value;
+ optional double typed_value;
+ }
+ }
+ }
+ required group tags {
+ optional binary value;
+ optional group typed_value (LIST) {
+ repeated group list {
+ required group element {
+ optional binary value;
+ optional binary typed_value (STRING);
+ }
+ }
+ }
+ }
+ }
+}
+```
# Data Skipping
-Shredded columns are expected to store statistics in the same format as a normal Parquet column.
-In general, the engine can only skip a row group or page if all rows in the `variant_value` field are null, since it is possible for a `variant_get` expression to successfully cast a value from the `variant_value` to the target type.
-For example, if `typed_value` is of type `int64`, then the string “123” might be contained in `variant_value`, which would not be reflected in statistics, but could be retained by a filter like `where variant_get(col, “$.field”, “long”) = 123`.
-If `variant_value` is all-null, then the engine can prune pages or row groups based on `typed_value`.
-This specification is not strict about what values may be stored in `variant_value` rather than `typed_value`, so it is not safe to skip rows based on `typed_value` unless the corresponding `variant_value` column is all-null, or the engine has specific knowledge of the behavior of the writer that produced the shredded data.
+Statistics for `typed_value` columns can be used for file, row group, or page skipping when `value` is always null (missing).
-# Shredding Semantics
+When the corresponding `value` column is all nulls, all values must be the shredded `typed_value` field's type.
+Because the type is known, comparisons with values of that type are valid.
+`IS NULL`/`IS NOT NULL` and `IS NAN`/`IS NOT NAN` filter results are also valid.
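The skipping rule can be sketched as follows. This is a hypothetical check, not part of the spec; `ColumnStats` is an invented stand-in for Parquet column statistics:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ColumnStats:
    # hypothetical stand-in for Parquet page/row-group statistics
    null_count: int
    value_count: int
    min_value: Optional[int] = None
    max_value: Optional[int] = None

def can_skip_on_typed_value(value_stats: ColumnStats,
                            typed_stats: ColumnStats,
                            lower: int, upper: int) -> bool:
    """True if a page/row group can be skipped for a predicate of the
    form `typed_value BETWEEN lower AND upper` on the shredded type."""
    # Skipping on typed_value statistics is only safe when the sibling
    # `value` column is all null, so every value has the shredded type.
    if value_stats.null_count != value_stats.value_count:
        return False
    if typed_stats.min_value is None or typed_stats.max_value is None:
        return False
    # the stats range misses the predicate range entirely
    return typed_stats.max_value < lower or typed_stats.min_value > upper
```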
-Reconstruction of Variant value from a shredded representation is not expected to produce a bit-for-bit identical binary to the original unshredded value.
-For example, in a reconstructed Variant value, the order of object field values may be different from the original binary.
-This is allowed since the [Variant Binary Encoding](VariantEncoding.md#object-field-id-order-and-uniqueness) does not require an ordering of the field values, but the field IDs will still be ordered lexicographically according to the corresponding field names.
+Comparisons with values of other types are not necessarily valid and data should not be skipped.
-The physical representation of scalar values may also be different in the reconstructed Variant binary.
-In particular, the [Variant Binary Encoding](VariantEncoding.md) considers all integer and decimal representations to represent a single logical type.
-This flexibility enables shredding to be applicable in more scenarios, while maintaining all information and values losslessly.
-As a result, it is valid to shred a decimal into a decimal column with a different scale, or to shred an integer as a decimal, as long as no numeric precision is lost.
-For example, it would be valid to write the value 123 to a Decimal(9, 2) column, but the value 1.234 would need to be written to the `variant_value` column.
-When reconstructing, it would be valid for a reader to reconstruct 123 as an integer, or as a Decimal(9, 2).
-Engines should not depend on the physical type of a Variant value, only the logical type.
+Casting behavior for Variant is delegated to processing engines.
+For example, the interpretation of a string as a timestamp may depend on the engine's SQL session time zone.
-On the other hand, shredding as a different logical type is not allowed.
-For example, the integer value 123 could not be shredded to a string `typed_value` column as the string "123", since that would lose type information.
-It would need to be written to the `variant_value` column.
+## Reconstructing a Shredded Variant
-# Reconstructing a Variant
+It is possible to recover an unshredded Variant value using a recursive algorithm, where the initial call is to `construct_variant` with the top-level Variant group fields.
-It is possible to recover a full Variant value using a recursive algorithm, where the initial call is to `ConstructVariant` with the top-level fields, which are assumed to be null if they are not present in the schema.
+```python
+def construct_variant(metadata: Metadata, value: Variant, typed_value: Any) -> Variant:
+    """Constructs a Variant from value and typed_value"""
+    if typed_value is not None:
+        if isinstance(typed_value, dict):
+            # this is a shredded object
+            object_fields = {
+                name: construct_variant(metadata, field.value, field.typed_value)
+                for (name, field) in typed_value.items()
+            }
-```
-# Constructs a Variant from `variant_value`, `object`, `array` and `typed_value`.
-# Only one of object, array and typed_value may be non-null.
-def ConstructVariant(variant_value, object, array, typed_value):
-  if object is null and array is null and typed_value is null and variant_value is null: return VariantNull
-  if object is not null:
-    return ConstructObject(variant_value, object)
-  elif array is not null:
-    return ConstructArray(array)
-  elif typed_value is not null:
-    return cast(typed_value as Variant)
-  else:
-    variant_value
-
-# Construct an object from an `object` group, and a (possibly null) Variant variant_value
-def ConstructObject(variant_value, object):
-  # If variant_value is present and is not an Object, then the result is ambiguous.
-  assert(variant_value is null or is_object(variant_value))
-  # Null fields in the object are missing from the reconstructed Variant.
-  nonnull_object_fields = object.fields.filter(field -> field is not null)
-  all_keys = Union(variant_value.keys, non_null_object_fields)
-  return VariantObject(all_keys.map { key ->
-    if key in object: (key, ConstructVariant(object[key].variant_value, object[key].object, object[key].array, object[key].typed_value))
-    else: (key, variant_value[key])
-  })
-
-def ConstructArray(array):
-  newVariantArray = VariantArray()
-  for i in range(array.size):
-    newVariantArray.append(ConstructVariant(array[i].variant_value, array[i].object, array[i].array, array[i].typed_value)
-```
+        if value is not None:
+            # this is a partially shredded object
+            assert isinstance(value, VariantObject), "partially shredded value must be an object"
+            assert typed_value.keys().isdisjoint(value.keys()), "object keys must be disjoint"
-# Nested Parquet Example
+            # union the shredded fields and non-shredded fields
+            return VariantObject(metadata, object_fields).union(VariantObject(metadata, value))
-This section describes a more deeply nested example, using a top-level array as the shredding type.
+ else:
+ return VariantObject(metadata, object_fields)
-Below is a sample of JSON that would be fully shredded in this example.
-It contains an array of objects, containing an `a` field shredded as an array, and a `b` field shredded as an integer.
+ elif isinstance(typed_value, list):
+ # this is a shredded array
+        assert value is None, "shredded array must not conflict with variant value"
-```
-[
- {
- "a": [1, 2, 3],
- "b": 100
- },
- {
- "a": [4, 5, 6],
- "b": 200
- }
-]
-```
+ elements = [
+ construct_variant(metadata, elem.value, elem.typed_value)
+ for elem in list(typed_value)
+ ]
+ return VariantArray(metadata, elements)
+ else:
+ # this is a shredded primitive
+        assert value is None, "shredded primitive must not conflict with variant value"
-The corresponding Parquet schema with “a” and “b” as leaf types is:
+ return primitive_to_variant(typed_value)
-```
-optional group variant_col {
- required binary metadata;
- optional binary variant_value;
- optional group array (LIST) {
- repeated group list {
- optional group element {
- optional binary variant_value;
- optional group object {
- optional group a {
- optional binary variant_value;
- optional group array (LIST) {
- repeated group list {
- optional group element {
- optional int64 typed_value;
- optional binary variant_value;
- }
- }
- }
- }
- optional group b {
- optional int64 typed_value;
- optional binary variant_value;
- }
- }
- }
- }
- }
-}
-```
+ elif value is not None:
+ return Variant(metadata, value)
-In the above example schema, if “a” is an array containing a mix of integer and non-integer values, the engine will shred individual elements appropriately into either `typed_value` or `variant_value`.
-If the top-level Variant is not an array (for example, an object), the engine cannot shred the value and it will store it in the top-level `variant_value`.
-Similarly, if "a" is not an array, it will be stored in the `variant_value` under "a".
+ else:
+ # value is missing
+ return None
-Consider the following example:
-
-```
-[
- {
- "a": [1, 2, 3],
- "b": 100,
- “c”: “unexpected”
- },
- {
- "a": [4, 5, 6],
- "b": 200
- },
- “not an object”
-]
+def primitive_to_variant(typed_value: Any) -> Variant:
+ if isinstance(typed_value, int):
+ return VariantInteger(typed_value)
+ elif isinstance(typed_value, str):
+ return VariantString(typed_value)
+ ...
```
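A self-contained toy version of this recursion can be run directly. It uses plain Python dicts, lists, and primitives in place of the spec's Variant classes, ignores `metadata`, and omits present-null handling; shredded fields are dicts of the form `{"value": ..., "typed_value": ...}`:

```python
MISSING = object()  # sentinel: the field/value is missing entirely

def reconstruct(value, typed_value):
    """Toy reconstruction of an unshredded value from shredded columns."""
    if typed_value is not None:
        if isinstance(typed_value, dict):
            # shredded object: rebuild each field, dropping missing ones
            fields = {}
            for name, f in typed_value.items():
                v = reconstruct(f["value"], f["typed_value"])
                if v is not MISSING:
                    fields[name] = v
            if value is not None:
                # partially shredded object: shredded and residual keys
                # must be disjoint
                assert set(fields).isdisjoint(value)
                fields.update(value)
            return fields
        elif isinstance(typed_value, list):
            # shredded array: every element is reconstructed
            return [reconstruct(e["value"], e["typed_value"]) for e in typed_value]
        else:
            return typed_value  # shredded primitive
    elif value is not None:
        return value  # unshredded Variant value
    return MISSING
```

For example, reconstructing a partially shredded event merges the shredded `event_type`/`event_ts` fields with the residual `email` field from `value`.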
-The second array element can be fully shredded, but the first and third cannot be. The contents of `variant_col.array[*].variant_value` would be as follows:
-
-```
-[
- { “c”: “unexpected” },
- NULL,
- “not an object”
-]
-```
-# Backward and forward compatibility
+## Backward and forward compatibility
-Shredding is an optional feature of Variant, and readers must continue to be able to read a group containing only a `value` and `metadata` field.
+Shredding is an optional feature of Variant, and readers must continue to be able to read a group containing only `value` and `metadata` fields.
-Any fields in the same group as `typed_value`/`variant_value` that start with `_` (underscore) can be ignored.
-This is intended to allow future backwards-compatible extensions.
-In particular, the field names `_metadata_key_paths` and any name starting with `_spark` are reserved, and should not be used by other implementations.
-Any extra field names that do not start with an underscore should be assumed to be backwards incompatible, and readers should fail when reading such a schema.
+Engines that do not write shredded values must be able to read shredded values according to this spec or must fail.
-Engines without shredding support are not expected to be able to read Parquet files that use shredding.
-Since different files may contain conflicting schemas (e.g. a `typed_value` column with incompatible types in two files), it may not be possible to infer or specify a single schema that would allow all Parquet files for a table to be read.
+Different files may contain conflicting shredding schemas.
+That is, files may contain different `typed_value` columns for the same Variant with incompatible types.
+It may not be possible to infer or specify a single shredded schema that would allow all Parquet files for a table to be read without reconstructing the value as a Variant.