This is an automated email from the ASF dual-hosted git repository.
julien pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/parquet-format.git
The following commit(s) were added to refs/heads/master by this push:
new 4f20815 GH-455: Add Variant specification docs (#456)
4f20815 is described below
commit 4f208158dba80ff4bff4afaa4441d7270103dff6
Author: Gene Pang <[email protected]>
AuthorDate: Wed Oct 9 08:57:45 2024 -0700
GH-455: Add Variant specification docs (#456)
* Add Variant specification docs
---------
Co-authored-by: Ryan Blue <[email protected]>
Co-authored-by: Julien Le Dem <[email protected]>
---
VariantEncoding.md | 429 ++++++++++++++++++++++++++++++++++++++++++++++++++++
VariantShredding.md | 300 ++++++++++++++++++++++++++++++++++++
2 files changed, 729 insertions(+)
diff --git a/VariantEncoding.md b/VariantEncoding.md
new file mode 100644
index 0000000..1eac3bc
--- /dev/null
+++ b/VariantEncoding.md
@@ -0,0 +1,429 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one
+ - or more contributor license agreements. See the NOTICE file
+ - distributed with this work for additional information
+ - regarding copyright ownership. The ASF licenses this file
+ - to you under the Apache License, Version 2.0 (the
+ - "License"); you may not use this file except in compliance
+ - with the License. You may obtain a copy of the License at
+ -
+ - http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing,
+ - software distributed under the License is distributed on an
+ - "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ - KIND, either express or implied. See the License for the
+ - specific language governing permissions and limitations
+ - under the License.
+ -->
+
+# Variant Binary Encoding
+
+> [!IMPORTANT]
+> **This specification is still under active development, and has not been
formally adopted.**
+
+A Variant represents a value that is one of:
+- Primitive: A type and corresponding value (e.g. INT, STRING)
+- Array: An ordered list of Variant values
+- Object: An unordered collection of string/Variant pairs (i.e. key/value
pairs). An object may not contain duplicate keys.
+
+A Variant is encoded with 2 binary values, the [value](#value-encoding) and
the [metadata](#metadata-encoding).
+
+There are a fixed number of allowed primitive types, provided in the table
below.
+These represent a commonly supported subset of the [logical
types](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md)
allowed by the Parquet format.
+
+The Variant Binary Encoding allows representation of semi-structured data
(e.g. JSON) in a form that can be efficiently queried by path.
+The design is intended to allow efficient access to nested data even in the
presence of very wide or deep structures.
+
+Another motivation for the representation is that (aside from metadata) each
nested Variant value is contiguous and self-contained.
+For example, in a Variant containing an Array of Variant values, the
representation of an inner Variant value, when paired with the metadata of the
full variant, is itself a valid Variant.
+
+This document describes the Variant Binary Encoding scheme.
+[VariantShredding.md](VariantShredding.md) describes the details of the
Variant shredding scheme.
+
+# Variant in Parquet
+A Variant value in Parquet is represented by a group with 2 fields, named
`value` and `metadata`.
+Both fields `value` and `metadata` are of type `binary`, and cannot be `null`.
+
+# Metadata encoding
+
+The encoded metadata always starts with a header byte.
+```
+               7     6  5   4  3             0
+              +-------+---+---+---------------+
+header        |       |   |   |    version    |
+              +-------+---+---+---------------+
+                  ^         ^
+                  |         +-- sorted_strings
+                  +-- offset_size_minus_one
+```
+The `version` is a 4-bit value that must always contain the value `1`.
+`sorted_strings` is a 1-bit value indicating whether dictionary strings are
sorted and unique.
+`offset_size_minus_one` is a 2-bit value providing the number of bytes per
dictionary size and offset field.
+The actual number of bytes, `offset_size`, is `offset_size_minus_one + 1`.
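As an illustration of the bit layout above, the header byte can be unpacked with a few bit operations (`parse_metadata_header` is a hypothetical helper, not part of the specification):

```python
def parse_metadata_header(header: int) -> dict:
    """Unpack the metadata header byte into its three fields."""
    version = header & 0x0F                   # bits 0-3
    sorted_strings = (header >> 4) & 0x01     # bit 4
    offset_size = ((header >> 6) & 0x03) + 1  # bits 6-7, stored minus one
    return {"version": version,
            "sorted_strings": bool(sorted_strings),
            "offset_size": offset_size}

# 0x11 = version 1, sorted strings, 1-byte offsets
fields = parse_metadata_header(0x11)
```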
+
+The entire metadata is encoded as the following diagram shows:
+```
+                      7                     0
+                     +-----------------------+
+metadata             |        header         |
+                     +-----------------------+
+                     |                       |
+                     :    dictionary_size    :  <-- little-endian, `offset_size` bytes
+                     |                       |
+                     +-----------------------+
+                     |                       |
+                     :        offset         :  <-- little-endian, `offset_size` bytes
+                     |                       |
+                     +-----------------------+
+                                 :
+                     +-----------------------+
+                     |                       |
+                     :        offset         :  <-- little-endian, `offset_size` bytes
+                     |                       |      (`dictionary_size + 1` offsets)
+                     +-----------------------+
+                     |                       |
+                     :         bytes         :
+                     |                       |
+                     +-----------------------+
+```
+
+The metadata begins with the `header` byte, followed by `dictionary_size`, a little-endian value of `offset_size` bytes that gives the number of string values in the dictionary.
+Next is the `offset` list, which contains `dictionary_size + 1` values.
+Each `offset` is a little-endian value of `offset_size` bytes, and represents the starting byte offset of the i-th string in `bytes`.
+The first `offset` value is always `0`, and the last `offset` value is always the total length of `bytes`.
+The last part of the metadata is `bytes`, which stores all the string values in the dictionary.
+
+## Metadata encoding grammar
+
+The grammar for encoded metadata is as follows:
+
+```
+metadata: <header> <dictionary_size> <dictionary>
+header: 1 byte (<version> | <sorted_strings> << 4 | (<offset_size_minus_one> << 6))
+version: a 4-bit version ID. Currently, must always contain the value 1
+sorted_strings: a 1-bit value indicating whether metadata strings are sorted and unique
+offset_size_minus_one: a 2-bit value providing the number of bytes per dictionary size and offset field
+dictionary_size: `offset_size` bytes. little-endian value indicating the number of strings in the dictionary
+dictionary: <offset>* <bytes>
+offset: `offset_size` bytes. little-endian value indicating the starting position of the i-th string in `bytes`. The list must contain `dictionary_size + 1` values, where the last value is the total length of `bytes`.
+bytes: dictionary string values
+```
+
+Notes:
+- Offsets are relative to the start of the `bytes` array.
+- The length of the i-th string can be computed as `offset[i+1] - offset[i]`.
+- The offset of the first string is always equal to 0 and is therefore redundant. It is included in the spec to simplify in-memory processing.
+- `offset_size_minus_one` indicates the number of bytes per `dictionary_size` and `offset` entry. I.e. a value of 0 indicates 1-byte offsets, 1 indicates 2-byte offsets, 2 indicates 3-byte offsets and 3 indicates 4-byte offsets.
+- If `sorted_strings` is set to 1, strings in the dictionary must be unique and sorted in lexicographic order. If the value is set to 0, readers may not make any assumptions about string order or uniqueness.
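As a sketch of the layout described above, the following hypothetical encoder builds a metadata buffer. It assumes 1-byte offsets (`offset_size_minus_one = 0`) for brevity, so the dictionary must stay small:

```python
def encode_metadata(strings, sorted_strings=False):
    """Build Variant metadata: header, dictionary_size, offsets, bytes.
    Illustration only; always uses 1-byte sizes/offsets."""
    data = b"".join(s.encode("utf-8") for s in strings)
    assert len(data) < 256 and len(strings) < 256, "1-byte fields only"
    offset_size_minus_one = 0                       # 1-byte offsets
    header = 1 | (int(sorted_strings) << 4) | (offset_size_minus_one << 6)
    out = bytearray([header, len(strings)])         # header, dictionary_size
    pos, offsets = 0, [0]
    for s in strings:
        pos += len(s.encode("utf-8"))
        offsets.append(pos)                         # dictionary_size + 1 offsets
    return bytes(out) + bytes(offsets) + data

meta = encode_metadata(["a", "b", "c"], sorted_strings=True)
```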
+
+
+# Value encoding
+
+The entire encoded Variant value includes the `value_metadata` byte, and then 0 or more bytes for the `value_data`.
+```
+                      7                                  2 1          0
+                     +------------------------------------+------------+
+value                |            value_header            | basic_type |
+                     +------------------------------------+------------+
+                     |                                                 |
+                     :                   value_data                    :  <-- 0 or more bytes
+                     |                                                 |
+                     +-------------------------------------------------+
+```
+## Basic Type
+
+The `basic_type` is a 2-bit value that represents which basic type the Variant value is.
+The [basic types table](#encoding-types) shows what each value represents.
+
+## Value Header
+
+The `value_header` is a 6-bit value that contains more information about the
type, and the format depends on the `basic_type`.
+
+### Value Header for Primitive type (`basic_type`=0)
+
+When `basic_type` is `0`, `value_header` is a 6-bit `primitive_header`.
+The [primitive types table](#encoding-types) shows what each value represents.
+```
+                      5                     0
+                     +-----------------------+
+value_header         |   primitive_header    |
+                     +-----------------------+
+```
+
+### Value Header for Short string (`basic_type`=1)
+
+When `basic_type` is `1`, `value_header` is a 6-bit `short_string_header`.
+```
+                      5                     0
+                     +-----------------------+
+value_header         |  short_string_header  |
+                     +-----------------------+
+```
+The `short_string_header` value is the length of the string.
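A minimal sketch of this encoding (a hypothetical helper assuming UTF-8 strings under 64 bytes):

```python
def encode_short_string(s: str) -> bytes:
    """Encode a short string Variant value: basic_type=1,
    string length folded into the 6-bit value_header."""
    data = s.encode("utf-8")
    assert len(data) < 64, "short strings must be under 64 bytes"
    header_byte = 1 | (len(data) << 2)  # basic_type=1, length in high bits
    return bytes([header_byte]) + data

v = encode_short_string("hi")  # header 0x09, then b"hi"
```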
+
+### Value Header for Object (`basic_type`=2)
+
+When `basic_type` is `2`, `value_header` is made up of
`field_offset_size_minus_one`, `field_id_size_minus_one`, and `is_large`.
+```
+                5   4  3     2 1     0
+              +---+---+-------+-------+
+value_header  |   |   |       |       |
+              +---+---+-------+-------+
+                    ^     ^       ^
+                    |     |       +-- field_offset_size_minus_one
+                    |     +-- field_id_size_minus_one
+                    +-- is_large
+```
+`field_offset_size_minus_one` and `field_id_size_minus_one` are 2-bit values
that represent the number of bytes used to encode the field offsets and field
ids.
+The actual number of bytes is computed as `field_offset_size_minus_one + 1`
and `field_id_size_minus_one + 1`.
+`is_large` is a 1-bit value that indicates how many bytes are used to encode
the number of elements.
+If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are
used.
+
+### Value Header for Array (`basic_type`=3)
+
+When `basic_type` is `3`, `value_header` is made up of
`field_offset_size_minus_one`, and `is_large`.
+```
+                5       3   2  1     0
+              +-----------+---+-------+
+value_header  |           |   |       |
+              +-----------+---+-------+
+                            ^     ^
+                            |     +-- field_offset_size_minus_one
+                            +-- is_large
+```
+`field_offset_size_minus_one` is a 2-bit value that represents the number of
bytes used to encode the field offset.
+The actual number of bytes is computed as `field_offset_size_minus_one + 1`.
+`is_large` is a 1-bit value that indicates how many bytes are used to encode
the number of elements.
+If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are
used.
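The object and array header layouts above can be decoded together; the sketch below is a hypothetical helper, not part of the specification:

```python
def parse_value_byte(b: int) -> dict:
    """Split a value_metadata byte for objects (basic_type=2)
    and arrays (basic_type=3) per the layouts above."""
    basic_type = b & 0x03
    header = b >> 2
    if basic_type == 2:                               # object
        return {"basic_type": 2,
                "field_offset_size": (header & 0x03) + 1,
                "field_id_size": ((header >> 2) & 0x03) + 1,
                "is_large": bool((header >> 4) & 0x01)}
    if basic_type == 3:                               # array
        return {"basic_type": 3,
                "field_offset_size": (header & 0x03) + 1,
                "is_large": bool((header >> 2) & 0x01)}
    raise ValueError("not an object or array")
```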
+
+## Value Data
+
+The `value_data` encoding format depends on the type specified by `value_metadata`.
+For some types, `value_data` is 0 bytes long.
+
+### Value Data for Primitive type (`basic_type`=0)
+
+When `basic_type` is `0`, `value_data` depends on the `primitive_header` value.
+The [primitive types table](#encoding-types) shows the encoding format for
each primitive type.
+
+### Value Data for Short string (`basic_type`=1)
+
+When `basic_type` is `1`, `value_data` is the sequence of bytes that
represents the string.
+
+### Value Data for Object (`basic_type`=2)
+
+When `basic_type` is `2`, `value_data` encodes an object.
+The encoding format is shown in the following diagram:
+```
+                       7                     0
+                      +-----------------------+
+object value_data     |                       |
+                      :     num_elements      :  <-- little-endian, 1 or 4 bytes
+                      |                       |
+                      +-----------------------+
+                      |                       |
+                      :       field_id       :  <-- little-endian, `field_id_size` bytes
+                      |                       |
+                      +-----------------------+
+                                  :
+                      +-----------------------+
+                      |                       |
+                      :       field_id       :  <-- little-endian, `field_id_size` bytes
+                      |                       |      (`num_elements` field_ids)
+                      +-----------------------+
+                      |                       |
+                      :     field_offset     :  <-- little-endian, `field_offset_size` bytes
+                      |                       |
+                      +-----------------------+
+                                  :
+                      +-----------------------+
+                      |                       |
+                      :     field_offset     :  <-- little-endian, `field_offset_size` bytes
+                      |                       |      (`num_elements + 1` field_offsets)
+                      +-----------------------+
+                      |                       |
+                      :         value         :
+                      |                       |
+                      +-----------------------+
+                                  :
+                      +-----------------------+
+                      |                       |
+                      :         value         :  <-- (`num_elements` values)
+                      |                       |
+                      +-----------------------+
+```
+An object `value_data` begins with `num_elements`, a 1-byte or 4-byte little-endian value, representing the number of elements in the object.
+The size in bytes of `num_elements` is indicated by `is_large` in the `value_header`.
+Next is a list of `field_id` values.
+There are `num_elements` entries, and each `field_id` is a little-endian value of `field_id_size` bytes.
+A `field_id` is an index into the dictionary in the metadata.
+The `field_id` list is followed by a `field_offset` list.
+There are `num_elements + 1` entries, and each `field_offset` is a little-endian value of `field_offset_size` bytes.
+A `field_offset` represents the byte offset (relative to the first byte of the first `value`) where the i-th `value` starts.
+The last `field_offset` points to the byte after the end of the last `value`.
+The `field_offset` list is followed by the `value` list.
+There are `num_elements` `value` entries, and each `value` is an encoded Variant value.
+For the i-th key-value pair of the object, the key is the metadata dictionary entry indexed by the i-th `field_id`, and the value is the Variant `value` starting at the i-th `field_offset`.
+
+The field ids and field offsets must be in lexicographical order of the
corresponding field names in the metadata dictionary.
+However, the actual `value` entries do not need to be in any particular order.
+This implies that the `field_offset` values may not be monotonically
increasing.
+For example, for the following object:
+```
+{
+ "c": 3,
+ "b": 2,
+ "a": 1
+}
+```
+The `field_id` list must be `[<id for key "a">, <id for key "b">, <id for key
"c">]`, in lexicographical order.
+The `field_offset` list must be `[<offset for value 1>, <offset for value 2>,
<offset for value 3>, <last offset>]`.
+The `value` list can be in any order.
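The object layout can be walked with a short decoder. This hypothetical sketch assumes 1-byte field ids and offsets and `is_large` = 0; it is an illustration, not a full reader:

```python
def object_fields(value: bytes, dictionary: list):
    """Return (key, value_slice) pairs of an object value with 1-byte
    field ids/offsets and a 1-byte num_elements."""
    assert value[0] & 0x03 == 2, "not an object"
    n = value[1]                          # num_elements (1 byte, is_large=0)
    ids = value[2:2 + n]                  # num_elements field_ids
    offs = value[2 + n:2 + n + n + 1]     # num_elements + 1 field_offsets
    values_start = 2 + n + n + 1          # first byte of the first value
    return [(dictionary[ids[i]],
             value[values_start + offs[i]:values_start + offs[i + 1]])
            for i in range(n)]

# {"a": 1, "b": 2} with int8 values (primitive type id 3 -> byte 0x0C)
v = bytes([2, 2, 0, 1, 0, 2, 4, 0x0C, 1, 0x0C, 2])
pairs = object_fields(v, ["a", "b"])
```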
+
+### Value Data for Array (`basic_type`=3)
+
+When `basic_type` is `3`, `value_data` encodes an array. The encoding format
is shown in the following diagram:
+```
+                       7                     0
+                      +-----------------------+
+array value_data      |                       |
+                      :     num_elements      :  <-- little-endian, 1 or 4 bytes
+                      |                       |
+                      +-----------------------+
+                      |                       |
+                      :     field_offset     :  <-- little-endian, `field_offset_size` bytes
+                      |                       |
+                      +-----------------------+
+                                  :
+                      +-----------------------+
+                      |                       |
+                      :     field_offset     :  <-- little-endian, `field_offset_size` bytes
+                      |                       |      (`num_elements + 1` field_offsets)
+                      +-----------------------+
+                      |                       |
+                      :         value         :
+                      |                       |
+                      +-----------------------+
+                                  :
+                      +-----------------------+
+                      |                       |
+                      :         value         :  <-- (`num_elements` values)
+                      |                       |
+                      +-----------------------+
+```
+An array `value_data` begins with `num_elements`, a 1-byte or 4-byte little-endian value, representing the number of elements in the array.
+The size in bytes of `num_elements` is indicated by `is_large` in the `value_header`.
+Next is a `field_offset` list.
+There are `num_elements + 1` entries, and each `field_offset` is a little-endian value of `field_offset_size` bytes.
+A `field_offset` represents the byte offset (relative to the first byte of the first `value`) where the i-th `value` starts.
+The last `field_offset` points to the byte after the last byte of the last `value`.
+The `field_offset` list is followed by the `value` list.
+There are `num_elements` `value` entries, and each `value` is an encoded Variant value.
+For the i-th array element, the value is the Variant `value` starting at the i-th `field_offset`.
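Splitting an array into its element slices follows the same pattern; this hypothetical sketch again assumes 1-byte offsets and `is_large` = 0:

```python
def array_elements(value: bytes):
    """Split an array value with 1-byte offsets and a 1-byte
    num_elements into per-element Variant slices."""
    assert value[0] & 0x03 == 3, "not an array"
    n = value[1]                      # num_elements (1 byte, is_large=0)
    offs = value[2:2 + n + 1]         # num_elements + 1 field_offsets
    start = 2 + n + 1                 # first byte of the first value
    return [value[start + offs[i]:start + offs[i + 1]] for i in range(n)]

# [1, 2] with int8 values (primitive type id 3 -> byte 0x0C)
elems = array_elements(bytes([3, 2, 0, 2, 4, 0x0C, 1, 0x0C, 2]))
```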
+
+## Value encoding grammar
+
+The grammar for an encoded value is:
+
+```
+value: <value_metadata> <value_data>?
+value_metadata: 1 byte (<basic_type> | (<value_header> << 2))
+basic_type: ID from Basic Type table. <value_header> must be a corresponding variation
+value_header: <primitive_header> | <short_string_header> | <object_header> | <array_header>
+primitive_header: ID from Primitive Type table. <value_data> must be a corresponding variation of <primitive_val>
+short_string_header: unsigned string length in bytes from 0 to 63
+object_header: (is_large << 4 | field_id_size_minus_one << 2 | field_offset_size_minus_one)
+array_header: (is_large << 2 | field_offset_size_minus_one)
+value_data: <primitive_val> | <short_string_val> | <object_val> | <array_val>
+primitive_val: see table for binary representation
+short_string_val: bytes
+object_val: <num_elements> <field_id>* <field_offset>* <fields>
+array_val: <num_elements> <field_offset>* <fields>
+num_elements: a 1 or 4 byte little-endian value (depending on is_large in <object_header>/<array_header>)
+field_id: a 1, 2, 3 or 4 byte little-endian value (depending on field_id_size_minus_one in <object_header>), indexing into the dictionary
+field_offset: a 1, 2, 3 or 4 byte little-endian value (depending on field_offset_size_minus_one in <object_header>/<array_header>), providing the offset in bytes within <fields>
+fields: <value>*
+```
+
+Each `value_data` must correspond to the type defined by `value_metadata`.
Boolean and null types do not have a corresponding `value_data`, since their
type defines their value.
+
+Each `array_val` and `object_val` must contain exactly `num_elements + 1`
values for `field_offset`.
+The last entry is the offset that is one byte past the last field (i.e. the
total size of all fields in bytes).
+All offsets are relative to the first byte of the first field in the
object/array.
+
+`field_id_size_minus_one` and `field_offset_size_minus_one` indicate the number of bytes per field ID/offset.
+For example, a value of 0 indicates 1-byte IDs, 1 indicates 2-byte IDs, 2 indicates 3-byte IDs and 3 indicates 4-byte IDs.
+The `is_large` flag for arrays and objects indicates whether the number of elements is encoded using a one-byte or four-byte value.
+When more than 255 elements are present, `is_large` must be set to true.
+It is valid for an implementation to use a larger value than necessary for any of these fields (e.g. `is_large` may be true for an object with fewer than 256 elements).
+
+The "short string" basic type may be used as an optimization to fold string
length into the type byte for strings less than 64 bytes.
+It is semantically identical to the "string" primitive type.
+
+The Decimal type contains a scale, but no precision. The implied precision of
a decimal value is `floor(log_10(val)) + 1`.
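As a quick sanity check of the formula, the implied precision is just the number of decimal digits in the unscaled value (a hypothetical helper, interpreting `val` as the absolute unscaled value, with a minimum of 1 digit for zero):

```python
def implied_precision(unscaled: int) -> int:
    """Implied precision of a decimal: digit count of the unscaled
    value, i.e. floor(log10(|val|)) + 1 for non-zero values."""
    return max(1, len(str(abs(unscaled))))

implied_precision(100)  # a decimal with unscaled value 100 has precision 3
```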
+
+# Encoding types
+
+| Basic Type | ID | Description |
+|--------------|-----|---------------------------------------------------|
+| Primitive | `0` | One of the primitive types |
+| Short string | `1` | A string with a length less than 64 bytes |
+| Object | `2` | A collection of (string-key, variant-value) pairs |
+| Array | `3` | An ordered sequence of variant values |
+
+| Logical Type  | Physical Type               | Type ID | Equivalent Parquet Type   | Binary format                                                                               |
+|---------------|-----------------------------|---------|---------------------------|---------------------------------------------------------------------------------------------|
+| NullType      | null                        | `0`     | any                       | none                                                                                        |
+| Boolean       | boolean (True)              | `1`     | BOOLEAN                   | none                                                                                        |
+| Boolean       | boolean (False)             | `2`     | BOOLEAN                   | none                                                                                        |
+| Exact Numeric | int8                        | `3`     | INT(8, signed)            | 1 byte                                                                                      |
+| Exact Numeric | int16                       | `4`     | INT(16, signed)           | 2 byte little-endian                                                                        |
+| Exact Numeric | int32                       | `5`     | INT(32, signed)           | 4 byte little-endian                                                                        |
+| Exact Numeric | int64                       | `6`     | INT(64, signed)           | 8 byte little-endian                                                                        |
+| Double        | double                      | `7`     | DOUBLE                    | IEEE little-endian                                                                          |
+| Exact Numeric | decimal4                    | `8`     | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
+| Exact Numeric | decimal8                    | `9`     | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
+| Exact Numeric | decimal16                   | `10`    | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
+| Date          | date                        | `11`    | DATE                      | 4 byte little-endian                                                                        |
+| Timestamp     | timestamp                   | `12`    | TIMESTAMP(true, MICROS)   | 8-byte little-endian                                                                        |
+| TimestampNTZ  | timestamp without time zone | `13`    | TIMESTAMP(false, MICROS)  | 8-byte little-endian                                                                        |
+| Float         | float                       | `14`    | FLOAT                     | IEEE little-endian                                                                          |
+| Binary        | binary                      | `15`    | BINARY                    | 4 byte little-endian size, followed by bytes                                                |
+| String        | string                      | `16`    | STRING                    | 4 byte little-endian size, followed by UTF-8 encoded bytes                                  |
+
+| Decimal Precision     | Decimal value type |
+|-----------------------|--------------------|
+| 1 <= precision <= 9   | int32              |
+| 10 <= precision <= 18 | int64              |
+| 19 <= precision <= 38 | int128             |
+| > 38                  | Not supported      |
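The precision table maps directly to a small selection function (a hypothetical helper for illustration):

```python
def decimal_value_type(precision: int) -> str:
    """Pick the physical decimal value type from the precision."""
    if precision <= 9:
        return "int32"    # decimal4
    if precision <= 18:
        return "int64"    # decimal8
    if precision <= 38:
        return "int128"   # decimal16
    raise ValueError("precision > 38 is not supported")
```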
+
+The *Logical Type* column indicates logical equivalence of physically encoded
types.
+For example, a user expression operating on a string value containing "hello"
should behave the same, whether it is encoded with the short string
optimization, or long string encoding.
+Similarly, user expressions operating on an *int8* value of 1 should behave
the same as a decimal16 with scale 2 and unscaled value 100.
+
+# Field ID order and uniqueness
+
+For objects, field IDs and offsets must be listed in the order of the
corresponding field names, sorted lexicographically.
+Note that the fields themselves are not required to follow this order.
+As a result, offsets will not necessarily be listed in ascending order.
+
+An implementation may rely on this field ID order in searching for field names.
+E.g. a binary search on field IDs (combined with metadata lookups) may be used to find a field with a given name.
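Such a search might look like the following hypothetical sketch, where `dictionary` is the decoded metadata string list and `field_ids` is an object's id list (valid only because ids are ordered by field name, per the spec):

```python
def find_field(dictionary, field_ids, target):
    """Binary-search an object's field_ids by field name.
    Returns the position in the id/offset lists, or None."""
    lo, hi = 0, len(field_ids)
    while lo < hi:
        mid = (lo + hi) // 2
        name = dictionary[field_ids[mid]]  # metadata lookup
        if name < target:
            lo = mid + 1
        elif name > target:
            hi = mid
        else:
            return mid                     # index into ids/offsets
    return None
```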
+
+Field names are case-sensitive.
+Field names are required to be unique for each object.
+It is an error for an object to contain two fields with the same name, whether
or not they have distinct dictionary IDs.
+
+# Versions and extensions
+
+An implementation is not expected to parse a Variant value whose metadata
version is higher than the version supported by the implementation.
+However, new types may be added to the specification without incrementing the
version ID.
+In such a situation, an implementation should be able to read the rest of the
Variant value if desired.
+
+# Shredding
+
+A single Variant object may have poor read performance when only a small subset of fields is needed.
+A better approach is to create separate columns for individual fields,
referred to as shredding or subcolumnarization.
+[VariantShredding.md](VariantShredding.md) describes the Variant shredding
specification in Parquet.
diff --git a/VariantShredding.md b/VariantShredding.md
new file mode 100644
index 0000000..51160a9
--- /dev/null
+++ b/VariantShredding.md
@@ -0,0 +1,300 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one
+ - or more contributor license agreements. See the NOTICE file
+ - distributed with this work for additional information
+ - regarding copyright ownership. The ASF licenses this file
+ - to you under the Apache License, Version 2.0 (the
+ - "License"); you may not use this file except in compliance
+ - with the License. You may obtain a copy of the License at
+ -
+ - http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing,
+ - software distributed under the License is distributed on an
+ - "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ - KIND, either express or implied. See the License for the
+ - specific language governing permissions and limitations
+ - under the License.
+ -->
+
+# Variant Shredding
+
+> [!IMPORTANT]
+> **This specification is still under active development, and has not been
formally adopted.**
+
+The Variant type is designed to store and process semi-structured data
efficiently, even with heterogeneous values.
+Query engines encode each Variant value in a self-describing format, and store
it as a group containing `value` and `metadata` binary fields in Parquet.
+Since data is often partially homogeneous, it can be beneficial to extract certain fields into separate Parquet columns to further improve performance.
+We refer to this process as **shredding**.
+Each Parquet file remains fully self-describing, with no additional metadata
required to read or fully reconstruct the Variant data from the file.
+Combining shredding with a binary residual provides the flexibility to
represent complex, evolving data with an unbounded number of unique fields
while limiting the size of file schemas, and retaining the performance benefits
of a columnar format.
+
+This document focuses on the shredding semantics, Parquet representation,
implications for readers and writers, as well as the Variant reconstruction.
+For now, it does not discuss which fields to shred, user-facing API changes,
or any engine-specific considerations like how to use shredded columns.
+The approach builds upon the [Variant Binary Encoding](VariantEncoding.md),
and leverages the existing Parquet specification.
+
+At a high level, we replace the `value` field of the Variant Parquet group
with one or more fields called `object`, `array`, `typed_value`, and
`variant_value`.
+These represent a fixed schema suitable for constructing the full Variant
value for each row.
+
+Shredding allows a query engine to reap the full benefits of Parquet's
columnar representation, such as more compact data encoding, min/max statistics
for data skipping, and I/O and CPU savings from pruning unnecessary fields not
accessed by a query (including the non-shredded Variant binary data).
+Without shredding, any query that accesses a Variant column must fetch all
bytes of the full binary buffer.
+With shredding, we can get nearly equivalent performance as in a relational
(scalar) data model.
+
+For example, `select variant_get(variant_col, '$.field1.inner_field2', 'string') from tbl` only needs to access `inner_field2`, and the file scan could avoid fetching the rest of the Variant value if this field was shredded into a separate column in the Parquet schema.
+Similarly, for the query `select * from tbl where variant_get(variant_col, '$.id', 'integer') = 123`, the scan could first decode the shredded `id` column, and only fetch/decode the full Variant value for rows that pass the filter.
+
+# Parquet Example
+
+Consider the following Parquet schema together with how Variant values might
be mapped to it.
+Notice that we represent each shredded field in `object` as a group of two
fields, `typed_value` and `variant_value`.
+We extract all homogeneous data items of a certain path into `typed_value`, and set aside incompatible data items in `variant_value`.
+Intuitively, incompatibilities within the same path may occur because we store
the shredding schema per Parquet file, and each file can contain several row
groups.
+Selecting a type for each field that is acceptable for all rows would be
impractical because it would require buffering the contents of an entire file
before writing.
+
+Typically, the expectation is that `variant_value` exists at every level as an
option, along with one of `object`, `array` or `typed_value`.
+If the actual Variant value contains a type that does not match the provided
schema, it is stored in `variant_value`.
+A `variant_value` may also be populated if an object can only be partially represented: any fields that are present in the schema must be written to those fields, and any missing fields are written to `variant_value`.
+
+The `metadata` column is unchanged from its unshredded representation, and may
be referenced in `variant_value` fields in the shredded data.
+
+```
+optional group variant_col {
+ required binary metadata;
+ optional binary variant_value;
+ optional group object {
+ optional group a {
+ optional binary variant_value;
+ optional int64 typed_value;
+ }
+ optional group b {
+ optional binary variant_value;
+ optional group object {
+ optional group c {
+ optional binary variant_value;
+ optional binary typed_value (STRING);
+ }
+ }
+ }
+ }
+}
+```
+
+| Variant Value | Top-level variant_value | b.variant_value | a.typed_value | a.variant_value | b.object.c.typed_value | b.object.c.variant_value | Notes |
+|---------------|-------------------------|-----------------|---------------|-----------------|------------------------|--------------------------|-------|
+| {a: 123, b: {c: "hello"}} | null | null | 123 | null | hello | null | All values shredded |
+| {a: 1.23, b: {c: "123"}} | null | null | null | 1.23 | 123 | null | a is not an integer |
+| {a: 123, b: {c: null}} | null | null | 123 | null | null | VariantNull | b.object.c.variant_value set to non-null to indicate VariantNull |
+| {a: 123, b: {}} | null | null | 123 | null | null | null | b.object.c set to null, to indicate that c is missing |
+| {a: 123, d: 456} | {d: 456} | null | 123 | null | null | null | Extra field d is stored as variant_value |
+| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null | null | null | Not an object |
+
+# Parquet Layout
+
+The `array` and `object` fields represent Variant array and object types,
respectively.
+Arrays must use the three-level list structure described in
https://github.com/apache/parquet-format/blob/master/LogicalTypes.md.
+
+An `object` field must be a group.
+Each field name of this inner group corresponds to the Variant value's object
field name.
+Each inner field's type is a recursively shredded variant value: that is, the
fields of each object field must be one or more of `object`, `array`,
`typed_value` or `variant_value`.
+
+Similarly the elements of an `array` must be a group containing one or more of
`object`, `array`, `typed_value` or `variant_value`.
+
+Each leaf in the schema can store an arbitrary Variant value.
+It contains a `variant_value` binary field and a `typed_value` field.
+If non-null, `variant_value` represents the value stored as a Variant binary.
+The `typed_value` field may be any type that has a corresponding Variant type.
+For each value in the data, at most one of the `typed_value` and
`variant_value` may be non-null.
+A writer may omit either field, which is equivalent to all rows being null.
+
+Dictionary IDs in a `variant_value` field refer to entries in the top-level
`metadata` field.
+
+For an `object`, a null field means that the field does not exist in the reconstructed Variant object.
+All elements of an `array` must be non-null, since array elements cannot be missing.
+
+| typed_value | variant_value | Meaning |
+|-------------|---------------|---------|
+| null        | null          | Field is Variant Null (not missing) in the reconstructed Variant. |
+| null        | non-null      | Field may be any type in the reconstructed Variant. |
+| non-null    | null          | Field has this column's type in the reconstructed Variant. |
+| non-null    | non-null      | Invalid |
+
+The `typed_value` may be absent from the Parquet schema for any field, which is equivalent to its value being always null (in which case the shredded field is always stored as a Variant binary).
+By the same token, `variant_value` may be absent, which is equivalent to its value being always null (in which case the field will always have the value Null or have the type of the `typed_value` column).
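The per-leaf rules in the table above reduce to a small amount of logic. This hypothetical sketch treats column values as already-decoded Python objects rather than real Parquet buffers:

```python
VARIANT_NULL = object()  # stand-in sentinel for an encoded Variant null

def reconstruct_leaf(typed_value, variant_value):
    """Reconstruct one shredded leaf per the typed_value/variant_value
    table: both null means Variant Null; both non-null is invalid."""
    if typed_value is not None and variant_value is not None:
        raise ValueError("invalid: both typed_value and variant_value are set")
    if typed_value is not None:
        return typed_value        # value has the shredded column's type
    if variant_value is not None:
        return variant_value      # any type; decode the Variant binary here
    return VARIANT_NULL           # field present, value is Variant Null
```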
+
+# Unshredded values
+
+If all values can be represented at a given level by whichever of `object`,
`array`, or `typed_value` is present, `variant_value` is set to null.
+
+If a value cannot be represented by whichever of `object`, `array`, or
`typed_value` is present in the schema, then it is stored in `variant_value`,
and the other fields are set to null.
+In the Parquet example above, if field `a` was an object or array, or a
non-integer scalar, it would be stored in `variant_value`.
+
+If a value is an object, and the `object` field is present but does not
contain all of the fields in the value, then any remaining fields are stored in
an object in `variant_value`.
+In the Parquet example above, if field `b` was an object of the form `{"c": 1, "d": 2}`, then the object `{"d": 2}` would be stored in `variant_value`, and the `c` field would be shredded recursively under `object.c`.
+
+Note that an array is always fully shredded if there is an `array` field, so
the above consideration for `object` is not relevant for arrays: only one of
`array` or `variant_value` may be non-null at a given level.
+
+# Using variant_value vs. typed_value
+
+In general, it is desirable to store values in the `typed_value` field rather
than the `variant_value` whenever possible.
+This will typically improve encoding efficiency, and allow the use of Parquet
statistics to filter at the row group or page level.
+In the best case, the `variant_value` fields are all null and the engine does
not need to read them (or it can omit them from the schema on write entirely).
+There are two main motivations for including the `variant_value` column:
+
+1) In a case where there are rare type mismatches (for example, a numeric field with rare strings like "n/a"), we allow the field to be shredded, which could still be a significant performance benefit compared to fetching and decoding the full value/metadata binary.
+2) Since there is a single schema per file, there would be no easy way to
recover from a type mismatch encountered late in a file write. Parquet files
can be large, and buffering all file data before starting to write could be
expensive. Including a variant column for every field guarantees we can adhere
to the requested shredding schema.
+
+# Data Skipping
+
+Shredded columns are expected to store statistics in the same format as a
normal Parquet column.
+In general, the engine can only skip a row group or page if all rows in the
`variant_value` field are null, since it is possible for a `variant_get`
expression to successfully cast a value from the `variant_value` to the target
type.
+For example, if `typed_value` is of type `int64`, then the string `"123"` might be contained in `variant_value`, which would not be reflected in statistics, but could be retained by a filter like `where variant_get(col, "$.field", "long") = 123`.
+If `variant_value` is all-null, then the engine can prune pages or row groups
based on `typed_value`.
+This specification is not strict about what values may be stored in
`variant_value` rather than `typed_value`, so it is not safe to skip rows based
on `typed_value` unless the corresponding `variant_value` column is all-null,
or the engine has specific knowledge of the behavior of the writer that
produced the shredded data.
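+
+The rule above can be sketched as pseudocode (non-normative; the statistics accessors are illustrative names, not part of this specification):
+
+```
+# A page or row group may be pruned using typed_value statistics only
+# when its variant_value column is known to be entirely null.
+def CanPruneWithTypedValueStats(column_chunk):
+  stats = column_chunk.variant_value.statistics
+  return stats.null_count == column_chunk.variant_value.num_values
+```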
+
+# Shredding Semantics
+
+Reconstruction of Variant value from a shredded representation is not expected
to produce a bit-for-bit identical binary to the original unshredded value.
+For example, the order of fields in the binary may change, as may the physical
representation of scalar values.
+
+In particular, the [Variant Binary Encoding](VariantEncoding.md) considers all
integer and decimal representations to represent a single logical type.
+As a result, it is valid to shred a decimal into a decimal column with a
different scale, or to shred an integer as a decimal, as long as no numeric
precision is lost.
+For example, it would be valid to write the value 123 to a Decimal(9, 2)
column, but the value 1.234 would need to be written to the **variant_value**
column.
+When reconstructing, it would be valid for a reader to reconstruct 123 as an
integer, or as a Decimal(9, 2).
+Engines should not depend on the physical type of a Variant value, only the
logical type.
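+
+A writer's decision for numeric values can be sketched as follows (non-normative; the helper names are hypothetical):
+
+```
+# Shred a number into a decimal typed_value column only when no precision is lost.
+def CanShredAsDecimal(value, target_precision, target_scale):
+  rescaled = Rescale(value, target_scale)  # exact rescale, or fails
+  # 123 fits a Decimal(9, 2) column as 123.00; 1.234 does not rescale exactly.
+  return IsExact(rescaled) and NumDigits(rescaled) <= target_precision
+```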
+
+On the other hand, shredding as a different logical type is not allowed.
+For example, the integer value 123 could not be shredded to a string
`typed_value` column as the string "123", since that would lose type
information.
+It would need to be written to the `variant_value` column.
+
+# Reconstructing a Variant
+
+It is possible to recover a full Variant value using a recursive algorithm,
where the initial call is to `ConstructVariant` with the top-level fields,
which are assumed to be null if they are not present in the schema.
+
+```
+# Constructs a Variant from `variant_value`, `object`, `array` and `typed_value`.
+# Only one of object, array and typed_value may be non-null.
+def ConstructVariant(variant_value, object, array, typed_value):
+  if object is null and array is null and typed_value is null and variant_value is null:
+    return VariantNull
+  if object is not null:
+    return ConstructObject(variant_value, object)
+  elif array is not null:
+    return ConstructArray(array)
+  elif typed_value is not null:
+    return cast(typed_value as Variant)
+  else:
+    return variant_value
+
+# Construct an object from an `object` group, and a (possibly null) Variant variant_value.
+def ConstructObject(variant_value, object):
+  # If variant_value is present and is not an Object, then the result is ambiguous.
+  assert(variant_value is null or is_object(variant_value))
+  # Null fields in the object are missing from the reconstructed Variant.
+  non_null_object_fields = object.fields.filter(field -> field is not null)
+  all_keys = Union(variant_value.keys, non_null_object_fields)
+  return VariantObject(all_keys.map { key ->
+    if key in object:
+      (key, ConstructVariant(object[key].variant_value, object[key].object, object[key].array, object[key].typed_value))
+    else:
+      (key, variant_value[key])
+  })
+
+def ConstructArray(array):
+  newVariantArray = VariantArray()
+  for i in range(array.size):
+    newVariantArray.append(ConstructVariant(array[i].variant_value, array[i].object, array[i].array, array[i].typed_value))
+  return newVariantArray
+```
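+
+As a non-normative walk-through, consider a hypothetical shredded value `{"a": 34}` where `a` has an `int64` `typed_value`:
+
+```
+ConstructVariant(variant_value=null, object={a}, array=null, typed_value=null)
+  -> ConstructObject(variant_value=null, object={a})
+     # the only key is "a"; reconstruct it from its typed_value
+     -> ConstructVariant(null, null, null, 34) -> cast(34 as Variant)
+  -> VariantObject({"a": Variant(34)})
+```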
+
+# Nested Parquet Example
+
+This section describes a more deeply nested example, using a top-level array
as the shredding type.
+
+Below is a sample of JSON that would be fully shredded in this example.
+It contains an array of objects, containing an `a` field shredded as an array,
and a `b` field shredded as an integer.
+
+```
+[
+ {
+ "a": [1, 2, 3],
+ "b": 100
+ },
+ {
+ "a": [4, 5, 6],
+ "b": 200
+ }
+]
+```
+
+
+The corresponding Parquet schema with `a` and `b` as leaf types is:
+
+```
+optional group variant_col {
+  required binary metadata;
+  optional binary variant_value;
+  optional group array (LIST) {
+    repeated group list {
+      optional group element {
+        optional binary variant_value;
+        optional group object {
+          optional group a {
+            optional binary variant_value;
+            optional group array (LIST) {
+              repeated group list {
+                optional group element {
+                  optional int64 typed_value;
+                  optional binary variant_value;
+                }
+              }
+            }
+          }
+          optional group b {
+            optional int64 typed_value;
+            optional binary variant_value;
+          }
+        }
+      }
+    }
+  }
+}
+```
+
+In the above example schema, if `a` is an array containing a mix of integer and non-integer values, the engine will shred individual elements appropriately into either `typed_value` or `variant_value`.
+If the top-level Variant is not an array (for example, an object), the engine
cannot shred the value and it will store it in the top-level `variant_value`.
+Similarly, if `a` is not an array, it will be stored in the `variant_value` under `a`.
+
+Consider the following example:
+
+```
+[
+ {
+ "a": [1, 2, 3],
+ "b": 100,
+    "c": "unexpected"
+ },
+ {
+ "a": [4, 5, 6],
+ "b": 200
+ },
+  "not an object"
+]
+```
+
+The second array element can be fully shredded, but the first and third cannot
be. The contents of `variant_col.array[*].variant_value` would be as follows:
+
+```
+[
+  { "c": "unexpected" },
+  NULL,
+  "not an object"
+]
+```
+
+# Backward and forward compatibility
+
+Shredding is an optional feature of Variant, and readers must continue to be
able to read a group containing only a `value` and `metadata` field.
+
+Any fields in the same group as `typed_value`/`variant_value` that start with
`_` (underscore) can be ignored.
+This is intended to allow future backwards-compatible extensions.
+In particular, the field names `_metadata_key_paths` and any name starting
with `_spark` are reserved, and should not be used by other implementations.
+Any extra field names that do not start with an underscore should be assumed
to be backwards incompatible, and readers should fail when reading such a
schema.
+
+Engines without shredding support are not expected to be able to read Parquet
files that use shredding.
+Since different files may contain conflicting schemas (e.g. a `typed_value`
column with incompatible types in two files), it may not be possible to infer
or specify a single schema that would allow all Parquet files for a table to be
read.