This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a commit to branch branch-4.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-4.0 by this push:
new 4e994105af5d [SPARK-51183][SQL] Link to Parquet spec in Variant docs
4e994105af5d is described below
commit 4e994105af5dc4ecc0c4a3361fb370103758a45a
Author: cashmand <[email protected]>
AuthorDate: Mon Feb 17 12:21:54 2025 +0800
[SPARK-51183][SQL] Link to Parquet spec in Variant docs
### What changes were proposed in this pull request?
The Parquet spec in
https://github.com/apache/parquet-format/blob/master/VariantEncoding.md is
based on the one in Spark, but has received a number of updates (especially
related to Shredding).
At this point, the code in Spark more closely matches the latest version of
the Parquet spec (the main gap being the lack of a few new scalar types that
were recently added, and which we will try to add to Spark soon).
This PR updates the README.md and shredding.md files to just point to the
Parquet spec, which we plan to have the Spark code follow.
### Why are the changes needed?
Improve internal documentation and avoid maintaining two copies of the spec.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
None.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #49910 from cashmand/fix_variant_readme.
Authored-by: cashmand <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
(cherry picked from commit b63b90e4c55d7b4f8de6ea2d75fe18cd92c0869d)
Signed-off-by: Wenchen Fan <[email protected]>
---
common/variant/README.md | 380 +-------------------------------------------
common/variant/shredding.md | 244 ----------------------------
2 files changed, 1 insertion(+), 623 deletions(-)
diff --git a/common/variant/README.md b/common/variant/README.md
index 58ebab7bd265..71163c257503 100644
--- a/common/variant/README.md
+++ b/common/variant/README.md
@@ -1,379 +1 @@
-# Overview
-
-A Variant represents a type that contains one of:
-- Primitive: A type and corresponding value (e.g. INT, STRING)
-- Array: An ordered list of Variant values
-- Object: An unordered collection of string/Variant pairs (i.e. key/value
pairs). An object may not contain duplicate keys.
-
-A variant is encoded with 2 binary values, the [value](#value-encoding) and
the [metadata](#metadata-encoding).
-
-There are a fixed number of allowed primitive types, provided in the table
below. These represent a commonly supported subset of the [logical
types](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md)
allowed by Parquet.
-
-The Variant spec allows representation of semi-structured data (e.g. JSON) in
a form that can be efficiently queried by path. The design is intended to allow
efficient access to nested data even in the presence of very wide or deep
structures.
-
-Another motivation for the representation is that (aside from metadata) each
inner Variant value is contiguous and self-contained. For example, in a Variant
containing an Array of Variant values, the representation of an inner Variant
value, when paired with the metadata of the full variant, is itself a valid
Variant.
-
-# Metadata encoding
-
-The encoded metadata always starts with a header byte.
-```
-                   7     6  5   4  3             0
-                  +-------+---+---+---------------+
-header            |       |   |   |    version    |
-                  +-------+---+---+---------------+
-                      ^         ^
-                      |         +-- sorted_strings
-                      +-- offset_size_minus_one
-```
-The `version` is a 4-bit value that must always contain the value `1`.
-`sorted_strings` is a 1-bit value indicating whether dictionary strings are
sorted and unique.
-`offset_size_minus_one` is a 2-bit value providing the number of bytes per
dictionary size and offset field.
-The actual number of bytes, `offset_size`, is `offset_size_minus_one + 1`.
-
-The entire metadata is encoded as the following diagram shows:
-```
-                       7                     0
-                      +-----------------------+
-metadata              |        header         |
-                      +-----------------------+
-                      |                       |
-                      :    dictionary_size    :  <-- little-endian, `offset_size` bytes
-                      |                       |
-                      +-----------------------+
-                      |                       |
-                      :        offset         :  <-- little-endian, `offset_size` bytes
-                      |                       |
-                      +-----------------------+
-                                  :
-                      +-----------------------+
-                      |                       |
-                      :        offset         :  <-- little-endian, `offset_size` bytes
-                      |                       |      (`dictionary_size + 1` offsets)
-                      +-----------------------+
-                      |                       |
-                      :         bytes         :
-                      |                       |
-                      +-----------------------+
-```
-
-The metadata is encoded first with the `header` byte, then `dictionary_size`
which is a little-endian value of `offset_size` bytes, and represents the
number of string values in the dictionary.
-Next is an `offset` list, which contains `dictionary_size + 1` values.
-Each `offset` is a little-endian value of `offset_size` bytes, and represents
the starting byte offset of the i-th string in `bytes`.
-The first `offset` value will always be `0`, and the last `offset` value will
always be the total length of `bytes`.
-The last part of the metadata is `bytes`, which stores all the string values
in the dictionary.
-
-## Metadata encoding grammar
-
-The grammar for encoded metadata is as follows
-
-```
-metadata: <header> <dictionary_size> <dictionary>
-header: 1 byte (<version> | <sorted_strings> << 4 | (<offset_size_minus_one> << 6))
-version: a 4-bit version ID. Currently, must always contain the value 1
-sorted_strings: a 1-bit value indicating whether metadata strings are sorted
-offset_size_minus_one: 2-bit value providing the number of bytes per dictionary size and offset field
-dictionary_size: `offset_size` bytes. little-endian value indicating the number of strings in the dictionary
-dictionary: <offset>* <bytes>
-offset: `offset_size` bytes. little-endian value indicating the starting position of the i-th string in `bytes`. The list should contain `dictionary_size + 1` values, where the last value is the total length of `bytes`.
-bytes: dictionary string values
-```
-
-Notes:
-- Offsets are relative to the start of the `bytes` array.
-- The length of the i-th string can be computed as `offset[i+1] - offset[i]`.
-- The offset of the first string is always `0` and is therefore redundant. It is included in the spec to simplify in-memory processing.
-- `offset_size_minus_one` indicates the number of bytes per `dictionary_size` and `offset` entry. I.e. a value of 0 indicates 1-byte offsets, 1 indicates 2-byte offsets, 2 indicates 3-byte offsets, and 3 indicates 4-byte offsets.
-- If `sorted_strings` is set to 1, strings in the dictionary must be unique and sorted in lexicographic order. If the value is set to 0, readers may not make any assumptions about string order or uniqueness.
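-
-As a worked illustration, here is a minimal decoding sketch following the grammar above (our code, not Spark's implementation; all names are ours):
-
-```
-def decode_metadata(buf: bytes) -> list[str]:
-    header = buf[0]
-    version = header & 0x0F                    # must be 1
-    sorted_strings = (header >> 4) & 0x01      # not needed for decoding
-    offset_size = ((header >> 6) & 0x03) + 1   # offset_size_minus_one + 1
-    assert version == 1
-
-    def read_le(pos: int) -> int:
-        return int.from_bytes(buf[pos:pos + offset_size], "little")
-
-    dictionary_size = read_le(1)
-    # offset i starts at byte 1 + offset_size * (i + 1)
-    offsets = [read_le(1 + offset_size * (i + 1)) for i in range(dictionary_size + 1)]
-    bytes_start = 1 + offset_size * (dictionary_size + 2)
-    return [buf[bytes_start + offsets[i]:bytes_start + offsets[i + 1]].decode("utf-8")
-            for i in range(dictionary_size)]
-```
-
-For example, `decode_metadata(bytes([0x01, 0x01, 0x00, 0x02]) + b"hi")` returns `["hi"]`.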
-
-
-# Value encoding
-
-The entire encoded Variant value includes the `value_metadata` byte, and then 0 or more bytes for the `value_data`.
-```
-                       7                                  2 1          0
-                      +------------------------------------+------------+
-value                 |            value_header            | basic_type |
-                      +------------------------------------+------------+
-                      |                                                 |
-                      :                   value_data                    :  <-- 0 or more bytes
-                      |                                                 |
-                      +-------------------------------------------------+
-```
-## Basic Type
-
-The `basic_type` is a 2-bit value that represents which basic type the Variant value is.
-The [basic types table](#encoding-types) shows what each value represents.
-
-## Value Header
-
-The `value_header` is a 6-bit value that contains more information about the
type, and the format depends on the `basic_type`.
-
-### Value Header for Primitive type (`basic_type`=0)
-
-When `basic_type` is `0`, `value_header` is a 6-bit `primitive_header`.
-The [primitive types table](#encoding-types) shows what each value represents.
-```
-               5                       0
-              +-----------------------+
-value_header  |   primitive_header    |
-              +-----------------------+
-```
-
-### Value Header for Short string (`basic_type`=1)
-
-When `basic_type` is `1`, `value_header` is a 6-bit `short_string_header`.
-```
-               5                       0
-              +-----------------------+
-value_header  |  short_string_header  |
-              +-----------------------+
-```
-The `short_string_header` value is the length of the string.
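-
-For example (a worked illustration): the short string `"hi"` has length 2, so its `value_metadata` byte is `(2 << 2) | 1 = 0x09`, followed by the two bytes `'h'` and `'i'`.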
-
-### Value Header for Object (`basic_type`=2)
-
-When `basic_type` is `2`, `value_header` is made up of
`field_offset_size_minus_one`, `field_id_size_minus_one`, and `is_large`.
-```
-                5   4   3   2   1   0
-              +---+---+-------+-------+
-value_header  |   |   |       |       |
-              +---+---+-------+-------+
-                    ^     ^       ^
-                    |     |       +-- field_offset_size_minus_one
-                    |     +-- field_id_size_minus_one
-                    +-- is_large
-```
-`field_offset_size_minus_one` and `field_id_size_minus_one` are 2-bit values
that represent the number of bytes used to encode the field offsets and field
ids.
-The actual number of bytes is computed as `field_offset_size_minus_one + 1`
and `field_id_size_minus_one + 1`.
-`is_large` is a 1-bit value that indicates how many bytes are used to encode
the number of elements.
-If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are
used.
-
-### Value Header for Array (`basic_type`=3)
-
-When `basic_type` is `3`, `value_header` is made up of
`field_offset_size_minus_one`, and `is_large`.
-```
-                5       3   2   1   0
-              +-----------+---+-------+
-value_header  |           |   |       |
-              +-----------+---+-------+
-                            ^     ^
-                            |     +-- field_offset_size_minus_one
-                            +-- is_large
-```
-`field_offset_size_minus_one` is a 2-bit value that represents the number of
bytes used to encode the field offset.
-The actual number of bytes is computed as `field_offset_size_minus_one + 1`.
-`is_large` is a 1-bit value that indicates how many bytes are used to encode
the number of elements.
-If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are
used.
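-
-Putting these header rules together, an illustrative decoding sketch (ours, not Spark's implementation):
-
-```
-def decode_value_header(value_metadata: int) -> dict:
-    basic_type = value_metadata & 0x03
-    value_header = (value_metadata >> 2) & 0x3F
-    if basic_type == 0:    # Primitive
-        return {"primitive_header": value_header}
-    if basic_type == 1:    # Short string
-        return {"string_length": value_header}
-    if basic_type == 2:    # Object
-        return {"field_offset_size": (value_header & 0x03) + 1,
-                "field_id_size": ((value_header >> 2) & 0x03) + 1,
-                "is_large": (value_header >> 4) & 0x01}
-    # basic_type == 3: Array
-    return {"field_offset_size": (value_header & 0x03) + 1,
-            "is_large": (value_header >> 2) & 0x01}
-```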
-
-## Value Data
-
-The `value_data` encoding format depends on the type specified by
`value_metadata`.
-For some types, the `value_data` will be 0 bytes.
-
-### Value Data for Primitive type (`basic_type`=0)
-
-When `basic_type` is `0`, `value_data` depends on the `primitive_header` value.
-The [primitive types table](#encoding-types) shows the encoding format for
each primitive type.
-
-### Value Data for Short string (`basic_type`=1)
-
-When `basic_type` is `1`, `value_data` is the sequence of bytes that
represents the string.
-
-### Value Data for Object (`basic_type`=2)
-
-When `basic_type` is `2`, `value_data` encodes an object.
-The encoding format is shown in the following diagram:
-```
-                       7                     0
-                      +-----------------------+
-object value_data     |                       |
-                      :     num_elements      :  <-- little-endian, 1 or 4 bytes
-                      |                       |
-                      +-----------------------+
-                      |                       |
-                      :       field_id        :  <-- little-endian, `field_id_size` bytes
-                      |                       |
-                      +-----------------------+
-                                  :
-                      +-----------------------+
-                      |                       |
-                      :       field_id        :  <-- little-endian, `field_id_size` bytes
-                      |                       |      (`num_elements` field_ids)
-                      +-----------------------+
-                      |                       |
-                      :     field_offset      :  <-- little-endian, `field_offset_size` bytes
-                      |                       |
-                      +-----------------------+
-                                  :
-                      +-----------------------+
-                      |                       |
-                      :     field_offset      :  <-- little-endian, `field_offset_size` bytes
-                      |                       |      (`num_elements + 1` field_offsets)
-                      +-----------------------+
-                      |                       |
-                      :         value         :
-                      |                       |
-                      +-----------------------+
-                                  :
-                      +-----------------------+
-                      |                       |
-                      :         value         :  <-- (`num_elements` values)
-                      |                       |
-                      +-----------------------+
-```
-An object `value_data` begins with `num_elements`, a 1-byte or 4-byte
little-endian value, representing the number of elements in the object.
-The size in bytes of `num_elements` is indicated by `is_large` in the
`value_header`.
-Next is a list of `field_id` values.
-There are `num_elements` number of entries and each `field_id` is a
little-endian value of `field_id_size` bytes.
-A `field_id` is an index into the dictionary in the metadata.
-The `field_id` list is followed by a `field_offset` list.
-There are `num_elements + 1` number of entries and each `field_offset` is a
little-endian value of `field_offset_size` bytes.
-A `field_offset` represents the byte offset (relative to the first byte of the
first `value`) where the i-th `value` starts.
-The last `field_offset` points to the byte after the end of the last `value`.
-The `field_offset` list is followed by the `value` list.
-There are `num_elements` number of `value` entries and each `value` is an
encoded Variant value.
-For the i-th key-value pair of the object, the key is the metadata dictionary
entry indexed by the i-th `field_id`, and the value is the Variant `value`
starting from the i-th `field_offset` byte offset.
-
-The field ids and field offsets must be in lexicographical order of the
corresponding field names in the metadata dictionary.
-However, the actual `value` entries do not need to be in any particular order.
-This implies that the `field_offset` values may not be monotonically
increasing.
-For example, for the following object:
-```
-{
-  "c": 3,
-  "b": 2,
-  "a": 1
-}
-```
-The `field_id` list must be `[<id for key "a">, <id for key "b">, <id for key
"c">]`, in lexicographical order.
-The `field_offset` list must be `[<offset for value 1>, <offset for value 2>,
<offset for value 3>, <last offset>]`.
-The `value` list can be in any order.
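-
-A sketch of this writer-side ordering for the example above (hypothetical helper code, not part of the spec):
-
-```
-fields = {"c": 3, "b": 2, "a": 1}
-dictionary = list(fields)                 # e.g. ["c", "b", "a"], in insertion order
-sorted_names = sorted(fields)             # ["a", "b", "c"], lexicographic
-field_ids = [dictionary.index(n) for n in sorted_names]   # [2, 1, 0]
-# field_offsets are listed in the same order as field_ids, but each one points
-# at wherever its value was actually written, so they need not be ascending.
-```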
-
-### Value Data for Array (`basic_type`=3)
-
-When `basic_type` is `3`, `value_data` encodes an array. The encoding format
is shown in the following diagram:
-```
-                       7                     0
-                      +-----------------------+
-array value_data      |                       |
-                      :     num_elements      :  <-- little-endian, 1 or 4 bytes
-                      |                       |
-                      +-----------------------+
-                      |                       |
-                      :     field_offset      :  <-- little-endian, `field_offset_size` bytes
-                      |                       |
-                      +-----------------------+
-                                  :
-                      +-----------------------+
-                      |                       |
-                      :     field_offset      :  <-- little-endian, `field_offset_size` bytes
-                      |                       |      (`num_elements + 1` field_offsets)
-                      +-----------------------+
-                      |                       |
-                      :         value         :
-                      |                       |
-                      +-----------------------+
-                                  :
-                      +-----------------------+
-                      |                       |
-                      :         value         :  <-- (`num_elements` values)
-                      |                       |
-                      +-----------------------+
-```
-An array `value_data` begins with `num_elements`, a 1-byte or 4-byte
little-endian value, representing the number of elements in the array.
-The size in bytes of `num_elements` is indicated by `is_large` in the
`value_header`.
-Next is a `field_offset` list.
-There are `num_elements + 1` number of entries and each `field_offset` is a
little-endian value of `field_offset_size` bytes.
-A `field_offset` represents the byte offset (relative to the first byte of the
first `value`) where the i-th `value` starts.
-The last `field_offset` points to the byte after the last byte of the last
`value`.
-The `field_offset` list is followed by the `value` list.
-There are `num_elements` number of `value` entries and each `value` is an
encoded Variant value.
-For the i-th array entry, the value is the Variant `value` starting from the
i-th `field_offset` byte offset.
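-
-A minimal sketch of locating the i-th element's byte span (our illustration; the sizes are taken from the array's `value_header`):
-
-```
-def array_element_span(data: bytes, i: int, num_size: int, off_size: int):
-    num_elements = int.from_bytes(data[0:num_size], "little")
-    assert 0 <= i < num_elements
-    def offset(j: int) -> int:
-        pos = num_size + off_size * j
-        return int.from_bytes(data[pos:pos + off_size], "little")
-    first_value = num_size + off_size * (num_elements + 1)
-    # Byte span of the i-th encoded value, relative to the start of value_data.
-    return first_value + offset(i), first_value + offset(i + 1)
-```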
-
-## Value encoding grammar
-
-The grammar for an encoded value is:
-
-```
-value: <value_metadata> <value_data>?
-value_metadata: 1 byte (<basic_type> | (<value_header> << 2))
-basic_type: ID from Basic Type table. <value_header> must be a corresponding variation
-value_header: <primitive_header> | <short_string_header> | <object_header> | <array_header>
-primitive_header: ID from Primitive Type table. <value_data> must be a corresponding variation of <primitive_val>
-short_string_header: unsigned string length in bytes from 0 to 63
-object_header: (is_large << 4 | field_id_size_minus_one << 2 | field_offset_size_minus_one)
-array_header: (is_large << 2 | field_offset_size_minus_one)
-value_data: <primitive_val> | <short_string_val> | <object_val> | <array_val>
-primitive_val: see table for binary representation
-short_string_val: bytes
-object_val: <num_elements> <field_id>* <field_offset>* <fields>
-array_val: <num_elements> <field_offset>* <fields>
-num_elements: a 1 or 4 byte little-endian value (depending on is_large in <object_header>/<array_header>)
-field_id: a 1, 2, 3 or 4 byte little-endian value (depending on field_id_size_minus_one in <object_header>), indexing into the dictionary
-field_offset: a 1, 2, 3 or 4 byte little-endian value (depending on field_offset_size_minus_one in <object_header>/<array_header>), providing the offset in bytes within <fields>
-fields: <value>*
-```
-
-Each `value_data` must correspond to the type defined by `value_metadata`.
Boolean and null types do not have a corresponding `value_data`, since their
type defines their value.
-
-Each `array_val` and `object_val` must contain exactly `num_elements + 1`
values for `field_offset`. The last entry is the offset that is one byte past
the last field (i.e. the total size of all fields in bytes). All offsets are
relative to the first byte of the first field in the object/array.
-
-`field_id_size_minus_one` and `field_offset_size_minus_one` indicate the number of bytes per field ID/offset. I.e. a value of 0 indicates 1-byte IDs, 1 indicates 2-byte IDs, 2 indicates 3-byte IDs and 3 indicates 4-byte IDs. The `is_large` flag for arrays and objects indicates whether the number of elements is encoded using a one or four byte value. When more than 255 elements are present, `is_large` must be set to true. It is valid for an implementation to use a larger value [...]
-
-The "short string" basic type may be used as an optimization to fold string
length into the type byte for strings less than 64 bytes. It is semantically
identical to the "string" primitive type.
-
-The Decimal type contains a scale, but no precision. The implied precision of
a decimal value is `floor(log_10(val)) + 1`.
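-
-For example (a worked illustration): a `decimal8` value with scale 2 and unscaled value 12345 represents 123.45, and its implied precision is `floor(log_10(12345)) + 1 = 5`.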
-
-# Encoding types
-
-| Basic Type | ID | Description |
-|--------------|-----|---------------------------------------------------|
-| Primitive | `0` | One of the primitive types |
-| Short string | `1` | A string with a length less than 64 bytes |
-| Object | `2` | A collection of (string-key, variant-value) pairs |
-| Array | `3` | An ordered sequence of variant values |
-
-| Logical Type | Physical Type | Type ID | Equivalent Parquet Type | Binary format |
-|--------------|---------------|---------|-------------------------|---------------|
-| NullType | null | `0` | any | none |
-| Boolean | boolean (True) | `1` | BOOLEAN | none |
-| Boolean | boolean (False) | `2` | BOOLEAN | none |
-| Exact Numeric | int8 | `3` | INT(8, signed) | 1 byte |
-| Exact Numeric | int16 | `4` | INT(16, signed) | 2 byte little-endian |
-| Exact Numeric | int32 | `5` | INT(32, signed) | 4 byte little-endian |
-| Exact Numeric | int64 | `6` | INT(64, signed) | 8 byte little-endian |
-| Double | double | `7` | DOUBLE | IEEE little-endian |
-| Exact Numeric | decimal4 | `8` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
-| Exact Numeric | decimal8 | `9` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
-| Exact Numeric | decimal16 | `10` | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
-| Date | date | `11` | DATE | 4 byte little-endian |
-| Timestamp | timestamp | `12` | TIMESTAMP(true, MICROS) | 8-byte little-endian |
-| TimestampNTZ | timestamp without time zone | `13` | TIMESTAMP(false, MICROS) | 8-byte little-endian |
-| Float | float | `14` | FLOAT | IEEE little-endian |
-| Binary | binary | `15` | BINARY | 4 byte little-endian size, followed by bytes |
-| String | string | `16` | STRING | 4 byte little-endian size, followed by UTF-8 encoded bytes |
-
-| Decimal Precision | Decimal value type |
-|-----------------------|--------------------|
-| 1 <= precision <= 9 | int32 |
-| 10 <= precision <= 18 | int64 |
-| 19 <= precision <= 38 | int128             |
-| > 38 | Not supported |
-
-The *Logical Type* column indicates logical equivalence of physically encoded
types. For example, a user expression operating on a string value containing
"hello" should behave the same, whether it is encoded with the short string
optimization, or long string encoding. Similarly, user expressions operating on
an *int8* value of 1 should behave the same as a decimal16 with scale 2 and
unscaled value 100.
-
-# Field ID order and uniqueness
-
-For objects, field IDs and offsets must be listed in the order of the
corresponding field names, sorted lexicographically. Note that the fields
themselves are not required to follow this order. As a result, offsets will not
necessarily be listed in ascending order.
-
-An implementation may rely on this field ID order when searching for field names. E.g. a binary search on field IDs (combined with metadata lookups) may be used to find a field with a given field name.
-
-Field names are case-sensitive. Field names are required to be unique for each
object. It is an error for an object to contain two fields with the same name,
whether or not they have distinct dictionary IDs.
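-
-A sketch of such a lookup (our illustration, assuming the metadata dictionary and the object's `field_id` list have already been decoded):
-
-```
-def find_field(name, field_ids, dictionary):
-    # Standard lower-bound binary search; comparisons go through the
-    # metadata dictionary, since field_ids are ordered by field name.
-    lo, hi = 0, len(field_ids)
-    while lo < hi:
-        mid = (lo + hi) // 2
-        if dictionary[field_ids[mid]] < name:
-            lo = mid + 1
-        else:
-            hi = mid
-    if lo < len(field_ids) and dictionary[field_ids[lo]] == name:
-        return lo    # index into the field_offset list for this field
-    return None
-```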
-
-# Versions and extensions
-
-An implementation is not expected to parse a Variant value whose metadata
version is higher than the version supported by the implementation. However,
new types may be added to the specification without incrementing the version
ID. In such a situation, an implementation should be able to read the rest of
the Variant value if desired.
-
-# Shredding
-
-For columnar storage formats, a single Variant object may have poor read
performance when only a small number of fields are needed. A better approach is
to create separate columns for individual fields, referred to as shredding or
subcolumnarization. [shredding.md](shredding.md) describes an approach to
shredding Variant columns in Parquet and similar columnar formats.
+The Variant spec is being developed in the Apache Parquet project, and has not
been finalized. Spark currently implements the version of the spec at
https://github.com/apache/parquet-format/blob/37b6e8b863fb510314c07649665251f6474b0c11/VariantEncoding.md
and
https://github.com/apache/parquet-format/blob/37b6e8b863fb510314c07649665251f6474b0c11/VariantShredding.md,
but does not yet support Variant values containing UUID, Time, or
nanosecond-precision Timestamp. Spark does not yet have pub [...]
diff --git a/common/variant/shredding.md b/common/variant/shredding.md
deleted file mode 100644
index 40648619eba8..000000000000
--- a/common/variant/shredding.md
+++ /dev/null
@@ -1,244 +0,0 @@
-# Shredding Overview
-
-The Spark Variant type is designed to store and process semi-structured data
efficiently, even with heterogeneous values. Query engines encode each variant
value in a self-describing format, and store it as a group containing **value**
and **metadata** binary fields in Parquet. Since data is often partially
homogenous, it can be beneficial to extract certain fields into separate
Parquet columns to further improve performance. We refer to this process as
"shredding". Each Parquet file rem [...]
-
-This document focuses on the shredding semantics, Parquet representation,
implications for readers and writers, as well as the Variant reconstruction.
For now, it does not discuss which fields to shred, user-facing API changes, or
any engine-specific considerations like how to use shredded columns. The
approach builds on top of the generic Spark Variant representation, and
leverages the existing Parquet specification for maximum compatibility with the
open-source ecosystem.
-
-At a high level, we replace the **value** and **metadata** of the Variant
Parquet group with one or more fields called **object**, **array**,
**typed_value** and **untyped_value**. These represent a fixed schema suitable
for constructing the full Variant value for each row.
-
-Shredding lets Spark (or any other query engine) reap the full benefits of
Parquet's columnar representation, such as more compact data encoding, min/max
statistics for data skipping, and I/O and CPU savings from pruning unnecessary
fields not accessed by a query (including the non-shredded Variant binary data).
-Without shredding, any query that accesses a Variant column must fetch all
bytes of the full binary buffer. With shredding, we can get nearly equivalent
performance as in a relational (scalar) data model.
-
-For example, `select variant_get(variant_col, '$.field1.inner_field2', 'string') from tbl` only needs to access `inner_field2`, and the file scan could avoid fetching the rest of the Variant value if this field was shredded into a separate column in the Parquet schema. Similarly, for the query `select * from tbl where variant_get(variant_col, '$.id', 'integer') = 123`, the scan could first decode the shredded `id` column, and only fetch/decode the full Variant value for rows that pass th [...]
-
-# Parquet Example
-
-Consider the following Parquet schema together with how Variant values might
be mapped to it. Notice that we represent each shredded field in **object** as
a group of two fields, **typed_value** and **untyped_value**. We extract all
homogenous data items of a certain path into **typed_value**, and set aside
incompatible data items in **untyped_value**. Intuitively, incompatibilities
within the same path may occur because we store the shredding schema per
Parquet file, and each file can c [...]
-
-Typically, the expectation is that **untyped_value** exists at every level as
an option, along with one of **object**, **array** or **typed_value**. If the
actual Variant value contains a type that does not match the provided schema,
it is stored in **untyped_value**. An **untyped_value** may also be populated
if an object can be partially represented: any fields that are present in the
schema must be written to those fields, and any missing fields are written to
**untyped_value**.
-
-```
-optional group variant_col {
-  optional binary untyped_value;
-  optional group object {
-    optional group a {
-      optional binary untyped_value;
-      optional int64 typed_value;
-    }
-    optional group b {
-      optional binary untyped_value;
-      optional group object {
-        optional group c {
-          optional binary untyped_value;
-          optional binary typed_value (STRING);
-        }
-      }
-    }
-  }
-}
-```
-
-| Variant Value | Top-level untyped_value | b.untyped_value | Non-null in a | Non-null in b.c |
-|---------------|-------------------------|-----------------|---------------|-----------------|
-| {a: 123, b: {c: "hello"}} | null | null | typed_value | typed_value |
-| {a: 1.23, b: {c: "123"}} | null | null | untyped_value | typed_value |
-| {a: [1,2,3], b: {c: null}} | null | null | untyped_value | untyped_value |
-| {a: 123, c: 456} | {c: 456} | null | typed_value | null |
-| {a: 123, b: {c: "hello", d: 456}} | null | {d: 456} | typed_value | typed_value |
-| [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | [{a: 1, b: {c: 2}}, {a: 3, b: {c: 4}}] | null | null | null |
-
-# Parquet Layout
-
-The **array** and **object** fields represent Variant array and object types,
respectively. Arrays must use the three-level list structure described in
https://github.com/apache/parquet-format/blob/master/LogicalTypes.md.
-
-An **object** field must be a group. Each field name of this inner group
corresponds to the Variant value's object field name. Each inner field's type
is a recursively shredded variant value: that is, the fields of each object
field must be one or more of **object**, **array**, **typed_value** or
**untyped_value**.
-
-Similarly, the elements of an **array** must be a group containing one or more of **object**, **array**, **typed_value** or **untyped_value**.
-
-Each leaf in the schema can store an arbitrary Variant value. It contains an
**untyped_value** binary field and a **typed_value** field. If non-null,
**untyped_value** represents the value stored as a Variant binary; the metadata
and value of a normal Variant are concatenated. The **typed_value** field may
be any type that has a corresponding Variant type. For each value in the data,
at most one of the **typed_value** and **untyped_value** may be non-null. A
writer may omit either field, [...]
-
-| typed_value | untyped_value | Meaning |
-|-------------|---------------|---------|
-| null | null | Field is missing in the reconstructed Variant. |
-| null | non-null | Field may be any type in the reconstructed Variant. |
-| non-null | null | Field has this column's type in the reconstructed Variant. |
-| non-null | non-null | Invalid |
-
-The **typed_value** may be absent from the Parquet schema for any field, which is equivalent to its value being always null (in which case the shredded field is always stored as a Variant binary). By the same token, **untyped_value** may be absent, which is equivalent to its value being always null (in which case the field will always be missing or have the type of the **typed_value** column).
-
-The full metadata and value can be reconstructed from **untyped_value** by
treating the leading bytes as metadata, and using the header, dictionary size
and final dictionary offset to determine the start of the Variant value
section. (See the metadata description in the common/variant/README.md for more
detail on how to interpret it.) For example, in the binary below, there is a
one-element dictionary, and the final offset (`offset[1]`) indicates that the
last dictionary entry ends at th [...]
-
-```
-    hdr    sz   offset[0]  offset[1]  bytes[0]  bytes[1]     value
- -------------------------------------------------------------------------
- |      |      |          |          |         |         |
- | 0x01 | 0x01 |   0x00   |   0x02   |   'h'   |   'i'   | . . . . . . . .
- |______|______|__________|__________|_________|_________|________________
-```
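-
-A sketch of that computation (our illustration, not Spark's code):
-
-```
-def split_untyped_value(buf: bytes):
-    offset_size = ((buf[0] >> 6) & 0x03) + 1
-    def read_le(pos: int) -> int:
-        return int.from_bytes(buf[pos:pos + offset_size], "little")
-    dict_size = read_le(1)
-    last_offset = read_le(1 + offset_size * (dict_size + 1))   # offset[dict_size]
-    value_start = 1 + offset_size * (dict_size + 2) + last_offset
-    return buf[:value_start], buf[value_start:]   # (metadata, value)
-```
-
-On the example binary above, `offset_size` is 1, `dict_size` is 1, and `last_offset` is 2, so the value section starts at byte 6, just after `'i'`.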
-
-# Unshredded values
-
-If all values can be represented at a given level by whichever of **object**,
**array** or **typed_value** is present, **untyped_value** is set to null.
-
-If a value cannot be represented by whichever of **object**, **array** or
**typed_value** is present in the schema, then it is stored in
**untyped_value**, and the other fields are set to null. In the Parquet example
above, if field **a** was an object or array, or a non-integer scalar, it would
be stored in **untyped_value**.
-
-If a value is an object, and the **object** field is present but does not contain all of the fields in the value, then any remaining fields are stored in an object in **untyped_value**. In the Parquet example above, if field **b** was an object of the form **{"c": 1, "d": 2}**, then the object **{"d": 2}** would be stored in **untyped_value**, and the **c** field would be shredded recursively under **object.c**.
-
-Note that an array is always fully shredded if there is an **array** field, so
the above consideration for **object** is not relevant for arrays: only one of
**array** or **untyped_value** may be non-null at a given level.
-
-# Using untyped_value vs. typed_value
-
-In general, it is desirable to store values in the **typed_value** field
rather than the **untyped_value** whenever possible. This will typically
improve encoding efficiency, and allow the use of Parquet statistics to filter
at the row group or page level. In the best case, the **untyped_value** fields
are all null and the engine does not need to read them (or it can omit them
from the schema on write entirely). There are two main motivations for
including the **untyped_value** column:
-
-1) In a case where there are rare type mismatches (for example, a numeric field with rare strings like "n/a"), we allow the field to be shredded, which could still be a significant performance benefit compared to fetching and decoding the full value/metadata binary.
-2) Since there is a single schema per file, there would be no easy way to
recover from a type mismatch encountered late in a file write. Parquet files
can be large, and buffering all file data before starting to write could be
expensive. Including an untyped column for every field guarantees we can adhere
to the requested shredding schema.
-
-The **untyped_value** is stored in a single binary column, rather than storing
the value and metadata separately as is done in the unshredded binary format.
The motivation for storing them separately for unshredded data is that this
lets the engine encode and compress the metadata more efficiently when the
fields are consistent across rows. We chose to combine them in the shredded
fields: we expect the encoding/compression benefit to be lower, since in the
case of uniform data, the value [...]
-
-# Data Skipping
-
-Shredded columns are expected to store statistics in the same format as a normal Parquet column. In general, the engine can only skip a row group or page if all rows in the **untyped_value** field are null, since it is possible for a `variant_get` expression to successfully cast a value from the **untyped_value** to the target type. For example, if **typed_value** is of type `int64`, then the string "123" might be contained in **untyped_value**, which would not be reflected in statistics [...]
-
-# Shredding Semantics
-
-Variant defines a number of integer and decimal types of varying widths. When
writing, it would be quite limiting to strictly enforce the mapping between
Variant types and Parquet/Spark types. For example, if we chose to shred a
field as `int64`, and encountered the value 123 encoded as `int32`, it seems
preferable to write this to the **typed_value** column, even though it
technically loses information about the type in the original Variant object,
and would be reconstructed as an `int64`.
-
-On the other hand, storing arbitrarily casted values in the **typed_value**
column could create inconsistent behavior before and after shredding, and could
leak behavior from the writing engine to the reading engine. For example,
double-to-string casts can produce different results in different engines.
Performing such a cast while shredding (even if we somehow retained the
knowledge that the original value was a `double`) could result in confusing
behavior changes if shredding took plac [...]
-
-Our approach is a pragmatic compromise that allows the use of **typed_value**
in cases where the type can be losslessly widened without resulting in a
significant difference in the reconstructed Variant:
-
-1) All integer and decimal types in Variant are conceptually a single "number" type. The engine may shred any number into the **typed_value** of any other number, provided that no information about the value is lost (see the sketch after this list). For example, the integer 123 may be shredded as Decimal<9, 2>, but 1.23 may not be shredded as any integer type.
-
-2) To ensure that behavior remains unchanged before and after shredding, we will aim to have all Spark expressions that operate on Variant be agnostic to the specific numeric type. For example, `cast(val as string)` should produce the string "123" if `val` is any integer or decimal type that is exactly equal to 123. Note that this is unlike the normal Spark behavior for `decimal` types, which would produce "123.00" for `Decimal<9,2>`.
-
-3) One exception to the above is `schema_of_variant`, which will still report
the underlying physical type. This means that `schema_of_variant` may report
different numeric types before and after shredding.
-
-4) Other than integer and decimal, we will not allow casting between types. For example, we will not write the string "123" to an integer **typed_value** column, even though `variant_get("123", "$", "integer")` would produce the integer 123. Similarly, double and float types are considered distinct from other numeric types, and we would not write them to a numeric **typed_value** column.
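-
-A sketch of the lossless-widening check from rule 1 (hypothetical helper, not Spark's API):
-
-```
-from decimal import Decimal
-
-def can_shred_as_decimal(value, precision: int, scale: int) -> bool:
-    # Shift by `scale` digits; shredding is allowed only if the result is an
-    # integer (no fractional digits lost) that fits in `precision` digits.
-    d = Decimal(str(value)).scaleb(scale)
-    if d != d.to_integral_value():
-        return False
-    return len(str(abs(int(d)))) <= precision
-
-# can_shred_as_decimal(123, 9, 2)  -> True  (stored as unscaled 12300)
-# can_shred_as_decimal(1.23, 9, 0) -> False (cannot be any integer type)
-```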
-
-# Reconstructing a Variant
-
-It is possible to recover a full Variant value using a recursive algorithm,
where the initial call is to `ConstructVariant` with the top-level fields,
which are assumed to be null if they are not present in the schema.
-
-```
-# Constructs a Variant from `untyped_value`, `object`, `array` and `typed_value`.
-# Only one of object, array and typed_value may be non-null.
-def ConstructVariant(untyped_value, object, array, typed_value):
-  if object is null and array is null and typed_value is null: return untyped_value
-  elif object is not null:
-    return ConstructObject(untyped_value, object)
-  elif array is not null:
-    return ConstructArray(array)
-  else:
-    # Leaf in the tree.
-    assert(untyped_value is null or untyped_value is VariantNull)
-    return coalesce(untyped_value, cast(typed_value as Variant))
-
-# Construct an object from an `object` group, and a (possibly null) Variant untyped_value.
-def ConstructObject(untyped_value, object):
-  # If untyped_value is present and is not an Object, then the result is ambiguous.
-  assert(untyped_value is null or is_object(untyped_value))
-  all_keys = Union(untyped_value.keys, object.fields)
-  return VariantObject(all_keys.map { key ->
-    if object[key] is null: (key, untyped_value[key])
-    else: (key, ConstructVariant(object[key].untyped_value, object[key].object,
-                                 object[key].array, object[key].typed_value))
-  })
-
-def ConstructArray(array):
-  newVariantArray = VariantArray()
-  for i in range(array.size):
-    # Any of these may be missing from the schema, in which case they are null.
-    newVariantArray.append(ConstructVariant(array[i].untyped_value, array[i].object,
-                                            array[i].array, array[i].typed_value))
-  return newVariantArray
-```
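-
-As a usage illustration against the earlier Parquet example (our reading, not normative), reconstructing the row `{a: 123, c: 456}`:
-
-```
-# Top-level call: untyped_value holds {c: 456}, and the shredded group holds a.
-ConstructVariant(untyped_value = {c: 456}, object = {a: ...}, array = null, typed_value = null)
-# -> ConstructObject unions the keys {c} and {a}, reads c from untyped_value,
-#    recursively reconstructs a = 123 from its typed_value, and returns
-#    {a: 123, c: 456}.
-```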
-
-# Nested Parquet Example
-
-This section describes a more deeply nested example, using a top-level array
as the shredding type.
-
-Below is a sample of JSON that would be fully shredded in this example. It contains an array of objects, containing an "a" field shredded as an array, and a "b" field shredded as an integer.
-
-```
-[
-  {
-    "a": [1, 2, 3],
-    "b": 100
-  },
-  {
-    "a": [4, 5, 6],
-    "b": 200
-  }
-]
-```
-
-
-The corresponding Parquet schema with "a" and "b" as leaf types is:
-
-```
-optional group variant_col {
-  optional binary untyped_value;
-  optional group array (LIST) {
-    repeated group list {
-      optional group element {
-        optional binary untyped_value;
-        optional group object {
-          optional group a {
-            optional binary untyped_value;
-            optional group array (LIST) {
-              repeated group list {
-                optional group element {
-                  optional int64 typed_value;
-                  optional binary untyped_value;
-                }
-              }
-            }
-          }
-          optional group b {
-            optional int64 typed_value;
-            optional binary untyped_value;
-          }
-        }
-      }
-    }
-  }
-}
-```
-
-In the above example schema, if "a" is an array containing a mix of integer and non-integer values, the engine will shred individual elements appropriately into either **typed_value** or **untyped_value**.
-If the top-level Variant is not an array (for example, an object), the engine
cannot shred the value and it will store it in the top-level **untyped_value**.
-Similarly, if "a" is not an array, it will be stored in the **untyped_value**
under "a".
-
-Consider the following example:
-
-```
-[
-  {
-    "a": [1, 2, 3],
-    "b": 100,
-    "c": "unexpected"
-  },
-  {
-    "a": [4, 5, 6],
-    "b": 200
-  },
-  "not an object"
-]
-```
-
-The second array element can be fully shredded, but the first and third cannot
be. The contents of `variant_col.array[*].untyped_value` would be as follows:
-
-```
-[
-  { "c": "unexpected" },
-  NULL,
-  "not an object"
-]
-```
-
-# Backward and forward compatibility
-
-Shredding is an optional feature of Variant, and readers must continue to be able to read a group containing only a `value` and `metadata` column.
-
-We will follow the convention defined in
https://github.com/delta-io/delta/blob/master/protocol_rfcs/variant-type.md#variant-data-in-parquet,
and ignore any fields in the same group as typed_value/untyped_value that
start with `_` (underscore).
-This is intended to allow future backwards-compatible extensions. In
particular, the field names `_metadata_key_paths` and any name starting with
`_spark` are reserved, and should not be used by other implementations.
-Any extra field names that do not start with an underscore should be assumed
to be backwards incompatible, and readers should fail when reading such a
schema.
-
-Engines without shredding support are not expected to be able to read Parquet
files that use shredding. Since different files may contain conflicting schemas
(e.g. a `typed_value` column with incompatible types in two files), it may not
be possible to infer or specify a single schema that would allow all Parquet
files for a table to be read.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]