This is an automated email from the ASF dual-hosted git repository.
yuxia pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/fluss.git
The following commit(s) were added to refs/heads/main by this push:
new 156a43403 [docs] Add MAP type to data types and Paimon integration documentation (#2332)
156a43403 is described below
commit 156a434033e428d3989a722c27c39577ac38ad24
Author: ForwardXu <[email protected]>
AuthorDate: Fri Jan 9 14:22:47 2026 +0800
[docs] Add MAP type to data types and Paimon integration documentation (#2332)
---
website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md | 3 ++-
website/docs/table-design/data-types.md | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md b/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md
index 8cfe2aec5..d7f924974 100644
--- a/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md
+++ b/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md
@@ -92,7 +92,7 @@ SELECT * FROM orders$lake$snapshots;
 When you specify the `$lake` suffix in a query, the table behaves like a standard Paimon table and inherits all its capabilities.
 This allows you to take full advantage of Flink's query support and optimizations on Paimon, such as querying system tables, time travel, and more.
-For further information, refer to Paimon’s [SQL Query documentation](https://paimon.apache.org/docs/1.3/flink/sql-query/#sql-query).
+For further information, refer to Paimon's [SQL Query documentation](https://paimon.apache.org/docs/1.3/flink/sql-query/#sql-query).
#### Union Read of Data in Fluss and Paimon
@@ -176,6 +176,7 @@ The following table shows the mapping between [Fluss data types](table-design/da
 | BINARY | BINARY |
 | BYTES | BYTES |
 | ARRAY\<t\> | ARRAY\<t\> |
+| MAP\<kt, vt\> | MAP\<kt, vt\> |
 | ROW\<n0 t0, n1 t1, ...\><br/>ROW\<n0 t0 'd0', n1 t1 'd1', ...\> | ROW\<n0 t0, n1 t1, ...\><br/>ROW\<n0 t0 'd0', n1 t1 'd1', ...\> |
## Snapshot Metadata
diff --git a/website/docs/table-design/data-types.md b/website/docs/table-design/data-types.md
index b39872bff..03024310a 100644
--- a/website/docs/table-design/data-types.md
+++ b/website/docs/table-design/data-types.md
@@ -29,5 +29,6 @@ Fluss has a rich set of native data types available to users. All the data types
 | BINARY(n) | A fixed-length binary string (=a sequence of bytes) where n is the number of bytes. n must have a value between 1 and Integer.MAX_VALUE (both inclusive). [...]
 | BYTES | A variable-length binary string (=a sequence of bytes). [...]
 | ARRAY\<t\> | An array of elements with same subtype. <br/>Compared to the SQL standard, the maximum cardinality of an array cannot be specified but is fixed at 2,147,483,647. Also, any valid type is supported as a subtype.<br/>The type can be declared using ARRAY\<t\> where t is the data type of the contained elements. [...]
+| MAP\<kt, vt\> | An associative array that maps keys to values. A map cannot contain duplicate keys; each key can map to at most one value. Map keys are always non-nullable and will be automatically converted to non-nullable types if a nullable key type is provided. There is no restriction of key types; it is the responsibility of the user to ensure uniqueness. The map type is an extension to the SQL standard.<br/>The type can be declare [...]
 | ROW\<n0 t0, n1 t1, ...\><br/>ROW\<n0 t0 'd0', n1 t1 'd1', ...\> | A sequence of fields. <br/>A field consists of a field name, field type, and an optional description. The most specific type of a row of a table is a row type. In this case, each column of the row corresponds to the field of the row type that has the same ordinal position as the column. <br/>Compared to the SQL standard, an optional field description simplifies the handling with complex structures. <br/>A row type is sim [...]
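
For readers of this change: a minimal Flink SQL sketch of declaring the newly documented MAP\<kt, vt\> type in a table definition. The table and column names here are illustrative, not part of the commit; the MAP syntax follows the Fluss data-types page this diff edits.

```sql
-- Hypothetical table; MAP<kt, vt> declares an associative array type.
-- Per the added docs, map keys are implicitly non-nullable and a map
-- cannot contain duplicate keys.
CREATE TABLE user_tags (
  user_id BIGINT,
  tags MAP<STRING, INT>,
  PRIMARY KEY (user_id) NOT ENFORCED
);
```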