luoyuxia commented on code in PR #1924:
URL: https://github.com/apache/fluss/pull/1924#discussion_r2508549836
##########
website/docs/quickstart/_shared-lake-analytics.md:
##########
@@ -0,0 +1,58 @@
+The data for the `datalake_enriched_orders` table is stored in Fluss (for real-time data) and {props.name} (for historical data).
+
+When you query the `datalake_enriched_orders` table, Fluss performs a union operation that combines data from both Fluss and {props.name}, providing a complete result set that spans **real-time** and **historical** data.
+
+If you wish to query only the data stored in {props.name}, which offers high-performance access without the overhead of unioning data, you can use the `datalake_enriched_orders$lake` table by appending the `$lake` suffix.
+This approach also enables all the optimizations and features of a Flink {props.name} table source, including system tables such as `datalake_enriched_orders$lake$snapshots`.
+
+To query the snapshots directly from {props.name}, use the following SQL:
+```sql title="Flink SQL"
+-- switch to batch mode
+SET 'execution.runtime-mode' = 'batch';
+```
+
Review Comment:
```sql title="Flink SQL"
SET 'sql-client.execution.result-mode' = 'tableau';
```
Also change this so that the output displays well on screen.
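For illustration, the combined settings block could look like the sketch below. The two `SET` statements come from the doc and the suggestion above; the trailing `SELECT` against the `$lake$snapshots` system table is an assumption about what the quickstart runs next, since that part of the hunk is not shown here.
```sql title="Flink SQL"
-- switch to batch mode and render results in tableau form
SET 'execution.runtime-mode' = 'batch';
SET 'sql-client.execution.result-mode' = 'tableau';

-- hypothetical follow-up: inspect the snapshots system table
SELECT * FROM datalake_enriched_orders$lake$snapshots;
```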
##########
website/docs/quickstart/flink-lake.md:
##########
@@ -253,26 +354,26 @@ CREATE TABLE enriched_orders (
`cust_mktsegment` STRING,
`nation_name` STRING,
PRIMARY KEY (`order_key`) NOT ENFORCED
+) WITH (
+ 'table.datalake.enabled' = 'true',
+ 'table.datalake.freshness' = '30s'
);
```
-## Streaming into Fluss
-
-First, run the following SQL to sync data from source tables to Fluss tables:
+Next, stream data into the **datalake-enabled** table `datalake_enriched_orders`:
```sql title="Flink SQL"
-EXECUTE STATEMENT SET
-BEGIN
- INSERT INTO fluss_nation SELECT * FROM `default_catalog`.`default_database`.source_nation;
- INSERT INTO fluss_customer SELECT * FROM `default_catalog`.`default_database`.source_customer;
- INSERT INTO fluss_order SELECT * FROM `default_catalog`.`default_database`.source_order;
-END;
+-- switch to streaming mode
+SET 'execution.runtime-mode' = 'streaming';
```
-Fluss primary-key tables support high QPS point lookup queries on primary keys. Performing a [lookup join](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/table/sql/queries/joins/#lookup-join) is really efficient and you can use it to enrich
-the `fluss_orders` table with information from the `fluss_customer` and `fluss_nation` primary-key tables.
+```sql title="Flink SQL"
+-- execute DML job asynchronously
+SET 'table.dml-sync' = 'false';
Review Comment:
don't need this since it's `false` by default.
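If the default is relied on, this block could reduce to just the mode switch, as in the sketch below; the DML statement that follows in the doc is outside this hunk and not reproduced here.
```sql title="Flink SQL"
-- switch to streaming mode; 'table.dml-sync' already defaults to 'false',
-- so the job is submitted asynchronously without setting it explicitly
SET 'execution.runtime-mode' = 'streaming';
```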