WuWei-art opened a new issue, #2045:
URL: https://github.com/apache/fluss/issues/2045

   ### Search before asking
   
   - [x] I searched in the [issues](https://github.com/apache/fluss/issues) and found nothing similar.
   
   
   ### Fluss version
   
   0.8.0 (latest release)
   
   ### Please describe the bug 🐞
   
   Running Flink 1.20.1 and Fluss 0.8.0.
   Jar files in the Flink `lib` directory:
   ```
   [bi@Adoris01 lib]$ ls -lst fluss-* paimon-flink-1.20-1.*
   52040 -rw-r--r-- 1 bi bi 53287592 Nov 27 15:30 paimon-flink-1.20-1.2.0.jar
   47240 -rw-r--r-- 1 bi bi 48373144 Nov 27 09:56 paimon-flink-1.20-1.0.1.jarbak
   45480 -rw-r----- 1 bi bi 46570148 Nov 27 09:56 fluss-lake-paimon-0.8.0-incubating.jar
   32532 -rw-r----- 1 bi bi 33309337 Nov 27 09:56 fluss-fs-hdfs-0.8.0-incubating.jar
   66900 -rw-r--r-- 1 bi bi 68502659 Nov 27 09:56 fluss-flink-1.20-0.8.0-incubating.jar
   [bi@Adoris01 lib]$ 
   ```
   The Fluss Lake Tiering Service job was started with the following command and confirmed to be running normally:

   `./flink run /opt/bi/fluss-tiering/fluss-flink-tiering-0.8.0-incubating.jar --fluss.bootstrap.servers Adoris01:9123 --datalake.format paimon --datalake.paimon.metastore filesystem --datalake.paimon.warehouse hdfs:///paimon/fluss/warehouse`
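
   Whether the tiering job is actually running can be double-checked from the Flink CLI (just a sketch; the listing output varies by deployment):

   ```
   # List running jobs on the cluster; the lake tiering job submitted above
   # should show up with state RUNNING.
   ./flink list --running
   ```
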
   After starting `./sql-client.sh`, the following script was entered:
   
   ```
   SET 'sql-client.execution.result-mode' = 'tableau';
   SET 'execution.runtime-mode' = 'batch';
   SET 'table.local-time-zone' = 'UTC';
   
   CREATE CATALOG fluss_catalog WITH (
     'type' = 'fluss',
     'bootstrap.servers' = 'Adoris01:9123'
   );
   
   USE CATALOG fluss_catalog;
   
   CREATE TABLE fluss_order_with_lake (
       `order_key` BIGINT,
       `cust_key` INT NOT NULL,
       `total_price` DECIMAL(15, 2),
       `order_date` DATE,
       `order_priority` STRING,
       `clerk` STRING,
       `ptime` AS PROCTIME(),
       PRIMARY KEY (`order_key`) NOT ENFORCED
    ) WITH (
        'table.datalake.enabled' = 'true',
        'table.datalake.freshness' = '30s',
        'paimon.file.format' = 'orc',
        'paimon.deletion-vectors.enabled' = 'true'
   );
   
   INSERT INTO fluss_order_with_lake
   VALUES (1,1,15.1, cast(null as date), '11','测试');
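
   -- Optional sanity check (a sketch, not required for the reproduction):
   -- confirm that the datalake options were actually applied to the table.
   SHOW CREATE TABLE fluss_order_with_lake;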
   
   ```

   Data for this table can already be seen in the Paimon HDFS path:
   
   ```
   [bi@Adoris01 bin]$ hdfs dfs -ls /paimon/fluss/warehouse/fluss.db/fluss_order_with_lake/bucket-2
   Found 2 items
   -rw-r--r--   2 bi supergroup       1198 2025-11-27 17:08 /paimon/fluss/warehouse/fluss.db/fluss_order_with_lake/bucket-2/changelog-5cf48dec-b132-4035-afc4-5d11e7da2a3c-0.orc
   -rw-r--r--   2 bi supergroup       1198 2025-11-27 17:08 /paimon/fluss/warehouse/fluss.db/fluss_order_with_lake/bucket-2/data-5cf48dec-b132-4035-afc4-5d11e7da2a3c-1.orc
   [bi@Adoris01 bin]$ 
   
   ```

   However, when querying in `./sql-client.sh`:
   
   ```
   Flink SQL> select * from fluss_order_with_lake limit 10;
   
   +-----------+----------+-------------+------------+----------------+-------+-------------------------+
   | order_key | cust_key | total_price | order_date | order_priority | clerk |                   ptime |
   +-----------+----------+-------------+------------+----------------+-------+-------------------------+
   |         1 |        1 |       15.10 |     <NULL> |             11 |  测试 | 2025-11-27 09:10:32.956 |
   +-----------+----------+-------------+------------+----------------+-------+-------------------------+
   1 row in set (6.99 seconds)
   
   Flink SQL> select * from fluss_order_with_lake$lake;
   2025-11-27 17:10:38,271 INFO  org.apache.hadoop.conf.Configuration.deprecation             [] - dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
   
   Empty set (3.03 seconds)
   
   ```

   There is still no data when querying with `select * from fluss_order_with_lake$lake limit 10`.
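
   As a cross-check, the tiered data can also be read directly through a Paimon catalog on the same warehouse, bypassing the Fluss `$lake` path (a sketch; the catalog name is arbitrary and the warehouse matches the tiering service argument):

   ```
   -- Read the table via the Paimon Flink connector
   -- (paimon-flink-1.20-1.2.0.jar is already in Flink/lib).
   CREATE CATALOG paimon_catalog WITH (
     'type' = 'paimon',
     'warehouse' = 'hdfs:///paimon/fluss/warehouse'
   );

   USE CATALOG paimon_catalog;

   -- The table lives in the 'fluss' database, matching the fluss.db HDFS path above.
   SELECT * FROM fluss.fluss_order_with_lake LIMIT 10;
   ```

   If this also returns an empty set, the files shown above may not have been committed to a Paimon snapshot yet; if it returns the row, the problem is likely in the Fluss `$lake` read path.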
   
   ### Solution
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] I'm willing to submit a PR!

