luoyuxia commented on code in PR #1405:
URL: https://github.com/apache/fluss/pull/1405#discussion_r2242145127


##########
fluss-lake/fluss-lake-paimon/src/main/java/com/alibaba/fluss/lake/paimon/tiering/PaimonBucketOffset.java:
##########
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.alibaba.fluss.lake.paimon.tiering;
+
+import com.alibaba.fluss.shaded.jackson2.com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import com.alibaba.fluss.shaded.jackson2.com.fasterxml.jackson.annotation.JsonInclude;
+
+import javax.annotation.Nullable;
+
+import java.io.Serializable;
+
+/** The bucket offset information to be stored in Paimon's snapshot property. */
+@JsonInclude(JsonInclude.Include.NON_NULL)
+@JsonIgnoreProperties(ignoreUnknown = true)
+public class PaimonBucketOffset implements Serializable {

Review Comment:
   Is it possible to follow what we do for JSON serialization elsewhere? See `CompletedSnapshotJsonSerde`. We could also reuse that JSON serializer pattern to replace `PaimonBucketOffsetSerializer`.
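   
   Just to illustrate what I have in mind, a minimal sketch (the class name, the shaded `ObjectMapper` import path, and the static method shape are my assumptions, not the existing serde interface):
   
   ```java
   import com.alibaba.fluss.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
   
   import java.io.IOException;
   
   /** Hypothetical JSON serde for PaimonBucketOffset, mirroring the CompletedSnapshotJsonSerde style. */
   public class PaimonBucketOffsetJsonSerde {
   
       private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
   
       public static String toJson(PaimonBucketOffset offset) throws IOException {
           // serialize to a JSON string that can be stored in the snapshot property
           return OBJECT_MAPPER.writeValueAsString(offset);
       }
   
       public static PaimonBucketOffset fromJson(String json) throws IOException {
           // @JsonIgnoreProperties(ignoreUnknown = true) keeps this forward-compatible
           return OBJECT_MAPPER.readValue(json, PaimonBucketOffset.class);
       }
   }
   ```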



##########
fluss-lake/fluss-lake-paimon/src/main/java/com/alibaba/fluss/lake/paimon/tiering/PaimonLakeCommitter.java:
##########
@@ -59,6 +54,8 @@ public class PaimonLakeCommitter implements LakeCommitter<PaimonWriteResult, Pai
     private FileStoreCommit fileStoreCommit;
     private final TablePath tablePath;
     private static final ThreadLocal<Long> currentCommitSnapshotId = new ThreadLocal<>();
+    private static final String FLUSS_LAKE_SNAP_BUCKET_OFFSET_PROPERTY = "fluss-bucket-offset";

Review Comment:
   As discussed, this should be `fluss-offsets`.



##########
fluss-common/src/main/java/com/alibaba/fluss/lake/committer/CommittedLakeSnapshot.java:
##########
@@ -32,7 +32,7 @@ public class CommittedLakeSnapshot {
     private final long lakeSnapshotId;
     // <partition_name, bucket> -> log offset, partition_name will be null if it's not a

Review Comment:
   nit: we can update this comment; it should now refer to `partition_id`.



##########
fluss-lake/fluss-lake-paimon/src/main/java/com/alibaba/fluss/lake/paimon/tiering/PaimonLakeCommitter.java:
##########
@@ -74,6 +71,14 @@ public PaimonCommittable toCommittable(List<PaimonWriteResult> paimonWriteResult
         for (PaimonWriteResult paimonWriteResult : paimonWriteResults) {
             committable.addFileCommittable(paimonWriteResult.commitMessage());
         }
+        if (!paimonWriteResults.isEmpty()) {
+            committable.addProperty(

Review Comment:
   This only adds the bucket offsets for the records written in this snapshot's delta. But we definitely need the full bucket offsets: once older snapshots expire, we would lose the previous bucket offsets.
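   
   Something along these lines, for example (a rough sketch only; `getLatestCommittedOffsets()` and the `tableBucket()` / `logEndOffset()` accessors on `PaimonWriteResult` are hypothetical names, just to show the merge idea):
   
   ```java
   // Start from the offsets already recorded in the latest committed snapshot,
   // then overlay this delta's offsets, so the snapshot property always carries
   // the full bucket-offset map even after older snapshots expire.
   private Map<TableBucket, Long> buildFullBucketOffsets(List<PaimonWriteResult> paimonWriteResults) {
       Map<TableBucket, Long> fullOffsets = new HashMap<>(getLatestCommittedOffsets());
       for (PaimonWriteResult writeResult : paimonWriteResults) {
           // keep the highest log end offset seen for each bucket
           fullOffsets.merge(writeResult.tableBucket(), writeResult.logEndOffset(), Math::max);
       }
       return fullOffsets;
   }
   ```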



##########
fluss-lake/fluss-lake-paimon/src/main/java/com/alibaba/fluss/lake/paimon/tiering/PaimonLakeCommitter.java:
##########
@@ -110,50 +115,45 @@ public void abort(PaimonCommittable committable) throws IOException {
     @Override
     public CommittedLakeSnapshot getMissingLakeSnapshot(@Nullable Long latestLakeSnapshotIdOfFluss)
             throws IOException {
-        Long latestLakeSnapshotIdOfLake =
-                getCommittedLatestSnapshotIdOfLake(FLUSS_LAKE_TIERING_COMMIT_USER);
-        if (latestLakeSnapshotIdOfLake == null) {
+        Snapshot latestLakeSnapshotOfLake =
+                getCommittedLatestSnapshotOfLake(FLUSS_LAKE_TIERING_COMMIT_USER);
+        if (latestLakeSnapshotOfLake == null) {
             return null;
         }
 
         // we get the latest snapshot committed by fluss,
         // but the latest snapshot is not greater than latestLakeSnapshotIdOfFluss, no any missing
         // snapshot, return directly
         if (latestLakeSnapshotIdOfFluss != null
-                && latestLakeSnapshotIdOfLake <= latestLakeSnapshotIdOfFluss) {
+                && latestLakeSnapshotOfLake.id() <= latestLakeSnapshotIdOfFluss) {
             return null;
         }
 
-        // todo: the temporary way to scan the delta to get the log end offset,
-        // we should read from snapshot's properties in Paimon 1.2
         CommittedLakeSnapshot committedLakeSnapshot =
-                new CommittedLakeSnapshot(latestLakeSnapshotIdOfLake);
-        ScanMode scanMode =
-                fileStoreTable.primaryKeys().isEmpty() ? ScanMode.DELTA : ScanMode.CHANGELOG;
-
-        Iterator<ManifestEntry> manifestEntryIterator =
-                fileStoreTable
-                        .store()
-                        .newScan()
-                        .withSnapshot(latestLakeSnapshotIdOfLake)
-                        .withKind(scanMode)
-                        .readFileIterator();
-
-        int bucketIdColumnIndex = getColumnIndex(BUCKET_COLUMN_NAME);
-        int logOffsetColumnIndex = getColumnIndex(OFFSET_COLUMN_NAME);
-        while (manifestEntryIterator.hasNext()) {
-            updateCommittedLakeSnapshot(
-                    committedLakeSnapshot,
-                    manifestEntryIterator.next(),
-                    bucketIdColumnIndex,
-                    logOffsetColumnIndex);
+                new CommittedLakeSnapshot(latestLakeSnapshotOfLake.id());
+
+        String property =
+                latestLakeSnapshotOfLake.properties().get(FLUSS_LAKE_SNAP_BUCKET_OFFSET_PROPERTY);

Review Comment:
   This will cause an NPE when restoring from an old-version Paimon snapshot.
   I'd like to suggest:
   - if `latestLakeSnapshotOfLake.properties()` is null, throw an exception telling the user that there is a missing snapshot in Fluss metadata, that they must run the tiering service with `fluss-flink-tiering-0.7.0.jar` once to make sure no snapshot is missing, and only then switch to the newer version (see the sketch below).
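   
   A rough sketch of the check (the exception type and message wording are just illustrative):
   
   ```java
   Map<String, String> snapshotProperties = latestLakeSnapshotOfLake.properties();
   if (snapshotProperties == null) {
       // snapshot was committed by an older version that did not write the
       // bucket-offset property, so the offsets cannot be recovered from it
       throw new IllegalStateException(
               String.format(
                       "Paimon snapshot %d carries no Fluss bucket offsets; there is a missing "
                               + "snapshot in Fluss metadata. Please run the tiering service with "
                               + "fluss-flink-tiering-0.7.0.jar once to recover it, then switch to "
                               + "the newer version.",
                       latestLakeSnapshotOfLake.id()));
   }
   String property = snapshotProperties.get(FLUSS_LAKE_SNAP_BUCKET_OFFSET_PROPERTY);
   ```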



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
