stevenzwu commented on code in PR #7362:
URL: https://github.com/apache/iceberg/pull/7362#discussion_r1169010048


##########
docs/flink-configuration.md:
##########
@@ -116,7 +116,8 @@ env.getConfig()
 | case-sensitive              | connector.iceberg.case-sensitive              | N/A                          | false                            | If true, match column name in a case sensitive way.          |
 | as-of-timestamp             | N/A                                           | N/A                          | null                             | For time travel in batch mode. Read data from the most recent snapshot as of the given time in milliseconds. |
 | starting-strategy           | connector.iceberg.starting-strategy           | N/A                          | INCREMENTAL_FROM_LATEST_SNAPSHOT | Starting strategy for streaming execution. TABLE_SCAN_THEN_INCREMENTAL: Do a regular table scan, then switch to incremental mode. The incremental mode starts from the current snapshot, exclusive. INCREMENTAL_FROM_LATEST_SNAPSHOT: Start incremental mode from the latest snapshot, inclusive. If it is an empty map, all future append snapshots should be discovered. INCREMENTAL_FROM_EARLIEST_SNAPSHOT: Start incremental mode from the earliest snapshot, inclusive. If it is an empty map, all future append snapshots should be discovered. INCREMENTAL_FROM_SNAPSHOT_ID: Start incremental mode from a snapshot with a specific id, inclusive. INCREMENTAL_FROM_SNAPSHOT_TIMESTAMP: Start incremental mode from a snapshot with a specific timestamp, inclusive. If the timestamp is between two snapshots, it should start from the snapshot after the timestamp. Just for the FLIP-27 source. |
-| start-snapshot-timestamp    | N/A                                           | N/A                          | null                             | Start to read data from the most recent snapshot as of the given time in milliseconds. |
+| start-snapshot-timestamp    | N/A                                           | N/A                          | null                             | Stream Mode: Start to read data from the oldest snapshot that was committed either at or after a given time. Batch Mode: Start to read data from the oldest snapshot (exclusive) that was committed either at or before a given time. |

Review Comment:
   Is there any difference between stream and batch mode for `start-snapshot-timestamp`?



##########
flink/v1.17/flink/src/test/java/org/apache/iceberg/flink/source/TestIcebergSourceBoundedSql.java:
##########
@@ -75,6 +78,14 @@ protected List<Row> run(
       throws Exception {
     String select = String.join(",", sqlSelectedFields);
     String optionStr = SqlHelpers.sqlOptionsToString(options);
-    return SqlHelpers.sql(getTableEnv(), "select %s from t %s %s", select, optionStr, sqlFilter);
+    TableResult tableResult =
+        getTableEnv()
+            .executeSql(String.format("select %s from t %s %s", select, optionStr, sqlFilter));
+    try (CloseableIterator<Row> iter = tableResult.collect()) {
+      return Lists.newArrayList(iter);
+    } catch (Exception e) {
+      // To retrieve the underlying exception information that actually caused the task failure.
+      throw (RuntimeException) e.getCause().getCause().getCause().getCause().getCause().getCause();

Review Comment:
   Same comment for the cause extraction.



##########
flink/v1.17/flink/src/test/java/org/apache/iceberg/flink/source/TestIcebergSourceBounded.java:
##########
@@ -129,6 +129,9 @@ protected List<Row> run(
 
     try (CloseableIterator<Row> iter = stream.executeAndCollect()) {
       return Lists.newArrayList(iter);
+    } catch (Exception e) {
+      // To retrieve the underlying exception information that actually caused the task failure.
+      throw (RuntimeException) e.getCause().getCause().getCause().getCause().getCause().getCause();

Review Comment:
   Why is it important to extract the root cause exception? Also, this chaining is very fragile. If we just want the root cause, there are util methods to do that.
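
   A minimal sketch of what such a utility does (illustrative code, not from this PR; Guava's `Throwables.getRootCause` and Commons Lang's `ExceptionUtils.getRootCause` are real helpers of this kind):

   ```java
   // Sketch of a root-cause helper: walk the cause chain to its end instead of
   // hard-coding a brittle sequence of six getCause() calls.
   public class RootCauseExample {
     static Throwable rootCause(Throwable throwable) {
       Throwable cause = throwable;
       // stop when there is no further cause (and guard against self-referential chains)
       while (cause.getCause() != null && cause.getCause() != cause) {
         cause = cause.getCause();
       }
       return cause;
     }

     public static void main(String[] args) {
       Exception root = new IllegalStateException("task failure");
       Exception wrapped = new RuntimeException(new RuntimeException(root));
       System.out.println(rootCause(wrapped).getMessage()); // prints "task failure"
     }
   }
   ```

   Unlike the fixed `getCause()` chain, this works regardless of how many wrapper layers Flink adds around the original failure.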



##########
flink/v1.17/flink/src/test/java/org/apache/iceberg/flink/source/TestFlinkScan.java:
##########
@@ -413,6 +415,141 @@ public void testIncrementalRead() throws Exception {
         TestFixtures.SCHEMA);
   }
 
+  @Test
+  public void testIncrementalReadWithTimestampRange() throws Exception {

Review Comment:
   We should probably break this down into multiple test methods, one for each scenario.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

