dramaticlly commented on code in PR #15590:
URL: https://github.com/apache/iceberg/pull/15590#discussion_r3075788618


##########
core/src/main/java/org/apache/iceberg/DataFileAccumulator.java:
##########
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.BiFunction;
+import java.util.function.Consumer;
+import java.util.function.Function;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
+import org.apache.iceberg.relocated.com.google.common.collect.Lists;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.util.DataFileSet;
+
+/** Accumulates data files and flushes them to manifests when a count threshold is reached. */
+class DataFileAccumulator {
+
+  static final int DEFAULT_FLUSH_THRESHOLD = 100_000;

Review Comment:
   thanks @RussellSpitzer for the formula. I think the `targetManifestSizeBytes / (30 * numCols)` estimate works well for estimating how many entries we can fit into an 8MB manifest. I ran a local benchmark and collected results for tables of 5/30/150 columns, where each entry contributes from 200B to 3000B in the manifest. I compared both end-to-end latency and manifest bytes written to disk; more details can be found in https://github.com/dramaticlly/iceberg/commit/383b5605ac59a62686e7a3f7ebfba6d0dfc03a48
   
   With the dynamic flushing threshold, we can stage over 100k entries for narrow tables but far fewer for wide tables, balancing memory overhead against entry saturation in the flushed manifests.
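   The dynamic threshold described above can be sketched in Java. This is a minimal, hypothetical sketch: the class and method names, the placement of the 30-bytes-per-column constant, and the clamping bounds are illustrative assumptions, not the code in the PR.

```java
// Hedged sketch: derive the flush threshold from the manifest size budget.
// The 30B-per-column constant comes from the formula in this thread; the
// clamp bounds are illustrative, not values from the PR.
class FlushThresholdEstimator {
  static final long BYTES_PER_COLUMN = 30L;

  // targetManifestSizeBytes / (30 * numCols), clamped to a sane range so a
  // very narrow or very wide schema cannot produce a degenerate threshold
  static int estimateFlushThreshold(long targetManifestSizeBytes, int numCols) {
    long estimate = targetManifestSizeBytes / (BYTES_PER_COLUMN * numCols);
    return (int) Math.min(Math.max(estimate, 1_000L), 1_000_000L);
  }

  public static void main(String[] args) {
    long eightMb = 8L * 1024 * 1024;
    // a narrow table (5 cols) stages far more entries than a wide one (150 cols)
    System.out.println(estimateFlushThreshold(eightMb, 5));   // 55924
    System.out.println(estimateFlushThreshold(eightMb, 150)); // 1864
  }
}
```

   With an 8MB target, a 5-column table would stage roughly 56k entries per manifest while a 150-column table stages under 2k, which matches the narrow-vs-wide behavior described above.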
   
   I also noticed that we need to amortize the IO to avoid slowing down the writes, so ideally we should accumulate enough entries that they can all be flushed in parallel across the available cores, using Tasks.foreach on the thread pool.
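   The amortized-IO idea above can be sketched as: stage entries until there is roughly one batch per core, then flush all batches in parallel. The PR would use Iceberg's Tasks.foreach on a thread pool; a plain ExecutorService stands in here so the sketch is self-contained, and all names are illustrative assumptions.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

// Hedged sketch: flush all staged batches in parallel on a fixed pool.
// Iceberg's Tasks.foreach would play this role in the real code; the
// ExecutorService here is a stand-in for illustration only.
class ParallelFlushSketch {

  // stands in for writing one manifest file; returns entries written
  static int flushBatch(List<String> batch) {
    return batch.size();
  }

  static int flushAll(List<List<String>> batches) {
    ExecutorService pool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    try {
      List<Callable<Integer>> tasks =
          batches.stream()
              .map(b -> (Callable<Integer>) () -> flushBatch(b))
              .collect(Collectors.toList());
      int total = 0;
      // invokeAll blocks until every "manifest write" completes
      for (Future<Integer> f : pool.invokeAll(tasks)) {
        total += f.get();
      }
      return total;
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      pool.shutdown();
    }
  }
}
```

   The point of accumulating first is that each task gets a full batch, so the per-manifest write cost is amortized across many entries instead of flushing tiny manifests one at a time.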
   
   At the end of the day, the estimation might still produce some tail fragmentation in the flushed manifests. The [existing implementation of RollingManifestWriter](https://github.com/apache/iceberg/blob/main/core/src/main/java/org/apache/iceberg/RollingManifestWriter.java#L121) can probably absorb 1.01 manifests' worth of entries but not 1.10, and once the entries are divided across the thread pool it is difficult to recombine the tails. I hope we can rely on the ManifestMergeManager step at the end of the commit to regroup the manifests. The alternative, more complex approach would be to not flush anything until we reach the flush count and recombine everything at the end, but I want to double-check whether that is necessary given we also have the async procedure to recluster/rewrite the manifests.


