kbendick commented on a change in pull request #1213: URL: https://github.com/apache/iceberg/pull/1213#discussion_r460589483
########## File path: core/src/main/java/org/apache/iceberg/io/PartitionedFanoutWriter.java ##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import org.apache.iceberg.FileFormat;
+import org.apache.iceberg.PartitionKey;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+
+public abstract class PartitionedFanoutWriter<T> extends BaseTaskWriter<T> {
+  private final Map<PartitionKey, RollingFileAppender> writers = Maps.newHashMap();
+
+  public PartitionedFanoutWriter(PartitionSpec spec, FileFormat format, FileAppenderFactory<T> appenderFactory,
+                                 OutputFileFactory fileFactory, FileIO io, long targetFileSize) {
+    super(spec, format, appenderFactory, fileFactory, io, targetFileSize);
+  }
+
+  /**
+   * Create a PartitionKey from the values in row.
+   * <p>
+   * Any PartitionKey returned by this method can be reused by the implementation.
+   *
+   * @param row a data row
+   */
+  protected abstract PartitionKey partition(T row);
+
+  @Override
+  public void write(T row) throws IOException {
+    PartitionKey partitionKey = partition(row);
+
+    RollingFileAppender writer = writers.get(partitionKey);
+    if (writer == null) {
+      // NOTICE: we must copy the partition key here to avoid corrupting the keys already stored in writers.
+      PartitionKey copiedKey = partitionKey.copy();
+      writer = new RollingFileAppender(copiedKey);
+      writers.put(copiedKey, writer);

Review comment:
   As I think about it more, this is probably not a concern that needs to be addressed in this PR. The purpose of this PR is to abstract the generic task writers so they can be shared between Flink and Spark. However, I would like to discuss further whether we should consider this issue for long-running streaming programs in general, where a row for a rarely seen partition reaches a writing TaskManager and its file is not emitted for a long time. Having read further through the docs and the existing code base, I don't think this could affect correctness, but I still think it might cause performance issues during scan planning when reading from the partitioned table. During scan planning, IIUC, an inclusive projection could match a very large number of rows that fall outside the predicate range if the `RollingFileAppender` for this rarely observed partition at this TaskManager buffers its data for a very long time before writing (say, days or even weeks in a long-running streaming query).
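   For context on how the `partition(T)` contract and the key copy in `write(T)` interact, here is a minimal sketch of a concrete subclass. This is not code from the PR: the class name and constructor are invented, it assumes `partition(T)` is the only abstract method (as in the diff above), and it assumes core's `PartitionKey(PartitionSpec, Schema)` constructor and `PartitionKey#partition(StructLike)` re-population method. Reusing a single `PartitionKey` per writer is safe precisely because `write(T)` copies the key before caching it:

```java
import org.apache.iceberg.FileFormat;
import org.apache.iceberg.PartitionKey;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.StructLike;
import org.apache.iceberg.io.FileAppenderFactory;
import org.apache.iceberg.io.FileIO;
import org.apache.iceberg.io.OutputFileFactory;
import org.apache.iceberg.io.PartitionedFanoutWriter;

// Sketch only: a hypothetical concrete fan-out writer keyed directly on StructLike rows.
public class StructLikeFanoutWriter extends PartitionedFanoutWriter<StructLike> {
  // One reusable key per writer instance; write(T) copies it before caching, so reuse is safe.
  private final PartitionKey partitionKey;

  public StructLikeFanoutWriter(PartitionSpec spec, Schema schema, FileFormat format,
                                FileAppenderFactory<StructLike> appenderFactory,
                                OutputFileFactory fileFactory, FileIO io, long targetFileSize) {
    super(spec, format, appenderFactory, fileFactory, io, targetFileSize);
    this.partitionKey = new PartitionKey(spec, schema);
  }

  @Override
  protected PartitionKey partition(StructLike row) {
    partitionKey.partition(row);  // re-populate the reusable key from this row's partition values
    return partitionKey;
  }
}
```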
   I definitely don't think this needs to be tackled in this PR, but I would like to discuss what we expect to happen in this situation and how downstream systems that read this table will handle it. To me, this is different from the Spark streaming writer, which still uses the batch writer due to Spark's microbatch processing. cc @JingsongLi to see if my concern here is at all well founded or if I'm simply misunderstanding Iceberg's intended behavior during read and write. It's possible that users who are only using the Blink planner might also still be using microbatches, which would cover my concern. I can attempt to come up with an example that further demonstrates my concern if need be, though I don't think it should block this PR. At the very least, this long-buffered data might not be observed by downstream systems until a much later snapshot than the snapshots committed around the time the rows first entered this writer's buffer, which seems like unexpected behavior on read.
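   To make the buffering concern concrete: a fan-out writer keeps one appender open per partition key it has seen, so a partition that receives a few rows early in a long-running job may hold them in an open, unreadable file indefinitely. One hypothetical way to bound this (all names here are invented for illustration; this is not Iceberg or Flink API) is to track the last write time per partition and periodically close writers that have been idle past a threshold, so their files land in an earlier snapshot:

```java
import java.io.IOException;
import java.time.Duration;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch: bound how long a rarely seen partition's appender may
// buffer rows by closing writers that have been idle longer than a threshold.
public class IdleWriterEvictor<K, W> {
  private final Map<K, Long> lastWriteMillis = new HashMap<>();
  private final long idleThresholdMillis;

  public IdleWriterEvictor(Duration idleThreshold) {
    this.idleThresholdMillis = idleThreshold.toMillis();
  }

  // Record that the writer for this key just received a row.
  public void recordWrite(K key) {
    lastWriteMillis.put(key, System.currentTimeMillis());
  }

  // Close and remove writers that have not seen a row within the threshold,
  // so their buffered files are completed and become visible to readers.
  public void evictIdle(Map<K, W> writers, WriterCloser<W> closer) throws IOException {
    long now = System.currentTimeMillis();
    Iterator<Map.Entry<K, Long>> entries = lastWriteMillis.entrySet().iterator();
    while (entries.hasNext()) {
      Map.Entry<K, Long> entry = entries.next();
      if (now - entry.getValue() > idleThresholdMillis) {
        W writer = writers.remove(entry.getKey());
        if (writer != null) {
          closer.close(writer);  // flush and complete the buffered file
        }
        entries.remove();
      }
    }
  }

  // Functional interface so the sketch stays independent of any concrete writer type.
  public interface WriterCloser<W> {
    void close(W writer) throws IOException;
  }
}
```

   Closing on idleness rather than on a fixed schedule would keep hot partitions' files large while bounding the staleness of cold ones, though whether that trade-off belongs in the writer or in the engine integration is exactly the kind of question I'd like to discuss.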
