rdblue commented on a change in pull request #1213: URL: https://github.com/apache/iceberg/pull/1213#discussion_r460585801
##########
File path: core/src/main/java/org/apache/iceberg/io/PartitionedFanoutWriter.java
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg.io;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import org.apache.iceberg.FileFormat;
+import org.apache.iceberg.PartitionKey;
+import org.apache.iceberg.PartitionSpec;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+
+public abstract class PartitionedFanoutWriter<T> extends BaseTaskWriter<T> {
+  private final Map<PartitionKey, RollingFileAppender> writers = Maps.newHashMap();
+
+  public PartitionedFanoutWriter(PartitionSpec spec, FileFormat format, FileAppenderFactory<T> appenderFactory,
+                                 OutputFileFactory fileFactory, FileIO io, long targetFileSize) {
+    super(spec, format, appenderFactory, fileFactory, io, targetFileSize);
+  }
+
+  /**
+   * Create a PartitionKey from the values in row.
+   * <p>
+   * Any PartitionKey returned by this method can be reused by the implementation.
+   *
+   * @param row a data row
+   */
+  protected abstract PartitionKey partition(T row);
+
+  @Override
+  public void write(T row) throws IOException {
+    PartitionKey partitionKey = partition(row);
+
+    RollingFileAppender writer = writers.get(partitionKey);
+    if (writer == null) {
+      // NOTICE: we need to copy a new partition key here, to avoid messing up the keys in writers.
+      PartitionKey copiedKey = partitionKey.copy();
+      writer = new RollingFileAppender(copiedKey);
+      writers.put(copiedKey, writer);

Review comment:

> Should we be concerned that a writer won't emit a file until a streaming query is closed due to the previously mentioned case?

I think that the intent is to close and emit all of the files at each checkpoint, rather than keeping them open. That is required to achieve exactly-once writes because the data needs to be committed to the table. I think that also takes care of your second question, because data is constantly added to the table.

> Would it be beneficial to at least emit a warning or info level log to the user that it might be beneficial to pre-partition their data according to the partition key spec . . .

I think a reasonable thing to do is to limit the number of writers that are kept open, to limit the resources that are held. Then you can either fail if you go over that limit, or close and release files with an LRU policy. Failing brings the problem to the user's attention immediately and is similar to what we do on the Spark side, which doesn't allow writing new data to a partition after it is finished. That ensures that data is either clustered for the write, or the job fails.
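To illustrate the idea (this is not part of the PR; `BoundedWriterPool` and its type parameters are made-up names, not existing Iceberg classes), here is a rough sketch of a bounded writer map that either fails or evicts with an LRU policy once too many partitions are open at the same time:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch only: caps the number of concurrently open per-partition writers. */
class BoundedWriterPool<K, W extends Closeable> {
  private final int maxOpenWriters;
  private final boolean failWhenFull;
  // access-ordered LinkedHashMap iterates least-recently-used entries first
  private final LinkedHashMap<K, W> openWriters = new LinkedHashMap<>(16, 0.75f, true);

  BoundedWriterPool(int maxOpenWriters, boolean failWhenFull) {
    this.maxOpenWriters = maxOpenWriters;
    this.failWhenFull = failWhenFull;
  }

  /** Returns the open writer for this key, or null; lookups refresh the LRU order. */
  W get(K key) {
    return openWriters.get(key);
  }

  /** Registers a new writer, failing or evicting the least-recently-used writer when over the limit. */
  void put(K key, W writer) throws IOException {
    if (openWriters.size() >= maxOpenWriters) {
      if (failWhenFull) {
        throw new IllegalStateException(
            "Already " + maxOpenWriters + " open partition writers; cluster the input by partition key");
      }
      // Evict the least-recently-used writer; closing it emits the file it was working on.
      Map.Entry<K, W> eldest = openWriters.entrySet().iterator().next();
      eldest.getValue().close();
      openWriters.remove(eldest.getKey());
    }
    openWriters.put(key, writer);
  }
}
```

Something like this could back the `writers` map in `PartitionedFanoutWriter`, with `PartitionKey` as the key and `RollingFileAppender` as the writer, but that is only one possible shape for it.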
The long-term plan for Spark is to be able to influence the logical plan that is writing to a table. That would be the equivalent of adding an automatic `keyBy` or a rough `orderBy`. I think we would eventually want to do this for Flink as well, but I'm not sure what data clustering and sorting operations are currently supported.
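For reference, manually clustering the stream today would look roughly like the sketch below. This is only an illustration of the `keyBy` approach using the Flink DataStream API; the `Event` class, its `day` field, and the stand-in sink are made up for this example and are not part of the Iceberg connector:

```java
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyByClusteringSketch {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Hypothetical input records carrying the column the table is partitioned by.
    DataStream<Event> events = env.fromElements(
        new Event("2020-07-26", "click"),
        new Event("2020-07-27", "view"));

    // keyBy on the partition source column routes all rows for a given partition to the
    // same parallel writer, so each writer keeps far fewer partition files open at once.
    DataStream<Event> clustered = events.keyBy(new KeySelector<Event, String>() {
      @Override
      public String getKey(Event event) {
        return event.day;
      }
    });

    // Stand-in for the real Iceberg sink, which is omitted from this sketch.
    clustered.print();

    env.execute("keyBy clustering sketch");
  }

  /** Simple POJO used only in this sketch. */
  public static class Event {
    public String day;
    public String type;

    public Event() {
    }

    Event(String day, String type) {
      this.day = day;
      this.type = type;
    }
  }
}
```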
