pvary commented on code in PR #12774:
URL: https://github.com/apache/iceberg/pull/12774#discussion_r2097607928


##########
core/src/main/java/org/apache/iceberg/io/ObjectModel.java:
##########
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.io;
+
+import org.apache.iceberg.FileFormat;
+
+/**
+ * Direct conversion is used between file formats and engine internal formats for performance
+ * reasons. Object models encapsulate these conversions.
+ *
+ * <p>{@link ReadBuilder} is provided for reading data files stored in a given {@link FileFormat}
+ * into the engine specific object model.
+ *
+ * <p>{@link AppenderBuilder} is provided for writing the engine specific object model to
+ * data/delete files stored in a given {@link FileFormat}.
+ *
+ * <p>Iceberg supports the following object models natively:
+ *
+ * <ul>
+ *   <li>generic - reads and writes Iceberg {@link org.apache.iceberg.data.Record}s
+ *   <li>spark - reads and writes Spark InternalRow records
+ *   <li>spark-vectorized - vectorized reads for Spark columnar batches. Not supported for {@link
+ *       FileFormat#AVRO}
+ *   <li>flink - reads and writes Flink RowData records
+ *   <li>arrow - vectorized reads into the Arrow columnar format. Only supported for {@link
+ *       FileFormat#PARQUET}
+ * </ul>
+ *
+ * <p>Engines can implement their own object models to leverage Iceberg data file reading and
+ * writing capabilities.
+ *
+ * @param <E> the engine specific schema of the input data for the appender
+ */
+public interface ObjectModel<E> {

Review Comment:
   I tried several ways, but the information required from the engines is different for different file formats:
   
   For Avro:
   ```
       public ObjectModel(
           String name,
           BiFunction<org.apache.iceberg.Schema, Map<Integer, ?>, DatumReader<?>> readerFunction,
           BiFunction<Schema, E, DatumWriter<?>> writerFunction,
           BiFunction<Schema, E, DatumWriter<?>> deleteRowWriterFunction)
   ```
   
   For Parquet:
   ```
       private ObjectModel(
           String name,
           ReaderFunction<D> readerFunction,
           BatchReaderFunction<D, F> batchReaderFunction,
           WriterFunction<D, E> writerFunction,
           Function<CharSequence, ?> pathTransformFunc) {
   [..]
     public interface ReaderFunction<D> {
       ParquetValueReader<D> read(
           Schema schema, MessageType messageType, Map<Integer, ?> constantFieldAccessors);
     }
   
     public interface BatchReaderFunction<D, F> {
       VectorizedReader<D> read(
           Schema schema,
           MessageType messageType,
           Map<Integer, ?> constantFieldAccessors,
           F deleteFilter,
           Map<String, String> config);
     }
   
     public interface WriterFunction<D, E> {
       ParquetValueWriter<D> write(E engineSchema, Schema icebergSchema, MessageType messageType);
     }
   ```
   
   For ORC:
   ```
       private ObjectModel(
           String name,
           ReaderFunction<D> readerFunction,
           BatchReaderFunction<D> batchReaderFunction,
           WriterFunction<E> writerFunction,
           Function<CharSequence, ?> pathTransformFunc) {
   [..]
     public interface WriterFunction<E> {
       OrcRowWriter<?> write(Schema schema, TypeDescription messageType, E nativeSchema);
     }
   
     public interface ReaderFunction<D> {
       OrcRowReader<D> read(
           Schema schema, TypeDescription messageType, Map<Integer, ?> constantFieldAccessors);
     }
   
     public interface BatchReaderFunction<D> {
       OrcBatchReader<D> read(
           Schema schema, TypeDescription messageType, Map<Integer, ?> constantFieldAccessors);
     }
   ```
   
   I don't see how we can push this behind a meaningful common interface.
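   To illustrate the mismatch with stand-in types (every name below is hypothetical, not the actual Iceberg API): the only surface the three formats share is a `name()`; the reader hooks need format-specific schema types and different argument counts, so a shared generic signature would either degenerate to `Object` or turn into a parameter bag.

   ```java
   import java.util.Map;
   import java.util.function.BiFunction;

   public class ObjectModelShapes {
     // Stand-in schema types (hypothetical).
     static class AvroSchema {}
     static class MessageType {} // Parquet-like
     static class IcebergSchema {}

     // The only meaningful common surface: a name. Everything else is format specific.
     interface NamedModel {
       String name();
     }

     // Avro's reader hook fits a plain BiFunction ...
     record AvroModel<D>(String name,
         BiFunction<IcebergSchema, AvroSchema, D> readerFunction) implements NamedModel {}

     // ... but Parquet's needs three arguments, so it needs its own functional interface.
     interface ParquetReaderFunction<D> {
       D read(IcebergSchema schema, MessageType messageType, Map<Integer, ?> constants);
     }

     record ParquetModel<D>(String name,
         ParquetReaderFunction<D> readerFunction) implements NamedModel {}

     public static void main(String[] args) {
       NamedModel avro = new AvroModel<>("generic", (s, a) -> "avro-reader");
       NamedModel parquet = new ParquetModel<>("generic", (s, m, c) -> "parquet-reader");
       // The two reader hooks have incompatible shapes; only the name is shared.
       System.out.println(avro.name() + " / " + parquet.name());
     }
   }
   ```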
   
   I have tried another approach, where the `WriteBuilder` and `ReadBuilder` provide an API to return "all" of the provided configuration values, like `schema`, `projection`, `nameMapping`, etc., but I decided against it for the following reasons:
   - The engine and the file format would often need shared calculations on top of these configuration values
   - Getters on a builder are an anti-pattern
   - We aim for a single transformation - keeping a single ObjectModel (or whatever we call it 😄) for that seems like a natural choice
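   A toy sketch of the direction I mean (all names hypothetical): rather than the object model pulling values back out of the builder via getters, the builder pushes its settled configuration into one transformation function.

   ```java
   import java.util.function.Function;

   public class BuilderDirection {
     // Stand-in for the configuration a read builder collects (hypothetical fields).
     record Config(String schema, String projection) {}

     // Instead of exposing getters on the builder (every object model would have to
     // re-derive shared state from them), the builder hands the settled configuration
     // to a single transformation function supplied by the object model.
     static <R> R build(Config config, Function<Config, R> readerFunction) {
       return readerFunction.apply(config);
     }

     public static void main(String[] args) {
       String reader = build(new Config("iceberg-schema", "id,name"),
           c -> "reader(" + c.schema() + ", " + c.projection() + ")");
       System.out.println(reader);
     }
   }
   ```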
   
   Another possibility could be to define an intermediate object model (maybe something like Arrow) and provide a double transformation: File Format -> Arrow -> Engine, and Engine -> Arrow -> File Format. If we don't materialize the intermediate model, then we only lose performance on the double transformation itself. The issue with this is that it is an even bigger overhaul of the reader/writer API, and I expect that there would be a serious performance hit.
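   In toy form (stand-in row types, not the actual API), the double transformation is just function composition, so nothing intermediate is materialized as a batch, but every row still pays for two conversions:

   ```java
   import java.util.function.Function;

   public class DoubleTransform {
     // Stand-in row types (hypothetical): file-format row, Arrow-like row, engine row.
     record FormatRow(int value) {}
     record ArrowRow(int value) {}
     record EngineRow(int value) {}

     // FileFormat -> Arrow -> Engine, composed without materializing the intermediate
     // model; the per-row cost of the extra conversion is the performance concern.
     static EngineRow readRow(FormatRow in) {
       Function<FormatRow, ArrowRow> toIntermediate = r -> new ArrowRow(r.value());
       Function<ArrowRow, EngineRow> toEngine = r -> new EngineRow(r.value());
       return toIntermediate.andThen(toEngine).apply(in);
     }

     public static void main(String[] args) {
       System.out.println(readRow(new FormatRow(42)).value()); // two hops, same value
     }
   }
   ```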



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

