samredai commented on a change in pull request #3432:
URL: https://github.com/apache/iceberg/pull/3432#discussion_r741507524



##########
File path: site/docs/cow-and-mor.md
##########
@@ -0,0 +1,195 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Copy-on-Write and Merge-on-Read
+
+This page explains the concepts of copy-on-write and merge-on-read in the context of Iceberg, to give readers more clarity around the design of Iceberg's table spec.
+
+## Introduction
+
+In Iceberg, copy-on-write and merge-on-read are different ways to handle row-level update and delete operations. Here are their definitions:
+
+- **copy-on-write (CoW)**: an update or delete rewrites every affected data file in its entirety.
+- **merge-on-read (MoR)**: update and delete information is encoded in the form of delete files. The table reader applies all relevant delete information at read time, and a compaction process asynchronously merges delete files into data files.
+
+Clearly, CoW is more efficient for reading data, while MoR is more efficient for writing data.
+Users can choose to use **BOTH** CoW and MoR against the same Iceberg table, depending on the situation.
+A common example: for a time-partitioned table, newer partitions with more frequent updates are maintained in the MoR way through a CDC streaming pipeline,
+while older partitions, which receive less frequent GDPR updates from batch ETL jobs, are maintained in the CoW way.
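+
+For example, with the Spark integration the write mode can be chosen per operation type through table properties.
+The sketch below is illustrative and hedged: the table name is hypothetical, and the `write.delete.mode`, `write.update.mode` and `write.merge.mode` properties are assumed to be supported by the Iceberg and Spark versions in use (check the configuration docs to confirm).
+
+```java
+import org.apache.spark.sql.SparkSession;
+
+public class WriteModeConfig {
+  public static void main(String[] args) {
+    SparkSession spark = SparkSession.builder().appName("write-modes").getOrCreate();
+
+    // Hypothetical table: CDC deletes and updates produce MoR delete files,
+    // while batch MERGE statements rewrite data files (CoW).
+    spark.sql("ALTER TABLE db.events SET TBLPROPERTIES ("
+        + "'write.delete.mode'='merge-on-read',"
+        + "'write.update.mode'='merge-on-read',"
+        + "'write.merge.mode'='copy-on-write')");
+  }
+}
+```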
+
+## Copy-on-write
+
+As the definition states, given a user's update or delete request, the CoW write process searches for all affected data files and rewrites them.
+Spark supports CoW `DELETE`, `UPDATE` and `MERGE` operations through Spark extensions; a minimal sketch follows, and more details can be found on the [Spark Writes](../spark-writes) page.
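+
+The following is a minimal, hedged sketch of CoW row-level operations issued through Spark SQL from Java; the table names and columns are hypothetical, and the exact syntax supported depends on the Spark and Iceberg versions (see [Spark Writes](../spark-writes)).
+
+```java
+import org.apache.spark.sql.SparkSession;
+
+public class CowRowLevelOps {
+  public static void main(String[] args) {
+    SparkSession spark = SparkSession.builder().appName("cow-ops").getOrCreate();
+
+    // With copy-on-write, each statement rewrites every affected data file.
+    spark.sql("DELETE FROM db.events WHERE event_date < DATE '2000-01-01'");
+    spark.sql("UPDATE db.events SET category = 'archived' WHERE event_date < DATE '2010-01-01'");
+    spark.sql("MERGE INTO db.events t USING db.updates s ON t.id = s.id "
+        + "WHEN MATCHED THEN UPDATE SET * "
+        + "WHEN NOT MATCHED THEN INSERT *");
+  }
+}
+```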
+
+## Merge-on-read
+
+The next few sections provide more details about the Iceberg MoR design.
+
+### Row-Level Delete File Spec
+
+As documented on the [Spec](../spec/#row-level-deletes) page, Iceberg supports two different types of row-level delete files: **position deletes** and **equality deletes**.
+If you are unfamiliar with these concepts, please read the related sections of the spec before proceeding.
+
+Also note that because row-level delete files are valid Iceberg data files, each file must define the partition it belongs to.
+If a delete file belongs to an unpartitioned spec (a partition spec with no partition fields), it is called a **global delete**.
+Otherwise, it is called a **partition delete**.
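+
+To make the two shapes concrete, here is an illustrative sketch (hypothetical file path and schema, plain Java records rather than Iceberg classes) of the logical content of each delete file type:
+
+```java
+public class DeleteFileShapes {
+  // A position delete row points at one row of one data file by its offset.
+  record PositionDelete(String dataFilePath, long position) {}
+
+  // An equality delete row matches every row whose equality columns carry
+  // these values (here a single column, id).
+  record EqualityDelete(long id) {}
+
+  public static void main(String[] args) {
+    var positionDelete = new PositionDelete("s3://bucket/db/tbl/data/part-00001.parquet", 42L);
+    var equalityDelete = new EqualityDelete(5L); // deletes all rows with id = 5
+    System.out.println(positionDelete + " / " + equalityDelete);
+  }
+}
+```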
+
+### MoR Update as Delete + Insert
+
+In Iceberg, an update is modeled as a delete plus an insert within the same transaction, so there is no concept of an "update file".
+During a MoR write transaction, the new data files and delete files are committed with the same sequence number.
+During a MoR read, delete files are applied only to data files with strictly lower sequence numbers.
+This ensures the latest updated rows are shown to users during a MoR read.
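+
+The following sketch (plain Java, not Iceberg code) illustrates the strictly-lower rule: an update transaction committing a delete file and a new data file at sequence number 2 removes matching rows from older data files at sequence number 1, without deleting its own newly written rows:
+
+```java
+public class SequenceNumberRule {
+  // A delete file applies only to data files with a strictly lower sequence number.
+  static boolean deleteApplies(long deleteFileSeq, long dataFileSeq) {
+    return dataFileSeq < deleteFileSeq;
+  }
+
+  public static void main(String[] args) {
+    long oldDataSeq = 1;  // data file from an earlier commit
+    long updateSeq = 2;   // update transaction: delete file + new data file
+    System.out.println(deleteApplies(updateSeq, oldDataSeq)); // true: old row removed
+    System.out.println(deleteApplies(updateSeq, updateSeq));  // false: new row kept
+  }
+}
+```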
+
+### Delete File Writer
+
+From the end user's perspective, it is very rare to directly request the deletion of a specific row of a specific file, given the abstraction provided by Iceberg.
+A delete requirement almost always arrives as a predicate such as `id = 5` or `date < '2000-01-01'`.
+Given the predicate, a delete writer can write delete files in one, or some combination, of the following ways (a sketch of how a writer might choose follows the footnotes):
+
+1. **partition position deletes**: perform a scan \[1\] to find the data files and row positions affected by the predicate, then write partition \[2\] position deletes
+2. **partition equality deletes**: convert the input predicate \[3\] to partition equality predicates and write partition equality deletes
+3. **global equality deletes**: convert the input predicate to equality predicates and write global equality deletes
+
+\[1\] A scan here can mean a table scan, or a scan of unflushed files (stored in memory, a local RocksDB, etc.) for use cases like streaming.
+
+\[2\] It is in theory possible to write global position deletes, but the writer already knows the exact partitions to write to, so it is almost always preferable to write partition position deletes: it costs the same and improves the efficiency of the MoR read process.
+
+\[3\] If the input inequality predicate cannot be converted to a finite number of equality predicates (e.g. `price > 2.33`), then only position deletes can be used.
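+
+To make the choice concrete, here is a hypothetical sketch (illustrative names only, not Iceberg's writer API) of how a writer might pick among these options:
+
+```java
+public class DeleteStrategyChooser {
+  enum Strategy { PARTITION_POSITION, PARTITION_EQUALITY, GLOBAL_EQUALITY }
+
+  static Strategy choose(boolean convertibleToEquality, boolean partitionsKnown) {
+    if (!convertibleToEquality) {
+      // e.g. price > 2.33: no finite set of equality predicates exists, so the
+      // writer must scan for affected rows and write position deletes [3].
+      return Strategy.PARTITION_POSITION;
+    }
+    // Equality deletes need no scan; scoping them to known partitions keeps
+    // them from being applied to unrelated data files at read time.
+    return partitionsKnown ? Strategy.PARTITION_EQUALITY : Strategy.GLOBAL_EQUALITY;
+  }
+}
+```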
+
+### Data File Reader with Deletes
+
+During MoR read time, the Iceberg reader indexes all the delete files and determines the delete files associated with each data file (a sketch of these association rules follows this list). Typically,
+
+- as described above, delete files are applied only to data files with a strictly lower sequence number
+- global deletes are associated with every data file, so it is desirable to have as few global deletes as possible
+- partition deletes are pruned based on their statistics and secondary index information, so that each data file is associated with the minimum number of delete files necessary.
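+
+Here is an illustrative sketch of the first two association rules (plain Java records; statistics and secondary index pruning are omitted):
+
+```java
+public class DeleteIndexSketch {
+  record DataFile(long seq, String partition) {}
+  record DeleteFile(long seq, String partition, boolean global) {}
+
+  static boolean associated(DeleteFile del, DataFile data) {
+    if (data.seq() >= del.seq()) {
+      return false; // deletes apply only to strictly lower sequence numbers
+    }
+    // Global deletes attach to every older data file; partition deletes
+    // attach only to older data files in the same partition.
+    return del.global() || del.partition().equals(data.partition());
+  }
+}
+```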
+
+Because position deletes must be sorted by file path and row position, applying position deletes to data files can be done by streaming the rows in the position delete files.
+Therefore, there is not too much burden on memory side, but the number of IO operations increases as the number of position delete files increases, so it is desirable to have a low number of position deletes for each data file.
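+
+This streaming application can be sketched as a merge of two sorted streams. The following is an illustrative sketch under simplified assumptions (a single data file, with deleted positions already filtered to that file and sorted), not the Iceberg reader implementation:
+
+```java
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+public class PositionDeleteMerge {
+  // Drop the rows of one data file whose positions appear in a sorted delete stream.
+  static <T> List<T> applyPositionDeletes(Iterator<T> rows, Iterator<Long> deletedPositions) {
+    List<T> live = new ArrayList<>();
+    long nextDelete = deletedPositions.hasNext() ? deletedPositions.next() : Long.MAX_VALUE;
+    long pos = 0;
+    while (rows.hasNext()) {
+      T row = rows.next();
+      if (pos == nextDelete) {
+        // Skip the deleted row and advance to the next deleted position.
+        nextDelete = deletedPositions.hasNext() ? deletedPositions.next() : Long.MAX_VALUE;
+      } else {
+        live.add(row);
+      }
+      pos++;
+    }
+    return live;
+  }
+
+  public static void main(String[] args) {
+    List<String> rows = List.of("a", "b", "c", "d");
+    List<Long> deletes = List.of(1L, 3L); // sorted positions within this file
+    System.out.println(applyPositionDeletes(rows.iterator(), deletes.iterator())); // [a, c]
+  }
+}
+```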

Review comment:
       s/on memory side/on the memory side




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


