jackye1995 commented on a change in pull request #3432:
URL: https://github.com/apache/iceberg/pull/3432#discussion_r763712539



##########
File path: site/docs/row-level-deletes.md
##########
@@ -0,0 +1,196 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Row-Level Deletes
+
+Iceberg supports metadata-based deletion through the `DeleteFiles` interface.
+It allows you to quickly delete a specific file, or any files matching a given expression, without reading or writing any data in the table.
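To make the idea concrete, here is a small illustrative sketch (plain Python, not Iceberg's actual `DeleteFiles` implementation) of how file-level column statistics can prove a file is safe to drop without reading any of its rows:

```python
# Illustrative sketch of a metadata-only delete (not Iceberg's actual code).
# A file can be dropped without being read when its column statistics prove
# that every row matches the delete predicate.

def metadata_delete(files, value):
    """Return the files that survive a metadata delete of `column == value`.

    `files` maps file name -> (min, max) statistics for the column.
    A file is dropped only when min == max == value, i.e. the statistics
    guarantee that all of its rows match the predicate.
    """
    return {
        name: stats
        for name, stats in files.items()
        if not (stats[0] == value and stats[1] == value)
    }

files = {
    "a.parquet": ("c1", "c1"),  # every row has category 'c1' -> droppable
    "b.parquet": ("c1", "c2"),  # may contain non-matching rows -> kept
}
remaining = metadata_delete(files, "c1")
```

Files whose statistics cannot prove a full match must be kept; deleting individual rows from them is what the rest of this page is about.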
+
+Row-level deletes target more complicated use cases such as General Data Protection Regulation (GDPR) compliance.
+Copy-on-write and merge-on-read are two different approaches to handle 
row-level delete operations. Here are their definitions in Iceberg:
+
+- **copy-on-write**: a delete directly rewrites all the affected data files.
+- **merge-on-read**: delete information is encoded in the form of _delete 
files_. The table reader can apply all delete information at read time.
+
+Overall, copy-on-write is more efficient for reading data, whereas merge-on-read is more efficient for writing deletes but requires more maintenance and tuning to keep reads performant.
+Users can choose to use **both** copy-on-write and merge-on-read for the same Iceberg table, depending on the situation.
+For example, a time-partitioned table can have newer partitions with more 
frequent updates maintained in the merge-on-read approach through a streaming 
pipeline,
+and older partitions maintained in the copy-on-write approach with less 
frequent GDPR deletes from batch ETL jobs.
+
+Some use cases, such as change data capture (CDC), can only be supported by one approach.
+Different compute engines also have limitations that lead them to prefer one approach over the other.
+Please check out the documentation of the specific compute engines to see the 
details of their capabilities for these two approaches.
+In this article, we will focus on explaining Iceberg's core design of 
copy-on-write and merge-on-read.
+
+!!!Note
+    In Iceberg, an update is modeled as a delete and an insert within the same transaction.
+    Therefore, we will focus our discussion on deletes in this article.
+
+## Copy-on-write
+
+In the copy-on-write approach, given a user's delete requirement, the write process searches for all the affected data files and rewrites them.
+
+For example, consider an unpartitioned table with schema `(id bigint, category 
string, data string)` that has the following files:
+
+```
+file A: (1, 'c1', 'data1'), (2, 'c1', 'data2')
+file B: (3, 'c2', 'data1'), (4, 'c2', 'data2')
+file C: (5, 'c3', 'data3'), (6, 'c3', 'data2')
+```
+
+A copy-on-write deletion of `data='data1'` rewrites files A and B into a 
different set of files. An example output table file layout might look like:
+
+```
+file D: (2, 'c1', 'data2'), (4, 'c2', 'data2')
+file C: (5, 'c3', 'data3'), (6, 'c3', 'data2')
+```
+
+There is no effect on the read side in the copy-on-write approach.
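Using the table above, the rewrite can be sketched as follows (an illustrative Python sketch, not Iceberg's implementation; the replacement file name `file D` is just a label matching the example):

```python
# Illustrative sketch of a copy-on-write delete (not Iceberg's actual code).
# Files containing rows that match the predicate are rewritten without those
# rows; untouched files are carried over unchanged.

def copy_on_write_delete(files, predicate):
    """files: dict of file name -> list of row tuples. Returns the new layout."""
    new_layout = {}
    rewritten_rows = []
    for name, rows in files.items():
        if any(predicate(r) for r in rows):
            # affected file: keep only the surviving rows for the rewrite
            rewritten_rows.extend(r for r in rows if not predicate(r))
        else:
            new_layout[name] = rows  # untouched file, kept as-is
    if rewritten_rows:
        new_layout["file D"] = rewritten_rows  # replacement file
    return new_layout

files = {
    "file A": [(1, "c1", "data1"), (2, "c1", "data2")],
    "file B": [(3, "c2", "data1"), (4, "c2", "data2")],
    "file C": [(5, "c3", "data3"), (6, "c3", "data2")],
}
# delete data = 'data1': files A and B are rewritten, file C is untouched
layout = copy_on_write_delete(files, lambda row: row[2] == "data1")
```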
+
+## Merge-on-read
+
+### Row-level delete file spec
+
+Iceberg supports two types of row-level delete files: **position deletes** and **equality deletes**.
+If you are unfamiliar with these concepts, please read the 
[spec](../spec/#row-level-deletes) page for more information before proceeding.
+
+Also note that because row-level delete files are valid Iceberg data files, all rows in each delete file must belong to the same partition.
+If a delete file belongs to an unpartitioned spec (a partition spec with no partition field), it is called a **global delete**.
+Otherwise, it is called a **partition delete**.
+
+### Writing delete files
+
+From the end user's perspective, it is very rare to directly request deletion of a specific row of a specific file.
+A delete requirement usually comes as a predicate such as `id = 5` or `date < TIMESTAMP '2000-01-01'`.
+Given a predicate, delete files can be produced in one, or a combination, of the following ways:
+
+1. **partition position deletes**: perform a scan \[1\] to know the data files 
and row positions affected by the predicate and then write partition position 
deletes \[2\]
+2. **partition equality deletes**: convert input predicate to equality 
predicates \[3\] for each affected partition and write partition equality 
deletes
+3. **partition global deletes**: convert input predicate to equality 
predicates and write global equality deletes 
+
+\[1\] A scan here can mean a table scan, or a scan of unflushed files (stored in memory, local RocksDB, etc.) for use cases like streaming.
+
+\[2\] It is in theory possible to write global position deletes, but it is always preferable to write partition position deletes because the writer already knows the exact partition to use after the scan.
+
+\[3\] If the input inequality predicate cannot be easily converted to a finite number of equality predicates (e.g. `price > 2.33`), then it is preferable to use position deletes instead.
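The difference between the two delete file shapes can be sketched as follows (illustrative Python, not Iceberg's writer code). Note that producing position deletes requires a scan, while an equality delete just records the condition itself:

```python
# Illustrative sketch of the two delete file shapes (not Iceberg's writer).
# A position delete stores (file, position) pairs; an equality delete stores
# column values that identify the deleted rows.

def write_position_deletes(data_files, predicate):
    """Scan the data files and emit a (file, position) row per match."""
    return [
        (name, pos)
        for name, rows in data_files.items()
        for pos, row in enumerate(rows)
        if predicate(row)
    ]

def write_equality_delete(column, value):
    """No scan needed: record the equality condition itself."""
    return {column: value}

data_files = {"file A": [{"id": 1000}, {"id": 1001}]}
pos_deletes = write_position_deletes(data_files, lambda r: r["id"] == 1001)
eq_delete = write_equality_delete("id", 1001)
```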
+
+For example, in the change data capture use case, both partition position deletes and partition equality deletes are used.
+For a new delete such as `id = 1001`, the writer checks the unflushed data index to see if there is an existing row matching `id = 1001` and, if so, performs a position delete.
+It also writes an equality delete that is applied to all the data files already in the table.
+
+### Reading data with delete files
+
+During Iceberg's scan file planning phase, a delete file index is constructed to determine the delete files associated with each data file, using the following rules:
+
+1. equality delete files are applied to data files of strictly lower sequence 
numbers
+2. position delete files are applied to data files of equal or lower sequence 
numbers
+3. further pruning is performed by comparing the partition and statistics information of each pair of delete and data files. Therefore, partition deletes are always preferred over global deletes.
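The first two rules can be sketched as a simple applicability check (illustrative Python, not the actual planner code):

```python
# Illustrative sketch of rules 1 and 2 above (not Iceberg's planner code):
# whether a delete file applies to a data file, by sequence number.

def applies(delete_type, delete_seq, data_seq):
    if delete_type == "equality":
        # equality deletes apply to strictly lower sequence numbers
        return data_seq < delete_seq
    if delete_type == "position":
        # position deletes apply to equal or lower sequence numbers
        return data_seq <= delete_seq
    raise ValueError(f"unknown delete type: {delete_type}")
```

The asymmetry exists because a position delete written in the same sequence as a data file already names that file explicitly, whereas an equality delete must not suppress rows written concurrently with it.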
+
+After the planning phase, each data file to read is associated with a set of delete files to merge.
+Because position deletes must be sorted by file path and row position according to the spec, applying position deletes to a data file can be viewed as merging two sorted lists, which can be done efficiently.
+When a position delete file is too large, we can also stream its rows without buffering the entire file in memory.
+In contrast, all the rows of all the associated equality delete files must be loaded into memory to be checked against every row in a data file.
+Therefore, having too many equality deletes is not only computationally less efficient but also runs the risk of out-of-memory errors.
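The two merge strategies can be sketched as follows (illustrative Python, not Iceberg's reader code):

```python
# Illustrative sketch of read-time delete application (not Iceberg's reader).

def apply_position_deletes(rows, deleted_positions):
    """Both inputs are ordered by position, so this is a merge of two
    sorted streams: the delete file never needs to be fully buffered."""
    deleted = iter(deleted_positions)
    next_del = next(deleted, None)
    for pos, row in enumerate(rows):
        while next_del is not None and next_del < pos:
            next_del = next(deleted, None)
        if next_del == pos:
            continue  # this row is deleted
        yield row

def apply_equality_deletes(rows, delete_rows, key):
    """All equality delete rows must be held in memory (here, a set)
    and checked against every data row."""
    deleted_keys = {key(d) for d in delete_rows}
    return [r for r in rows if key(r) not in deleted_keys]

rows = [(1, "a"), (2, "b"), (3, "c")]
survivors_pos = list(apply_position_deletes(rows, [1]))  # drop position 1
survivors_eq = apply_equality_deletes(rows, [(3, "c")], key=lambda r: r[0])
```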
+
+#### Position delete vs equality delete
+
+From a performance perspective, position deletes are preferred because of their read mechanism. However, the read advantage comes with the following tradeoffs:
+
+1. slower writes due to scanning
+2. potential commit conflicts with operations that try to remove data files
+
+The slower write of position deletes is explained in the [writing delete files](#writing-delete-files) section.
+Here we use an example to show the issue of potential commit conflicts.
+Consider the following case, where a `rewrite` starts at `t1` and a `delete` starts at `t2`, before the `rewrite` completes.
+These two operations run against the same table snapshot with sequence number `100`:
+
+```text
+t0: data.1 data.2 data.3 delete.3_1 (seq=100)
+       \     /       \     /
+t1:     data.4        data.5        (rewrite)
+
+t2: data.1 data.2 data.3 delete.3_1    
+                         delete.3_2 (delete)
+```
+
+If `delete` commits after `rewrite`:
+
+- if `delete.3_2` is a position delete, `delete` has to be retried from the beginning because the old file it references has been removed.
+- if `delete.3_2` is an equality delete, `delete` can succeed after a retry because `delete.3_2` will gain a higher sequence number `102` and be applied to `data.5`, which has sequence number `101`.
+
+If `rewrite` commits after `delete`:
+
+- if `delete.3_2` is a position delete, `rewrite` has to be retried from the beginning because it removes `data.3`, which would un-delete the rows removed by `delete.3_2`.
+- if `delete.3_2` is an equality delete, `rewrite` can commit `data.5` with sequence number `100`, so `delete.3_2` with sequence number `101` can still be applied to `data.5`.
+
+From the analysis above, we can conclude that:
+
+1. equality deletes are faster to write, but require more maintenance effort to match the read performance of position deletes.
+2. if many data file removals and row-level delete commits happen at the same time, equality deletes can ensure that all the operations commit without conflict.
+3. if a table does not receive frequent deletes, or data file removals and row-level deletes can be isolated from each other, position deletes are recommended for better performance.
+
+Similar to the tradeoff between copy-on-write and merge-on-read, the right 
choice of the delete file to write depends on the specific application 
requirements.
+For instance, in the change data capture example described above, we choose to write an equality delete to apply to existing table data, because performing a scan and then generating position deletes is too slow in a streaming pipeline.
+However, because we can quickly determine the row positions of unflushed data, it is more desirable to write position deletes for it.
+
+## Compaction
+
+In Iceberg, we are interested in the following two types of compaction:
+
+- **data compaction**: optimization of data files into better layout through 
algorithms like bin-packing, sorting, z-ordering, etc.
+- **delete compaction**: optimization of delete files into better layout, 
which also includes removing delete files
+
+!!!Note
+    The big data community commonly uses the terms _major compaction_ and 
_minor compaction_ to describe compaction features. 
+    However, we notice that using these terms could easily cause confusion 
because people have different definitions for these terms.
+    Therefore, we do not use these terms in this article, and we will focus on 
explaining specific Iceberg actions that compact data and delete files.
+
+### Data compaction
+
+Iceberg data compaction is done through the `RewriteDataFiles` action.
+Users can choose different rewrite strategies, such as bin-packing or sorting, to rewrite files into an optimized layout.
+More details can be found in the procedure pages for each engine that implements this action, including Spark and Flink.
+
+### Delete compaction
+
+Accumulating too many deletes results in high latency at read time.
+Having too many equality deletes might even cause out-of-memory errors.
+Delete compaction should be used to keep the number of delete files in an Iceberg table low.
+
+The most straightforward way to achieve delete compaction is to merge deletes into data files and produce new data files that do not contain the removed rows.
+Notice that data compaction already merges deletes as part of reading data files, so two configurations are added to `RewriteDataFiles` to also support delete compaction:
+
+| Property                     | Default             | Description |
+| ---------------------------- | ------------------- | ----------- |
+| delete-file-threshold        | `Integer.MAX_VALUE` | The minimum number of deletes that need to be associated with a data file for it to be considered for rewriting. If a data file has this number of deletes or more, it will be rewritten regardless of its file size. |
+| use-starting-sequence-number | `true`              | Whether the compaction should use the sequence number of the snapshot at compaction start time for new data files, instead of the sequence number of the newly produced snapshot. The use case for this was discussed in [position delete vs equality delete](#position-delete-vs-equality-delete) as a way to minimize commit conflicts. |
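The effect of `use-starting-sequence-number` can be sketched with the equality-delete sequence-number rule from the reading section (illustrative Python with hypothetical sequence numbers, not Iceberg's code):

```python
# Illustrative sketch with hypothetical sequence numbers (not Iceberg's code).
# An equality delete applies to data files with strictly lower sequence numbers.

def equality_delete_applies(delete_seq, data_seq):
    return data_seq < delete_seq

compaction_start_seq = 100   # sequence number when the compaction starts
concurrent_delete_seq = 101  # equality delete committed during the compaction
new_snapshot_seq = 102       # sequence number of the compaction's own snapshot

# use-starting-sequence-number = true: the compacted file keeps seq 100,
# so the concurrent equality delete still applies and no rows reappear.
still_applies = equality_delete_applies(concurrent_delete_seq,
                                        compaction_start_seq)

# use-starting-sequence-number = false: the compacted file gets seq 102,
# and the concurrent delete would silently stop applying to it.
lost = not equality_delete_applies(concurrent_delete_seq, new_snapshot_seq)
```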
+
+#### Additional compaction actions

Review comment:
       I think a compatibility matrix is likely needed for each engine as we 
are also introducing multi-version support. So I am not including it here.



