[ https://issues.apache.org/jira/browse/HBASE-28696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880742#comment-17880742 ]

Hudson commented on HBASE-28696:
--------------------------------

Results for branch master
        [build #1159 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1159/]:
(/) *{color:green}+1 overall{color}*
----
details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1159/General_20Nightly_20Build_20Report/]

(/) {color:green}+1 jdk17 hadoop3 checks{color}
-- For more information [see jdk17 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1159/JDK17_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Partition BackupSystemTable queries
> -----------------------------------
>
>                 Key: HBASE-28696
>                 URL: https://issues.apache.org/jira/browse/HBASE-28696
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: Ray Mattingly
>            Assignee: Ray Mattingly
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 2.7.0, 3.0.0-beta-2, 2.6.1
>
>
> When successfully taking an incremental backup, one of our final steps is to
> delete bulk load metadata from the system table for the bulk loads that
> needed to be captured in the given backup. This means that we effectively
> truncate the entire bulk loads system table in a single batch of deletes
> after each successful incremental backup. This logic lives in
> {{BackupSystemTable#deleteBulkLoadedRows}}:
> {code:java}
> /**
>  * Removes rows recording bulk loaded hfiles from backup table
>  * @param rows the rows to be deleted
>  */
> public void deleteBulkLoadedRows(List<byte[]> rows) throws IOException {
>   try (Table table = connection.getTable(bulkLoadTableName)) {
>     List<Delete> lstDels = new ArrayList<>();
>     for (byte[] row : rows) {
>       Delete del = new Delete(row);
>       lstDels.add(del);
>       LOG.debug("orig deleting the row: " + Bytes.toString(row));
>     }
>     table.delete(lstDels);
>     LOG.debug("deleted " + rows.size() + " original bulkload rows");
>   }
> } {code}
> Depending on usage, one may run tons of bulk loads between backups, so this
> design is needlessly fragile: a single oversized delete batch can fail the
> whole backup. We should partition these deletes so that a backup never
> erroneously fails for this reason.
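A minimal sketch of the partitioning idea, reusing the {{connection}}, {{bulkLoadTableName}}, and {{LOG}} fields from the snippet above; the {{BULK_DELETE_BATCH_SIZE}} constant and its value of 1000 are assumptions for illustration, not necessarily what the committed fix uses:

{code:java}
// Hypothetical partitioned variant: issue the deletes in fixed-size
// batches instead of one potentially enormous batch.
private static final int BULK_DELETE_BATCH_SIZE = 1000; // assumed value

public void deleteBulkLoadedRows(List<byte[]> rows) throws IOException {
  try (Table table = connection.getTable(bulkLoadTableName)) {
    for (int start = 0; start < rows.size(); start += BULK_DELETE_BATCH_SIZE) {
      int end = Math.min(start + BULK_DELETE_BATCH_SIZE, rows.size());
      List<Delete> batch = new ArrayList<>();
      for (byte[] row : rows.subList(start, end)) {
        batch.add(new Delete(row));
      }
      table.delete(batch);
      LOG.debug("deleted batch of " + batch.size() + " bulk load rows");
    }
  }
}
{code}

Bounding each batch keeps every RPC a manageable size, and a transient failure only requires retrying a single batch rather than redoing the entire delete.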


