Ruben Van Wanzeele created HBASE-28998:
------------------------------------------

             Summary: Backup support for S3 broken by checksum added in HBASE-28625
                 Key: HBASE-28998
                 URL: https://issues.apache.org/jira/browse/HBASE-28998
             Project: HBase
          Issue Type: Bug
            Reporter: Ruben Van Wanzeele


With version 2.6.1, backups to S3 fail because of the checksum validation introduced in HBASE-28625.

Stacktrace:
{code:java}
Error: java.io.IOException: Checksum mismatch between hdfs://hdfsns/hbase/hbase/data/SYSTEM/CATALOG/b884434cc05aae3a21c0d0723173ce02/0/43b0e3a7b608441eab7dbce2782511bf and s3a://product-eks-v2-brt-master-574-backup/hbase/backup_1732545852227/SYSTEM/CATALOG/archive/data/SYSTEM/CATALOG/b884434cc05aae3a21c0d0723173ce02/0/43b0e3a7b608441eab7dbce2782511bf. Input and output filesystems are of different types.
Their checksum algorithms may be incompatible. You can choose file-level checksum validation via -Ddfs.checksum.combine.mode=COMPOSITE_CRC when block-sizes or filesystems are different.
Or you can skip checksum-checks altogether with -no-checksum-verify, for the table backup scenario, you should use -i option to skip checksum-checks.
(NOTE: By skipping checksums, one runs the risk of masking data-corruption during file-transfer.)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.verifyCopyResult(ExportSnapshot.java:596)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.copyFile(ExportSnapshot.java:332)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:254)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot$ExportMapper.map(ExportSnapshot.java:180)
 {code}
I think the solution is to only perform the checksum validation when the input and output filesystems are of the same type.
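A minimal sketch of that guard, assuming a hypothetical helper that compares the URI schemes of the source and destination (the real fix would live around ExportSnapshot.verifyCopyResult, and the class and method names below are illustrative, not HBase API):

{code:java}
import java.net.URI;

public class ChecksumGuard {
    // Hypothetical helper: treat two locations as checksum-comparable only
    // when they share the same filesystem type (same URI scheme). An HDFS
    // source and an s3a destination would skip the comparison instead of
    // failing the backup with a checksum mismatch.
    static boolean sameFilesystemType(URI input, URI output) {
        return input.getScheme() != null
                && input.getScheme().equalsIgnoreCase(output.getScheme());
    }

    public static void main(String[] args) {
        // Illustrative URIs, not the real cluster paths from the stacktrace.
        URI hdfs = URI.create("hdfs://hdfsns/hbase/data/somefile");
        URI s3a = URI.create("s3a://backup-bucket/hbase/somefile");

        System.out.println(sameFilesystemType(hdfs, s3a));  // false: skip checksum validation
        System.out.println(sameFilesystemType(hdfs, hdfs)); // true: validate checksums
    }
}
{code}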



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
