[
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652876#comment-16652876
]
Ted Yu commented on HADOOP-15850:
---------------------------------
I tried adding the '-blocksperchunk 0' option when invoking DistCp:
{code}
2018-10-17 02:33:53,708 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob(416): New DistCp options: [-async, -blocksperchunk, 0, hdfs://localhost:34344/user/hbase/test-data/78931012-3303-fc71-e289-5a9726f1bfcc/data/default/test-1539743586635/2e17accd93f78be97c0f585e68f283d6/f/46480cbed054406c9ef52ff123729938_SeqId_205_, hdfs://localhost:34344/user/hbase/test-data/78931012-3303-fc71-e289-5a9726f1bfcc/data/default/test-1539743586635/2e17accd93f78be97c0f585e68f283d6/f/7e3cc96eb3f7447cb4f925df947d1fa3_SeqId_205_, hdfs://localhost:34344/backupUT/backup_1539743624592]
{code}
I still encountered the 'Inconsistent sequence file' error.
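For reference, the equivalent invocation through the DistCp Java API looks roughly as follows. This is a minimal sketch, assuming the Hadoop 3.x DistCpOptions.Builder API; the source and target paths are placeholders for the hfile and backup paths above, and the -async flag is omitted:
{code}
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class BlocksPerChunkSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder paths; substitute the bulk-loaded hfile and backup dir.
    Path source = new Path("hdfs://localhost:34344/path/to/source-hfile");
    Path target = new Path("hdfs://localhost:34344/backupUT/backup_1539743624592");

    DistCpOptions options = new DistCpOptions.Builder(
            Collections.singletonList(source), target)
        .withBlocksPerChunk(0)  // 0 = do not split source files into chunks
        .build();

    // Runs the copy job and waits for completion.
    new DistCp(new Configuration(), options).execute();
  }
}
{code}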
> CopyCommitter#concatFileChunks should check that the source file to be merged
> is a split
> ----------------------------------------------------------------------------------------
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
> Issue Type: Task
> Components: tools/distcp
> Affects Versions: 3.1.1
> Reporter: Ted Yu
> Priority: Major
> Attachments: HADOOP-15850.v1.patch,
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating the failure of TestIncrementalBackupWithBulkLoad from
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, totalRecords);
> {code}
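> For context, the listing DistCp consumes here is a SequenceFile of Text keys
> (paths relative to the copy root) with CopyListingFileStatus values. A minimal
> sketch of appending one entry, assuming the SequenceFile.createWriter
> option-style API (the class name and paths below are placeholders):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.SequenceFile;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.tools.CopyListingFileStatus;
>
> public class ListingEntrySketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     FileSystem fs = FileSystem.get(conf);
>
>     // Placeholder paths; the real listing path is what gets set into
>     // DistCpConstants.CONF_LABEL_LISTING_FILE_PATH above.
>     Path listing = new Path("/tmp/distcp-listing.seq");
>     Path source = new Path("/path/to/bulk-loaded-hfile");
>
>     try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
>         SequenceFile.Writer.file(listing),
>         SequenceFile.Writer.keyClass(Text.class),
>         SequenceFile.Writer.valueClass(CopyListingFileStatus.class))) {
>       FileStatus st = fs.getFileStatus(source);
>       // Key: path relative to the copy root; value: file metadata.
>       // For a whole (non-split) file, isSplit() on the value is false.
>       writer.append(new Text("/relative/path"), new CopyListingFileStatus(st));
>     }
>   }
> }
> {code}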
> For the test case, two bulk-loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 2 files of 10242
> {code}
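> To confirm what the listing actually contains, it can be dumped entry by
> entry. A minimal sketch, assuming the SequenceFile.Reader option-style API
> (the class name and listing path are placeholders):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.SequenceFile;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.tools.CopyListingFileStatus;
>
> public class DumpListing {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Placeholder; point this at the listing the job actually used.
>     Path listing = new Path("/tmp/distcp-listing.seq");
>
>     try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
>         SequenceFile.Reader.file(listing))) {
>       Text relPath = new Text();
>       CopyListingFileStatus status = new CopyListingFileStatus();
>       while (reader.next(relPath, status)) {
>         // toString() appends chunkOffset/chunkLength only for splits.
>         System.out.println(relPath + " -> " + status);
>       }
>     }
>   }
> }
> {code}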
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN [Thread-936] mapred.LocalJobRunner$Job(590): job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ length = 5142 aclEntries = null, xAttrs = null}
>     at org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>     at org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk-loaded hfiles are
> independent files, not chunks of a single split.
> From the contents of the two CopyListingFileStatus instances, we can see that
> their isSplit() returns false. Otherwise the following from toString() would
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that
> would defeat the purpose of using DistCp.
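> A sketch of the kind of guard the summary asks for (paraphrased, not the
> actual HADOOP-15850.v1.patch; the helper class here is hypothetical): apply
> the chunk-consistency check only when both the previous and the current
> listing entry are splits, and let an independent file simply reset the state:
> {code}
> import java.io.IOException;
>
> import org.apache.hadoop.tools.CopyListingFileStatus;
>
> // Hypothetical helper, not the actual patch: tracks the previous listing
> // entry and restricts the chunk-consistency check to real splits.
> final class ChunkMergeGuard {
>   private CopyListingFileStatus lastChunk;
>
>   /** Returns true when curChunk continues the same split as lastChunk. */
>   boolean continuesPriorSplit(CopyListingFileStatus curChunk) throws IOException {
>     if (lastChunk == null || !lastChunk.isSplit() || !curChunk.isSplit()) {
>       lastChunk = curChunk;  // independent file (e.g. a bulk-loaded hfile)
>       return false;
>     }
>     if (!curChunk.getPath().equals(lastChunk.getPath())) {
>       throw new IOException("Inconsistent sequence file: current chunk file "
>           + curChunk + " doesnt match prior entry " + lastChunk);
>     }
>     lastChunk = curChunk;
>     return true;
>   }
> }
> {code}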