[ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16235:
----------------------------------
    Status: Open  (was: Patch Available)

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> ----------------------------------------------------------------------------------------
>
>                 Key: HBASE-16235
>                 URL: https://issues.apache.org/jira/browse/HBASE-16235
>             Project: HBase
>          Issue Type: Bug
>            Reporter: ChiaPing Tsai
>            Assignee: ChiaPing Tsai
>            Priority: Trivial
>         Attachments: HBASE-16235-v1.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning. However, 
> not all hfiles are compacted when there is a large number of hfiles.
> This can happen when the default configuration is changed, e.g. a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}
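
A minimal, HBase-free sketch of the more robust check the description suggests: instead of asserting that every snapshot hfile is already in the archive, also accept files still sitting in the normal data path (not yet compacted). Class and method names here are illustrative, not part of the actual patch.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SubsetCheck {
    // Returns true if every snapshot hfile is present in either the
    // archive directory or the live data directory.
    static boolean allSnapshotFilesAccountedFor(
            Set<String> archived, Set<String> live, List<String> snapshot) {
        Set<String> all = new HashSet<>(archived);
        all.addAll(live); // also check the normal path, per the issue
        return all.containsAll(snapshot);
    }

    public static void main(String[] args) {
        // Illustrative file names, not real HBase paths.
        Set<String> archived = new HashSet<>(Arrays.asList("hfile-a", "hfile-b"));
        Set<String> live = new HashSet<>(Arrays.asList("hfile-c")); // not yet compacted
        List<String> snapshot = Arrays.asList("hfile-a", "hfile-b", "hfile-c");

        // Checking only the archive would fail here; including the live
        // data directory makes the assertion robust to pending compactions.
        System.out.println(allSnapshotFilesAccountedFor(archived, live, snapshot));
    }
}
```

With many hfiles and a small write buffer, "hfile-c" above models a file the cleaner has not archived yet, which is exactly the case that makes the original archive-only assertion flaky.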



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)