[
https://issues.apache.org/jira/browse/HBASE-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12919670#action_12919670
]
Alex Newman commented on HBASE-2935:
------------------------------------
I am not exactly sure what we are testing with this JIRA.
- Are we verifying that HDFS throws the ChecksumException? Shouldn't that be an
HDFS test?
- Is it enough to forcibly cause a ChecksumException and then make sure the
log splitter handles it correctly?
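To illustrate the second option: the mechanism HDFS uses is a per-block checksum recorded at write time and re-verified on read, with a mismatch surfaced as a ChecksumException. A minimal, self-contained sketch of that detection idea (using plain java.util.zip.CRC32, not the actual HDFS or HBase APIs; the class and method names here are hypothetical) might look like:

```java
import java.util.zip.CRC32;

// Hypothetical sketch: simulates the checksum-based corruption detection
// that HDFS performs on read. Not the real HDFS/HBase API.
public class ChecksumCorruptionSketch {

    // Compute a CRC32 checksum over a data block, as would be stored at write time.
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] block = "hlog entry payload".getBytes();
        long stored = checksum(block);  // checksum recorded when the block was written

        block[3] ^= 0xFF;               // inject corruption, as a test harness would

        // On read, a recomputed checksum that differs from the stored one is
        // what HDFS reports as a ChecksumException.
        boolean corruptionDetected = checksum(block) != stored;
        System.out.println(corruptionDetected); // prints "true": CRC32 catches any single-byte flip
    }
}
```

A test built this way would corrupt bytes beneath the reader and then assert that the log splitter's skip-errors path archives or rejects the file, rather than asserting on HDFS internals.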
> Refactor "Corrupt Data" Tests in TestHLogSplit
> ----------------------------------------------
>
> Key: HBASE-2935
> URL: https://issues.apache.org/jira/browse/HBASE-2935
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 0.89.20100621
> Reporter: Nicolas Spiegelberg
> Priority: Minor
>
> While fixing HBASE-2643, I noticed that a couple of the HLogSplit tests from
> HBASE-2437 were now failing. 3 tests are trying to detect proper handling of
> garbage data: testCorruptedFileGetsArchivedIfSkipErrors,
> testTrailingGarbageCorruptionLogFileSkipErrorsFalseThrows,
> testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs. However, these tests
> are corrupting data at the HBase level. Data corruption should be tested at
> the HDFS level, because the filesystem is responsible for data validation.
> These tests need to inject corrupt data at the HDFS level & then verify that
> ChecksumExceptions are thrown.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.