[ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14309818#comment-14309818
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7738:
-------------------------------------------

> In general I don't like random tests because they imply intermittent 
> failures, ...

If there are intermittent failures, it means there are bugs either in the code 
or in the test.  I guess what you don't like is poorly written random tests, 
which may experience intermittent failures.  Well-written random tests won't 
have intermittent failures.

Why do we need random tests?  Because the problem space is huge, it is 
impossible to try all the cases; we have to do random sampling.

testBasicTruncate, which is a well-written test, does cover a lot of cases.  
However, it only tests a 12-byte file with 3 blocks, and the toTruncate values 
are consecutive.  For example, it does not test a single truncate call taking 
out 10 blocks at once.

> Add more negative tests for truncate
> ------------------------------------
>
>                 Key: HDFS-7738
>                 URL: https://issues.apache.org/jira/browse/HDFS-7738
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: test
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>            Priority: Minor
>             Fix For: 2.7.0
>
>         Attachments: h7738_20150204.patch, h7738_20150205.patch, 
> h7738_20150205b.patch, h7738_20150206.patch
>
>
> The following are negative test cases for truncate.
> - new length > old length
> - truncating a directory
> - truncating a non-existing file
> - truncating a file without write permission
> - truncating a file opened for append
> - truncating a file in safemode
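
For illustration, the first few negative cases listed above might be written 
roughly as follows.  This is a minimal sketch assuming MiniDFSCluster and 
JUnit; the class name is hypothetical, and the exact exception types and 
messages thrown by the NameNode are deliberately not asserted here.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

import static org.junit.Assert.fail;

public class TestTruncateNegativeSketch {
  @Test
  public void testTruncateNegativeCases() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      Path file = new Path("/test/file");
      DFSTestUtil.createFile(fs, file, 1024, (short) 1, 0L);

      // new length > old length must be rejected
      try {
        fs.truncate(file, 2048);
        fail("Expected truncate to a larger size to fail");
      } catch (Exception expected) {
        // expected; the NameNode rejects truncating to a larger size
      }

      // truncating a directory must be rejected
      Path dir = new Path("/test/dir");
      fs.mkdirs(dir);
      try {
        fs.truncate(dir, 0);
        fail("Expected truncate on a directory to fail");
      } catch (Exception expected) {
        // expected
      }

      // truncating a non-existing file must be rejected
      try {
        fs.truncate(new Path("/test/does-not-exist"), 0);
        fail("Expected truncate on a missing file to fail");
      } catch (Exception expected) {
        // expected
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}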



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
