[ https://issues.apache.org/jira/browse/HDFS-8247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14512224#comment-14512224 ]

Hadoop QA commented on HDFS-8247:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 19s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning messages. |
| {color:green}+1{color} | release audit |   0m 18s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 59s | There were no new checkstyle issues. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  6s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   1m 22s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m 59s | Tests failed in hadoop-hdfs. |
| | | 193m  0s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestSecureNameNode |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12728107/HDFS-8247.00.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 78c6b46 |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/10385/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/10385/testReport/ |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/10385/console |


This message was automatically generated.

> TestDiskspaceQuotaUpdate#testAppendOverTypeQuota is failing
> -----------------------------------------------------------
>
>                 Key: HDFS-8247
>                 URL: https://issues.apache.org/jira/browse/HDFS-8247
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: HDFS
>    Affects Versions: 2.7.1
>            Reporter: Anu Engineer
>            Assignee: Xiaoyu Yao
>         Attachments: HDFS-8247.00.patch
>
>
> Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate is failing with the following error:
> Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.828 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
> testAppendOverTypeQuota(org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate)  Time elapsed: 0.962 sec  <<< ERROR!
> org.apache.hadoop.hdfs.protocol.QuotaByStorageTypeExceededException: Quota by storage type : SSD on path : /TestAppendOverTypeQuota is exceeded. quota = 1 B but space consumed = 1 KB
>       at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuotaByStorageType(DirectoryWithQuotaFeature.java:227)
>       at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:240)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:874)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyQuotaForUCBlock(FSNamesystem.java:2765)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.prepareFileForAppend(FSNamesystem.java:2713)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2686)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2968)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2939)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:659)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:418)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1492)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1423)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>       at com.sun.proxy.$Proxy19.append(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:328)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:606)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
>       at com.sun.proxy.$Proxy20.append(Unknown Source)
>       at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1460)
>       at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1524)
>       at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1494)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:342)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:338)
>       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:338)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:320)
>       at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1164)
>       at org.apache.hadoop.hdfs.DFSTestUtil.appendFile(DFSTestUtil.java:814)
>       at org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate.testAppendOverTypeQuota(TestDiskspaceQuotaUpdate.java:251)
> Results :
> Tests in error: 
>   TestDiskspaceQuotaUpdate.testAppendOverTypeQuota:251 » QuotaByStorageTypeExceeded
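For context on the failure mode: the top of the stack trace shows the rejection happening in DirectoryWithQuotaFeature.verifyQuotaByStorageType, where the append would push SSD consumption (1 KB) past the configured per-type quota (1 B). The following is a simplified, standalone sketch of that comparison only; the class and method below are illustrative stand-ins, not the real Hadoop types.

```java
// Simplified stand-in for HDFS's storage-type quota check. The real logic
// lives in DirectoryWithQuotaFeature.verifyQuotaByStorageType; this class
// and its exception type are hypothetical, for illustration only.
public class TypeQuotaCheck {

    /** Stand-in for QuotaByStorageTypeExceededException. */
    public static class TypeQuotaExceededException extends Exception {
        public TypeQuotaExceededException(String msg) { super(msg); }
    }

    /**
     * Rejects a write when projected consumption on a storage type would
     * exceed that type's quota. A negative quota means "no quota set".
     */
    public static void verifyQuotaByStorageType(String path, String type,
            long quotaBytes, long consumedBytes) throws TypeQuotaExceededException {
        if (quotaBytes >= 0 && consumedBytes > quotaBytes) {
            throw new TypeQuotaExceededException(
                "Quota by storage type : " + type + " on path : " + path
                + " is exceeded. quota = " + quotaBytes
                + " B but space consumed = " + consumedBytes + " B");
        }
    }

    public static void main(String[] args) throws Exception {
        // Mirrors the failure above: SSD quota of 1 B, append consumes 1 KB.
        try {
            verifyQuotaByStorageType("/TestAppendOverTypeQuota", "SSD", 1L, 1024L);
        } catch (TypeQuotaExceededException e) {
            System.out.println(e.getMessage());
        }
        // A write within quota passes silently.
        verifyQuotaByStorageType("/TestAppendOverTypeQuota", "SSD", 2048L, 1024L);
    }
}
```

In the actual test, the stack trace indicates the quota is installed on the directory and the append is driven through DFSTestUtil.appendFile, so the exception surfaces from the NameNode's append path rather than from a direct check like this.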



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
