[ 
https://issues.apache.org/jira/browse/HDFS-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118518#comment-17118518
 ] 

Zhao Yi Ming edited comment on HDFS-14773 at 5/28/20, 10:05 AM:
----------------------------------------------------------------

[~simbadzina] I tried the recreate steps (Hadoop version 3.1.1) and it seems to
return the correct response. Could you take a look and see if I missed
anything? Thanks!

{code:java}
$ curl -L -i -X PUT -T file "https://<hostname>:50470/webhdfs/v1/quota/file?op=CREATE" --cacert /data/zhaoyim/ssl/test_ca_cert

HTTP/1.1 100 Continue
HTTP/1.1 307 Temporary Redirect
Date: Thu, 28 May 2020 09:55:37 GMT
Cache-Control: no-cache
Expires: Thu, 28 May 2020 09:55:37 GMT
Date: Thu, 28 May 2020 09:55:37 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Location: https://<dn>:50475/webhdfs/v1/quota/install_solr_service.sh?op=CREATE&namenoderpcaddress=<nn>:8020&createflag=&createparent=true&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0
HTTP/1.1 100 Continue
HTTP/1.1 403 Forbidden
Content-Type: application/json; charset=utf-8
Content-Length: 2171
Connection: close

{{ "RemoteException": { "exception": "DSQuotaExceededException", 
"javaClassName": "org.apache.hadoop.hdfs.protocol.DSQuotaExceededException", 
"message": "The DiskSpace quota of /quota is exceeded: quota = 1024 B = 1 KB 
but diskspace consumed = 134217728 B = 128 MB\n\tat 
org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:195)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:222)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:1154)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:986)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:945)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.addBlock(FSDirWriteFileOp.java:504)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.saveAllocatedBlock(FSDirWriteFileOp.java:771)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.storeAllocatedBlock(FSDirWriteFileOp.java:259)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2714)\n\tat
 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)\n\tat
 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)\n\tat
 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)\n\tat
 org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)\n\tat 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)\n\tat 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)\n\tat 
java.security.AccessController.doPrivileged(Native Method)\n\tat 
javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)\n\tat
 org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)\n" }}

{code}
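For what it's worth, the numbers in that RemoteException line up with HDFS defaults. A quick sanity check (plain Python, values copied from the message above; the observation that quota is charged at block allocation comes from the `FSDirWriteFileOp.addBlock` frame in the stack trace):

```python
# The "consumed" figure in the 403 body is exactly one default HDFS block
# (128 MB): space quota is verified when the first block is allocated
# (FSDirWriteFileOp.addBlock in the trace), so a 1 KiB quota is exceeded
# by the very first block reservation, before any data is written.
quota_bytes = 1024                # "quota = 1024 B = 1 KB"
consumed_bytes = 134_217_728      # "diskspace consumed = 134217728 B"
assert consumed_bytes == 128 * 1024 * 1024   # one default 128 MB block
print(consumed_bytes - quota_bytes)          # bytes over quota → 134216704
```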
 

BTW, swebhdfs itself works fine in my environment:
{code:java}
$ hdfs dfs -ls swebhdfs://<ip>:50470/quota
Found 1 items
-rw-r--r--   1 dr.who hdfs          0 2020-05-28 17:55 
swebhdfs://<ip>:50470/quota/install_solr_service.sh
$
{code}



> SWEBHDFS closes the connection before a client can read the error response 
> for a DSQuotaExceededException
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14773
>                 URL: https://issues.apache.org/jira/browse/HDFS-14773
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>            Reporter: Simbarashe Dzinamarira
>            Priority: Major
>         Attachments: HDFS-14773-failing-test.patch
>
>
> When a DSQuotaExceededException is encountered using swebhdfs, the connection 
> is closed before the client can read the error response. This does not happen 
> for webhdfs.
> Attached is a patch for a test case that exposes the bug.
> You can recreate the bug on a live cluster using the steps below.
> *1) Create a directory and set a space quota*
> hdfs dfs -mkdir <directory-with-quota>
> hdfs dfsadmin -setSpaceQuota <N> <directory-with-quota> 
> *2) Write a file whose size exceeds the quota, using swebhdfs.*
> curl -L -i --negotiate -u : -X PUT -T largeFile 
> "<namenode-url>:<port>/webhdfs/v1/<directory-with-quota>/largeFile?op=CREATE"
>  
>  
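The crux of the report is whether the client can still read the 403 body before the connection is torn down. A minimal sketch of what step 2's client does, assuming a hypothetical local server standing in for the NameNode and plain HTTP rather than TLS (so this demonstrates the working case that swebhdfs breaks, not the bug itself):

```python
# Issue a PUT, receive a 403 with a JSON error body, and read that body.
# With swebhdfs the reported bug is that the TLS connection is closed before
# the body can be read; over plain HTTP, as here, the read succeeds.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Error body shaped like the RemoteException WebHDFS returns (abridged).
ERROR_BODY = json.dumps({"RemoteException": {
    "exception": "DSQuotaExceededException",
    "message": "The DiskSpace quota of /quota is exceeded"}}).encode()

class QuotaHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # Mimic the server rejecting the write with a 403 plus a JSON body.
        self.send_response(403)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(ERROR_BODY)))
        self.end_headers()
        self.wfile.write(ERROR_BODY)

    def log_message(self, *args):
        pass  # keep the demo quiet

def put_and_read_error(url: str) -> dict:
    """PUT to url; on an HTTP error, read and parse the JSON error body."""
    req = Request(url, data=b"x" * 16, method="PUT")
    try:
        urlopen(req)
        return {}
    except HTTPError as e:
        # This is the read that fails over swebhdfs per this issue:
        # the connection is closed before the error body arrives.
        return json.loads(e.read())

server = HTTPServer(("127.0.0.1", 0), QuotaHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
resp = put_and_read_error(
    "http://127.0.0.1:%d/webhdfs/v1/quota/largeFile?op=CREATE"
    % server.server_port)
server.shutdown()
print(resp["RemoteException"]["exception"])  # → DSQuotaExceededException
```

The failing test in the attached patch presumably exercises the same path over TLS, where the premature close makes the `e.read()` step come back empty or raise instead.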



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
