[
https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896462#comment-16896462
]
Erik Krogen edited comment on HDFS-14462 at 7/30/19 7:58 PM:
-------------------------------------------------------------
Good find, thanks [~simbadzina]. I took a look at the v1 patch:
# It's possible for {{validateResponse}} not to throw anything, so I think we
need to do:
{code}
} catch (IOException e) {
validateResponse(op, conn, true);
throw e;
}
{code}
to ensure that we don't swallow and permanently lose a failure. Maybe we should
also log {{e}}, since we may be masking it? I'm not sure whether it will ever
contain useful information (see the first sketch after this list).
# You have an unused {{DFSAdmin}} in your test
# You can just use {{assertTrue()}} instead of {{Assert.assertTrue()}}
# In your {{setQuota}} call you're also setting the {{namespaceQuota}} equal
to the {{spaceQuota}}; you probably want {{HdfsConstants#QUOTA_DONT_SET}} for
the name quota (second sketch below)
# Can we use a smaller quota and file size? 500MB seems pretty large for a unit test
# Right now, if the {{DSQuotaExceededException}} is thrown from the {{close()}}
call, the test still succeeds. Can we make the test enforce that it is the
{{write()}} call that throws? (third sketch below)
# Make sure to fix the checkstyle warnings; also, we typically don't use star
imports ({{import static package.Class.*}})
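For item 1, a minimal sketch of the combined log-then-rethrow I'm suggesting
(assuming a {{LOG}} field is in scope where the v1 patch adds the catch; the
message text is just a placeholder):
{code}
} catch (IOException e) {
  // Log the original failure up front, since validateResponse may replace it
  // with a more specific exception (e.g. DSQuotaExceededException).
  LOG.warn("Exception while writing request body", e);
  validateResponse(op, conn, true);
  // validateResponse may return without throwing; rethrow the original
  // exception so the failure is never swallowed.
  throw e;
}
{code}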
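For item 4, roughly what I mean (sketch only; {{dfs}}, the path, and the quota
value are placeholders for whatever the test uses):
{code}
// Set only the space quota; QUOTA_DONT_SET leaves the name quota unchanged.
final long spaceQuota = 2 * 1024 * 1024; // something much smaller than 500MB
dfs.setQuota(new Path("/test/quota"), HdfsConstants.QUOTA_DONT_SET, spaceQuota);
{code}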
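For item 6, one way to pin the failure to {{write()}} (sketch only; {{webhdfs}},
{{file}}, and {{data}} are placeholders, and {{fail}} is the usual static import
from {{org.junit.Assert}}):
{code}
FSDataOutputStream out = webhdfs.create(file);
try {
  out.write(data); // data must be large enough to exceed the space quota
  fail("Expected DSQuotaExceededException from write()");
} catch (DSQuotaExceededException e) {
  // expected: the quota error surfaced from write(), not close()
} finally {
  IOUtils.closeStream(out); // ignore any secondary failure from close()
}
{code}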
> WebHDFS throws "Error writing request body to server" instead of
> DSQuotaExceededException
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-14462
> URL: https://issues.apache.org/jira/browse/HDFS-14462
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: webhdfs
> Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.7, 3.1.2
> Reporter: Erik Krogen
> Assignee: Simbarashe Dzinamarira
> Priority: Major
> Attachments: HDFS-14462.001.patch
>
>
> We noticed recently in our environment that, when writing data to HDFS via
> WebHDFS, a quota exception is returned to the client as:
> {code}
> java.io.IOException: Error writing request body to server
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536) ~[?:1.8.0_172]
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519) ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_172]
>         at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) ~[?:1.8.0_172]
>         at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[?:1.8.0_172]
> {code}
> It is entirely opaque to the user that this exception occurred because they
> exceeded their quota. Yet in the DataNode logs:
> {code}
> 2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /foo/path/here is exceeded: quota = XXXXXXXXXXXX B = X TB but diskspace consumed = XXXXXXXXXXXXXXXX B = X TB
>         at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
>         at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
> {code}
> This was on a 2.7.x cluster, but I verified that the same logic exists on
> trunk. I believe we need to fix some of the logic within the
> {{ExceptionHandler}} to add special handling for the quota exception.