[ https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16898145#comment-16898145 ]

Erik Krogen commented on HDFS-14462:
------------------------------------

{quote}With regards to item #6, try with resources throws the error in the try 
portion and suppresses the one in the close. When I don't throw a 
DSQuotaExceededException in the try, then the generic HTTPURLConnection error 
is the one which the test throws.
{quote}
We should be able to achieve this without the try-with-resources, something 
like:
{code:java}
try {
  // do a write which triggers the quota exception
  fail("should have thrown exception");
} catch (DSQuotaExceededException e) {
  // expected
} finally {
  out.close();
}
{code}
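For reference, the suppression behavior described in the quote can be seen in a minimal, self-contained sketch (the {{FailingResource}} class here is hypothetical, just to make both the body and {{close()}} throw): with try-with-resources, the body's exception propagates and the {{close()}} exception is only attached via {{getSuppressed()}}, which is why the test never saw the generic HttpURLConnection error when the quota exception was thrown first.
{code:java}
public class SuppressionDemo {
  // Hypothetical resource whose close() always fails.
  static class FailingResource implements AutoCloseable {
    @Override
    public void close() throws Exception {
      throw new Exception("error from close()");
    }
  }

  public static void main(String[] args) {
    try (FailingResource r = new FailingResource()) {
      throw new Exception("error from try body");
    } catch (Exception e) {
      // The body's exception wins; close()'s exception is suppressed.
      System.out.println("caught: " + e.getMessage());
      System.out.println("suppressed: " + e.getSuppressed()[0].getMessage());
    }
  }
}
{code}
With the explicit try/finally pattern above, by contrast, there is no suppression at play: the catch block handles the expected quota exception directly, and {{out.close()}} simply runs afterward.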
For the new log statement, I don't think it should be at "error" level, 
probably just "warn" (maybe even "info"?), since we can expect this to happen 
in normal circumstances like a quota exception. See this [helpful 
guide|https://en.wikipedia.org/wiki/Log4j#Log4j_log_levels] on log level 
semantics. It would also be nice to include some additional information, like:
{code:java}
LOG.warn("Write to output stream for file {} failed. Attempting to fetch the cause from the stream", fspath, e);
{code}
It's good to remember that many people who look at these logs won't bother to 
look at the place in the code where the log statement originates, so some 
context is helpful.

> WebHDFS throws "Error writing request body to server" instead of 
> DSQuotaExceededException
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-14462
>                 URL: https://issues.apache.org/jira/browse/HDFS-14462
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.7, 3.1.2
>            Reporter: Erik Krogen
>            Assignee: Simbarashe Dzinamarira
>            Priority: Major
>         Attachments: HDFS-14462.001.patch, HDFS-14462.002.patch
>
>
> We noticed recently in our environment that, when writing data to HDFS via 
> WebHDFS, a quota exception is returned to the client as:
> {code}
> java.io.IOException: Error writing request body to server
>         at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536)
>  ~[?:1.8.0_172]
>         at 
> sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519)
>  ~[?:1.8.0_172]
>         at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) 
> ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) 
> ~[?:1.8.0_172]
>         at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) 
> ~[?:1.8.0_172]
>         at java.io.DataOutputStream.flush(DataOutputStream.java:123) 
> ~[?:1.8.0_172]
> {code}
> It is entirely opaque to the user that this exception was caused because they 
> exceeded their quota. Yet in the DataNode logs:
> {code}
> 2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer 
> Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota 
> of /foo/path/here is exceeded: quota = XXXXXXXXXXXX B = X TB but diskspace 
> consumed = XXXXXXXXXXXXXXXX B = X TB
>         at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
>         at 
> org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
> {code}
> This was on a 2.7.x cluster, but I verified that the same logic exists on 
> trunk. I believe we need to fix some of the logic within the 
> {{ExceptionHandler}} to add special handling for the quota exception.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
