[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9634:
-
Fix Version/s: 2.8.0

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0, 2.7.1, 3.0.0-alpha1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>
>
> When a WebHDFS client-side exception (for example, a read timeout) occurs, there 
> are no details beyond the fact that a timeout occurred. Ideally it should say 
> which node is responsible for the timeout, but failing that it should at 
> least say which node we're talking to, so we can examine that node's logs to 
> investigate further.
> {noformat}
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:150)
> at java.net.SocketInputStream.read(SocketInputStream.java:121)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at sun.net.www.MeteredStream.read(MeteredStream.java:134)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035)
> at org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121)
> at org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58)
> at java.io.FilterInputStream.read(FilterInputStream.java:107)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387)
> ... 12 more
> {noformat}
> There are no clues as to which datanode we're talking to nor which datanode 
> was responsible for the timeout.
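The general technique for a fix like this is to catch the low-level network exception and rethrow it with the remote node's address appended to the message, keeping the original as the cause. A minimal sketch of that approach (hypothetical helper and host names, not the actual HDFS-9634 patch):

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

// Sketch only: rewrap a network exception so its message names the node
// the client was talking to. Class and method names are illustrative.
public class NodeAwareException {

    // Appends "host:port" to the message; the original exception is preserved
    // as the cause, so the full stack trace is not lost.
    static IOException augment(IOException cause, String host, int port) {
        return new IOException(
                cause.getMessage() + " (node: " + host + ":" + port + ")", cause);
    }

    public static void main(String[] args) {
        IOException raw = new SocketTimeoutException("Read timed out");
        IOException enriched = augment(raw, "dn1.example.com", 50075);
        // prints: Read timed out (node: dn1.example.com:50075)
        System.out.println(enriched.getMessage());
    }
}
```

With the address in the message, an operator can go straight to that datanode's logs instead of guessing which of the pipeline nodes timed out.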



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9634:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)

I've committed the fix to trunk, branch-2, branch-2.8 and branch-2.7.

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.8.0, 2.7.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.7.3
>
> Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>





[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-12 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9634:
-
Attachment: HDFS-9634.002.patch

Attaching HDFS-9634-002.patch. Sorry about the previous bad patch.

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.8.0, 2.7.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>





[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-11 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9634:
-
Target Version/s: 3.0.0, 2.8.0
  Status: Patch Available  (was: Open)

[~daryn], [~kihwal], and [~jlowe]:
Attached HDFS-9634.001.patch

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1, 3.0.0, 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9634.001.patch
>





[jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-11 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9634:
-
Attachment: HDFS-9634.001.patch

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.8.0, 2.7.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9634.001.patch
>


