[ 
https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778529#comment-16778529
 ] 

Yongjun Zhang commented on HDFS-14083:
--------------------------------------

Hi guys,

I took a look and I agree with [~tlipcon]'s comments, and have some additional thoughts. 

1. Given that errno is thread-safe (per 
http://www.unix.org/whitepapers/reentrant.html), we should have readDirect 
initialize errno to 0, and set it to other values upon failure.

2. Besides the static variables not being thread-safe (Todd pointed out it might 
be ok), the naming is also too generic, since they are intended for readDirect. 
{code}
static time_t last_reported_err_time = 0;
static long last_reported_err_cnt = 0;
 {code}
 Maybe we can change the variable names to include "_read_direct" to be 
specific? 

3. If HADOOP-14603 is fixed, we will no longer have this excessive logging 
issue, but it doesn't seem to hurt to have HDFS-14083 as an interim fix, and it 
can stay as is even after the HADOOP-14603 fix.

Wonder if you agree, [~tlipcon] and other folks?

Thanks.

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> ---------------------------------------------------------------------
>
>                 Key: HDFS-14083
>                 URL: https://issues.apache.org/jira/browse/HDFS-14083
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: libhdfs, native
>    Affects Versions: 3.0.3
>            Reporter: Pranay Singh
>            Assignee: Pranay Singh
>            Priority: Minor
>         Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, 
> HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch, 
> HDFS-14083.006.patch, HDFS-14083.007.patch, HDFS-14083.008.patch, 
> HDFS-14083.009.patch
>
>
> Problem:
> ------------
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This issue is caused because 
> byte-buffer read is not supported in the S3 environment; see HADOOP-14603 
> "S3A input stream to support ByteBufferReadable".
> The following message is printed repeatedly to the error log / STDERR:
> {code}
> --------------------------------------------------------------------------------------------------
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
>         at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> {code}
> h3. Root cause
> After investigating the issue, it appears that the above exception is printed 
> because, when a file is opened, {{hdfsOpenFileImpl()}} calls 
> {{readDirect()}}, which hits this exception.
> h3. Fix:
> Since the byte-buffer read is not initiated by the hdfs client but happens 
> implicitly, we should not generate the error log when opening a file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
