[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Description: 
Problem:

There is excessive error logging when a file is opened by libhdfs 
(DFSClient/HDFS) in an S3 environment. This happens because byte-buffer 
reads are not supported in the S3 environment; see HADOOP-14603, "S3A input 
stream to support ByteBufferReadable".

The following message is printed repeatedly to the error log / STDERR:
--
java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause

After investigating the issue, it appears that the above exception is printed 
because hdfsOpenFileImpl(), while opening a file, calls readDirect(), which 
hits this exception.
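
For illustration, a minimal Java sketch of how the exception arises (the 
path and the standalone harness are hypothetical; FSDataInputStream throws 
this exception whenever the wrapped stream does not implement 
ByteBufferReadable, which is what readDirect() runs into on S3A):

import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirectReadRepro {
    public static void main(String[] args) throws Exception {
        // Hypothetical S3A location; any FileSystem whose input stream
        // lacks ByteBufferReadable reproduces the message quoted above.
        Path path = new Path("s3a://bucket/key");
        FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
        try (FSDataInputStream in = fs.open(path)) {
            // FSDataInputStream.read(ByteBuffer) delegates to the wrapped
            // stream only if it implements ByteBufferReadable; otherwise it
            // throws UnsupportedOperationException (FSDataInputStream.java:150).
            int n = in.read(ByteBuffer.allocate(4096));
            System.out.println("direct read returned " + n + " bytes");
        }
    }
}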

Fix:

Since the HDFS client does not explicitly initiate the byte-buffer read (it 
happens implicitly as part of opening a file), we should not generate an 
error log entry when a file is opened.
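
A minimal sketch of the direction of the fix, not the actual patch (the real 
change is in HADOOP-15928.001.patch; the helper name here is illustrative): 
probe direct-read support once at open time and treat "unsupported" as an 
expected outcome rather than an error worth logging.

import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

public final class DirectReadProbe {
    private DirectReadProbe() {}

    // Returns whether the stream supports byte-buffer (direct) reads.
    // The caller records the result and silently falls back to the
    // byte-array read path; since the probe is implicit rather than
    // user-initiated, "unsupported" is not logged as an error.
    public static boolean supportsByteBufferRead(FSDataInputStream in)
            throws IOException {
        try {
            in.read(ByteBuffer.allocate(0)); // zero-byte direct-read probe
            return true;
        } catch (UnsupportedOperationException e) {
            return false;
        }
    }
}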




  was:
Problem:

There is excessive error logging when a file is opened by libhdfs 
(DFSClient/HDFS) in an S3 environment. This happens because byte-buffer 
reads are not supported in the S3 environment; see HADOOP-14603, "S3A input 
stream to support ByteBufferReadable".

Excessive error logging results in defect IMPALA-5256, "ERROR log files can 
get very large".

The following message is printed repeatedly in the error log:

java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause

After investigating the issue, it appears that the above exception is printed 
because hdfsOpenFileImpl(), while opening a file, calls readDirect(), which 
hits this exception.

Fix:

Since the HDFS client does not explicitly initiate the byte-buffer read (it 
happens implicitly as part of opening a file), we should not generate an 
error log entry when a file is opened.





> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This happens because byte-buffer 
> reads are not supported in the S3 environment; see HADOOP-14603, "S3A 
> input stream to support ByteBufferReadable".
> The following message is printed repeatedly to the error log / STDERR:
> --
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
>         at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is 
> printed because hdfsOpenFileImpl(), while opening a file, calls 
> readDirect(), which hits this exception.
> Fix:
> 
> Since the HDFS client does not explicitly initiate the byte-buffer read 
> (it happens implicitly as part of opening a file), we should not generate 
> an error log entry when a file is opened.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Description: 
Problem:

There is excessive error logging when a file is opened by libhdfs 
(DFSClient/HDFS) in an S3 environment. This happens because byte-buffer 
reads are not supported in the S3 environment; see HADOOP-14603, "S3A input 
stream to support ByteBufferReadable".

Excessive error logging results in defect IMPALA-5256, "ERROR log files can 
get very large".

The following message is printed repeatedly in the error log:

java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause

After investigating the issue, it appears that the above exception is printed 
because hdfsOpenFileImpl(), while opening a file, calls readDirect(), which 
hits this exception.

Fix:

Since the HDFS client does not explicitly initiate the byte-buffer read (it 
happens implicitly as part of opening a file), we should not generate an 
error log entry when a file is opened.




  was:
Problem:

There is excessive error logging when Impala uses HDFS in an S3 environment. 
This issue is caused by defect HADOOP-14603, "S3A input stream to support 
ByteBufferReadable".

Excessive error logging results in defect IMPALA-5256, "ERROR log files can 
get very large".

The following message is printed repeatedly in the error log:

java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)

Root cause

After investigating the issue, it appears that the above exception is printed 
because hdfsOpenFileImpl(), while opening a file, calls readDirect(), which 
hits this exception.

Fix:

Since the HDFS client does not explicitly initiate the byte-buffer read (it 
happens implicitly as part of opening a file), we should not generate an 
error log entry when a file is opened.





> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This happens because byte-buffer 
> reads are not supported in the S3 environment; see HADOOP-14603, "S3A 
> input stream to support ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256, "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
>         at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is 
> printed because hdfsOpenFileImpl(), while opening a file, calls 
> readDirect(), which hits this exception.
> Fix:
> 
> Since the HDFS client does not explicitly initiate the byte-buffer read 
> (it happens implicitly as part of opening a file), we should not generate 
> an error log entry when a file is opened.






[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Status: Patch Available  (was: In Progress)

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 
> environment. This issue is caused by defect HADOOP-14603, "S3A input 
> stream to support ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256, "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
>         at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is 
> printed because hdfsOpenFileImpl(), while opening a file, calls 
> readDirect(), which hits this exception.
> Fix:
> 
> Since the HDFS client does not explicitly initiate the byte-buffer read 
> (it happens implicitly as part of opening a file), we should not generate 
> an error log entry when a file is opened.






[jira] [Updated] (HADOOP-15928) Excessive error logging when using HDFS in S3 environment

2018-11-13 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Attachment: HADOOP-15928.001.patch

> Excessive error logging when using HDFS in S3 environment
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when Impala uses HDFS in an S3 
> environment. This issue is caused by defect HADOOP-14603, "S3A input 
> stream to support ByteBufferReadable".
> Excessive error logging results in defect IMPALA-5256, "ERROR log files 
> can get very large".
> The following message is printed repeatedly in the error log:
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by input stream
>         at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is 
> printed because hdfsOpenFileImpl(), while opening a file, calls 
> readDirect(), which hits this exception.
> Fix:
> 
> Since the HDFS client does not explicitly initiate the byte-buffer read 
> (it happens implicitly as part of opening a file), we should not generate 
> an error log entry when a file is opened.


