[jira] [Commented] (HADOOP-15063) IOException may be thrown when read from Aliyun OSS in some case

2017-11-22 Thread wujinhu (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262396#comment-16262396 ]

wujinhu commented on HADOOP-15063:
--

Thanks for the review.
I found this is the same issue as https://issues.apache.org/jira/browse/HADOOP-14072,
so I will close this one.

> IOException may be thrown when read from Aliyun OSS in some case
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2, 3.0.0-beta1
>Reporter: wujinhu
>Assignee: wujinhu
> Attachments: HADOOP-15063.001.patch
>
>
> An IOException will be thrown in the following case:
> 1. set part size = n (102400)
> 2. assume current position = 0, so partRemaining = 102400
> 3. call seek(pos = 101802); since pos > position && pos < position + 
> partRemaining, the stream skips pos - position bytes, but partRemaining 
> is not updated
> 4. if we then read more than n - pos bytes, an IOException is thrown.
> Current code:
> {code:java}
> @Override
>   public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   // we need update partRemaining here
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
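> // size is the length of the 10 MB test file; instream is the input stream opened on it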
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   byte []buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}
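
For reference, a minimal sketch of the change the in-line comment points at: update partRemaining together with position when skipping forward inside the current part. Variable names follow the snippet above; this is not necessarily what HADOOP-15063.001.patch does.

{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    long skipBytes = pos - position;
    AliyunOSSUtils.skipFully(wrappedStream, skipBytes);
    position = pos;
    // keep partRemaining consistent with the new position so a following
    // read() does not run past the end of the current part
    partRemaining -= skipBytes;
  } else {
    reopen(pos);
  }
}
{code}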






[jira] [Commented] (HADOOP-15063) IOException may be thrown when read from Aliyun OSS in some case

2017-11-22 Thread Steve Loughran (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262356#comment-16262356 ]

Steve Loughran commented on HADOOP-15063:
-

+ [~uncleGen] + [~drankye]

I'll leave it to the OSS experts to review the production code; the argument 
makes sense and the patch appears to fix it, but I don't know the code well 
enough to be the reviewer here. Let's see what the others say.

Test-wise: which endpoint did you run the full module test suite against?

test code comments:

* use try-with-resources to automatically close the input stream, even on an 
assert failure (see the sketch below)
* use assertEquals(56, bytesRead) to get an automatic message if the check fails
* if the store is eventually consistent, use a different filename for each 
test. This guarantees that you don't accidentally get the file from a 
previous test case.
* minor layout change: use {{byte[] buf}} as the layout for declaring the 
variable
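
As an illustration of the first two points, a hypothetical test sketch; fs, testPath and the 56-byte expectation are placeholders, not the actual test in the patch.

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class TestSeekAndRead {
  private FileSystem fs;   // initialized in a @Before method (omitted)
  private Path testPath;   // use a unique name per test if the store is eventually consistent

  @Test
  public void testReadAfterSeek() throws Exception {
    byte[] buf = new byte[1024];
    // try-with-resources closes the stream even if an assertion fails
    try (FSDataInputStream in = fs.open(testPath)) {
      in.seek(101802);
      int bytesRead = in.read(buf, 0, buf.length);
      // assertEquals reports expected vs. actual automatically on failure
      assertEquals(56, bytesRead);
    }
  }
}
{code}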







[jira] [Commented] (HADOOP-15063) IOException may be thrown when read from Aliyun OSS in some case

2017-11-22 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262240#comment-16262240 ]

Hadoop QA commented on HADOOP-15063:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} | {color:red} HADOOP-15063 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15063 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12898815/HADOOP-15063.001.patch |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13736/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.





