[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-11 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326196#comment-15326196
 ] 

Duo Zhang commented on HADOOP-12910:


The stuff under java.util.function exists mainly to support lambda expressions... 
so do NOT consider copying CompletableFuture to JDK7...

And is there any strong reason that we must consider branch-2 in the first 
place? Existing user code cannot benefit, since we only add a new API; you must 
rewrite your code if you want to use it... And IMO, if you only want to batch 
calls to the namenode, a thread pool is enough, and you can easily prevent OOM 
by limiting the maximum size of the thread pool. You need to change your 
architecture to SEDA or TPC if you want to write a truly high-performance 
asynchronous program.
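
For illustration only, here is a rough sketch of that thread-pool approach (the 
class name, pool size, and queue capacity are made up, not from any patch); the 
bounded queue plus CallerRunsPolicy is what keeps pending calls from piling up 
until OOM:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BatchRename {
  public static void renameAll(final FileSystem fs, List<Path> srcs, List<Path> dsts)
      throws Exception {
    ExecutorService pool = new ThreadPoolExecutor(16, 16, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(1024),      // bounded queue caps pending work
        new ThreadPoolExecutor.CallerRunsPolicy());   // back-pressure instead of OOM
    List<Future<Boolean>> results = new ArrayList<Future<Boolean>>();
    for (int i = 0; i < srcs.size(); i++) {
      final Path src = srcs.get(i);
      final Path dst = dsts.get(i);
      results.add(pool.submit(new Callable<Boolean>() {   // JDK7-friendly, no lambdas
        @Override
        public Boolean call() throws IOException {
          return fs.rename(src, dst);
        }
      }));
    }
    for (Future<Boolean> f : results) {
      f.get();                                        // surfaces failures of individual calls
    }
    pool.shutdown();
  }
}
{code}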

Thanks.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Comment Edited] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326123#comment-15326123
 ] 

Xiaoyu Yao edited comment on HADOOP-13255 at 6/12/16 1:26 AM:
--

Thanks [~xiaochen] for working on this and [~zhz] for the review. I would 
suggest we fix this with the approach in the v1 patch.

1. The v1 patch is correct and less risky. All the change is localized to 
KMSClientProvider, compared with a broader change in DelegationTokenAuthenticator 
or KerberosAuthenticator.

2. The v2 patch below won't be able to handle the proxy-user and token-user 
cases, as the currentUGI is not sufficient for those cases. There have been a 
few fixes around KMSClientProvider#actualUGI to get this right. You can refer to 
how actualUGI is initialized in KMSClientProvider#KMSClientProvider().

{code}
  UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
{code}
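
To make that concrete, here is a rough, hypothetical sketch of the v1-style, 
provider-local fix (the wrapper method below is invented for illustration; only 
UserGroupInformation#checkTGTAndReloginFromKeytab and the actualUGI field come 
from the discussion above):

{code}
  // Illustration only, not the actual patch: run each delegation token operation
  // as actualUgi, refreshing the TGT from the keytab first if it has expired.
  private <T> T doAsActualUgi(PrivilegedExceptionAction<T> op) throws IOException {
    try {
      actualUgi.checkTGTAndReloginFromKeytab();  // relogin if the TGT is missing/stale
      return actualUgi.doAs(op);                 // keeps proxy-user and token-user cases working
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      throw new IOException(ie);
    }
  }
{code}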





was (Author: xyao):
Thanks [~xiaochen] for working on this and [~zhz] for the review. I would 
suggest we fix this with the approach in the v1 patch.

1. The v1 patch is less risky. All the change is localized to KMSClientProvider, 
compared with a broader change in DelegationTokenAuthenticator or 
KerberosAuthenticator.

2. The v2 patch below won't be able to handle the proxy-user and token-user 
cases, as the currentUGI is not sufficient for those cases. There have been a 
few fixes around KMSClientProvider#actualUGI to get this right. You can refer to 
how actualUGI is initialized in KMSClientProvider#KMSClientProvider().

{code}
  UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
{code}




> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch
>
>







[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.

2016-06-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326123#comment-15326123
 ] 

Xiaoyu Yao commented on HADOOP-13255:
-

Thanks [~xiaochen] for working on this and [~zhz] for the review. I would 
suggest we fix this with the approach in the v1 patch.

1. The v1 patch is less risky. All the change is localized to KMSClientProvider, 
compared with a broader change in DelegationTokenAuthenticator or 
KerberosAuthenticator.

2. The v2 patch below won't be able to handle the proxy-user and token-user 
cases, as the currentUGI is not sufficient for those cases. There have been a 
few fixes around KMSClientProvider#actualUGI to get this right. You can refer to 
how actualUGI is initialized in KMSClientProvider#KMSClientProvider().

{code}
  UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
{code}




> KMSClientProvider should check and renew tgt when doing delegation token 
> operations.
> 
>
> Key: HADOOP-13255
> URL: https://issues.apache.org/jira/browse/HADOOP-13255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch
>
>







[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326095#comment-15326095
 ] 

Akira AJISAKA commented on HADOOP-13192:


Hi [~zhudebin], please do not set the fix version. It is set by a committer 
when the change for the issue is committed.
Also, would you create a pull request for trunk instead of branch-2.6?
https://wiki.apache.org/hadoop/HowToContribute#Generating_a_patch

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }
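
For reference, a minimal reproduction sketch of the case described above (a test 
harness only, not part of any patch; it assumes the input line is aaaabccc):

{code}
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.LineReader;

public class LineReaderRepro {
  public static void main(String[] args) throws Exception {
    byte[] delimiter = "aaab".getBytes(StandardCharsets.UTF_8);
    LineReader reader = new LineReader(
        new ByteArrayInputStream("aaaabccc".getBytes(StandardCharsets.UTF_8)), delimiter);
    Text record = new Text();
    reader.readLine(record);
    System.out.println(record);  // expected "a"; the unpatched code mishandles the partial match
    reader.readLine(record);
    System.out.println(record);  // expected "ccc"
    reader.close();
  }
}
{code}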






[jira] [Updated] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13192:
---
Fix Version/s: (was: 2.6.5)

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Reopened] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reopened HADOOP-13192:


> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-06-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326063#comment-15326063
 ] 

Kai Zheng commented on HADOOP-13081:


This is a question: since it's cloning a UGI, should it clone the subject 
exactly, rather than incompletely? Note that it's possible to construct a 
Subject instance from another one, looking at its constructors.
{code}
+   
+  /**
+   * @return clone of the UGI with a new subject.
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public UserGroupInformation clone() {
+Subject subject = new Subject();
+subject.getPrincipals().addAll(this.getSubject().getPrincipals());
+// The ctor will set other fields automatically from the principals.
+return new UserGroupInformation(subject);
+  }
{code}
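
For reference, a sketch of the "clone the subject exactly" alternative 
(illustrative only, not a proposed patch): javax.security.auth.Subject has a 
constructor that copies the principal and credential sets of an existing 
subject, e.g.

{code}
    // Illustration only: copy principals plus public/private credentials into a new Subject.
    Subject original = this.getSubject();
    Subject copy = new Subject(false,                   // not read-only
        original.getPrincipals(),
        original.getPublicCredentials(),
        original.getPrivateCredentials());
    return new UserGroupInformation(copy);
{code}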

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13241) document s3a better

2016-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326048#comment-15326048
 ] 

Chris Nauroth commented on HADOOP-13241:


Patch 002 is looking great.  I just have one more question.

bq. When a partitioned upload is in progress, they may be visible.

Is this accurate?  I thought S3 doesn't make it visible until after completion 
of the multi-part upload.  Am I mistaken?

> document s3a better
> ---
>
> Key: HADOOP-13241
> URL: https://issues.apache.org/jira/browse/HADOOP-13241
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13241-branch-2-001.patch, 
> HADOOP-13241-branch-2-002.patch
>
>
> s3a can be documented better, things like classpath, troubleshooting, etc.
> sit down and do it.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-06-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326045#comment-15326045
 ] 

Chris Nauroth commented on HADOOP-13081:


Hello [~sershe].  This is definitely in my review queue, but I don't expect 
I'll get to it soon.  UGI changes always need careful scrutiny, even if it's a 
small patch.  One initial question: did you investigate adding a unit test, 
possibly to {{TestUserGroupInformation}}?

I will get to this eventually, but I'd also welcome others to review it if I'm 
not giving you the SLA you want.  :-)
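
For what it's worth, one possible shape for such a test (purely a sketch; the 
test name and assertions are assumptions, not taken from the attached patch):

{code}
  // Hypothetical sketch for TestUserGroupInformation; assumes the patch's clone() method.
  @Test
  public void testCloneUsesNewSubject() throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    UserGroupInformation cloned = ugi.clone();
    assertEquals(ugi.getUserName(), cloned.getUserName());
    // The clone must not share the Subject, so tokens added to it stay isolated.
    assertNotSame(ugi.getSubject(), cloned.getSubject());
    assertTrue(cloned.getSubject().getPrincipals().containsAll(
        ugi.getSubject().getPrincipals()));
  }
{code}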

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
> Attachments: HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326044#comment-15326044
 ] 

Hadoop QA commented on HADOOP-13207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
23s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
24s{color} | {color:red} root: The patch generated 3 new + 30 unchanged - 51 
fixed = 33 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809645/HADOOP-13207-branch-2-005.patch
 |
| JIRA Issue | HADOOP-13207 |
| Optional Tests |  

[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Status: Patch Available  (was: Open)

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Attachment: HADOOP-13207-branch-2-005.patch

Patch 005: incorporates an OpenStack implementation of the getFileStatus/list* 
tests, {{TestSwiftContractGetFileStatus}}.

Patch tested against: local OS X, HDFS, S3A, Azure, Rackspace Swift.

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Updated] (HADOOP-13207) Specify FileSystem listStatus and listFiles

2016-06-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13207:

Attachment: HADOOP-13207-branch-2-004.patch

HADOOP-13207 patch 004: a lot more on list-status operations on files and on 
the root path. Makes sure that the filter applies to the path itself, etc. Root 
dir operations are carefully constructed so that no attempt is made to do a 
recursive listFiles() on a root FS unless the FS permits root dir operations 
(i.e. it is a small, transient one).

The FS spec now uses "path" as the name of the path argument, not "f" or "p" 
(as the code does), and likewise for "filter", plus some other text cleanup.

> Specify FileSystem listStatus and listFiles
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of {{listStatus()}} path, but no coverage or tests of the others.






[jira] [Resolved] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug

2016-06-11 Thread binde (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binde resolved HADOOP-13192.

   Resolution: Fixed
Fix Version/s: 2.6.5

> org.apache.hadoop.util.LineReader  match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
> Fix For: 2.6.5
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>   bufferPosn--;
>   delPosn = 0;
> }
>   }
> should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
> if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>   delPosn++;
>   if (delPosn >= recordDelimiterBytes.length) {
> bufferPosn++;
> break;
>   }
> } else if (delPosn != 0) {
>  // - change here - start 
>   bufferPosn -= delPosn;
>  // - change here - end 
>   
>   delPosn = 0;
> }
>   }






[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-06-11 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15325743#comment-15325743
 ] 

Yufei Gu commented on HADOOP-13254:
---

whitespace issue and test failure are unrelated.

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch
>
>



