[jira] [Commented] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Steven K. Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385384#comment-15385384
 ] 

Steven K. Wong commented on HADOOP-13389:
-

[~cnauroth], thanks for the suggestions. I'll work on a patch.

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13390) GC pressure of NetworkTopology.isAncestor call can be eliminated

2016-07-19 Thread He Tianyi (JIRA)
He Tianyi created HADOOP-13390:
--

 Summary: GC pressure of NetworkTopology.isAncestor call can be 
eliminated
 Key: HADOOP-13390
 URL: https://issues.apache.org/jira/browse/HADOOP-13390
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Affects Versions: 2.7.0, 2.6.0, 2.8.0
Reporter: He Tianyi
Priority: Minor


{{NetworkTopology.isAncestor}} is called from {{NetworkTopology.getLeaf}}, which 
is used heavily in block placement policies ({{chooseRandom}}, for example).

Currently, the implementation calls {{getPath}} twice, and {{getPath}} performs 
string concatenation on the fly. On a busy NameNode, this introduces extra GC 
pressure and CPU overhead for block allocation. 
Given that a node's network location and name generally change infrequently, we 
can cache the path as a property of {{Node}} and update it whenever the network 
location or node name changes (see the sketch below).
Also, one of the two {{getPath}} calls in {{getLeaf}} can be eliminated, because 
both calls are expected to return identical results.
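As a rough illustration (this is not the actual {{Node}}/{{NetworkTopology}} 
code; the class and member names below are made up), the caching could look 
like:

{code}
// Cache the concatenated path on the node and invalidate it whenever the
// network location or name changes, so repeated getPath() calls stop
// allocating new strings.
public class CachedPathNode {
  private String name;
  private String networkLocation;
  private volatile String cachedPath;  // rebuilt lazily after invalidation

  public String getPath() {
    String p = cachedPath;
    if (p == null) {
      p = networkLocation + "/" + name;  // concatenate once, then reuse
      cachedPath = p;
    }
    return p;
  }

  public void setNetworkLocation(String location) {
    this.networkLocation = location;
    this.cachedPath = null;  // invalidate on change
  }

  public void setName(String name) {
    this.name = name;
    this.cachedPath = null;  // invalidate on change
  }
}
{code}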



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13389:
---
Target Version/s: 2.8.0

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385333#comment-15385333
 ] 

Chris Nauroth commented on HADOOP-13389:


[~slider], thank you for the further details.  I think I understand now.  It 
sounds like you are trying to run the S3A test suite without an AWS access key 
ID and secret access key, instead relying on instance profile credentials 
provided in an EC2 VM.

The simplest immediate workaround for you is likely to set the following in 
your auth-keys.xml file:

{code}
<property>
  <name>test.fs.s3a.sts.enabled</name>
  <value>false</value>
</property>
{code}

However, I also agree that if the instance profile credentials are never 
suitable for this test case, then we would do well to remove 
{{InstanceProfileCredentialsProvider}} from the test and add explicit detection 
to {{skip}} when there is no access key ID and secret access key.  
{{S3AUtils#getAWSAccessKeys}} and the {{S3xLoginHelper}} class are likely to be 
helpful for that logic.
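For illustration, a minimal sketch of that skip check, assuming the standard 
{{fs.s3a.access.key}}/{{fs.s3a.secret.key}} configuration keys and a JUnit 
{{Assume}}-based skip (the helper name is made up):

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.conf.Configuration;

class StsSkipSketch {
  /** Skip the STS test when no long-term keys are configured. */
  static void skipIfNoLongTermCredentials(Configuration conf) {
    // Instance-profile (session) credentials cannot call GetSessionToken,
    // so only run when an access key ID and secret key are present.
    String accessKey = conf.getTrimmed("fs.s3a.access.key", "");
    String secretKey = conf.getTrimmed("fs.s3a.secret.key", "");
    assumeTrue("No AWS access key/secret key in config; skipping STS test",
        !accessKey.isEmpty() && !secretKey.isEmpty());
  }
}
{code}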

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token

2016-07-19 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385314#comment-15385314
 ] 

Xiao Chen commented on HADOOP-13381:


Thank you for the continued discussion, Arun.

Sorry, I missed one point in your proposal... it wouldn't work as we hoped.
bq. 4. Then, we let the retry happen, at which point it will get a new 
delegation token.
IIUC, the {{authToken}} is there to cache past successful authentications (so we 
don't have to authenticate every time). It does not 'get a new delegation 
token'. Instead, it just gets the {{kms-dt}} from the UGI's current user inside 
{{DelegationTokenAuthenticatedURL#openConnection}}, which happens inside the 
{{actualUgi.doAs}} in {{KMSCP#createConnection}}. So retries will still see the 
same expired DT (or no DT at all if we remove it). We have to get the DT from 
the UGI's current user before {{actualUgi.doAs}}... right?
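To illustrate the ordering (the names here are illustrative, not the actual 
KMSCP code), the token would have to be captured before entering {{doAs}}:

{code}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

class DtBeforeDoAsSketch {
  static void openWithCurrentUsersToken(UserGroupInformation actualUgi,
      Text kmsService) throws Exception {
    // Capture the kms-dt from the *current* user's credentials first...
    final Token<?> dt = UserGroupInformation.getCurrentUser()
        .getCredentials().getToken(kmsService);
    // ...because inside doAs, only actualUgi's (possibly stale) tokens are
    // visible to the connection code.
    actualUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // use 'dt' when opening the KMS connection here
      return null;
    });
  }
}
{code}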

Let me elaborate on the race I was thinking of. I did a test as follows:
# set {{/tmp}} as an EZ
# run a MR job (wordcount) as user {{mapred}}, over {{/tmp}}. Let's call this 
job1
# run a MR job (wordcount) as user {{impala}}, over {{/tmp}}. Let's call this 
job2.
# get below logs from my customized logging in {{KMSCP#createConnection}}

{noformat}
2016-07-19 14:35:18,306 INFO 
org.apache.hadoop.crypto.key.kms.KMSClientProvider:  currentUGI:impala 
(auth:SIMPLE) creds: [Kind: kms-dt, Service: 172.31.9.35:16000, Ident: 00 06 69 
6d 70 61 6c 61 04 79 61 72 6e 00 8a 01 56 05 15 10 22 8a 01 56 05 17 cf 42 02 
02, Kind: mapreduce.job, Service: job_1468963667277_0002, Ident: 
(org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@2e951fb5), Kind: 
HDFS_DELEGATION_TOKEN, Service: 172.31.9.72:8020, Ident: (token for impala: 
HDFS_DELEGATION_TOKEN owner=imp...@gce.cloudera.com, renewer=yarn, realUser=, 
issueDate=1468964081478, maxDate=1468964381478, sequenceNumber=216, 
masterKeyId=20)]
2016-07-19 14:35:18,307 INFO 
org.apache.hadoop.crypto.key.kms.KMSClientProvider:  actualUGI: mapred 
(auth:SIMPLE) creds: [Kind: kms-dt, Service: 172.31.9.35:16000, Ident: 00 06 6d 
61 70 72 65 64 04 79 61 72 6e 00 8a 01 56 05 11 b5 db 8a 01 56 05 14 74 fb 01 
02, Kind: mapreduce.job, Service: job_1468963667277_0001, Ident: 
(org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@7fdacda0), Kind: 
HDFS_DELEGATION_TOKEN, Service: 172.31.9.72:8020, Ident: (token for mapred: 
HDFS_DELEGATION_TOKEN owner=map...@gce.cloudera.com, renewer=yarn, realUser=, 
issueDate=1468963861782, maxDate=1468964161782, sequenceNumber=215, 
masterKeyId=20)]
{noformat}
Note that here the actual UGI is entirely mapred's. If job1 is about to enter 
{{actualUgi.doAs}} while job2 updates the credentials in {{actualUgi}}, job1 
will then see job2's DT when the invocation goes into DTAURL, right?

My drive-home thinking is that we should doAs the current UGI in this specific 
case (or retry with the current UGI), namely when 
[this|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L535]
 is null.

> KMS clients running in the same JVM should use updated KMS Delegation Token
> ---
>
> Key: HADOOP-13381
> URL: https://issues.apache.org/jira/browse/HADOOP-13381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13381.01.patch
>
>
> When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation 
> failure after the very first KMS token has expired. The MR job itself runs 
> fine, though.
> When this happens, the YARN NodeManager's log will show 
> {{AuthenticationException}} with {{token is expired}} / {{token can't be 
> found in cache}}, depending on whether the expired token has been removed in 
> the background or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11136) FTPInputStream should close wrapped stream

2016-07-19 Thread JerryXin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385243#comment-15385243
 ] 

JerryXin commented on HADOOP-11136:
---

I am running into this problem in ACTIVE mode. When you open an FTPInputStream 
and never read from it, the close() method blocks in 
client.completePendingCommand().
Could we just add wrappedStream.close() before boolean cmdCompleted = 
client.completePendingCommand()?
Or we could use pos to check whether the FTPInputStream has been read: if it 
has, call client.completePendingCommand(); if not, simply skip it. (A sketch of 
the first option follows.)
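For illustration, a minimal sketch of the first option (this is not the real 
{{FTPInputStream}}; it only shows the proposed ordering, with the 
{{wrappedStream}} and commons-net {{FTPClient}} fields assumed from the 
surrounding class):

{code}
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.net.ftp.FTPClient;

class FtpStreamCloseSketch {
  private final InputStream wrappedStream;
  private final FTPClient client;
  private boolean closed;

  FtpStreamCloseSketch(InputStream wrappedStream, FTPClient client) {
    this.wrappedStream = wrappedStream;
    this.client = client;
  }

  public synchronized void close() throws IOException {
    if (closed) {
      return;
    }
    // Close the data stream first, as the completePendingCommand() javadoc
    // requires, so the call cannot block waiting for the transfer to finish.
    wrappedStream.close();
    boolean cmdCompleted = client.completePendingCommand();
    closed = true;
    if (!cmdCompleted) {
      throw new IOException("Could not complete transfer, reply code: "
          + client.getReplyCode());
    }
  }
}
{code}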

> FTPInputStream should close wrapped stream
> --
>
> Key: HADOOP-11136
> URL: https://issues.apache.org/jira/browse/HADOOP-11136
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Dzianis Sokal
>
> This is reproducible in PASSIVE FTP mode, which is not supported by 
> now(HADOOP-11135). However if we hack FTPFileSystem to enter into local 
> passive mode, it will hang on client.completePendingCommand() in 
> FTPInputStream line 114:
> {code}
> ...
> public synchronized void close() throws IOException {
> ...
> boolean cmdCompleted = client.completePendingCommand();
> ...
> {code}
> Going to completePendingCommand() docs I see that the stream should be closed 
> before calling it. So seems like stream should be closed
> {code}
> wrappedStream.close();
> {code}
> right before 
> {code}
> boolean cmdCompleted = client.completePendingCommand();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token

2016-07-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385195#comment-15385195
 ] 

Arun Suresh commented on HADOOP-13381:
--

bq. ..assuming we loosen the retry check of response message..
Agreed... I guess that should be fine.

With respect to the race condition, I am not really worried. The worst that can 
happen, if we follow the flow I specified in my earlier comment (when multiple 
threads call the same KMSClientProvider at a time when the DT has expired), is 
that simultaneous refreshes of the UGI's credentials will happen, but I don't 
think there would be any UGI state inconsistency. Besides, 
{{UGI::addCredential}} is synchronized on the subject.

> KMS clients running in the same JVM should use updated KMS Delegation Token
> ---
>
> Key: HADOOP-13381
> URL: https://issues.apache.org/jira/browse/HADOOP-13381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13381.01.patch
>
>
> When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation 
> failure after the very first KMS token has expired. The MR job itself runs 
> fine, though.
> When this happens, the YARN NodeManager's log will show 
> {{AuthenticationException}} with {{token is expired}} / {{token can't be 
> found in cache}}, depending on whether the expired token has been removed in 
> the background or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token

2016-07-19 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385165#comment-15385165
 ] 

Xiao Chen commented on HADOOP-13381:


Thanks [~asuresh] for the quick response! The flow you mentioned would work, 
assuming we loosen the retry check of the response message ([these 
lines|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L584-L587])
 and add the remove-token method to UGI.

On the multi-thread side, did I miss anything? If many threads running in 
{{LogAggregationService}} try to do log aggregation and end up with the same 
cached KMSCP, would this cause a race? IMO this problem existed before this 
patch, but maybe I missed something... I don't think the cached {{authToken}} 
works under this scenario.


> KMS clients running in the same JVM should use updated KMS Delegation Token
> ---
>
> Key: HADOOP-13381
> URL: https://issues.apache.org/jira/browse/HADOOP-13381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13381.01.patch
>
>
> When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation 
> failure after the very first KMS token has expired. The MR job itself runs 
> fine, though.
> When this happens, the YARN NodeManager's log will show 
> {{AuthenticationException}} with {{token is expired}} / {{token can't be 
> found in cache}}, depending on whether the expired token has been removed in 
> the background or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-07-19 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385155#comment-15385155
 ] 

shimingfei commented on HADOOP-12756:
-

Good catch! It is a potential problem.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but currently it is not 
> easy to access data stored on OSS from a user's Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user's application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Steven K. Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385122#comment-15385122
 ] 

Steven K. Wong edited comment on HADOOP-13389 at 7/20/16 1:10 AM:
--

I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105 (as explained above). Hence, at a minimum I think 
InstanceProfileCredentialsProvider should be removed from the credentials chain 
in the test case. But that doesn't fix the test case failure. Perhaps testSTS 
should explicitly check for the absence of credentials in the config and skip 
itself (like what line 83 does)?


was (Author: slider):
I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105 (as explained above). Hence, at a minimum I think 
InstanceProfileCredentialsProvider should be removed from the credentials chain 
in the test case. But that doesn't fix the test case failure. Perhaps testSTS 
should explicitly check for the absence of credentials in the config and skip 
itself?

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Steven K. Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385122#comment-15385122
 ] 

Steven K. Wong edited comment on HADOOP-13389 at 7/20/16 1:09 AM:
--

I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105 (as explained above). Hence, at a minimum I think 
InstanceProfileCredentialsProvider should be removed from the credentials chain 
in the test case. But that doesn't fix the test case failure. Perhaps testSTS 
should explicitly check for the absence of credentials in the config and skip 
itself?


was (Author: slider):
I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105 (as explained above). Hence, at a minimum I think 
InstanceProfileCredentialsProvider should be removed from the credentials chain 
in the test case. But that doesn't fix the test case failure.

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-07-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385146#comment-15385146
 ] 

John Zhuge commented on HADOOP-13240:
-

Timed out in unit test org.apache.hadoop.http.TestHttpServerLifecycle. 
Unrelated.

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, 
> HADOOP-13240.003.patch
>
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with the following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice that in HADOOP-10277, 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
>  was changed. Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token

2016-07-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385145#comment-15385145
 ] 

Arun Suresh commented on HADOOP-13381:
--

So... I was thinking we should do the following:
# Ensure the NM creates the DFSClient on boot up, so that the actualUgi is the 
yarn user.
# Add a method in {{UserGroupInformation}} to remove credentials, so that you 
can remove the KMS-DT from the actualUgi (see the sketch after this list).
# After the token has expired and we get an authorization exception, in 
addition to flushing the authToken (line 592 in KMSClientProvider), we also 
call the new method from the previous point to remove the KMS-DT.
# Then, we let the retry happen, at which point it will get a new delegation 
token.
Makes sense?
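To sketch what the removal in steps 2 and 3 might look like (purely 
hypothetical; {{UserGroupInformation}} has no such removal method today, and 
all names below are made up):

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

class KmsDtRemovalSketch {
  private static final Text KMS_DT_KIND = new Text("kms-dt");

  /** Copy every token except the kms-dt. */
  static Credentials credentialsWithoutKmsDt(UserGroupInformation ugi) {
    Credentials pruned = new Credentials();
    for (Token<? extends TokenIdentifier> t
        : ugi.getCredentials().getAllTokens()) {
      if (!KMS_DT_KIND.equals(t.getKind())) {
        pruned.addToken(t.getService(), t);
      }
    }
    // ugi.getCredentials() returns a copy, so swapping 'pruned' back into
    // the UGI's subject is exactly the mutator step 2 proposes adding.
    return pruned;
  }
}
{code}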
 

> KMS clients running in the same JVM should use updated KMS Delegation Token
> ---
>
> Key: HADOOP-13381
> URL: https://issues.apache.org/jira/browse/HADOOP-13381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13381.01.patch
>
>
> When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation 
> failure after the very first KMS token has expired. The MR job itself runs 
> fine, though.
> When this happens, the YARN NodeManager's log will show 
> {{AuthenticationException}} with {{token is expired}} / {{token can't be 
> found in cache}}, depending on whether the expired token has been removed in 
> the background or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13383) Update release notes for 3.0.0-alpha1

2016-07-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385144#comment-15385144
 ] 

Sangjin Lee commented on HADOOP-13383:
--

One minor point: we have been consistent in calling the YARN-2928 feature 
{{YARN Timeline Service v.2}}. I see the title here as {{YARN Application 
Timeline Server v.2}}. We should change it to {{YARN Timeline Service v.2}}. 
Thanks!

> Update release notes for 3.0.0-alpha1
> -
>
> Key: HADOOP-13383
> URL: https://issues.apache.org/jira/browse/HADOOP-13383
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-13383.001.patch, HADOOP-13383.002.patch, 
> HADOOP-13383.003.patch
>
>
> Per the release instructions (https://wiki.apache.org/hadoop/HowToRelease), 
> we need to update hadoop-project/src/site/markdown/index.md.vm to reflect the 
> right versions, new features and big improvements.
> I can put together some notes for HADOOP and HDFS, depending on others for 
> YARN and MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385140#comment-15385140
 ] 

Hadoop QA commented on HADOOP-13240:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 34 unchanged - 11 fixed = 34 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 26s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818914/HADOOP-13240.003.patch
 |
| JIRA Issue | HADOOP-13240 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 87f3faeb16f0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dc065dd |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10028/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10028/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10028/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects 

[jira] [Comment Edited] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Steven K. Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385122#comment-15385122
 ] 

Steven K. Wong edited comment on HADOOP-13389 at 7/20/16 12:46 AM:
---

I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105 (as explained above). Hence, at a minimum I think 
InstanceProfileCredentialsProvider should be removed from the credentials chain 
in the test case. But that doesn't fix the test case failure.


was (Author: slider):
I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105. Hence, at a minimum I think InstanceProfileCredentialsProvider should 
be removed from the credentials chain in the test case. But that doesn't fix 
the test case failure.

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token

2016-07-19 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385130#comment-15385130
 ] 

Xiao Chen commented on HADOOP-13381:


bq. the race that multiple threads calling the same cached KMSCP
The problem becomes tougher when considering multi-threading. The cached 
{{actualUgi}} is there to handle proxy users, per HADOOP-10698 and HADOOP-11176, 
so we need it as the initial UGI.

For the DT case, we want to pass in the latest credentials. However, the 
DT-fetching always happens inside {{actualUgi.doAs}}, which is cached and not 
updated. I can see a race where more than one of the threads from comment #1 
reaches the same KMSCP, and what we do here would be troublesome.

I don't see a decent solution so far; this needs more thought... Feel free to 
speak up if you have any suggestions.

> KMS clients running in the same JVM should use updated KMS Delegation Token
> ---
>
> Key: HADOOP-13381
> URL: https://issues.apache.org/jira/browse/HADOOP-13381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13381.01.patch
>
>
> When {{/tmp}} is set up as an EZ, one may experience YARN log aggregation 
> failure after the very first KMS token has expired. The MR job itself runs 
> fine, though.
> When this happens, the YARN NodeManager's log will show 
> {{AuthenticationException}} with {{token is expired}} / {{token can't be 
> found in cache}}, depending on whether the expired token has been removed in 
> the background or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Steven K. Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385122#comment-15385122
 ] 

Steven K. Wong commented on HADOOP-13389:
-

I have auth-keys.xml (that only configures test.fs.s3a.name), because I intend 
to run the S3A tests. All S3A tests -- except 
TestS3ATemporaryCredentials.testSTS -- succeed for me.

The InstanceProfileCredentialsProvider object on line 93 is unhelpful because 
its temporary credential is not compatible with the getSessionToken call on 
line 105. Hence, at a minimum I think InstanceProfileCredentialsProvider should 
be removed from the credentials chain in the test case. But that doesn't fix 
the test case failure.

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12991) Conflicting default ports in DelegateToFileSystem

2016-07-19 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385111#comment-15385111
 ] 

Kai Sasaki commented on HADOOP-12991:
-

[~ajisakaa] Thank you so much!

> Conflicting default ports in DelegateToFileSystem
> -
>
> Key: HADOOP-12991
> URL: https://issues.apache.org/jira/browse/HADOOP-12991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Kevin Hogeland
>Assignee: Kai Sasaki
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-12991.01.patch, HADOOP-12991.02.patch, 
> HADOOP-12991.03.patch
>
>
> HADOOP-12304 introduced logic to ensure that the {{DelegateToFileSystem}} 
> constructor sets the default port to -1:
> {code:title=DelegateToFileSystem.java}
>   protected DelegateToFileSystem(URI theUri, FileSystem theFsImpl,
>   Configuration conf, String supportedScheme, boolean authorityRequired)
>   throws IOException, URISyntaxException {
> super(theUri, supportedScheme, authorityRequired, 
> getDefaultPortIfDefined(theFsImpl));
> fsImpl = theFsImpl;
> fsImpl.initialize(theUri, conf);
> fsImpl.statistics = getStatistics();
>   }
>   private static int getDefaultPortIfDefined(FileSystem theFsImpl) {
> int defaultPort = theFsImpl.getDefaultPort();
> return defaultPort != 0 ? defaultPort : -1;
>   }
> {code}
> However, {{DelegateToFileSystem#getUriDefaultPort}} returns 0:
> {code:title=DelegateToFileSystem.java}
>   public int getUriDefaultPort() {
> return 0;
>   }
> {code}
> This breaks {{AbstractFileSystem#checkPath}}:
> {code:title=AbstractFileSystem.java}
> int thisPort = this.getUri().getPort(); // If using DelegateToFileSystem, 
> this is -1
> int thatPort = uri.getPort(); // This is -1 by default in java.net.URI
> if (thatPort == -1) {
>   thatPort = this.getUriDefaultPort();  // Sets thatPort to 0
> }
> if (thisPort != thatPort) {
>   throw new InvalidPathException("Wrong FS: " + path + ", expected: "
>   + this.getUri());
> }
> {code}
> Which breaks any subclasses of {{DelegateToFileSystem}} that don't specify a 
> port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385094#comment-15385094
 ] 

Chris Nauroth commented on HADOOP-13389:


Hello [~slider].  All of the hadoop-aws tests should be getting skipped if 
there are no AWS credentials configured.  That's accomplished via this code in 
pom.xml:

{code}
<profile>
  <id>tests-off</id>
  <activation>
    <file>
      <missing>src/test/resources/auth-keys.xml</missing>
    </file>
  </activation>
  <properties>
    <maven.test.skip>true</maven.test.skip>
  </properties>
</profile>
{code}

Is there something unique in your environment that is causing this test to run 
even when credentials are not configured?

> TestS3ATemporaryCredentials.testSTS error
> -
>
> Key: HADOOP-13389
> URL: https://issues.apache.org/jira/browse/HADOOP-13389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Steven K. Wong
>
> {{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
> AccessDenied when run without any AWS credentials (access key and secret key) 
> in the config.
> {noformat}
> com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with 
> session credentials (Service: AWSSecurityTokenService; Status Code: 403; 
> Error Code: AccessDenied; Request ID: X)
>   at 
> com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
>   at 
> com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
>   at 
> org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
> {noformat}
> It fails because the InstanceProfileCredentialsProvider in the credentials 
> chain (on line 91) is used, but an instance profile always provides a 
> temporary credential and GetSessionToken requires a long-term (not temporary) 
> credential.
> Suggestion on how to fix this test case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13207) Specify FileSystem listStatus, listFiles and RemoteIterator

2016-07-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385088#comment-15385088
 ] 

Chris Nauroth commented on HADOOP-13207:


[~ste...@apache.org], thank you for the patch.

The tests look great.  I have two minor comments.

# {{AbstractFSContractTestBase#methodName}} is unused.  Was this meant to be 
used like in {{AbstractS3ATestBase}} to make the JUnit thread's name 
descriptive to the specific test case?
# In {{ContractTestUtils#pathsToString}}, I recommend using the 
platform-specific line ending via something like {{String.format("%n")}} 
instead of {{\n}}.

On the documentation, I have a set of nit-picky proofreading comments.

{code}
### `FileStatus[] listStatus(Path p, PathFilter filter)`
{code}

In other parts of the patch, it looks like you are aiming for consistent use of 
"path" instead of "p".  Do you want to switch this method signature to use 
"path" too?

{code}
then the that file's `FileStatus` entry is returned in a single-element array.
{code}

Something looks off here.  Maybe remove "the"?

{code}
If the path refers to a directory, the call returns a list of all its immediate 
children
path which are accepted by the filter —and does not include the directory
itself.
{code}

Please change "path" to plural "paths".

{code}
* After an entry at path `P` is created, and before any other
 changes are made to the FileSystem, `listStatus(P)` MUST
 find the file and return its status.

* After an entry at path `P` is deleted, and before any other
 changes are made to the filesystem, `listStatus(P)` MUST
 raise a `FileNotFoundException`.
{code}

The various "before any other changes are made" clauses use a mix of 
"FileSystem" and "filesystem".  Would you please make that consistent?

{code}
There no guarantees as to whether paths are listed in a specific order, only
{code}

I think this was meant to be "There are no guarantees...".

{code}
`listLocatedStatus(Path path):`. Calls to it may deletegated through
{code}

Is this meant to be "...may be delegated..."?

{code}
There is no requirement more the iterator to provide a consistent view
{code}

I'm unclear about the word "more" here.

{code}
if isFile(FS, path) and filter.accept(path) :
  resultset =  [ getLocatedFileStatus(FS, path) ]

elif isFile(FS, path) and not filter.accept(P) :
{code}

Please switch to "path" in the second filter.accept call for consistency.

{code}
results, *even if no calls to `hasNext()` are made.
{code}

Is there a missing closing '*'?

{code}
on (possibly remote) filesystems. These filesystems are invaariably accessed
{code}

s/invaariably/invariably


> Specify FileSystem listStatus, listFiles and RemoteIterator
> ---
>
> Key: HADOOP-13207
> URL: https://issues.apache.org/jira/browse/HADOOP-13207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13207-branch-2-001.patch, 
> HADOOP-13207-branch-2-002.patch, HADOOP-13207-branch-2-003.patch, 
> HADOOP-13207-branch-2-004.patch, HADOOP-13207-branch-2-005.patch, 
> HADOOP-13207-branch-2-006.patch, HADOOP-13207-branch-2-007.patch, 
> HADOOP-13207-branch-2-008.patch, HADOOP-13207-branch-2-009.patch, 
> HADOOP-13207-branch-2-010.patch
>
>
> The many `listStatus`, `listLocatedStatus` and `listFiles` operations have 
> not been completely covered in the FS specification. There's lots of implicit 
> use of {{listStatus()}} path, but no coverage or tests of the others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13383) Update release notes for 3.0.0-alpha1

2016-07-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13383:
-
Attachment: HADOOP-13383.003.patch

Thanks for the review Akira, added JIRA links.

> Update release notes for 3.0.0-alpha1
> -
>
> Key: HADOOP-13383
> URL: https://issues.apache.org/jira/browse/HADOOP-13383
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-13383.001.patch, HADOOP-13383.002.patch, 
> HADOOP-13383.003.patch
>
>
> Per the release instructions (https://wiki.apache.org/hadoop/HowToRelease), 
> we need to update hadoop-project/src/site/markdown/index.md.vm to reflect the 
> right versions, new features and big improvements.
> I can put together some notes for HADOOP and HDFS, depending on others for 
> YARN and MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client

2016-07-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385080#comment-15385080
 ] 

Konstantin Shvachko commented on HADOOP-13206:
--

This looks reasonable. Minor suggestion to check {{if (serviceMatch)}} first 
and return. Then you don't need to check {{if (!serviceMatch)}}. Like this:
{code}
boolean serviceMatch = service.equals(token.getService());
if (serviceMatch) {
  return (Token) token;
}
try {
  serviceMatch =
      NetUtils.createSocketAddr(token.getService().toString())
          .equals(NetUtils.createSocketAddr(service.toString()));
} catch (IllegalArgumentException e) {

{code}
Also, regarding the Jenkins build: I did not find any whitespace violations 
(maybe you will see some), but checkstyle is probably complaining about one 
long line.

> Delegation token cannot be fetched and used by different versions of client
> ---
>
> Key: HADOOP-13206
> URL: https://issues.apache.org/jira/browse/HADOOP-13206
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0, 2.6.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13206.00.patch, HADOOP-13206.01.patch, 
> HADOOP-13206.02.patch
>
>
> We have observed that an HDFS delegation token fetched by a 2.3.0 client 
> cannot be used by a 2.6.1 client, and vice versa. Through some debugging I 
> found that it's a mismatch between the token's {{service}} and the 
> {{service}} of the filesystem (e.g. {{webhdfs://host.something.com:50070/}}). 
> One would be in numerical IP address format and the other in non-numerical 
> hostname format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding

2016-07-19 Thread Thomas Poepping (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385074#comment-15385074
 ] 

Thomas Poepping commented on HADOOP-13344:
--

Sorry for the late response, guys; some other stuff came up.

I've experimented a bit with the patch for trunk, and it looks like I should be 
able to change the directory structure in 
hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml. I've added a 
new directory, slf4j-lib, for each component under 
/share/hadoop/${hadoop.component}/lib, and verified that the SLF4J lib is not 
added for common.

Does this seem like it should work to you guys? The change "for each classpath" 
only affects common and the webserver modules. I wanted to run the 
implementation by you guys -- it doesn't seem as complicated as you made it 
sound, so I'm sure I'm missing something.

> Add option to exclude Hadoop's SLF4J binding
> 
>
> Key: HADOOP-13344
> URL: https://issues.apache.org/jira/browse/HADOOP-13344
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin, scripts
>Affects Versions: 2.8.0, 2.7.2
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>  Labels: patch
> Attachments: HADOOP-13344.patch
>
>
> If another application that uses the Hadoop classpath brings in its own SLF4J 
> binding for logging, and that jar is not the exact same as the one brought in 
> by Hadoop, then there will be a conflict between logging jars between the two 
> classpaths. This patch introduces an optional setting to remove Hadoop's 
> SLF4J binding from the classpath, to get rid of this problem.
> This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure 
> has been changed in 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13389) TestS3ATemporaryCredentials.testSTS error

2016-07-19 Thread Steven K. Wong (JIRA)
Steven K. Wong created HADOOP-13389:
---

 Summary: TestS3ATemporaryCredentials.testSTS error
 Key: HADOOP-13389
 URL: https://issues.apache.org/jira/browse/HADOOP-13389
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Steven K. Wong


{{org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS}} throws a 403 
AccessDenied when run without any AWS credentials (access key and secret key) 
in the config.

{noformat}
com.amazonaws.AmazonServiceException: Cannot call GetSessionToken with session 
credentials (Service: AWSSecurityTokenService; Status Code: 403; Error Code: 
AccessDenied; Request ID: X)
at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1106)
at 
com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.getSessionToken(AWSSecurityTokenServiceClient.java:355)
at 
org.apache.hadoop.fs.s3a.TestS3ATemporaryCredentials.testSTS(TestS3ATemporaryCredentials.java:105)
{noformat}

It fails because the InstanceProfileCredentialsProvider in the credentials 
chain (on line 91) is used, but an instance profile always provides a temporary 
credential and GetSessionToken requires a long-term (not temporary) credential.

Suggestion on how to fix this test case?
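
One possible direction (an untested sketch, not a committed fix): build the STS 
client directly from the configured long-term keys and skip the test when none 
are present, so InstanceProfileCredentialsProvider is never consulted. Assumes 
the test's {{conf}} object:

{code}
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import org.apache.hadoop.fs.s3a.Constants;
import org.junit.Assume;

String accessKey = conf.getTrimmed(Constants.ACCESS_KEY, "");
String secretKey = conf.getTrimmed(Constants.SECRET_KEY, "");
// GetSessionToken needs long-term keys, so skip rather than fail when the
// config provides none (e.g. when only an instance profile is available).
Assume.assumeTrue("No long-term AWS credentials in the configuration",
    !accessKey.isEmpty() && !secretKey.isEmpty());
AWSSecurityTokenServiceClient stsClient = new AWSSecurityTokenServiceClient(
    new BasicAWSCredentials(accessKey, secretKey));
{code}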




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client

2016-07-19 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385049#comment-15385049
 ] 

Zhe Zhang commented on HADOOP-13206:


I did more debugging and found the reason why different versions of the client 
return different formats of {{service}}.

In *trunk*, {{WebHdfsFileSystem#getDelegationToken}} sets {{service}} as:
{code}
if (token != null) {
  token.setService(tokenServiceName);
{code}

{{tokenServiceName}} is set as follows:
{code}
this.tokenServiceName = isLogicalUri
    ? HAUtilClient.buildTokenServiceForLogicalUri(uri, getScheme())
    : SecurityUtil.buildTokenService(getCanonicalUri());
{code}

This essentially will create a logical URI like {{webhdfs://myhost}}.

In *branch-2.3*, the logic is as below, which results in numerical IPs.
{code}
SecurityUtil.setTokenService(token, getCurrentNNAddr());
...
this.nnAddrs = DFSUtil.resolveWebHdfsUri(this.uri, conf);
...
  /**
   * Resolve an HDFS URL into real InetSocketAddresses. It works like a DNS
   * resolver when the URL points to a non-HA cluster. When the URL points to
   * an HA cluster, the resolver further resolves the logical name (i.e., the
   * authority in the URL) into real namenode addresses.
   */
  public static InetSocketAddress[] resolveWebHdfsUri(URI uri,
      Configuration conf) throws IOException {
    int defaultPort;
    String scheme = uri.getScheme();
    if (WebHdfsFileSystem.SCHEME.equals(scheme)) {
      defaultPort = DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT;
    } else if (SWebHdfsFileSystem.SCHEME.equals(scheme)) {
      defaultPort = DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_DEFAULT;
    } else {
      throw new IllegalArgumentException("Unsupported scheme: " + scheme);
    }

    ArrayList<InetSocketAddress> ret = new ArrayList<InetSocketAddress>();

    if (!HAUtil.isLogicalUri(conf, uri)) {
      InetSocketAddress addr = NetUtils.createSocketAddr(uri.getAuthority(),
          defaultPort);
      ret.add(addr);
    } else {
      Map<String, Map<String, InetSocketAddress>> addresses =
          DFSUtil.getHaNnWebHdfsAddresses(conf, scheme);

      for (Map<String, InetSocketAddress> addrs : addresses.values()) {
        for (InetSocketAddress addr : addrs.values()) {
          ret.add(addr);
        }
      }
    }

    InetSocketAddress[] r = new InetSocketAddress[ret.size()];
    return ret.toArray(r);
  }
{code}

It's hard to add a unit test because we can't emulate a version 2.3 client in 
trunk code, but I hope the above explanation is clear enough.
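
To make the mismatch concrete, here is a tiny illustration (hypothetical host 
name and address; assumes {{host.something.com}} resolves to {{10.0.0.1}}):

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.net.NetUtils;

Text tokenService = new Text("10.0.0.1:50070");         // numerical IP form
Text fsService = new Text("host.something.com:50070");  // hostname form

boolean plainMatch = fsService.equals(tokenService);    // false: strings differ
boolean resolvedMatch =
    NetUtils.createSocketAddr(tokenService.toString())
        .equals(NetUtils.createSocketAddr(fsService.toString()));
// true: both sides resolve to the same address and port
{code}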

> Delegation token cannot be fetched and used by different versions of client
> ---
>
> Key: HADOOP-13206
> URL: https://issues.apache.org/jira/browse/HADOOP-13206
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.3.0, 2.6.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-13206.00.patch, HADOOP-13206.01.patch, 
> HADOOP-13206.02.patch
>
>
> We have observed that an HDFS delegation token fetched by a 2.3.0 client 
> cannot be used by a 2.6.1 client, and vice versa. Through some debugging I 
> found that it's a mismatch between the token's {{service}} and the 
> {{service}} of the filesystem (e.g. {{webhdfs://host.something.com:50070/}}). 
> One would be in numerical IP address format and the other in non-numerical 
> hostname format.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12991) Conflicting default ports in DelegateToFileSystem

2016-07-19 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385041#comment-15385041
 ] 

Akira Ajisaka edited comment on HADOOP-12991 at 7/19/16 11:09 PM:
--

Committed this to trunk, branch-2, branch-2.8, and branch-2.7. Thanks 
[~lewuathe] for the contribution and thanks [~hogeland] for reporting this!


was (Author: ajisakaa):
Committed this to trunk, branch-2, branch-2.8, and branch-2.7. Thanks 
[~lewuathe] for the contribution!

> Conflicting default ports in DelegateToFileSystem
> -
>
> Key: HADOOP-12991
> URL: https://issues.apache.org/jira/browse/HADOOP-12991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Kevin Hogeland
>Assignee: Kai Sasaki
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-12991.01.patch, HADOOP-12991.02.patch, 
> HADOOP-12991.03.patch
>
>
> HADOOP-12304 introduced logic to ensure that the {{DelegateToFileSystem}} 
> constructor sets the default port to -1:
> {code:title=DelegateToFileSystem.java}
>   protected DelegateToFileSystem(URI theUri, FileSystem theFsImpl,
>   Configuration conf, String supportedScheme, boolean authorityRequired)
>   throws IOException, URISyntaxException {
> super(theUri, supportedScheme, authorityRequired, 
> getDefaultPortIfDefined(theFsImpl));
> fsImpl = theFsImpl;
> fsImpl.initialize(theUri, conf);
> fsImpl.statistics = getStatistics();
>   }
>   private static int getDefaultPortIfDefined(FileSystem theFsImpl) {
> int defaultPort = theFsImpl.getDefaultPort();
> return defaultPort != 0 ? defaultPort : -1;
>   }
> {code}
> However, {{DelegateToFileSystem#getUriDefaultPort}} returns 0:
> {code:title=DelegateToFileSystem.java}
>   public int getUriDefaultPort() {
> return 0;
>   }
> {code}
> This breaks {{AbstractFileSystem#checkPath}}:
> {code:title=AbstractFileSystem.java}
> int thisPort = this.getUri().getPort(); // If using DelegateToFileSystem, 
> this is -1
> int thatPort = uri.getPort(); // This is -1 by default in java.net.URI
> if (thatPort == -1) {
>   thatPort = this.getUriDefaultPort();  // Sets thatPort to 0
> }
> if (thisPort != thatPort) {
>   throw new InvalidPathException("Wrong FS: " + path + ", expected: "
>   + this.getUri());
> }
> {code}
> Which breaks any subclasses of {{DelegateToFileSystem}} that don't specify a 
> port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12991) Conflicting default ports in DelegateToFileSystem

2016-07-19 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-12991:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, branch-2.8, and branch-2.7. Thanks 
[~lewuathe] for the contribution!

> Conflicting default ports in DelegateToFileSystem
> -
>
> Key: HADOOP-12991
> URL: https://issues.apache.org/jira/browse/HADOOP-12991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Kevin Hogeland
>Assignee: Kai Sasaki
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-12991.01.patch, HADOOP-12991.02.patch, 
> HADOOP-12991.03.patch
>
>
> HADOOP-12304 introduced logic to ensure that the {{DelegateToFileSystem}} 
> constructor sets the default port to -1:
> {code:title=DelegateToFileSystem.java}
>   protected DelegateToFileSystem(URI theUri, FileSystem theFsImpl,
>   Configuration conf, String supportedScheme, boolean authorityRequired)
>   throws IOException, URISyntaxException {
> super(theUri, supportedScheme, authorityRequired, 
> getDefaultPortIfDefined(theFsImpl));
> fsImpl = theFsImpl;
> fsImpl.initialize(theUri, conf);
> fsImpl.statistics = getStatistics();
>   }
>   private static int getDefaultPortIfDefined(FileSystem theFsImpl) {
> int defaultPort = theFsImpl.getDefaultPort();
> return defaultPort != 0 ? defaultPort : -1;
>   }
> {code}
> However, {{DelegateToFileSystem#getUriDefaultPort}} returns 0:
> {code:title=DelegateToFileSystem.java}
>   public int getUriDefaultPort() {
> return 0;
>   }
> {code}
> This breaks {{AbstractFileSystem#checkPath}}:
> {code:title=AbstractFileSystem.java}
> int thisPort = this.getUri().getPort(); // If using DelegateToFileSystem, 
> this is -1
> int thatPort = uri.getPort(); // This is -1 by default in java.net.URI
> if (thatPort == -1) {
>   thatPort = this.getUriDefaultPort();  // Sets thatPort to 0
> }
> if (thisPort != thatPort) {
>   throw new InvalidPathException("Wrong FS: " + path + ", expected: "
>   + this.getUri());
> }
> {code}
> Which breaks any subclasses of {{DelegateToFileSystem}} that don't specify a 
> port.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects

2016-07-19 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384907#comment-15384907
 ] 

Matt Foley edited comment on HADOOP-13382 at 7/19/16 10:15 PM:
---

Response to the Hadoop QA robot complaint about no new unit tests: This patch 
seeks to produce no functional change in the behavior of the code, therefore 
there are no new unit tests needed.  There are no new negative tests needed 
either, because if the patch breaks anything, it will be a gross breakage of 
hadoop-openstack.  Since all existing unit tests continue to work correctly, 
that's sufficient.


was (Author: mattf):
Response to the Hadoop QA complaint: This patch seeks to produce no functional 
change in the behavior of the code, therefore there are no new unit tests 
needed.  There are no new negative tests needed either, because if the patch 
breaks anything, it will be a gross breakage of hadoop-openstack.  Since all 
existing unit tests continue to work correctly, that's sufficient.

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and 
> sub-projects
> -
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Matt Foley
>Assignee: Matt Foley
> Attachments: HADOOP-13382-branch-2.000.patch, 
> HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
>
> In branch-2.8 and later, the patches for various child and related bugs 
> listed in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, 
> HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of 
> "commons-httpclient" from Hadoop and its sub-projects (except for 
> hadoop-tools/hadoop-openstack; see HADOOP-11614).
> However, after incorporating these patches, "commons-httpclient" is still 
> listed as a dependency in these POM files:
> * hadoop-project/pom.xml
> * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
> We wish to remove these, but since commons-httpclient is still used in many 
> files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to
> * hadoop-tools/hadoop-openstack/pom.xml
> (We'll add a note to HADOOP-11614 to undo this when commons-httpclient is 
> removed from hadoop-openstack.)
> In 2.8, this was mostly done by HADOOP-12552, but the version info formerly 
> inherited from hadoop-project/pom.xml also needs to be added, so that is in 
> the branch-2.8 version of the patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384934#comment-15384934
 ] 

Hadoop QA commented on HADOOP-13212:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
30s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
39s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects

2016-07-19 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384907#comment-15384907
 ] 

Matt Foley commented on HADOOP-13382:
-

Response to the Hadoop QA complaint: This patch seeks to produce no functional 
change in the behavior of the code, therefore there are no new unit tests 
needed.  There are no new negative tests needed either, because if the patch 
breaks anything, it will be a gross breakage of hadoop-openstack.  Since all 
existing unit tests continue to work correctly, that's sufficient.

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and 
> sub-projects
> -
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Matt Foley
>Assignee: Matt Foley
> Attachments: HADOOP-13382-branch-2.000.patch, 
> HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
>
> In branch-2.8 and later, the patches for various child and related bugs 
> listed in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, 
> HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of 
> "commons-httpclient" from Hadoop and its sub-projects (except for 
> hadoop-tools/hadoop-openstack; see HADOOP-11614).
> However, after incorporating these patches, "commons-httpclient" is still 
> listed as a dependency in these POM files:
> * hadoop-project/pom.xml
> * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
> We wish to remove these, but since commons-httpclient is still used in many 
> files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to
> * hadoop-tools/hadoop-openstack/pom.xml
> (We'll add a note to HADOOP-11614 to undo this when commons-httpclient is 
> removed from hadoop-openstack.)
> In 2.8, this was mostly done by HADOOP-12552, but the version info formerly 
> inherited from hadoop-project/pom.xml also needs to be added, so that is in 
> the branch-2.8 version of the patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-07-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13240:
---
Hadoop Flags: Reviewed

+1, pending another pre-commit run.

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, 
> HADOOP-13240.003.patch
>
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice that in HADOOP-10277 the code of 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
> changed. Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13383) Update release notes for 3.0.0-alpha1

2016-07-19 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384893#comment-15384893
 ] 

Akira Ajisaka commented on HADOOP-13383:


LGTM. Would you add links for all the jiras? There's a link to HADOOP-9902, 
but not for the other jiras.

> Update release notes for 3.0.0-alpha1
> -
>
> Key: HADOOP-13383
> URL: https://issues.apache.org/jira/browse/HADOOP-13383
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-13383.001.patch, HADOOP-13383.002.patch
>
>
> Per the release instructions (https://wiki.apache.org/hadoop/HowToRelease), 
> we need to update hadoop-project/src/site/markdown/index.md.vm to reflect the 
> right versions, new features and big improvements.
> I can put together some notes for HADOOP and HDFS, depending on others for 
> YARN and MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384851#comment-15384851
 ] 

Akira Ajisaka commented on HADOOP-12527:


bq. FWIW, I'd like a bump to 1.7.7 in Hadoop 2.8+ and 1.8.1 in Hadoop 3 alpha2.
+1 for the idea.

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384845#comment-15384845
 ] 

Hadoop QA commented on HADOOP-13387:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
2s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 48 unchanged - 0 fixed = 49 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818855/HADOOP-13887-branch-2-001.patch
 |
| JIRA Issue | HADOOP-13387 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b85e5ba93cc8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | 

[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-07-19 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13240:

Attachment: HADOOP-13240.003.patch

Patch 003:
* Use {{aclEntries.isEmpty()}} (see the sketch below)
* Pass both TestAclCommands and TestAclCLI unit tests
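
The validation roughly becomes the following (illustrative sketch only, 
assuming the command's {{aclSpec}} string; exact message and placement per the 
patch):

{code}
import java.util.List;
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.fs.permission.AclEntry;

// Reject a -setfacl modification whose ACL spec parses to no entries.
List<AclEntry> aclEntries = AclEntry.parseAclSpec(aclSpec, true);
if (aclEntries.isEmpty()) {
  throw new HadoopIllegalArgumentException("Missing entries in ACL spec");
}
{code}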

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13240.001.patch, HADOOP-13240.002.patch, 
> HADOOP-13240.003.patch
>
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice that in HADOOP-10277 the code of 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
> changed. Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13179) GenericOptionsParser is not thread-safe because commons-cli OptionBuilder is not thread-safe

2016-07-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384792#comment-15384792
 ] 

Chris Nauroth commented on HADOOP-13179:


bq. Looking at the code, I don't think this actually fixes the problem of 
concurrent access to {{OptionsBuilder}}...there are lots of uses of the class 
in the Hadoop codebase, and they are all synchronized off different things.

That's correct.  The scope of this patch was limited to thread safety of 
{{GenericOptionsParser}}.
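
For reference, the shape of the change is roughly the following (a sketch, not 
the exact committed code; the {{OptionBuilder}} chain follows the commons-cli 
1.x static-access idiom already used in the class):

{code}
import org.apache.commons.cli.Option;
import org.apache.commons.cli.OptionBuilder;
import org.apache.commons.cli.Options;

// OptionBuilder holds mutable static state, so all access to it from
// GenericOptionsParser is funneled through one class-level lock.
@SuppressWarnings("static-access")
private static synchronized Options buildGeneralOptions(Options opts) {
  Option fs = OptionBuilder.withArgName("local|namenode:port")
      .hasArg()
      .withDescription("specify a namenode")
      .create("fs");
  opts.addOption(fs);
  // ... remaining general options elided ...
  return opts;
}
{code}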

> GenericOptionsParser is not thread-safe because commons-cli OptionBuilder is 
> not thread-safe
> 
>
> Key: HADOOP-13179
> URL: https://issues.apache.org/jira/browse/HADOOP-13179
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: hongbin ma
>Assignee: hongbin ma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13179-master.patch, HADOOP-13179.001.patch
>
>
> I'm running into issues similar to 
> http://stackoverflow.com/questions/22462665/is-hadoops-toorunner-thread-safe; 
> the author's observation seems to make sense to me. However, when I checked 
> the Hadoop GitHub trunk, I found the issue is still not fixed.
> Chris Nauroth further investigated this issue, here's his quote: 
> {quote}
> The root cause is that commons-cli OptionBuilder is not thread-safe.
> https://commons.apache.org/proper/commons-cli/apidocs/org/apache/commons/cl
> i/OptionBuilder.html
> According to this issue, commons-cli doesn't plan to change that and
> instead chose to document the lack of thread-safety.
> https://issues.apache.org/jira/browse/CLI-209
> I think we can solve this in Hadoop, probably with a one-line change to
> make GenericOptionsParser#buildGeneralOptions a synchronized method.
> {quote}
> I'll soon upload a patch for this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13139:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Branch-2: S3a to use thread pool that blocks clients
> 
>
> Key: HADOOP-13139
> URL: https://issues.apache.org/jira/browse/HADOOP-13139
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Pieter Reuse
>Assignee: Pieter Reuse
> Fix For: 2.8.0
>
> Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2-003.patch, 
> HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2-005.patch, 
> HADOOP-13139-branch-2-006.patch, HADOOP-13139-branch-2.001.patch, 
> HADOOP-13139-branch-2.002.patch
>
>
> HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will 
> attach a patch applicable to branch-2.
> It should be noted in CHANGES-2.8.0.txt that the config parameter 
> 'fs.s3a.threads.core' has been removed and the behavior of the 
> ThreadPool for s3a has been changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13381) KMS clients running in the same JVM should use updated KMS Delegation Token

2016-07-19 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384730#comment-15384730
 ] 

Xiao Chen commented on HADOOP-13381:


I had an offline discussion with [~asuresh], and here are the minutes:
- Arun brought up the point that there's {{authRetry}} in KMSCP, and when 
{{authToken}} is expired, a new {{DelegationTokenAuthenticatedURL.Token}} is 
created and the call is retried.
  This doesn't help in our case: [inside the 
call|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticatedURL.java#L290-L296],
 the UGI's credentials are used to get the kms-dt, which would be the same 
expired token.
- Regarding Yarn log aggregation, I explained that MR jobs will get tokens and 
run, and in the end the NM will use that job's tokens to do Yarn log aggregation 
as a final MR job. So this part should be done as the MR user (as opposed to the 
NM user, yarn), since this writes to the MR user's dir {{/tmp/logs/user/}}. cc 
[~rkanter] in case anything I said is not accurate.
- To minimize impact, we should only update {{kms-dt}} in the call.
- Arun has a general concern on updating the actualUgi's token, since normal 
use case is doAs / proxy user. This could be enhanced in another jira.


(My thought after the discussion): to counter the race where multiple threads 
call the same cached KMSCP, we should create a new UGI object and update the 
tokens.
Will update a patch with more details.
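
Roughly along these lines ({{newKmsToken}} is a placeholder for the freshly 
fetched kms-dt; sketch only, not the final patch):

{code}
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

// Copy the current credentials into a fresh UGI, so threads sharing the
// cached KMSClientProvider never observe a half-updated set of tokens.
UserGroupInformation current = UserGroupInformation.getCurrentUser();
Credentials creds = new Credentials(current.getCredentials());
creds.addToken(newKmsToken.getService(), newKmsToken);  // swap in the new kms-dt
UserGroupInformation refreshed =
    UserGroupInformation.createRemoteUser(current.getShortUserName());
refreshed.addCredentials(creds);
{code}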

> KMS clients running in the same JVM should use updated KMS Delegation Token
> ---
>
> Key: HADOOP-13381
> URL: https://issues.apache.org/jira/browse/HADOOP-13381
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13381.01.patch
>
>
> When {{/tmp}} is setup as an EZ, one may experience YARN log aggregation 
> failure after the very first KMS token is expired. The MR job itself runs 
> fine though.
> When this happens, YARN NodeManager's log will show 
> {{AuthenticationException}} with {{token is expired}} / {{token can't be 
> found in cache}}, depending on whether the expired token is removed by the 
> background or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-07-19 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13388:
--
Priority: Trivial  (was: Major)

> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
>
> I see several problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit 3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test 
> itself will pass (because of the return).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-07-19 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-13388:
--
Description: 
I see several problems with {{TestLocalFileSystemPermission}}:
* Many checkstyle warnings
* Relies on JUnit 3, so the Assume framework cannot be used for Windows checks.
* In the tests, in case of an exception we get an error message but the test 
itself will pass (because of the return); see the illustrative snippet below.
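
For example (illustrative only, not the actual test code; {{fs}}, {{path}} and 
{{newPermission}} stand for the test's fixtures):

{code}
// The anti-pattern: the catch block swallows the failure and returns, so
// JUnit records a pass even though nothing was verified.
try {
  fs.setPermission(path, newPermission);
} catch (Exception e) {
  System.out.println("Cannot change permission: " + e);
  return;  // the test method exits "green" here
}
assertEquals(newPermission, fs.getFileStatus(path).getPermission());
{code}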

  was:
FileSystemContractBaseTest#testMkdirsWithUmask is changing umask under the 
filesystem. RawLocalFileSystem reads the config on startup so it will not react 
if we change the umask.
It blocks [HADOOP-7363|https://issues.apache.org/jira/browse/HADOOP-7363] since 
the testMkdirsWithUmask test will never work with RawLocalFileSystem.


> Clean up TestLocalFileSystemPermission
> --
>
> Key: HADOOP-13388
> URL: https://issues.apache.org/jira/browse/HADOOP-13388
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 3.0.0-alpha2
>
>
> I see several problems with {{TestLocalFileSystemPermission}}:
> * Many checkstyle warnings
> * Relies on JUnit 3, so the Assume framework cannot be used for Windows checks.
> * In the tests, in case of an exception we get an error message but the test 
> itself will pass (because of the return).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13388) Clean up TestLocalFileSystemPermission

2016-07-19 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-13388:
-

 Summary: Clean up TestLocalFileSystemPermission
 Key: HADOOP-13388
 URL: https://issues.apache.org/jira/browse/HADOOP-13388
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Andras Bokor
Assignee: Andras Bokor
 Fix For: 3.0.0-alpha2


FileSystemContractBaseTest#testMkdirsWithUmask is changing umask under the 
filesystem. RawLocalFileSystem reads the config on startup so it will not react 
if we change the umask.
It blocks [HADOOP-7363|https://issues.apache.org/jira/browse/HADOOP-7363] since 
the testMkdirsWithUmask test will never work with RawLocalFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-07-19 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-7363:
-
Affects Version/s: (was: 0.23.0)
   3.0.0-alpha2
   Status: Patch Available  (was: Open)

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384646#comment-15384646
 ] 

Hadoop QA commented on HADOOP-13212:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
22s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
29s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-13383) Update release notes for 3.0.0-alpha1

2016-07-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384640#comment-15384640
 ] 

Andrew Wang commented on HADOOP-13383:
--

BTW I would appreciate a +1 :)

> Update release notes for 3.0.0-alpha1
> -
>
> Key: HADOOP-13383
> URL: https://issues.apache.org/jira/browse/HADOOP-13383
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-13383.001.patch, HADOOP-13383.002.patch
>
>
> Per the release instructions (https://wiki.apache.org/hadoop/HowToRelease), 
> we need to update hadoop-project/src/site/markdown/index.md.vm to reflect the 
> right versions, new features and big improvements.
> I can put together some notes for HADOOP and HDFS, depending on others for 
> YARN and MR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-07-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384636#comment-15384636
 ] 

Andrew Wang commented on HADOOP-11540:
--

I think ATM is planning to do another review pretty soon. I'm also fine with 
including this in alpha1; please just make sure to commit it to both branches 
when it goes in.

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v11.patch, HADOOP-11540-v12.patch, 
> HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, HADOOP-11540-v5.patch, 
> HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, HADOOP-11540-v8.patch, 
> HADOOP-11540-v9.patch, HADOOP-11540-with-11996-codes.patch, Native Erasure 
> Coder Performance - Intel ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13387:
---
Assignee: Steve Loughran

> users always get told off for using S3 —even when not using it.
> ---
>
> Key: HADOOP-13387
> URL: https://issues.apache.org/jira/browse/HADOOP-13387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13887-branch-2-001.patch
>
>
> The warning telling people not to use s3 appears during filesystem 
> initialization -even if you aren't using the FS. This is because it is 
> printed during static initialization, and when the FS code loads all 
> available filesystems, that static code is inited.
> It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384625#comment-15384625
 ] 

Mingliang Liu commented on HADOOP-13387:


I missed the case where the FS code loads all available filesystems when I 
tested [HADOOP-13239] manually. Thanks [~ste...@apache.org] for reporting this 
and providing a patch. The fix looks very good to me.

> users always get told off for using S3 —even when not using it.
> ---
>
> Key: HADOOP-13387
> URL: https://issues.apache.org/jira/browse/HADOOP-13387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-13887-branch-2-001.patch
>
>
> The warning telling people not to use s3 appears during filesystem 
> initialization -even if you aren't using the FS. This is because it is 
> printed during static initialization, and when the FS code loads all 
> available filesystems, that static code is inited.
> It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384624#comment-15384624
 ] 

Chris Douglas commented on HADOOP-13387:


With the change to AtomicBoolean, {{warnDeprecation}} doesn't need to be 
synchronized.

+1
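
For readers following along, a minimal, self-contained sketch of the once-only 
warning idiom under discussion; the class and member names here are 
illustrative, not taken from the actual patch:
{code}
import java.util.concurrent.atomic.AtomicBoolean;

public class DeprecationWarning {
  // Flag shared by all instances; flips exactly once.
  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  // Called from init() rather than from a static initializer, so the
  // warning only fires when the filesystem is actually instantiated.
  public static void warnDeprecation() {
    // compareAndSet lets exactly one caller observe the false -> true
    // transition, so no synchronized block is needed.
    if (WARNED.compareAndSet(false, true)) {
      System.err.println("S3FileSystem is deprecated and will be removed in "
          + "future releases. Use NativeS3FileSystem or S3AFileSystem instead.");
    }
  }
}
{code}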

> users always get told off for using S3 —even when not using it.
> ---
>
> Key: HADOOP-13387
> URL: https://issues.apache.org/jira/browse/HADOOP-13387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-13887-branch-2-001.patch
>
>
> The warning telling people not to use s3 appears during filesystem 
> initialization -even if you aren't using the FS. This is because it is 
> printed during static initialization, and when the FS code loads all 
> available filesystems, that static code is inited.
> It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2016-07-19 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-12928:
---
Attachment: HADOOP-12928.01.patch

Updated the patch to address the test failure.

> Update netty to 3.10.5.Final to sync with zookeeper
> --
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-12928.01.patch, HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6, which depends on netty 3.7.x. Related to HADOOP-12927.
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384569#comment-15384569
 ] 

Steve Loughran commented on HADOOP-12527:
-

As an aside "Serializable" shouldn't be considered a good feature in 
distributed systems. as well as being brittle-unless-well-engineered, java 
serialization is now a common attack point.

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384565#comment-15384565
 ] 

Steve Loughran commented on HADOOP-12527:
-

"jackson' is a word to bring fear into the upgrade path. It sounds like moving 
up to Avro 1.7.7 is OK for Hadoop 2.8. Avro 1.8 sounds more traumatic all 
round. Joda time has had problems with Java versions, if it gets pulled into 
more than just hadoop-aws then we should start by managing it explicitly

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13212:

Attachment: HADOOP-13212-branch-2-004.patch

Patch 004: the full diff

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch, 
> HADOOP-13212-branch-2-002.patch, HADOOP-13212-branch-2-003.patch, 
> HADOOP-13212-branch-2-004.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13212:

Status: Patch Available  (was: Open)

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch, 
> HADOOP-13212-branch-2-002.patch, HADOOP-13212-branch-2-003.patch, 
> HADOOP-13212-branch-2-004.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13212:

Status: Open  (was: Patch Available)

Patch 003 is only my diff, not the full patch.

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch, 
> HADOOP-13212-branch-2-002.patch, HADOOP-13212-branch-2-003.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13387:

Status: Patch Available  (was: Open)

Tested against S3 Ireland.

> users always get told off for using S3 —even when not using it.
> ---
>
> Key: HADOOP-13387
> URL: https://issues.apache.org/jira/browse/HADOOP-13387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-13887-branch-2-001.patch
>
>
> The warning telling people not to use s3 appears during filesystem 
> initialization -even if you aren't using the FS. This is because it is 
> printed during static initialization, and when the FS code loads all 
> available filesystems, that static code is inited.
> It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13387:

Attachment: HADOOP-13887-branch-2-001.patch

Code, no tests.

Evidence that it works, from the output of one of the tests:
{code}
2016-07-19 17:39:07,033 [Thread-0] WARN  fs.FileSystem 
(S3FileSystem.java:warnDeprecation(84)) - S3FileSystem is deprecated and will 
be removed in future releases. Use NativeS3FileSystem or S3AFileSystem instead.
2016-07-19 17:39:07,105 [Thread-0] INFO  contract.AbstractFSContractTestBase 
(AbstractFSContractTestBase.java:setup(172)) - Test filesystem = 
s3://hwdev-steve-new implemented by 
org.apache.hadoop.fs.s3.S3FileSystem@79791d72
2016-07-19 17:39:12,207 [Thread-0] INFO  contract.AbstractFSContractTestBase 
(AbstractFSContractTestBase.java:describe(240)) - readFully zero bytes from an 
offset past EOF
2016-07-19 17:39:12,371 [Thread-0] INFO  contract.AbstractContractSeekTest 
(AbstractContractSeekTest.java:testReadFullyZeroBytebufferPastEOF(504)) - 
Filesystem short-circuits 0-byte reads
{code}

The message is printed once only.

> users always get told off for using S3 —even when not using it.
> ---
>
> Key: HADOOP-13387
> URL: https://issues.apache.org/jira/browse/HADOOP-13387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-13887-branch-2-001.patch
>
>
> The warning telling people not to use s3 appears during filesystem 
> initialization -even if you aren't using the FS. This is because it is 
> printed during static initialization, and when the FS code loads all 
> available filesystems, that static code is inited.
> It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Ben McCann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384447#comment-15384447
 ] 

Ben McCann commented on HADOOP-12527:
-

That seems like a reasonable path forward to me.

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384425#comment-15384425
 ] 

Sean Busbey commented on HADOOP-12527:
--

FWIW, I'd like a bump to 1.7.7 in Hadoop 2.8+ and 1.8.1 in Hadoop 3 alpha2.

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Ben McCann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384409#comment-15384409
 ] 

Ben McCann commented on HADOOP-12527:
-

Going from Avro 1.7.4 to Avro 1.7.6 or 1.7.7 bumps Jackson from 1.8.x to 1.9.x. 
This should be a no-op though because Hadoop is already using Jackson 1.9.x.

Going to Avro 1.8.1 bumps paranamer from 2.3 to 2.7 and commons-compress from 
1.4.1 to 1.8.1. It also adds dependencies on xz 1.5 and joda-time 2.7.

Given that there are essentially no transitive dependency changes from bumping 
Avro to 1.7.7, and that bumping Avro is a low-risk upgrade, could we upgrade to 
1.7.7 at least?

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13212:

Status: Patch Available  (was: Open)

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch, 
> HADOOP-13212-branch-2-002.patch, HADOOP-13212-branch-2-003.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13212:

Attachment: HADOOP-13212-branch-2-003.patch

Tentative +1, pending Jenkins review (local tests pass).

This is the patch I'm going to apply.

It's patch 002, *with the test case using the new options by referring to 
their definitions in {{Constants}}*.
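
As background for reviewers, a minimal sketch of how such buffer hints can be 
passed to the AWS SDK; the configuration key names and defaults below are 
assumptions for illustration, not necessarily the ones defined in 
{{Constants}}:
{code}
import com.amazonaws.ClientConfiguration;
import org.apache.hadoop.conf.Configuration;

public class SocketBufferHints {
  // Hypothetical keys; the real definitions would live in
  // org.apache.hadoop.fs.s3a.Constants.
  static final String SEND_BUFFER = "fs.s3a.socket.send.buffer";
  static final String RECV_BUFFER = "fs.s3a.socket.recv.buffer";

  static ClientConfiguration withBufferHints(Configuration conf) {
    ClientConfiguration awsConf = new ClientConfiguration();
    // The hints are handed straight to the underlying sockets used by
    // AmazonS3Client; assumed default of 8 KB for illustration.
    awsConf.setSocketBufferSizeHints(
        conf.getInt(SEND_BUFFER, 8 * 1024),
        conf.getInt(RECV_BUFFER, 8 * 1024));
    return awsConf;
  }
}
{code}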

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch, 
> HADOOP-13212-branch-2-002.patch, HADOOP-13212-branch-2-003.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13212) Provide an option to set the socket buffers in S3AFileSystem

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13212:

Status: Open  (was: Patch Available)

> Provide an option to set the socket buffers in S3AFileSystem
> 
>
> Key: HADOOP-13212
> URL: https://issues.apache.org/jira/browse/HADOOP-13212
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13212-branch-2-001.patch, 
> HADOOP-13212-branch-2-002.patch
>
>
> It should be possible to provide hints about send/receive buffers to 
> AmazonS3Client via ClientConfiguration. It would be good to expose these 
> parameters in S3AFileSystem for perf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384320#comment-15384320
 ] 

Steve Loughran commented on HADOOP-13387:
-

Example log:
{code}
Discovery starting.
2016-07-19 16:07:43,375 INFO  s3.S3aIOSuite (Logging.scala:logInfo(54)) - 
Loading configuration from ../../cloud.xml
2016-07-19 16:07:43,713 WARN  util.NativeCodeLoader 
(NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
S3FileSystem is deprecated and will be removed in future releases. Use 
NativeS3FileSystem or S3AFileSystem instead.
{code}

> users always get told off for using S3 —even when not using it.
> ---
>
> Key: HADOOP-13387
> URL: https://issues.apache.org/jira/browse/HADOOP-13387
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> The warning telling people not to use s3 appears during filesystem 
> initialization -even if you aren't using the FS. This is because it is 
> printed during static initialization, and when the FS code loads all 
> available filesystems, that static code is inited.
> It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384287#comment-15384287
 ] 

Sean Busbey commented on HADOOP-12527:
--

This ends up being related to a similar request in HBase (and probably Spark), 
especially if we're talking about going to 1.8.z (which I'd prefer).

HBase being on 1.7.6 is probably an accident and not in any release. FWIW, I'll 
be trying to update HBase to match the version shipping with whatever version 
of Hadoop we default to when HBase 2.0.0 RCs come around (I hope that will be a 
Hadoop 3 of some stripe).

As I mentioned on that HBase JIRA, the two relevant breaking changes in Avro 
1.8 AFAICT are AVRO-1502 and AVRO-997. I believe I fixed Hive ages ago wrt 
AVRO-997. I believe HBase's (currently unreleased) Spark SQL-over-Avro-in-HBase 
code is currently incompatible with AVRO-997 (but should be fixable). I haven't 
examined any other Spark use.

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13387) users always get told off for using S3 —even when not using it.

2016-07-19 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13387:
---

 Summary: users always get told off for using S3 —even when not 
using it.
 Key: HADOOP-13387
 URL: https://issues.apache.org/jira/browse/HADOOP-13387
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


The warning telling people not to use s3 appears during filesystem 
initialization -even if you aren't using the FS. This is because it is printed 
during static initialization, and when the FS code loads all available 
filesystems, that static code is inited.

It needs to be moved into the init() code of an instance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12527) Upgrade Avro dependency to 1.7.7 or later

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12527:

Summary: Upgrade Avro dependency to 1.7.7 or later  (was: Upgrade Avro 
dependency to 1.7.7)

> Upgrade Avro dependency to 1.7.7 or later
> -
>
> Key: HADOOP-12527
> URL: https://issues.apache.org/jira/browse/HADOOP-12527
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.1
>Reporter: Jonathan Kelly
>
> Hadoop has depended upon Avro 1.7.4 for a couple of years now (see 
> HADOOP-9672), but Apache Spark depends upon what is currently the latest 
> version of Avro (1.7.7).
> This can cause issues if Spark is configured to include the full Hadoop 
> classpath, as the classpath would then contain both Avro 1.7.4 and 1.7.7, 
> with the 1.7.4 classes possibly winning depending on ordering. Here is an 
> example of this issue: 
> http://stackoverflow.com/questions/33159254/avro-error-on-aws-emr/33403111#33403111
> Would it be possible to upgrade Hadoop's Avro dependency to 1.7.7 now?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13386) Upgrade Avro to 1.8.x

2016-07-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384136#comment-15384136
 ] 

Steve Loughran commented on HADOOP-13386:
-

Successor to HADOOP-12527; given the discourse there, I'm going to change the 
title of that one and close this as a duplicate.

See also http://steveloughran.blogspot.co.uk/2016/05/fear-of-dependencies.html 
for some coverage of the problem. We aren't scared of Avro, but we do need to 
take care of its transitive dependencies. Help there, showing what they are 
and testing all the way down the stack, is very much appreciated.

> Upgrade Avro to 1.8.x
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13386) Upgrade Avro to 1.8.x

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13386.
-
Resolution: Duplicate

> Upgrade Avro to 1.8.x
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384124#comment-15384124
 ] 

Vinayakumar B commented on HADOOP-9844:
---

{code}
-  headerBuilder.setExceptionClassName(errorClass);
-  headerBuilder.setErrorMsg(error);
+  headerBuilder.setExceptionClassName(errorClass != null
+      ? errorClass : "RPC failure with no error provided");
+  headerBuilder.setErrorMsg(error != null ? error : "");
{code}
In current uses, {{errorClass}} can be null only if the response is SUCCESS. 
Even though it is null, {{Client.java#receiveRpcResponse()}} handles this case.
So instead of setting a custom string, we need not set anything (not even null) 
for errorClass and errorMsg. Like below:
{code}
 } else { // Rpc Failure
-  headerBuilder.setExceptionClassName(errorClass);
-  headerBuilder.setErrorMsg(error);
-  headerBuilder.setErrorDetail(erCode);
+  if (errorClass != null) {
+    headerBuilder.setExceptionClassName(errorClass);
+  }
+  if (error != null) {
+    headerBuilder.setErrorMsg(error);
+  }
+  if (erCode != null) {
+    headerBuilder.setErrorDetail(erCode);
+  }
   RpcResponseHeaderProto header = headerBuilder.build();
{code}

+1 once fixed

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9844-001.patch, HADOOP-9844-002.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15384056#comment-15384056
 ] 

Hadoop QA commented on HADOOP-9844:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 196 unchanged - 1 fixed = 196 total (was 197) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-9844 |
| GITHUB PR | https://github.com/apache/hadoop/pull/55 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c30bf85546b4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fe20494 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10024/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10024/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>   

[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---
  Labels:   (was: BB2015-05-TBR)
Target Version/s: 2.9.0  (was: 2.8.0)
  Status: Patch Available  (was: Open)

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-9844-001.patch, HADOOP-9844-002.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---
Attachment: HADOOP-9844-002.patch

Patch 002, rebased to trunk.

Please can someone review this before its third birthday.

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch, HADOOP-9844-002.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---
Status: Open  (was: Patch Available)

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---
Comment: was deleted

(was: patch 002, rebased to trunk

please can someone review this before its third birthday)

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---
Attachment: (was: HADOOP-9844-002.patch)

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-07-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9844:
---
Attachment: HADOOP-9844-002.patch

patch 002, rebased to trunk

please can someone review this before its third birthday

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch, HADOOP-9844-002.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13179) GenericOptionsParser is not thread-safe because commons-cli OptionBuilder is not thread-safe

2016-07-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383877#comment-15383877
 ] 

Steve Loughran commented on HADOOP-13179:
-

Looking at the code, I don't think this actually fixes the problem of 
concurrent access to {{OptionBuilder}}: there are lots of uses of the class in 
the Hadoop codebase, and they are all synchronized on different things.

The way to do this safely would be to make them all 
{{synchronized (OptionBuilder.class)}}.
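
A minimal sketch of that approach, assuming the static commons-cli 1.x 
{{OptionBuilder}} API; the option being built is illustrative:
{code}
import org.apache.commons.cli.Option;
import org.apache.commons.cli.OptionBuilder;
import org.apache.commons.cli.Options;

public class SafeOptions {
  @SuppressWarnings("static-access")
  static Options buildGeneralOptions() {
    // Every caller across the codebase must lock on the same monitor,
    // the OptionBuilder class itself, for this to be safe: the builder
    // keeps its in-progress state in static fields.
    synchronized (OptionBuilder.class) {
      Option fs = OptionBuilder.withArgName("local|namenode:port")
          .hasArg()
          .withDescription("specify a namenode")
          .create("fs");
      return new Options().addOption(fs);
    }
  }
}
{code}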

> GenericOptionsParser is not thread-safe because commons-cli OptionBuilder is 
> not thread-safe
> 
>
> Key: HADOOP-13179
> URL: https://issues.apache.org/jira/browse/HADOOP-13179
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: hongbin ma
>Assignee: hongbin ma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13179-master.patch, HADOOP-13179.001.patch
>
>
> I'm running into issues similar to 
> http://stackoverflow.com/questions/22462665/is-hadoops-toorunner-thread-safe, 
> and the author's observation seems to make sense to me. However, when I 
> checked the Hadoop GitHub trunk, I found the issue still not fixed.
> Chris Nauroth further investigated this issue, here's his quote: 
> {quote}
> The root cause is that commons-cli OptionBuilder is not thread-safe.
> https://commons.apache.org/proper/commons-cli/apidocs/org/apache/commons/cl
> i/OptionBuilder.html
> According to this issue, commons-cli doesn't plan to change that and
> instead chose to document the lack of thread-safety.
> https://issues.apache.org/jira/browse/CLI-209
> I think we can solve this in Hadoop, probably with a one-line change to
> make GenericOptionsParser#buildGeneralOptions a synchronized method.
> {quote}
> I'll soon upload a patch for this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13179) GenericOptionsParser is not thread-safe because commons-cli OptionBuilder is not thread-safe

2016-07-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383875#comment-15383875
 ] 

Steve Loughran commented on HADOOP-13179:
-

This just broke my never-gets-reviewed YARN-679 patch, where I'd allowed for 
services to define their own options parser. Not your fault, just one of those 
details that arises when patches don't get reviewed in a timely manner. I'll 
have to fix my code there.

FWIW, I think we should jump to JCommander for future work: introspection- and 
annotation-based, and much easier to work with.

> GenericOptionsParser is not thread-safe because commons-cli OptionBuilder is 
> not thread-safe
> 
>
> Key: HADOOP-13179
> URL: https://issues.apache.org/jira/browse/HADOOP-13179
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: hongbin ma
>Assignee: hongbin ma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13179-master.patch, HADOOP-13179.001.patch
>
>
> I'm running into issues similar to 
> http://stackoverflow.com/questions/22462665/is-hadoops-toorunner-thread-safe, 
> and the author's observation seems to make sense to me. However, when I 
> checked the Hadoop GitHub trunk, I found the issue still not fixed.
> Chris Nauroth further investigated this issue, here's his quote: 
> {quote}
> The root cause is that commons-cli OptionBuilder is not thread-safe.
> https://commons.apache.org/proper/commons-cli/apidocs/org/apache/commons/cl
> i/OptionBuilder.html
> According to this issue, commons-cli doesn't plan to change that and
> instead chose to document the lack of thread-safety.
> https://issues.apache.org/jira/browse/CLI-209
> I think we can solve this in Hadoop, probably with a one-line change to
> make GenericOptionsParser#buildGeneralOptions a synchronized method.
> {quote}
> I'll soon upload a patch for this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13191) FileSystem#listStatus should not return null

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383815#comment-15383815
 ] 

Hadoop QA commented on HADOOP-13191:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
26s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem |
|   | hadoop.fs.TestFSMainOperationsLocalFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818764/HADOOP-13191.002.patch
 |
| JIRA Issue | HADOOP-13191 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5bc424547b17 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 92fe2db |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10023/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10023/testReport/ |
| modules | C: 

[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader cannot handle multibyte delimiters correctly

2016-07-19 Thread sunhaitao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383710#comment-15383710
 ] 

sunhaitao commented on HADOOP-13192:


welcome :)

> org.apache.hadoop.util.LineReader cannot handle multibyte delimiters correctly
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.2
>Reporter: binde
>Assignee: binde
>Priority: Critical
> Fix For: 2.7.3
>
> Attachments: 
> 0001-HADOOP-13192-org.apache.hadoop.util.LineReader-match.patch, 
> 0002-fix-bug-hadoop-1392-add-test-case-for-LineReader.patch, 
> HADOOP-13192.final.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should 
> be a,ccc.
> See the code at line 310:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
>     if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>       delPosn++;
>       if (delPosn >= recordDelimiterBytes.length) {
>         bufferPosn++;
>         break;
>       }
>     } else if (delPosn != 0) {
>       bufferPosn--;
>       delPosn = 0;
>     }
>   }
> It should be:
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
>     if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>       delPosn++;
>       if (delPosn >= recordDelimiterBytes.length) {
>         bufferPosn++;
>         break;
>       }
>     } else if (delPosn != 0) {
>       // - change here - start
>       bufferPosn -= delPosn;
>       // - change here - end
>       delPosn = 0;
>     }
>   }
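
A self-contained illustration of why the back-off must be 
{{bufferPosn -= delPosn}} (editor's sketch of the matching loop, not Hadoop 
code): with the one-step decrement, the scan over aaaabccc never re-examines 
the candidate match starting at offset 1 and misses the delimiter entirely.
{code}
public class DelimiterScan {
  // Returns the index just past the first occurrence of delim in buffer, or -1.
  static int scan(byte[] buffer, byte[] delim, boolean fixed) {
    int delPosn = 0;
    for (int bufferPosn = 0; bufferPosn < buffer.length; ++bufferPosn) {
      if (buffer[bufferPosn] == delim[delPosn]) {
        delPosn++;
        if (delPosn >= delim.length) {
          return bufferPosn + 1;
        }
      } else if (delPosn != 0) {
        bufferPosn -= fixed ? delPosn : 1; // the fix vs. the buggy back-off
        delPosn = 0;
      }
    }
    return -1;
  }

  public static void main(String[] args) {
    byte[] line = "aaaabccc".getBytes();
    byte[] delim = "aaab".getBytes();
    System.out.println(scan(line, delim, false)); // buggy: prints -1
    System.out.println(scan(line, delim, true));  // fixed: prints 5
  }
}
{code}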



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13191) FileSystem#listStatus should not return null

2016-07-19 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13191:

Attachment: HADOOP-13191.002.patch

Patch 002:
* Rebase

> FileSystem#listStatus should not return null
> 
>
> Key: HADOOP-13191
> URL: https://issues.apache.org/jira/browse/HADOOP-13191
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13191.001.patch, HADOOP-13191.002.patch
>
>
> This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} 
> contract does not indicate {{null}} is a valid return and some callers do not 
> test {{null}} before use:
> AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:
> {code}
> assertEquals("ls on an empty directory not of length 0", 0,
> fs.listStatus(subfolder).length);
> {code}
> ChecksumFileSystem#copyToLocalFile:
> {code}
>   FileStatus[] srcs = listStatus(src);
>   for (FileStatus srcFile : srcs) {
> {code}
> SimpleCopyListing#getFileStatus:
> {code}
>   FileStatus[] fileStatuses = fileSystem.listStatus(path);
>   if (excludeList != null && excludeList.size() > 0) {
> ArrayList<FileStatus> fileStatusList = new ArrayList<>();
> for(FileStatus status : fileStatuses) {
> {code}
> IMHO, there is no good reason for {{listStatus}} to return {{null}}. It 
> should throw an IOException upon error or return an empty list.
> To enforce the contract that null is an invalid return, update the javadoc and 
> leverage @Nullable/@NotNull/@Nonnull annotations.
> So far, I am only aware of the following functions that can return null:
> * RawLocalFileSystem#listStatus
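Until that tightened contract lands, a defensive wrapper along these lines (purely 
illustrative, not part of the attached patches) shows what careful callers currently 
have to do; it assumes only the stock {{FileSystem#listStatus}} signature:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ListStatusUtil {
  private ListStatusUtil() {}

  /** Never returns null: surfaces a null listing as an IOException instead. */
  public static FileStatus[] listStatusNonNull(FileSystem fs, Path path)
      throws IOException {
    FileStatus[] statuses = fs.listStatus(path);
    if (statuses == null) {
      // Mirrors the proposed contract: errors become IOExceptions,
      // while an empty directory is a zero-length array.
      throw new FileNotFoundException("Cannot list " + path);
    }
    return statuses;
  }
}
{code}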



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-07-19 Thread uncleGen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383693#comment-15383693
 ] 

uncleGen edited comment on HADOOP-12756 at 7/19/16 7:08 AM:


[~shimingfei] IMHO, when performing the 'multipartUploadObject' operation in class 
'AliyunOSSOutputStream', the part number must be less than or equal to 10000, so the 
part size needs to be bounded by 'fs.oss.multipart.upload.size' and the part-number 
upper limit (currently 10000). See the doc 
[here|https://help.aliyun.com/document_detail/31993.html?spm=5176.product31815.6.265.iPB9WC].


was (Author: unclegen):
[~shimingfei] IMHO, when performing the 'multipartUploadObject' operation in class 
'AliyunOSSOutputStream', the part number must be less than or equal to 10000, so the 
part size needs to be bounded by 'fs.oss.multipart.upload.size' and the part-number 
upper limit (currently 10000). See the doc 
[here](https://help.aliyun.com/document_detail/31993.html?spm=5176.product31815.6.265.iPB9WC).
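As a rough illustration of the sizing rule under discussion, assuming the documented 
OSS caps of 10000 parts per upload and a 100 KB minimum part size, the effective 
part size has to grow with the object length; the constants and names below are 
illustrative, not taken from the patch:

{code}
public class OssPartSizeDemo {
  static final int MAX_PARTS = 10000;           // OSS part-number upper limit
  static final long MIN_PART_SIZE = 100 << 10;  // OSS minimum part size, 100 KB

  // Returns a part size that keeps the part count within MAX_PARTS.
  static long effectivePartSize(long contentLength, long configuredPartSize) {
    long partSize = Math.max(configuredPartSize, MIN_PART_SIZE);
    // Smallest part size that still fits contentLength into MAX_PARTS parts.
    long floor = (contentLength + MAX_PARTS - 1) / MAX_PARTS;
    return Math.max(partSize, floor);
  }

  public static void main(String[] args) {
    // With a 100 KB configured part size, 10000 parts only cover ~1 GB,
    // so a 5 TB object forces the part size up to ~524 MB.
    long fiveTb = 5L << 40;
    System.out.println(effectivePartSize(fiveTb, 100 << 10));
  }
}
{code}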

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between users’ applications and data storage, as has 
> been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-07-19 Thread uncleGen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383693#comment-15383693
 ] 

uncleGen commented on HADOOP-12756:
---

[~shimingfei] IMHO, when performing the 'multipartUploadObject' operation in class 
'AliyunOSSOutputStream', the part number must be less than or equal to 10000, so the 
part size needs to be bounded by 'fs.oss.multipart.upload.size' and the part-number 
upper limit (currently 10000). See the doc 
[here](https://help.aliyun.com/document_detail/31993.html?spm=5176.product31815.6.265.iPB9WC).

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between users’ applications and data storage, as has 
> been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13377) Some improvement for incorporating Aliyun OSS file system implementation

2016-07-19 Thread uncleGen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383623#comment-15383623
 ] 

uncleGen commented on HADOOP-13377:
---

Waiting for [HADOOP-12756|https://issues.apache.org/jira/browse/HADOOP-12756] 
to be merged.

> Some improvement for incorporating Aliyun OSS file system implementation
> 
>
> Key: HADOOP-13377
> URL: https://issues.apache.org/jira/browse/HADOOP-13377
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: uncleGen
> Fix For: HADOOP-12756
>
>
> This work is based on 
> [HADOOP-12756|https://issues.apache.org/jira/browse/HADOOP-12756]. 
> There are some stability problems which we should pay attention to, including 
> but not limited to:
> 1. OSS closes long-lived connections (> 3h) and idle connections (> 1 minute), 
> both of which are pretty common.
> 2. The 'copy' operation is time-consuming, so we could reuse the existing 
> Job/Task execution logic, i.e. copy the temporary result from the temp 
> directory to the final directory.
> and some hack optimizations:
> 1. use double buffering and multiple threads when reading OSS data
> 2. split data into chunks and upload them in the ‘multipart’ way (see the 
> sketch after this list)
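A minimal sketch of optimization 2, assuming a hypothetical {{uploadPart()}} 
stand-in for the real OSS SDK call: fill a fixed-size chunk from the stream (short 
reads are common) and hand each full chunk off as one multipart part.

{code}
import java.io.IOException;
import java.io.InputStream;

public class ChunkedUploadSketch {

  // Hypothetical sink; real code would call the OSS SDK upload-part API here.
  interface PartSink {
    void uploadPart(int partNumber, byte[] data, int length) throws IOException;
  }

  static void uploadInChunks(InputStream in, int chunkSize, PartSink sink)
      throws IOException {
    byte[] chunk = new byte[chunkSize];
    int partNumber = 1;
    int filled = 0;
    int n;
    // Keep filling the current chunk until it is complete, then upload it.
    while ((n = in.read(chunk, filled, chunkSize - filled)) != -1) {
      filled += n;
      if (filled == chunkSize) {
        sink.uploadPart(partNumber++, chunk, filled);
        filled = 0;
      }
    }
    if (filled > 0) {
      sink.uploadPart(partNumber, chunk, filled); // final, possibly short, part
    }
  }
}
{code}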



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org