[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508461#comment-15508461
 ] 

Weiwei Yang commented on HADOOP-13628:
--

Hi [~steve_l]

Thank you for the comments. 

bq. Does it return the fully evaluated config?

Yes it does.

bq.  blocks

Thanks, will do that.

bq. Unknown properties should return 404

That makes sense, will do that.

bq. Could we have a text/plain one which returns just the text value?

I am a little hesitant to do that. I did not see anywhere else that supports 
returning plain text, and we'd better keep consistency. What do you say?

bq. Presumably a MiniHDFS cluster serves up this data and would respond to a 
few Jersey requests?

Is this truly necessary? The tests in {{TestConfiguration}} should cover all 
the possible request scenarios. The {{ConfServlet}} simply parses the 
parameter, calls the method, and returns the message. The coverage looks 
sufficient to me, unless you have further concerns.
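
To make the proposed behavior concrete, here is a minimal sketch of the 
name-parameter handling with the 404 for unknown properties. It is 
illustrative only, not the actual {{ConfServlet}} patch; the 
{{writeProperty}}/{{writeFullConf}} helpers are hypothetical.

{code}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;

// Hedged sketch of the proposed name-parameter handling, not the actual
// ConfServlet patch; writeProperty/writeFullConf are hypothetical helpers.
public void doGetSketch(Configuration conf, HttpServletRequest request,
    HttpServletResponse response) throws IOException {
  String name = request.getParameter("name");
  if (name != null && conf.get(name) == null) {
    // Unknown property: respond 404, as suggested in the review.
    response.sendError(HttpServletResponse.SC_NOT_FOUND,
        "Property " + name + " not found");
    return;
  }
  // writeProperty(response, conf, name);  // single property when name is set
  // writeFullConf(response, conf);        // existing full-dump behavior
}
{code}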

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch, HADOOP-13628.02.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties 
> per daemon, but we are unable to get a specific property by name. This 
> causes extra parsing work on the client side when dealing with Hadoop 
> configurations, and it is also quite a lot of overhead to send the entire 
> configuration in an HTTP response over the network. Propose to support a 
> {{name}} parameter in the HTTP request, so that issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.resourcemanager.hostname
> {code}
> returns output such as
> {code}
> {"property":{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508421#comment-15508421
 ] 

Genmao Yu commented on HADOOP-12756:


Great, let us continue in HADOOP-13584.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, 
> HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, 
> HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, 
> HADOOP-12756.009.patch, HADOOP-12756.010.patch, HCFS User manual.md, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple 
> configuration, Spark/Hadoop applications can read/write data from OSS 
> without any code change, narrowing the gap between user applications and 
> data storage, as has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508289#comment-15508289
 ] 

Hadoop QA commented on HADOOP-10075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 15m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  5s{color} | {color:orange} root: The patch generated 52 new + 2283 
unchanged - 24 fixed = 2335 total (was 2307) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  8m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 582 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
18s{color} | {color:red} The patch has 4277 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
27s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 19m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
21s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-auth-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| 

[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508265#comment-15508265
 ] 

Andrew Wang commented on HADOOP-13632:
--

Poked around a bit; I think we could move the {{ps -p $!}} check up from the 
end of the start functions to right after the sleep, before the renice. And/or 
add a hadoop_error message to the {{ps -p}} if case, something like:

{code}
  if ! ps -p $! > /dev/null 2>&1; then
    hadoop_error "ERROR: Could not start ${daemonname}. Check ${outfile} for more information."
    return 1
  fi
{code}

[~aw] any opinions on this?

> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly if, instead of this renice error, we said that 
> the process couldn't be started.






[jira] [Created] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-09-20 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-13632:


 Summary: Daemonization does not check process liveness before 
renicing
 Key: HADOOP-13632
 URL: https://issues.apache.org/jira/browse/HADOOP-13632
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang


If you try to daemonize a process that is incorrectly configured, it will die 
quite quickly. However, the daemonization function will still try to renice it 
even if it's down, leading to something like this for my namenode:

{noformat}
-> % bin/hdfs --daemon start namenode
ERROR: Cannot set priority of namenode process 12036
{noformat}

It'd be more user-friendly if, instead of this renice error, we said that the 
process couldn't be started.






[jira] [Comment Edited] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508229#comment-15508229
 ] 

Sergey Shelukhin edited comment on HADOOP-13081 at 9/21/16 12:22 AM:
-

The synchronization issues and preserving the order seem fixable. UGI already 
iterates credentials (e.g. in getTGT or getCredentialsInternal) synchronizing 
on itself or subject only.
User principal only uses the LoginContext to relogin. We could clear it and 
posit that clones cannot be used to relogin (this is rather arbitrary, 
admittedly...)



was (Author: sershe):
The synchronization issues and preserving the order seem fixable. UGI already 
iterates credentials (e.g. in getTGT or getCredentialsInternal) synchronizing 
on itself or subject only.
User principal only uses the LoginContext to relogin. We could clear it and 
posit that clones cannot be logged in.


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508229#comment-15508229
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

The synchronization issues and preserving the order seem fixable. UGI already 
iterates credentials (e.g. in getTGT or getCredentialsInternal) synchronizing 
on itself or subject only.
User principal only uses the LoginContext to relogin. We could clear it and 
posit that clones cannot be logged in.


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Updated] (HADOOP-13573) S3Guard: create basic contract tests for MetadataStore implementations

2016-09-20 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13573:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

+1 for patch 003.  I have committed this to the feature branch.  Thank you, 
Aaron.

> S3Guard: create basic contract tests for MetadataStore implementations
> --
>
> Key: HADOOP-13573
> URL: https://issues.apache.org/jira/browse/HADOOP-13573
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13573-HADOOP-13345.002.patch, 
> HADOOP-13573-HADOOP-13345.003.patch, HADOOP-13573.001.patch
>
>
> We should have some contract-style unit tests for the MetadataStore interface 
> to validate that the different implementations provide correct semantics.






[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508092#comment-15508092
 ] 

Hadoop QA commented on HADOOP-13590:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13590 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829439/HADOOP-13590.04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2c670cef956 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e80386d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10553/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10553/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch
>
>
> The UGI has a 

[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.

2016-09-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508089#comment-15508089
 ] 

Chris Nauroth commented on HADOOP-13448:


+1 for a {{MetadataStore#initialize}} method accepting the {{FileSystem}} base 
class, but allowing subclasses to demand and downcast to something more 
specific.  In my prototype, tightly coupling the DynamoDB integration to 
{{S3AFileSystem}} was helpful, because it allowed reuse of the S3A configured 
bucket, {{AWSCredentialsProvider}} and {{ClientConfiguration}}, which involves 
some fairly complex initialization logic.

I think passing {{FileSystem}} to {{initialize}} also allows us to remove the 
{{Configuration}} parameter.  Any {{FileSystem}} is a {{Configured}}, so we can 
get a {{Configuration}} out of it.

+1 also for requiring absolute paths in the arguments, and leaving 
responsibility for absolute path resolution to the {{FileSystem}}.
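
To illustrate the shape under discussion, here is a hedged sketch; the names 
are illustrative, and the real interface is whatever lands on the 
HADOOP-13345 branch.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

// Illustrative only; the actual MetadataStore interface is defined by the
// patches on the feature branch.
interface MetadataStoreSketch {
  // Accepts the FileSystem base class; no separate Configuration parameter,
  // since any FileSystem is a Configured and carries its own Configuration.
  void initialize(FileSystem fs) throws IOException;
}

class DynamoDBMetadataStoreSketch implements MetadataStoreSketch {
  @Override
  public void initialize(FileSystem fs) throws IOException {
    // An implementation may demand and downcast to something more specific,
    // here to reuse the S3A bucket, credentials, and client configuration.
    if (!(fs instanceof S3AFileSystem)) {
      throw new IOException("DynamoDB metadata store requires S3A");
    }
    Configuration conf = fs.getConf();  // FileSystem is a Configured
    // ... reuse conf and the S3A client setup for DynamoDB initialization ...
  }
}
{code}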

> S3Guard: Define MetadataStore interface.
> 
>
> Key: HADOOP-13448
> URL: https://issues.apache.org/jira/browse/HADOOP-13448
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13448-HADOOP-13345.001.patch, 
> HADOOP-13448-HADOOP-13345.002.patch, HADOOP-13448-HADOOP-13345.003.patch, 
> HADOOP-13448-HADOOP-13345.004.patch, HADOOP-13448-HADOOP-13345.005.patch
>
>
> Define the common interface for metadata store operations.  This is the 
> interface that any metadata back-end must implement in order to integrate 
> with S3Guard.






[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508028#comment-15508028
 ] 

Kai Zheng commented on HADOOP-12756:


Yeah, quite agree. Thanks [~uncleGen] for the clarification and 
[~ste...@apache.org] for the confirmation.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, 
> HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, 
> HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, 
> HADOOP-12756.009.patch, HADOOP-12756.010.patch, HCFS User manual.md, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple 
> configuration, Spark/Hadoop applications can read/write data from OSS 
> without any code change, narrowing the gap between user applications and 
> data storage, as has been done for S3 in Hadoop.






[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-09-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508021#comment-15508021
 ] 

Chris Nauroth commented on HADOOP-13614:


The intent of this patch looks good to me.  I expect it will be ready to go 
after updating {{ITestS3ADeleteFilesOneByOne}} to address the compilation error.

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508015#comment-15508015
 ] 

Kai Zheng commented on HADOOP-13081:


This is very interesting. Thanks all for the details.

If Java assumes a non-thread-safe, order-preserving credentials set, it may be 
hard to clone the UGI/subject correctly. It can also cause some trouble if the 
derived UGIs are out of sync thereafter.

Just curious:
[~daryn], I thought the change introduced here only provided a new method 
doing the UGI cloning and the corresponding test. It might not affect existing 
methods. Is this new method called somewhere that caused the trouble you saw?

[~sershe], in what specific case will you run into Kerberos-ticket-related 
code after you do the UGI cloning and the second doAs?

At least, after reverting, we probably should add some comment in the class 
header, asserting that UGI isn't expected to be cloned due to the reasons 
found here.

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13601) Fix typo in a log messages of AbstractDelegationTokenSecretManager

2016-09-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507889#comment-15507889
 ] 

Hudson commented on HADOOP-13601:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10469 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10469/])
HADOOP-13601. Fix a log message typo in (liuml07: rev 
e80386d69d5fb6a08aa3366e42d2518747af569f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java


> Fix typo in a log messages of AbstractDelegationTokenSecretManager
> --
>
> Key: HADOOP-13601
> URL: https://issues.apache.org/jira/browse/HADOOP-13601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Assignee: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13601.001.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in 
> log messages are one of the recurring bugs. Therefore, I made a tool to find 
> typos in log statements. During my experiments, I managed to find the 
> following typos in Hadoop Common:
> in file 
> /hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
>  LOG.info("Token cancelation requested for identifier: "+id), 
> cancelation should be cancellation






[jira] [Created] (HADOOP-13631) S3Guard: implement move() for LocalMetadataStore, add unit tests

2016-09-20 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-13631:
-

 Summary: S3Guard: implement move() for LocalMetadataStore, add 
unit tests
 Key: HADOOP-13631
 URL: https://issues.apache.org/jira/browse/HADOOP-13631
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


Building on HADOOP-13573 and HADOOP-13452, implement move() in 
LocalMetadataStore and associated MetadataStore unit tests.

(Making this a separate JIRA to break up work into decent-sized and reviewable 
chunks.)
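
As a rough illustration of what a hash-map-backed {{move()}} could do, here is 
a hedged sketch; the signature and store shape are placeholders, since the 
real ones come from the MetadataStore interface of HADOOP-13448.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;

// Purely illustrative: model move() on an in-memory store as re-keying every
// entry under the source prefix onto the destination prefix.
class LocalMoveSketch {
  static void move(Map<Path, Object> store, Path src, Path dst) {
    Map<Path, Object> moved = new HashMap<>();
    String srcPrefix = src.toUri().getPath();
    store.entrySet().removeIf(e -> {
      String key = e.getKey().toUri().getPath();
      if (key.equals(srcPrefix) || key.startsWith(srcPrefix + "/")) {
        // Re-key the entry under the destination prefix.
        moved.put(new Path(dst + key.substring(srcPrefix.length())),
            e.getValue());
        return true;  // drop the old key
      }
      return false;
    });
    store.putAll(moved);
  }
}
{code}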






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507805#comment-15507805
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

We don't have control over which parts of the code need kerberos or tokens; I 
suspect that usually only one would be needed but we don't know which one.

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507807#comment-15507807
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

Btw, we do already have the implementation using reflection ;)

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1550#comment-1550
 ] 

Chris Nauroth commented on HADOOP-13081:


Thank you, Sergey.

I think Daryn is asserting that you could potentially achieve this by running 
the portion of code that needs Kerberos auth inside one UGI.doAs, and then run 
the portion of code that needs delegation token auth inside a different 
UGI.doAs, where that second UGI was built with {{createRemoteUser}} and 
{{addToken}}.  Would that work, or is there some reason that the actions can't 
be separated, and you really need both the Kerberos credentials and the 
delegation token all at once in a single UGI.doAs?
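
For concreteness, a hedged sketch of that split using only existing UGI APIs 
({{getLoginUser}}, {{createRemoteUser}}, {{addToken}}, {{doAs}}); 
{{kerberosWork}}, {{tokenWork}}, and {{delegationToken}} are placeholders for 
the caller's logic.

{code}
import java.security.PrivilegedAction;
import org.apache.hadoop.security.UserGroupInformation;

// Portion of code that needs Kerberos auth runs under the login UGI.
UserGroupInformation kerberosUgi = UserGroupInformation.getLoginUser();
kerberosUgi.doAs((PrivilegedAction<Void>) () -> {
  kerberosWork();  // placeholder
  return null;
});

// Portion of code that needs delegation-token auth runs under a separate
// remote-user UGI carrying only the token.
UserGroupInformation tokenUgi =
    UserGroupInformation.createRemoteUser("task-user");
tokenUgi.addToken(delegationToken);  // placeholder token
tokenUgi.doAs((PrivilegedAction<Void>) () -> {
  tokenWork();  // placeholder
  return null;
});
{code}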

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Comment Edited] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507759#comment-15507759
 ] 

Sergey Shelukhin edited comment on HADOOP-13081 at 9/20/16 8:51 PM:


[~cnauroth] the concrete use case is where a service runs multiple pieces of 
work on behalf of users; it can be set to log in as a particular user using 
Kerberos (specifically when running these), but the users can also add their 
own tokens.
We cannot add tokens to a single kerberos-based UGI because they will all mix; 
we also cannot log in for every piece of work in most cases, as it would 
overload the KDC.
Ideally, we should be able to reuse the kerberos login and create a separate 
UGI with it for each user, adding the user-specific tokens.


was (Author: sershe):
[~cnauroth] the concrete use case is where a service runs multiple pieces of 
work on behalf of users; it can be set to log in as a particular user using 
Kerberos, but the users can also add their own tokens.
We cannot add tokens to a single kerberos-based UGI because they will all mix; 
we also cannot log in for every piece of work in most cases, as it would 
overload the KDC.
Ideally, we should be able to reuse the kerberos login and create a separate 
UGI with it for each user, adding the user-specific tokens.

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507759#comment-15507759
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

[~cnauroth] the concrete use case is where a service runs multiple pieces of 
work on behalf of users; it can be set to log in as a particular user using 
Kerberos, but the users can also add their own tokens.
We cannot add tokens to a single kerberos-based UGI because they will all mix; 
we also cannot log in for every piece of work in most cases, as it would 
overload the KDC.
Ideally, we should be able to reuse the kerberos login and create a separate 
UGI with it for each user, adding the user-specific tokens.
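
In code, the desired pattern would look roughly like the sketch below; 
{{cloneUgi()}} is the method proposed in this JIRA, not an existing UGI API, 
and {{Task}} is a placeholder type.

{code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// One Kerberos login, many per-task UGIs: no extra KDC traffic, and the
// task-specific tokens stay isolated in each clone.
UserGroupInformation loginUgi =
    UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytabPath);
for (Task task : tasks) {
  UserGroupInformation taskUgi = cloneUgi(loginUgi);  // proposed method
  for (Token<?> token : task.getTokens()) {           // placeholder accessor
    taskUgi.addToken(token);
  }
  taskUgi.doAs(task.asAction());                      // placeholder action
}
{code}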

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Commented] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.

2016-09-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507731#comment-15507731
 ] 

Mingliang Liu commented on HADOOP-13452:


{quote}
I do that. This code looks the same as mine, except I do not override load 
factor to 1.0 (which saves space at expense of runtime).
{quote}
The point I was making is about the access order of the LinkedHashMap. 
{{super(initialCapacity);}} by default uses insertion order, while 
{{super(maxEntries + 1, 1.0f, true);}} along with the overridden 
{{removeEldestEntry()}} makes an LRU cache. Sorry, I should not have mentioned 
{{removeEldestEntry()}}, as your patch is pretty good in that regard. If your 
motivation is flexibility, I think it makes sense. Thanks.
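
For reference, a self-contained sketch of the access-ordered LinkedHashMap 
LRU cache being described; the capacity handling mirrors the 
{{super(maxEntries + 1, 1.0f, true)}} call above.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: the three-argument super enables access order, and
// removeEldestEntry evicts the least-recently-accessed entry once full.
class LruCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  LruCache(int maxEntries) {
    super(maxEntries + 1, 1.0f, true);  // accessOrder=true, not insertion order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries;
  }
}
{code}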

> S3Guard: Implement access policy for intra-client consistency with in-memory 
> metadata store.
> 
>
> Key: HADOOP-13452
> URL: https://issues.apache.org/jira/browse/HADOOP-13452
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13452.001.patch
>
>
> Implement an S3A access policy based on an in-memory metadata store.  This 
> can provide consistency within the same client without needing to integrate 
> with an external system.






[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507720#comment-15507720
 ] 

Chris Nauroth commented on HADOOP-13081:


[~daryn], great review.  Thank you.

To summarize (at least from my own understanding), we need to revert the 
current patch and post a new revision that at least adds this:

* Address thread safety.  Is it sufficient to make the whole method body 
{{synchronized (subject)}}, similar to {{addCredentials}}, {{getCredentials}}, 
etc.?
* Clone into an insertion order preserving {{Set}} implementation 
({{LinkedHashSet}}).

bq. Relogin of a clone ugi will wipe out the kerberos credentials in the 
original ugi. The hadoop User principal contains the login context which 
references the original subject.

I had thought this part was OK, resulting in successful relogin for both 
original and cloned UGI.  Was I wrong?  ("wipe out" sounds bad.)  If this part 
is not OK, then is the proposed change to replace the {{User}} principal in the 
clone with a new instance, which in turn owns its own {{LoginContext}} instance?

[~sershe], could you please comment more about the concrete use case, and 
specifically address why it couldn't be solved with remote users or proxy users?
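
As a hedged sketch of those two points (not the actual patch), the clone might 
copy the credential sets like this; {{subject}} is the original UGI's subject.

{code}
import java.util.LinkedHashSet;
import java.util.Set;
import javax.security.auth.Subject;

// Illustrative only: copy the subject's credential sets under the subject
// lock, into insertion-order-preserving LinkedHashSets, per the review notes.
Set<Object> privateCopy;
Set<Object> publicCopy;
synchronized (subject) {  // same locking style as addCredentials/getCredentials
  privateCopy = new LinkedHashSet<>(subject.getPrivateCredentials());
  publicCopy = new LinkedHashSet<>(subject.getPublicCredentials());
}
// ... construct the cloned Subject/UGI from the copies ...
{code}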

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.






[jira] [Updated] (HADOOP-13601) Fix typo in a log messages of AbstractDelegationTokenSecretManager

2016-09-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13601:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} through {{branch-2.7}}.

> Fix typo in a log messages of AbstractDelegationTokenSecretManager
> --
>
> Key: HADOOP-13601
> URL: https://issues.apache.org/jira/browse/HADOOP-13601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Assignee: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13601.001.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in 
> log messages are one of the recurring bugs. Therefore, I made a tool to find 
> typos in log statements. During my experiments, I managed to find the 
> following typos in Hadoop Common:
> in file 
> /hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
>  LOG.info("Token cancelation requested for identifier: "+id), 
> cancelation should be cancellation






[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507692#comment-15507692
 ] 

Steve Loughran commented on HADOOP-13590:
-

[~owen.omalley] may have something to say, even if he will deny writing the 
original code

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong and results in an IOE, then even if the 
> problem recovers, no renewal will be done and the client will eventually 
> fail to authenticate. We should retry with best effort until the TGT 
> expires, in the hope that the error recovers before then.
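
A hedged sketch of the proposed behavior follows; the renewal call, logger, 
and retry interval are illustrative stand-ins, not the actual patch.

{code}
import java.io.IOException;

// Instead of terminating the renewal thread on the first IOException, keep
// retrying (best effort) until the TGT's end time has passed.
long tgtEndTime = tgt.getEndTime().getTime();  // KerberosTicket end time
while (System.currentTimeMillis() < tgtEndTime) {
  try {
    reloginFromTicketCache();  // stand-in for the real renewal call
    break;  // renewed; fall back to the normal renewal schedule
  } catch (IOException ioe) {
    LOG.warn("TGT renewal failed, retrying until the TGT expires", ioe);
    try {
      Thread.sleep(retryIntervalMs);  // hypothetical backoff interval
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      break;
    }
  }
}
{code}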






[jira] [Updated] (HADOOP-13601) Fix typo in a log messages of AbstractDelegationTokenSecretManager

2016-09-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13601:
---
Assignee: Mehran Hassani
Target Version/s: 2.7.4
Hadoop Flags: Reviewed
 Summary: Fix typo in a log messages of 
AbstractDelegationTokenSecretManager  (was: Typo in a log messages)

Added [~MehranHassani] to the Hadoop Contributors 1 list. Now you can assign 
JIRAs to yourself, [~MehranHassani].

+1 for the patch. Will commit in a second.

> Fix typo in a log messages of AbstractDelegationTokenSecretManager
> --
>
> Key: HADOOP-13601
> URL: https://issues.apache.org/jira/browse/HADOOP-13601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Assignee: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13601.001.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in log 
> messages are one of the recurring bugs. Therefore, I made a tool to find typos 
> in log statements. During my experiments, I managed to find the following 
> typos in Hadoop Common:
> in file 
> /hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
>  LOG.info("Token cancelation requested for identifier: "+id), 
> cancelation should be cancellation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-20 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13590:
---
Attachment: HADOOP-13590.04.patch

Thanks for the review, Steve. Attached patch 4 to pass checkstyle.
Any suggestions on whose vote we should ask for?

I'm pinging [~atm] [~rkanter] [~cnauroth] [~lmccay]: dear Kerberos people, 
could you please review? Thanks in advance!

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch
>
>
> The UGI has a background thread to renew the TGT. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something goes temporarily wrong and results in an IOE, no further renewal 
> will be attempted even after the problem recovers, and the client will 
> eventually fail to authenticate. We should retry on a best-effort basis until 
> the TGT expires, in the hope that the error recovers before then.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13311) S3A shell entry point to support commands specific to S3A.

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507656#comment-15507656
 ] 

Steve Loughran commented on HADOOP-13311:
-

+ add a purge-multipart-transfers command

> S3A shell entry point to support commands specific to S3A.
> --
>
> Key: HADOOP-13311
> URL: https://issues.apache.org/jira/browse/HADOOP-13311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> Create a new {{s3a}} shell entry point.  This can support diagnostic and 
> administrative commands that are specific to S3A and wouldn't make sense to 
> group under existing scripts like {{hadoop}} or {{hdfs}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2016-09-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507623#comment-15507623
 ] 

Andrew Wang commented on HADOOP-12928:
--

Also as an FYI, I see that ZK 3.4.9 has been released, so maybe we can do the 
bumps that Tsuyoshi proposed.

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12928-branch-2.00.patch, HADOOP-12928.01.patch, 
> HADOOP-12928.02.patch, HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507590#comment-15507590
 ] 

Steve Loughran commented on HADOOP-13628:
-

I like this too. Does it return the fully evaluated config?

* JSON examples in the javadoc should be wrapped in {{<pre>}} blocks
* Unknown properties should return 404. This is a REST API, not a SOAP one 
where the result is hidden in the body.
* Could we have a text/plain variant which returns just the text value? I could 
have some fun there. For example, it could be pulled straight into Google 
Sheets.
* We need a functional test for this. Presumably a MiniHDFS cluster serves up 
this data and would respond to a few Jersey requests?
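
A rough sketch of how the servlet side could honour the 404 point; variable 
names are hypothetical and this is not the actual patch:
{code}
// inside a servlet doGet(); "conf" is the daemon's Configuration instance
String name = request.getParameter("name");
if (name != null && conf.get(name) == null) {
  // unknown property: a REST-style 404 rather than an error hidden in the body
  response.sendError(HttpServletResponse.SC_NOT_FOUND,
      "Property " + name + " not found");
  return;
}
{code}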

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch, HADOOP-13628.02.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2016-09-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12928:
-
Target Version/s:   (was: 3.0.0-alpha2)
   Fix Version/s: 3.0.0-alpha1

Looking at git log, it looks like this was in fact included in 3.0.0-alpha1 but 
the fix version wasn't set. That release already went out so the changelog will 
be inaccurate, but we can at least correct it here.

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12928-branch-2.00.patch, HADOOP-12928.01.patch, 
> HADOOP-12928.02.patch, HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507568#comment-15507568
 ] 

Steve Loughran commented on HADOOP-12756:
-

if it's in use elsewhere, we may as well stick with it.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, 
> HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, 
> HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, 
> HADOOP-12756.009.patch, HADOOP-12756.010.patch, HCFS User manual.md, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between users’ applications and data storage, as has 
> been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13262) set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem

2016-09-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13262:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem
> 
>
> Key: HADOOP-13262
> URL: https://issues.apache.org/jira/browse/HADOOP-13262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
> Environment: parallel test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13262-001.patch
>
>
> HADOOP-13139 patch 003 test runs show that the multipart tests are failing on 
> parallel runs. The cause of this is that the FS init logic in 
> {{S3ATestUtils.createTestFileSystem}} sets the expiry to 0: any in-progress 
> multipart uploads will fail. 
> Setting a 5-minute expiry will clean up after old runs, but not break anything 
> in progress.
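
A hedged sketch of the proposed behaviour against the AWS SDK's 
{{TransferManager}}; {{transfers}} and {{bucket}} are illustrative names:
{code}
// purge only multipart uploads older than five minutes: stale uploads
// from earlier runs are cleaned up, concurrent runs are left alone
long purgeAgeMs = 5 * 60 * 1000L;
transfers.abortMultipartUploads(bucket,
    new java.util.Date(System.currentTimeMillis() - purgeAgeMs));
{code}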



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13262) set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507565#comment-15507565
 ] 

Steve Loughran commented on HADOOP-13262:
-

I'd forgotten about this patch; in HADOOP-13560 I've done the same thing at a 
larger scale, because the big multipart tests can be run in parallel with 
others. I'll close this as obsolete.

> set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem
> 
>
> Key: HADOOP-13262
> URL: https://issues.apache.org/jira/browse/HADOOP-13262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
> Environment: parallel test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13262-001.patch
>
>
> HADOOP-13139 patch 003 test runs show that the multipart tests are failing on 
> parallel runs. The cause of this is that the FS init logic in 
> {{S3ATestUtils.createTestFileSystem}} sets the expiry to 0: any in-progress 
> multipart uploads will fail. 
> Setting a 5-minute expiry will clean up after old runs, but not break anything 
> in progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13601) Typo in a log messages

2016-09-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507545#comment-15507545
 ] 

Sean Busbey commented on HADOOP-13601:
--

+1, the change looks good to me. We don't have tests for logging, so I wouldn't 
expect a test change. The timed-out test looks unrelated.

> Typo in a log messages
> --
>
> Key: HADOOP-13601
> URL: https://issues.apache.org/jira/browse/HADOOP-13601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13601.001.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in log 
> messages are one of the recurring bugs. Therefore, I made a tool to find typos 
> in log statements. During my experiments, I managed to find the following 
> typos in Hadoop Common:
> in file 
> /hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
>  LOG.info("Token cancelation requested for identifier: "+id), 
> cancelation should be cancellation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13630) split up AWS index.md

2016-09-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13630:
---

 Summary: split up AWS index.md
 Key: HADOOP-13630
 URL: https://issues.apache.org/jira/browse/HADOOP-13630
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran


The AWS index.md file is too big; too much of it is written by developers as we 
go along, not for end users.

I propose splitting it into separate docs:

* Intro
* S3A
* S3N
* S3 (branch-2 only, obviously)
* testing
* maybe in future: something on effective coding against object stores,
though that could go top-level, as it applies to all of them


I propose waiting for HADOOP-13560 to be in, as that changes the docs.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Attachment: HADOOP-13560-branch-2-003.patch

Patch 003

* (pooled) ByteBuffer is now an option for buffering output; this should offer 
in-memory performance with less risk of heap overflow. But it can still use 
enough memory that your YARN-hosted JVMs get killed, so it is still only to be 
used with care
* replaced S3AFastOutputStream. The option is deprecated and downgraded to 
buffered + file.
* pulled all fast output stream tests except a small one that verifies the 
options still work
* I've not deleted the S3AFastOutputStream class yet; it's there for comparing 
new vs. old
* javadocs in more places
* core-default.xml descriptions improved
* index.md updated with new values, more text
* tests now pass the scale-test Maven options down to the sequential test runs.

Test endpoint: S3 Ireland

I think this code is ready for review/testing by others. Could anyone doing 
this start with the documentation to see if it explains things, then go into 
the code? Ideally I'd like some testing of large distcps with the file 
buffering (to verify it scales) and the ByteBuffer (to see how it fails, so we 
can add it to the troubleshooting docs).
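
For anyone picking this up, a hedged illustration (not code from the patch) of 
why pooled direct buffers ease heap pressure yet still need care:
{code}
// direct buffers live outside the Java heap: a large block adds no GC
// pressure or heap-OOM risk, but it still consumes process memory, so an
// over-generous block size can push a YARN container past its limit
ByteBuffer block = ByteBuffer.allocateDirect(blockSize);
block.put(data, 0, len);   // stage the block's bytes off-heap
block.flip();              // ready for the multipart part upload
// ...once the part upload completes, return the buffer to the pool
{code}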

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.

2016-09-20 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507496#comment-15507496
 ] 

Aaron Fabbri commented on HADOOP-13452:
---

Thanks for the review, [~liuml07].

{quote}Is the LruHashMap always supposed to access the entries via mruGet()? 
{quote}

I was trying to have the flexibility of allowing the caller to call plain 
get() when it doesn't need to mark the entry as most-recently-used.  I only use 
the plain get() once, and you could argue it could be an mruGet() instead.

{quote}And for fixed size, we can override the removeEldestEntry() 
method.{quote}
I do that. This code looks the same as mine, except I do not override the load 
factor to 1.0 (which saves space at the expense of runtime).

{quote}Basically we can simply make operating methods synchronized instead of 
synchronized blocks? This should improve readability if no obvious performance 
loss.{quote}

I can do that for the functions where the critical section is the whole method. 
 I like the explicit block style (often I don't notice synchronized in 
signatures for some reason), but I'm happy to change it.  I don't think it is 
addressed in the style guide.

{quote}
Or can we simply use the off-the-shelf org.apache.commons.collections.map.LRUMap
{quote}
I didn't know that existed.  I'm happy to use it.  One concern is that it 
subclasses a deprecated linked-hash-map implementation that is not part of the 
Java API.


> S3Guard: Implement access policy for intra-client consistency with in-memory 
> metadata store.
> 
>
> Key: HADOOP-13452
> URL: https://issues.apache.org/jira/browse/HADOOP-13452
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13452.001.patch
>
>
> Implement an S3A access policy based on an in-memory metadata store.  This 
> can provide consistency within the same client without needing to integrate 
> with an external system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-09-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Open  (was: Patch Available)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13601) Typo in a log messages

2016-09-20 Thread Mehran Hassani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507483#comment-15507483
 ] 

Mehran Hassani commented on HADOOP-13601:
-

@Yiqun Lin, should I change something in my patch, since it's still not committed?

> Typo in a log messages
> --
>
> Key: HADOOP-13601
> URL: https://issues.apache.org/jira/browse/HADOOP-13601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13601.001.patch
>
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in log 
> messages are one of the recurring bugs. Therefore, I made a tool to find typos 
> in log statements. During my experiments, I managed to find the following 
> typos in Hadoop Common:
> in file 
> /hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java,
>  LOG.info("Token cancelation requested for identifier: "+id), 
> cancelation should be cancellation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12928) Update netty to 3.10.5.Final to sync with zookeeper

2016-09-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12928:
-
Target Version/s: 3.0.0-alpha2

> Update netty to 3.10.5.Final to sync with zookeeper
> ---
>
> Key: HADOOP-12928
> URL: https://issues.apache.org/jira/browse/HADOOP-12928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-12928-branch-2.00.patch, HADOOP-12928.01.patch, 
> HADOOP-12928.02.patch, HDFS-12928.00.patch
>
>
> Update netty to 3.7.1.Final because hadoop-client 2.7.2 depends on zookeeper 
> 3.4.6 which depends on netty 3.7.x. Related to HADOOP-12927
> Pull request: https://github.com/apache/hadoop/pull/85



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507421#comment-15507421
 ] 

Mingliang Liu commented on HADOOP-13628:


I like the idea. +1 for the proposal.

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch, HADOOP-13628.02.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.

2016-09-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507414#comment-15507414
 ] 

Mingliang Liu commented on HADOOP-13452:


Thanks for working on this, [~fabbri].

# Is the {{LruHashMap}} always supposed to access the entries via {{mruGet()}}? 
If so, I think a straightforward approach to implementing an LRU cache is to 
use the _access order_ of a {{LinkedHashMap}}. And for a fixed size, we can 
override the {{removeEldestEntry()}} method.
{code}
  class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
      // accessOrder=true: iteration order runs least- to most-recently used
      super(maxEntries + 1, 1.0f, true);
      this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
      // evict the least-recently-used entry once the cap is exceeded
      return size() > maxEntries;
    }
  }
{code}
Or can we simply use the off-the-shelf 
{{org.apache.commons.collections.map.LRUMap}}? (A usage sketch follows after 
this list.)
# Basically, can we simply make the operating methods synchronized instead of 
using synchronized blocks? This should improve readability if there is no 
obvious performance loss.
# {quote}Would you rather get this v2 patch in, or wait until move() 
implementation is included?{quote} I'm fine either way as well. I'm not blocked 
anyway.
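
For reference, a hedged usage sketch of the off-the-shelf option from point 1 
(commons-collections 3.x, whose {{LRUMap}} predates generics); the keys and 
values here are purely illustrative:
{code}
// org.apache.commons.collections.map.LRUMap, fixed capacity of 1000
LRUMap cache = new LRUMap(1000);
cache.put("fs.defaultFS", "hdfs://nn:8020");
Object value = cache.get("fs.defaultFS");  // get() also refreshes recency;
                                           // overflow evicts the LRU entry
{code}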

> S3Guard: Implement access policy for intra-client consistency with in-memory 
> metadata store.
> 
>
> Key: HADOOP-13452
> URL: https://issues.apache.org/jira/browse/HADOOP-13452
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13452.001.patch
>
>
> Implement an S3A access policy based on an in-memory metadata store.  This 
> can provide consistency within the same client without needing to integrate 
> with an external system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507388#comment-15507388
 ] 

Hadoop QA commented on HADOOP-13628:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 283 unchanged - 2 fixed = 284 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-common-project_hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829414/HADOOP-13628.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 55643b223977 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6d1d74 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10551/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10551/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10551/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10551/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10551/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT |

[jira] [Comment Edited] (HADOOP-10075) Update jetty dependency to version 9

2016-09-20 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507314#comment-15507314
 ] 

Robert Kanter edited comment on HADOOP-10075 at 9/20/16 6:11 PM:
-

The 004 patch does the following:
- Fixed unit tests
- Fixed deprecated warnings
- Removed some missed Jetty 6 usage
- Removed {{js.gz}} and {{css.gz}} files, so only the original files are in the 
source codebase.  A [maven plugin|https://github.com/phuonghuynh/compressor] is 
used to gzip these files into the target directories during build time.  We now 
do this to all {{js}} and {{css}} files, unlike before, where we had some that 
were gzipped and some that were not.

Note that I used {{--binary}} when generating the patch because it deletes some 
gzip files.


was (Author: rkanter):
The 004 patch does the following:
- Fixed unit tests
- Fixed deprecated warnings
- Removed some missed Jetty 6 usage
- Removed {{js.gz}} and {{css.gz}} files, so only the original files are in the 
source codebase.  A [maven plugin|https://github.com/phuonghuynh/compressor] is 
used to gzip these files into the target directories during build time.

Note that I used {{--binary}} when generating the patch because it deletes some 
gzip files.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-09-20 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
Attachment: HADOOP-10075.004.patch

The 004 patch does the following:
- Fixed unit tests
- Fixed deprecated warnings
- Removed some missed Jetty 6 usage
- Removed {{js.gz}} and {{css.gz}} files, so only the original files are in the 
source codebase.  A [maven plugin|https://github.com/phuonghuynh/compressor] is 
used to gzip these files into the target directories during build time.

Note that I used {{--binary}} when generating the patch because it deletes some 
gzip files.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13629) Make RemoteEditLogManifest.committedTxnId optional in Protocol Buffers

2016-09-20 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-13629:
--

 Summary: Make RemoteEditLogManifest.committedTxnId optional in 
Protocol Buffers
 Key: HADOOP-13629
 URL: https://issues.apache.org/jira/browse/HADOOP-13629
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


HDFS-10519 introduced a new field in the RemoteEditLogManifest message. It can 
be made optional to improve wire-compatibility with previous versions.
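
A hedged sketch of the consumer-side effect of making the field optional; the 
accessor names follow standard protobuf-generated Java conventions, and the 
sentinel value is illustrative:
{code}
// readers must guard access so manifests from older peers, which never
// set the field, still parse cleanly
long committedTxnId = manifest.hasCommittedTxnId()
    ? manifest.getCommittedTxnId()
    : -1;   // hypothetical sentinel: "not sent by the remote side"
{code}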



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13628:
-
Attachment: HADOOP-13628.02.patch

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch, HADOOP-13628.02.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13367) Support more types of Store in S3Native File System

2016-09-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13367.
-
   Resolution: Not A Problem
Fix Version/s: 2.8.0

Resolving, as the Aliyun work addresses the specific need.

> Support more types of Store in S3Native File System
> ---
>
> Key: HADOOP-13367
> URL: https://issues.apache.org/jira/browse/HADOOP-13367
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: liu chang
>Priority: Minor
> Fix For: 2.8.0
>
>
> There are a lot of object storage services whose protocol is similar to S3. 
> We could add more types of NativeFileSystemStore to support those services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507155#comment-15507155
 ] 

Hadoop QA commented on HADOOP-13628:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
49s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 49s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 4 new + 284 unchanged - 2 fixed = 288 total (was 286) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-common-project_hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829411/HADOOP-13628.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cf27e95f7cd8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6d1d74 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10550/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10550/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10550/artifact/patchprocess/patch-compile-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10550/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10550/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | 

[jira] [Updated] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13628:
-
Status: Patch Available  (was: Open)

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507091#comment-15507091
 ] 

Weiwei Yang commented on HADOOP-13628:
--

Uploaded the v1 patch; it only contains support for JSON-format responses. I am 
open to comments on whether it looks OK or not; then I can move forward to 
finish the XML part. Thanks a lot.

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13628:
-
Attachment: HADOOP-13628.01.patch

> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13628.01.patch
>
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13628:
-
Description: 
Currently we can use the REST API to retrieve all configuration properties per 
daemon, but we are unable to get a specific property by name. This causes extra 
parsing work at the client side when dealing with Hadoop configurations, and it 
is also quite an overhead to send the whole configuration in an HTTP response 
over the network. Propose to support a {{name}} parameter in the HTTP request, 
by issuing

{code}
curl --header "Accept:application/json" 
http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
{code}

get output
{code}
{"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
{code}

This change is fully backwards compatible.

  was:
Currently we can use the REST API to retrieve all configuration properties per 
daemon, but we are unable to get a specific property by name. This causes extra 
parsing work at the client side when dealing with Hadoop configurations, and it 
is also quite an overhead to send the whole configuration in an HTTP response 
over the network. Propose to support a {{name}} parameter in the HTTP request,

{code}
curl --header "Accept:application/json" 
http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services

{"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
{code}

This change is fully backwards compatible.


> Support to retrieve specific property from configuration via REST API
> -
>
> Key: HADOOP-13628
> URL: https://issues.apache.org/jira/browse/HADOOP-13628
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> Currently we can use the REST API to retrieve all configuration properties per 
> daemon, but we are unable to get a specific property by name. This causes extra 
> parsing work at the client side when dealing with Hadoop configurations, and it 
> is also quite an overhead to send the whole configuration in an HTTP response 
> over the network. Propose to support a {{name}} parameter in the HTTP request, 
> by issuing
> {code}
> curl --header "Accept:application/json" 
> http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services
> {code}
> get output
> {code}
> {"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
> {code}
> This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13628) Support to retrieve specific property from configuration via REST API

2016-09-20 Thread Weiwei Yang (JIRA)
Weiwei Yang created HADOOP-13628:


 Summary: Support to retrieve specific property from configuration 
via REST API
 Key: HADOOP-13628
 URL: https://issues.apache.org/jira/browse/HADOOP-13628
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.7.3
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Currently we can use the REST API to retrieve all configuration properties per 
daemon, but we are unable to get a specific property by name. This causes extra 
parsing work at the client side when dealing with Hadoop configurations, and it 
is also quite an overhead to send the whole configuration in an HTTP response 
over the network. Propose to support a {{name}} parameter in the HTTP request,

{code}
curl --header "Accept:application/json" 
http://${RM_HOST}/conf?name=yarn.nodemanager.aux-services

{"property"{"key":"yarn.resourcemanager.hostname","value":"${RM_HOST}","isFinal":false,"resource":"yarn-site.xml"}}
{code}

This change is fully backwards compatible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13607) Specify and test contract for FileSystem#close.

2016-09-20 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506956#comment-15506956
 ] 

Chris Nauroth commented on HADOOP-13607:


Thanks, I missed this, because I only looked at the Abstract* contract test 
suites.  In that case, the scope of this issue is just {{FileSystem#close}}.

> Specify and test contract for FileSystem#close.
> ---
>
> Key: HADOOP-13607
> URL: https://issues.apache.org/jira/browse/HADOOP-13607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> This issue proposes to enhance the {{FileSystem}} specification by describing 
> the expected semantics of {{FileSystem#close}} and adding corresponding 
> contract tests.  Notable aspects are that the method must be idempotent as 
> dictated by {{java.io.Closeable}} and closing also interacts with the 
> delete-on-exit feature.
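
A hedged sketch of one such contract test; the {{getFileSystem()}} scaffolding 
is assumed from the style of the existing Abstract* suites, not quoted from 
them:
{code}
@Test
public void testCloseIsIdempotent() throws Exception {
  FileSystem fs = getFileSystem();
  fs.close();
  // a second close must be a no-op, as java.io.Closeable requires
  fs.close();
}
{code}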



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12834) Credentials to include text of inner IOE when rethrowing wrapped

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506895#comment-15506895
 ] 

Steve Loughran commented on HADOOP-12834:
-

Looking at the code again, I think FileNotFoundExceptions could be rethrown 
directly as FNFEs, rather than caught and wrapped. Filesystems all include the 
filename here, and the exception class is more meaningful. 

The class generally looks ready for migration to try-with-resources, SLF4J, ...

> Credentials to include text of inner IOE when rethrowing wrapped
> 
>
> Key: HADOOP-12834
> URL: https://issues.apache.org/jira/browse/HADOOP-12834
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Currently Credentials read/write methods catch IOEs and rethrow with the 
> filename (good), only they don't include the text of the caught exception in 
> the new string message (bad) ... you need to delve into the stack traces to 
> find the cause.
> fix: include the {{toString()}} value of the caught IOE in the new exception



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506891#comment-15506891
 ] 

Steve Loughran commented on HADOOP-13627:
-

see also HADOOP-12834; Credentials could benefit there. 

> Have an explicit KerberosAuthException for UGI to throw, text from public 
> constants
> ---
>
> Key: HADOOP-13627
> URL: https://issues.apache.org/jira/browse/HADOOP-13627
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Xiao Chen
>
> UGI creates simple IOEs on failure, making it impossible to catch them, 
> ignore them, have smart retry logic around them, etc.
> # Have an explicit exception like {{KerberosAuthException extends 
> IOException}} to raise instead. We can't use {{AuthenticationException}} as 
> that doesn't extend IOE.
> # move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
> the new one
> # review exceptions raised and consider if they can provide more information
> # for the strings that get created, put them as public static constants, so 
> that tests can look for them explicitly —tests that don't break if the text 
> is changed.
> # maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no 
> login principals were found (it throws IOEs on login failures, after all)
> # keep KDiag in sync with this



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506881#comment-15506881
 ] 

Steve Loughran commented on HADOOP-13590:
-

LGTM, though other Kerberos people need to look at the code too... this is so 
sensitive we almost need multiple votes on it.

Can you add an accessor to stop that checkstyle warning?

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, even if it 
> recovered no renewal will be done and client will eventually fail to 
> authenticate. We should retry with our best effort, until tgt expires, in the 
> hope that the error recovers before that.
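
In outline, the renewal loop would change from terminate-on-first-IOE to 
something like this sketch (the backoff numbers and the {{Renewer}} callback 
are illustrative, not the actual UGI code):

{code}
import java.io.IOException;

final class RenewalRetrySketch {
  interface Renewer { void renew() throws IOException; }

  /** Retries renewal until the TGT end time passes, instead of giving up on one IOE. */
  static void renewUntilExpiry(Renewer renewer, long tgtEndTimeMillis)
      throws InterruptedException {
    long backoffMillis = 60_000; // illustrative initial retry interval
    while (System.currentTimeMillis() < tgtEndTimeMillis) {
      try {
        renewer.renew();
        return; // recovered: normal renewal scheduling resumes
      } catch (IOException ioe) {
        long remaining = tgtEndTimeMillis - System.currentTimeMillis();
        if (remaining <= 0) break; // ticket expired; nothing left to try
        Thread.sleep(Math.min(backoffMillis, remaining));
        backoffMillis = Math.min(backoffMillis * 2, 600_000); // cap at 10 minutes
      }
    }
  }
}
{code}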



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13604) Abort retry loop when RPC has an unrecoverable Auth error

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506865#comment-15506865
 ] 

Steve Loughran commented on HADOOP-13604:
-

Thank you for volunteering: created HADOOP-13627 for you.

Bear in mind we are all scared of the code and changes breaking things; keep 
the diffs minimal, and don't change the text messages we have today. Not 
because they are good, but because they are searchable in existing JIRAs and 
Stack Overflow topics.

> Abort retry loop when RPC has an unrecoverable Auth error
> -
>
> Key: HADOOP-13604
> URL: https://issues.apache.org/jira/browse/HADOOP-13604
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Henry Robinson
>Assignee: Xiao Chen
>
> I've seen an issue where, after an RPC client hit an error obtaining a TGT 
> from Kerberos, the RPC client continues to retry even though there's no 
> chance of success (the no login window is set to 600s).
> In this particular deployment, the client retries 15 times at 15s intervals, 
> leading to a delay of more than three minutes before the failure is bubbled 
> up to the client when the RPC ultimately fails.
> Unrecoverable errors (like failures to login to Kerberos) should lead to fast 
> aborts of the retry loop.
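
Conceptually the fix is a retry loop that distinguishes recoverable from 
unrecoverable failures, roughly as below. {{UnrecoverableAuthException}} is an 
illustrative stand-in; in practice this would plug into the existing 
retry-policy machinery (and ties in with the typed exception proposed in 
HADOOP-13627):

{code}
import java.io.IOException;
import java.util.concurrent.Callable;

final class FailFastRetrySketch {
  /** Illustrative marker for errors that cannot succeed within the retry window. */
  static class UnrecoverableAuthException extends IOException {
    UnrecoverableAuthException(String msg) { super(msg); }
  }

  static <T> T invoke(Callable<T> call, int maxRetries, long intervalMillis)
      throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return call.call();
      } catch (UnrecoverableAuthException fatal) {
        throw fatal; // abort immediately: retrying cannot help
      } catch (IOException transientError) {
        if (attempt >= maxRetries) throw transientError;
        Thread.sleep(intervalMillis); // e.g. 15 retries at 15s = 3+ minutes of delay
      }
    }
  }
}
{code}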



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13627) Have an explicit KerberosAuthException for UGI to throw, text from public constants

2016-09-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13627:
---

 Summary: Have an explicit KerberosAuthException for UGI to throw, 
text from public constants
 Key: HADOOP-13627
 URL: https://issues.apache.org/jira/browse/HADOOP-13627
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.7.3
Reporter: Steve Loughran
Assignee: Xiao Chen


UGI creates simple IOEs on failure, making it impossible to catch them, ignore 
them, have smart retry logic around them, etc.

# Have an explicit exception like {{KerberosAuthException extends IOException}} 
to raise instead. We can't use {{AuthenticationException}} as that doesn't 
extend IOE.
# move {{UGI}}, {{SecurityUtil}} and things related off simple IOEs and into 
the new one
# review exceptions raised and consider if they can provide more information
# for the strings that get created, put them as public static constants, so 
that tests can look for them explicitly —tests that don't break if the text is 
changed.
# maybe, {{getUGIFromTicketCache}} to throw this rather than an RTE if no login 
principals were found (it throws IOEs on login failures, after all)
# keep KDiag in sync with this
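
A minimal sketch of the shape this could take (names and constants are 
illustrative):

{code}
import java.io.IOException;

/** Sketch: an IOE subclass callers can catch, ignore, or build retry logic around. */
public class KerberosAuthException extends IOException {
  /** Public constants so tests can match on message text, not hardcoded strings. */
  public static final String LOGIN_FAILURE = "Login failure";

  public KerberosAuthException(String msg) { super(msg); }
  public KerberosAuthException(String msg, Throwable cause) { super(msg, cause); }
}
{code}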



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13607) Specify and test contract for FileSystem#close.

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506827#comment-15506827
 ] 

Steve Loughran commented on HADOOP-13607:
-

There are a couple of tests in {{FileSystemContractBaseTest}} which do this: 
{{testInputStreamClosedTwice}} and {{testOutputStreamClosedTwice}}.

> Specify and test contract for FileSystem#close.
> ---
>
> Key: HADOOP-13607
> URL: https://issues.apache.org/jira/browse/HADOOP-13607
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Chris Nauroth
>
> This issue proposes to enhance the {{FileSystem}} specification by describing 
> the expected semantics of {{FileSystem#close}} and adding corresponding 
> contract tests.  Notable aspects are that the method must be idempotent as 
> dictated by {{java.io.Closeable}} and closing also interacts with the 
> delete-on-exit feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506808#comment-15506808
 ] 

Genmao Yu edited comment on HADOOP-12756 at 9/20/16 3:05 PM:
-

[~drankye] +1 to your suggestion, but the truth is many developers are familiar 
with 'oss://' in Aliyun E-MapReduce, and Aliyun OSS itself uses 'oss://' in 
many places, like https://help.aliyun.com/document_detail/32185.html. So, I 
think it is better to continue to use 'oss://'. 


was (Author: unclegen):
[~drankye] +1 to your suggestion, but the truth is many developers are familiar 
with ‘oss’ in Aliyun E-MapReduce, and Aliyun OSS itself is using 'oss://' in 
many places, like https://help.aliyun.com/document_detail/32185.html. So, i 
think it is better to continue to use 'oss'. 

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, 
> HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, 
> HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, 
> HADOOP-12756.009.patch, HADOOP-12756.010.patch, HCFS User manual.md, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506808#comment-15506808
 ] 

Genmao Yu commented on HADOOP-12756:


[~drankye] +1 to your suggestion, but the truth is many developers are familiar 
with 'oss' in Aliyun E-MapReduce, and Aliyun OSS itself uses 'oss://' in many 
places, like https://help.aliyun.com/document_detail/32185.html. So, I think it 
is better to continue to use 'oss'. 

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, 
> HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, 
> HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, 
> HADOOP-12756.009.patch, HADOOP-12756.010.patch, HCFS User manual.md, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-09-20 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506773#comment-15506773
 ] 

Daryn Sharp commented on HADOOP-13081:
--

Just noticed this due to a conflict. This is very broken and should be reverted.

Neither the Hadoop Credentials copy constructor nor the subject's cred sets are 
thread-safe.

The subject's creds cannot be iterated w/o synchronizing on the set. Best case 
you'll get a CME (ConcurrentModificationException). Worst case, a copy of the 
set in an inconsistent state. For instance, the GSSAPI adds and removes service 
tickets from the subject. A snapshot at the wrong time will have a stale 
service ticket. Reusing the service ticket in the new UGI will cause replay 
attack exceptions. Or, if another thread is attempting to relogin, the subject 
in the UGI being copied will not contain any Kerberos creds.

The subject's cred sets aren't actually sets. They are backed by a linked list. 
GSSAPI often relies on the ordering of tickets. Cloning into a hash set loses 
the implied ordering. Crazy exceptions occur when the client starts requesting 
tickets from the KDC with a TGS instead of a TGT. Other IPC bugs cause the 
process to be unable to authenticate until a restart (e.g. we ran into this 
with Oozie). I have an internal patch I need to push out.

Relogin of a cloned UGI will wipe out the Kerberos credentials in the original 
UGI. The Hadoop User principal contains the login context, which references the 
original subject.
–
Perhaps I missed it, but what is a concrete use case? The description and the 
javadoc don't make sense to me: "... allowing multiple users with different 
tokens to reuse the UGI without re-authenticating with Kerberos". Using tokens 
makes Kerberos irrelevant.

If the intention is mixing a UGI with Kerberos creds for user1 and tokens for 
user2, that's playing with fire, especially if user1 is a privileged user. The 
UGI should only contain user2 tokens for allowed services, otherwise there's 
the security risk of being user1 to some services. Proxy users exist for this 
reason.

Why isn't UGI.createRemoteUser(username) plus ugi.addToken(token) sufficient if 
no further Kerberos auth is intended, or a proxy user that contains the 
intended tokens if you need a mix of token and Kerberos auth?
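
For reference, even the lock-safe way of snapshotting a subject's credentials 
only fixes the CME, not the staleness or ordering problems; roughly (a sketch 
of the point being made, not a proposed fix):

{code}
import java.util.HashSet;
import java.util.Set;
import javax.security.auth.Subject;

final class SubjectCopySketch {
  /** Copies private creds without a ConcurrentModificationException. */
  static Set<Object> snapshotPrivateCreds(Subject subject) {
    Set<Object> creds = subject.getPrivateCredentials();
    synchronized (creds) {        // the javadoc requires holding the set's own lock
      return new HashSet<>(creds);
    }
  }
  // Even so, the snapshot can hold a stale service ticket, and a HashSet
  // discards the linked-list ordering that GSSAPI relies on.
}
{code}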

> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13262) set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506580#comment-15506580
 ] 

Hadoop QA commented on HADOOP-13262:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13262 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809870/HADOOP-13262-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e27ae2a7ca52 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2b66d9e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10549/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10549/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem
> 
>
> Key: HADOOP-13262
> URL: https://issues.apache.org/jira/browse/HADOOP-13262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
> Environment: parallel test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13262-001.patch
>
>
> HADOOP-13139 patch 003 test runs show that the multipart tests are failing 
> on parallel runs.

[jira] [Commented] (HADOOP-13262) set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem

2016-09-20 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506494#comment-15506494
 ] 

Thomas Demoor commented on HADOOP-13262:


+1

> set multipart delete timeout to 5 * 60s in S3ATestUtils.createTestFileSystem
> 
>
> Key: HADOOP-13262
> URL: https://issues.apache.org/jira/browse/HADOOP-13262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
> Environment: parallel test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13262-001.patch
>
>
> HADOOP-13139 patch 003 test runs show that the multipart tests are failing on 
> parallel runs. The cause of this is that the FS init logic in 
> {{S3ATestUtils.createTestFileSystem}} sets the expiry to 0: any in-progress 
> multipart uploads will fail. 
> Setting a 5 minute expiry will clean up uploads from old runs, but not break 
> anything in progress.
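
The change amounts to the purge settings used when creating the test 
filesystem, along these lines (a sketch; the key names follow the 
{{fs.s3a.multipart.purge}} configuration options):

{code}
import org.apache.hadoop.conf.Configuration;

final class TestFsPurgeSketch {
  static Configuration testConf() {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.multipart.purge", true);
    // 5 * 60s: uploads left over from old runs get purged, while uploads
    // belonging to concurrently running (parallel) tests are left alone
    conf.setLong("fs.s3a.multipart.purge.age", 5 * 60);
    return conf;
  }
}
{code}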



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2016-09-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506465#comment-15506465
 ] 

Steve Loughran commented on HADOOP-13600:
-

If we do this using the same thread pool as for block uploads, then some 
priority queuing should be used for the renames, so that they get priority over 
uploads, the latter being much slower.
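
A sketch of the parallel-copy idea; {{copyFile}} stands in for the per-object 
COPY request, and the pool would be the shared (priority-aware) executor 
discussed above:

{code}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

final class ParallelRenameSketch {
  interface Copier { void copyFile(String srcKey, String dstKey); }

  /** Launches all per-file copies at once; duration ~ the longest single copy. */
  static void copyDirectory(Copier copier, List<String> srcKeys,
      String srcPrefix, String dstPrefix, ExecutorService pool) {
    CompletableFuture<?>[] copies = srcKeys.stream()
        .map(key -> CompletableFuture.runAsync(
            () -> copier.copyFile(key, dstPrefix + key.substring(srcPrefix.length())),
            pool))
        .toArray(CompletableFuture[]::new);
    CompletableFuture.allOf(copies).join(); // rename completes once every copy has
  }
}
{code}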

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest single 
> copy. For a directory with many files, this will be significant.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13584) Merge HADOOP-12756 branch to latest trunk

2016-09-20 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13584:
---
Attachment: HADOOP-13584.003.patch

[~drankye] add HADOOP-13624

> Merge HADOOP-12756 branch to latest trunk
> -
>
> Key: HADOOP-13584
> URL: https://issues.apache.org/jira/browse/HADOOP-13584
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13584.001.patch, HADOOP-13584.002.patch, 
> HADOOP-13584.003.patch
>
>
> We have finished a round of improvement over Hadoop-12756 branch, which 
> intends to incorporate Aliyun OSS support in Hadoop. This feature provides 
> basic support for data access to Aliyun OSS from Hadoop applications.
> In the implementation, we follow the style of the S3 support in Hadoop. We 
> also provide FileSystem contract tests against a real Aliyun OSS environment, 
> which can be enabled/disabled by simple configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-09-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505892#comment-15505892
 ] 

Kai Zheng commented on HADOOP-12756:


Hi [~shimingfei], [~uncleGen],

The scheme for this new file system is still {{oss://}}, but as mentioned in 
the very early discussion, {{oss}} is too generic and can mean {{open source 
software}}, {{object store service}}, etc., as [~ste...@apache.org] and 
[~hitliuyi] pointed out. Could we change it to a more specific one? Maybe 
something like {{alioss}}? Hope it won't be too late. Thanks.

Also note the doc needs to be updated.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0, HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, 
> HADOOP-12756.003.patch, HADOOP-12756.004.patch, HADOOP-12756.005.patch, 
> HADOOP-12756.006.patch, HADOOP-12756.007.patch, HADOOP-12756.008.patch, 
> HADOOP-12756.009.patch, HADOOP-12756.010.patch, HCFS User manual.md, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between the user’s application and data storage, as 
> has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13624) Rename TestAliyunOSSContractDispCp

2016-09-20 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13624:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to the branch. Thanks [~uncleGen] for the contribution!

> Rename TestAliyunOSSContractDispCp
> --
>
> Key: HADOOP-13624
> URL: https://issues.apache.org/jira/browse/HADOOP-13624
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13624-HADOOP-12756.001.patch
>
>
> It should be TestAliyunOSSContractDistCp.java instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13624) Rename TestAliyunOSSContractDispCp

2016-09-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505712#comment-15505712
 ] 

Kai Zheng commented on HADOOP-13624:


+1 on the patch.

> Rename TestAliyunOSSContractDispCp
> --
>
> Key: HADOOP-13624
> URL: https://issues.apache.org/jira/browse/HADOOP-13624
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13624-HADOOP-12756.001.patch
>
>
> It should be TestAliyunOSSContractDistCp.java instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org