[jira] [Updated] (HADOOP-14627) Support MSI and DeviceCode token provider in ADLS

2017-08-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14627:

Summary: Support MSI and DeviceCode token provider in ADLS  (was: Support 
MSI and DeviceCode token provider)

> Support MSI and DeviceCode token provider in ADLS
> -------------------------------------------------
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch, HADOOP-14627.002.patch
>
>
> This change upgrades the Hadoop ADLS connector to enable new auth features 
> exposed by the ADLS Java SDK. Specifically:
> MSI tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure service. In the case of VMs, it can be used to give an identity to a 
> VM deployment. This simplifies managing Service Principals, since the 
> credentials no longer have to be kept in core-site files. During VM 
> deployment, the ARM (Azure Resource Manager) template is modified to enable 
> MSI; once deployed, the MSI extension runs a service on the VM that exposes a 
> token endpoint on http://localhost at a port specified in the template. The 
> SDK has a new TokenProvider that fetches the token from this local endpoint, 
> and this change exposes that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained through an interactive 
> login. The user is given a URL and a code to enter on the login screen, and 
> can complete the login from any device. The token obtained is issued in the 
> name of the user who logged in. Because of the interactive login involved, 
> this is not well suited to job scenarios, but it works for ad-hoc scenarios 
> such as running "hdfs dfs" commands.
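For context, enabling the MSI provider would presumably be a core-site.xml change along these lines. The value {{Msi}} and the {{fs.adl.oauth2.msi.port}} key are based on the patch under review and may change before commit, so treat them as assumptions:

```xml
<!-- Hedged sketch: provider type names (Msi, DeviceCode) and the MSI port
     key assume the patch attached to HADOOP-14627; verify against the
     committed documentation before relying on them. -->
<property>
  <name>fs.adl.oauth2.access.token.provider.type</name>
  <value>Msi</value>
</property>
<property>
  <!-- Port of the local MSI token endpoint configured in the ARM template. -->
  <name>fs.adl.oauth2.msi.port</name>
  <value>50342</value>
</property>
```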



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119410#comment-16119410
 ] 

Lantao Jin commented on HADOOP-14708:
-

Can the title be changed to "Allow client with KERBEROS_SSL auth method to 
negotiate to server in security mode"?

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---------------------------------------------------------------
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with the exception message 
> "fsck encountered internal errors!"
> FSCK uses FsckServlet to submit the RPC to the NameNode, and the servlet uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>       HttpServletRequest request, Configuration conf) throws IOException {
>     return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL,
>         true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails 
> to create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>     throws SaslException, IOException {
>   ...
>   case KERBEROS: {
>     if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
>         AuthMethod.KERBEROS) {
>       return null; // client isn't using kerberos
>     }
> {code}
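The failure mode in the snippet above can be sketched with simplified stand-ins for Hadoop's enums (this is not the real {{UserGroupInformation}} or {{SaslRpcClient}} code; the mapping of {{KERBEROS_SSL}} to no underlying {{AuthMethod}} is an assumption based on the report):

```java
// Hedged sketch: simplified stand-ins for Hadoop's AuthMethod and
// AuthenticationMethod, illustrating why KERBEROS_SSL yields no SaslClient.
public class SaslMismatchDemo {
    enum AuthMethod { SIMPLE, KERBEROS, TOKEN }

    enum AuthenticationMethod {
        SIMPLE(AuthMethod.SIMPLE),
        KERBEROS(AuthMethod.KERBEROS),
        KERBEROS_SSL(null); // no SASL mechanism attached (assumption)

        private final AuthMethod authMethod;
        AuthenticationMethod(AuthMethod m) { this.authMethod = m; }
        AuthMethod getAuthMethod() { return authMethod; }
    }

    // Mirrors the guard in SaslRpcClient#createSaslClient: only a real
    // KERBEROS auth method gets a client; everything else returns null.
    static String createSaslClient(AuthenticationMethod real) {
        if (real.getAuthMethod() != AuthMethod.KERBEROS) {
            return null; // client isn't using kerberos
        }
        return "GSSAPI-SaslClient";
    }

    public static void main(String[] args) {
        System.out.println("KERBEROS     -> "
                + createSaslClient(AuthenticationMethod.KERBEROS));
        System.out.println("KERBEROS_SSL -> "
                + createSaslClient(AuthenticationMethod.KERBEROS_SSL));
    }
}
```

Under this reading, the KERBEROS_SSL branch can never satisfy the KERBEROS check, so the SaslClient is always null for FsckServlet-initiated RPCs.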






[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0-M1

2017-08-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
Fix Version/s: 3.0.0-beta1

> Upgrade maven enforcer plugin to 3.0.0-M1
> -----------------------------------------
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.
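Picking up the milestone release is a one-line version bump using the plugin's standard coordinates (the surrounding `pluginManagement` element is omitted here; the exact placement in hadoop-project/pom.xml is an assumption):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <!-- 3.0.0-M1 contains the fix for MENFORCER-274 (Java 9 build 175+) -->
  <version>3.0.0-M1</version>
</plugin>
```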






[jira] [Updated] (HADOOP-14355) Update maven-war-plugin to 3.1.0

2017-08-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14355:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed. Thanks Steve!

> Update maven-war-plugin to 3.1.0
> --------------------------------
>
> Key: HADOOP-14355
> URL: https://issues.apache.org/jira/browse/HADOOP-14355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14335.testing.patch, HADOOP-14355.01.patch
>
>
> Due to MWAR-405, the build fails with Java 9. The issue is fixed in 
> maven-war-plugin 3.1.0.






[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0-M1

2017-08-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
Summary: Upgrade maven enforcer plugin to 3.0.0-M1  (was: Upgrade maven 
enforcer plugin to 3.0.0)

> Upgrade maven enforcer plugin to 3.0.0-M1
> -----------------------------------------
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk. Thank you, Steve!

> Upgrade maven enforcer plugin to 3.0.0
> --------------------------------------
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119352#comment-16119352
 ] 

Hadoop QA commented on HADOOP-14741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} root: The patch generated 0 new + 340 unchanged - 3 
fixed = 340 total (was 343) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880941/HADOOP-14741-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 1cb652635f87 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (HADOOP-14726) Remove FileStatus#isDir

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119327#comment-16119327
 ] 

Hadoop QA commented on HADOOP-14726:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
40s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
42s{color} | {color:green} root generated 0 new + 1355 unchanged - 22 fixed = 
1355 total (was 1377) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 10s{color} | {color:orange} root: The patch generated 1 new + 582 unchanged 
- 5 fixed = 583 total (was 587) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 12s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}101m 
59s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}294m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
|   | 

[jira] [Commented] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119321#comment-16119321
 ] 

Kai Zheng commented on HADOOP-14743:


+1 for the nice change. Thanks!

> CompositeGroupsMapping should not swallow exceptions
> ----------------------------------------------------
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.
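The direction of the fix is presumably to surface the failure instead of discarding it. A simplified stand-in sketch (not the actual patch; the provider interface and logging target are assumptions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hedged sketch of the "log instead of swallow" direction for
// CompositeGroupsMapping#getGroups; the committed patch may differ in detail.
public class CompositeLookupDemo {
    interface GroupMappingServiceProvider {
        List<String> getGroups(String user) throws Exception;
    }

    static List<String> getGroups(String user,
                                  List<GroupMappingServiceProvider> providers,
                                  boolean combined) {
        Set<String> groupSet = new LinkedHashSet<>();
        for (GroupMappingServiceProvider provider : providers) {
            List<String> groups = null;
            try {
                groups = provider.getGroups(user);
            } catch (Exception e) {
                // Key change: report the failure rather than discarding it,
                // then fall through to the next provider in the chain.
                System.err.println("Unable to get groups for user " + user
                        + " via " + provider + ": " + e);
            }
            if (groups != null && !groups.isEmpty()) {
                groupSet.addAll(groups);
                if (!combined) break;
            }
        }
        return new ArrayList<>(groupSet);
    }

    public static void main(String[] args) {
        GroupMappingServiceProvider failing =
                u -> { throw new Exception("LDAP down"); };
        GroupMappingServiceProvider working =
                u -> Arrays.asList("staff", "hdfs");
        // The failing provider is reported on stderr, and the lookup still
        // succeeds via the next provider.
        System.out.println(
                getGroups("alice", Arrays.asList(failing, working), true));
        // prints [staff, hdfs]
    }
}
```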






[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119300#comment-16119300
 ] 

Hadoop QA commented on HADOOP-12077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 53s{color} 
| {color:red} root generated 1 new + 1377 unchanged - 0 fixed = 1378 total (was 
1377) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} root: The patch generated 0 new + 153 unchanged - 11 
fixed = 153 total (was 164) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880927/HADOOP-12077.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 57190f6925a5 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4e1aa0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14741:
-
Attachment: HADOOP-14741-004.patch

Fixed unit tests.

> Refactor curator based ZooKeeper communication into common library
> -------------------------------------------------------------------
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.






[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119218#comment-16119218
 ] 

Hadoop QA commented on HADOOP-14741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 2 new + 321 unchanged 
- 3 fixed = 323 total (was 324) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 348 unchanged - 4 fixed = 348 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14741 |
| JIRA Patch URL | 

[jira] [Updated] (HADOOP-14598) Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory

2017-08-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14598:
-
Fix Version/s: 3.0.0-beta1

> Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory
> -----------------------------------------------------------
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14598) Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory

2017-08-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14598:

Fix Version/s: 2.9.0

> Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory
> ---
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0
>
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Updated] (HADOOP-14598) Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory

2017-08-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14598:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~steve_l] for contribution.

> Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory
> ---
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Updated] (HADOOP-14598) Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory

2017-08-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14598:

Priority: Major  (was: Blocker)

> Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory
> ---
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Updated] (HADOOP-14598) Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory

2017-08-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14598:

Summary: Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory  (was: 
Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection)

> Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory
> ---
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-08 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119162#comment-16119162
 ] 

Haibo Chen commented on HADOOP-14284:
-

Following the discussion, folks seem to prefer shading just the client 
jars, since we still want to use/upgrade Guava in Hadoop and we do not want to 
impose the Guava version on downstream projects.
[~djp] Can you elaborate on why you think the shaded yarn/mr client is blocking 
the work here? Can't we do them in parallel?
To reduce build time, we could move the third-party module into an auxiliary 
Hadoop project, as the HBase community is doing, so that we avoid shading 
every time Hadoop is built. Thoughts?

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to have moved to the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Updated] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-08 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-12077:
---
Attachment: HADOOP-12077.009.patch

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch, 
> HADOOP-12077.009.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the applications prefers 
> to failover to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user//path by multiple physical paths.
> Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication exceptions are only logged but not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns an FSDataOutputStream that wraps the output streams returned by 1.
> # All writes are forwarded to each output stream.
> # On close of stream created by 2, all n streams are closed, and the files 
> are renamed from _nfly_tmp_file to file. All files receive the same mtime 
> corresponding to the client system time as of beginning of this step. 
> # If at least minReplication destinations have gone through steps 1-4 without 
> failures, the transaction is considered logically committed; otherwise a 
> best-effort cleanup of the temporary files is attempted.
> As for reads, we support a notion of locality similar to HDFS  /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing, as is 
> the case with local file:/// URIs, the local host name from 
> InetAddress.getLocalHost() is assumed. This ensures the local file system is 
> always the closest one to the reader. For our Hadoop 2 hdfs 
> URIs that are based on nameservice ids instead of hostnames it is very easy 
> to adjust the topology script since our nameservice ids already contain the 
> datacenter. As for rack and node we can simply output any string such as 
> /DC/rack-nsid/node-nsid, since we only care about datacenter-locality for 
> such filesystem clients.
> There are 2 policies/additions to the read call path that make it more 
> expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
> the path under all URIs, sorts them from most recent to least recent. Nfly 
> then sorts the set of most recent URIs topologically in the same manner as 
> described above.
> - repairOnRead - when readMostRecent is enabled Nfly already has to RPC all 
> underlying destinations. With repairOnRead, Nfly filesystem would 
> additionally attempt to refresh destinations with the path missing or a stale 
> version of the path using the nearest available most recent destination. 
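The create/write transaction described above can be sketched with plain {{java.nio}}. This is a hedged, self-contained simulation, not the actual ViewFs/Nfly code: the class {{NflySketch}}, the {{commit}} method, and the directory-per-destination layout are all invented for illustration; only the _nfly_tmp_ prefix and the minReplication commit rule come from the description.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Toy simulation of the Nfly write transaction: write an invisible
 * _nfly_tmp_ file per destination, then rename tmp -> final, treating
 * the write as committed only if >= minReplication renames succeed.
 */
class NflySketch {
    static int commit(List<Path> destinations, String fileName,
                      byte[] data, int minReplication) throws IOException {
        List<Path> tmps = new ArrayList<>();
        // Steps 1-3: create a tmp file in each destination and forward the
        // same bytes to every one of them.
        for (Path dir : destinations) {
            Path tmp = dir.resolve("_nfly_tmp_" + fileName);
            Files.write(tmp, data);
            tmps.add(tmp);
        }
        // Step 4: rename each tmp to its final name, counting successes.
        int ok = 0;
        for (int i = 0; i < tmps.size(); i++) {
            try {
                Files.move(tmps.get(i), destinations.get(i).resolve(fileName));
                ok++;
            } catch (IOException e) {
                // per the design: failures are logged, not thrown,
                // as long as the quorum below is still met
            }
        }
        // Step 5: logically committed only with minReplication successes.
        if (ok < minReplication) {
            throw new IOException("only " + ok + "/" + destinations.size()
                + " replicas written, need " + minReplication);
        }
        return ok;
    }

    public static void main(String[] args) throws IOException {
        Path dc1 = Files.createTempDirectory("dc1");
        Path dc2 = Files.createTempDirectory("dc2");
        int replicas = commit(Arrays.asList(dc1, dc2), "part-0",
                              "hello".getBytes(), 1);
        System.out.println(replicas);
    }
}
```

The real implementation additionally stamps all renamed files with one client-side mtime and runs the per-destination writes serially; both details are omitted here for brevity.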




[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

Component/s: test

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

committed to branch-2 too. Thanks!

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119141#comment-16119141
 ] 

Steve Loughran commented on HADOOP-14715:
-

+1

committed to trunk; doing a build & test on branch-2 before committing there

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119127#comment-16119127
 ] 

Steve Loughran commented on HADOOP-14749:
-

Testing

All is well apart from the existing failures HADOOP-14735 and HADOOP-14733 (patches 
available); one run failed with the HADOOP-14750 stack trace

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119123#comment-16119123
 ] 

Steve Loughran commented on HADOOP-14749:
-

{{S3Guard.assertQualified}} added a varargs version to make things shorter... not 
something I'm too opinionated about

{{DirectoryStatus checkPathForDirectory}} seems to always go to S3 if the path 
maps to a file, even if the store has a record in s3guard. Have I misread it?

h3. site docs
* should we use the term {{MetadataStore}} in the docs, or {{Metadata Store}}?
* what architecture doc should go in? There's a lot in the javadocs... we could 
just say "look there", but it's nice to have a good online doc we can point 
people at.

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14598) Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection

2017-08-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119124#comment-16119124
 ] 

Haohui Mai commented on HADOOP-14598:
-

So essentially the downstream users assume that a {{URLConnection}} can 
always be cast to {{HttpURLConnection}}. It's not ideal, but yes, many people 
do that.

The patch looks good to me. +1. I'll commit it shortly.
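The assumption can be demonstrated without Hadoop. In the sketch below (illustrative only; {{FakeFsUrlConnection}} is a hypothetical stand-in for {{org.apache.hadoop.fs.FsUrlConnection}}), a custom {{URLStreamHandlerFactory}} claims the http scheme, after which {{URL.openConnection()}} no longer returns an {{HttpURLConnection}} and the common downstream cast fails with a ClassCastException:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;

/**
 * Demonstrates the failure mode: once a custom factory handles "http",
 * connections from URL.openConnection() are not HttpURLConnection.
 */
public class CastDemo {
    // Hypothetical stand-in for a filesystem-backed connection type.
    static class FakeFsUrlConnection extends URLConnection {
        FakeFsUrlConnection(URL u) { super(u); }
        @Override public void connect() { /* no-op for the demo */ }
    }

    public static void main(String[] args) throws Exception {
        URL.setURLStreamHandlerFactory(new URLStreamHandlerFactory() {
            @Override
            public URLStreamHandler createURLStreamHandler(String protocol) {
                if (!"http".equals(protocol)) {
                    return null;  // fall back to the JDK's built-in handler
                }
                return new URLStreamHandler() {
                    @Override
                    protected URLConnection openConnection(URL u) {
                        return new FakeFsUrlConnection(u);
                    }
                };
            }
        });
        // No network I/O happens here: openConnection() does not connect.
        URLConnection conn = new URL("http://example.org/").openConnection();
        System.out.println(conn instanceof HttpURLConnection);
        // (HttpURLConnection) conn  -- would throw ClassCastException
    }
}
```

Returning null for http/https from the factory (so the JDK handler is used) is exactly the "blacklist" approach the patch takes for {{FsUrlStreamHandlerFactory}}.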

> Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection
> 
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Comment Edited] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119111#comment-16119111
 ] 

Steve Loughran edited comment on HADOOP-14749 at 8/8/17 10:24 PM:
--

Big review

* docs reviewed, edited. Added: per-bucket config example, security, more 
troubleshooting.
* moved section on testing into main testing.md file
* javadocs audited
* moved imports *on new files* into the project's preferred order.
* tuned the tests

Other than javadocs, imports and some layout, the only real code change in 
production is to use a switch statement in {{S3AFileSystem.innerMkdirs()}}.


was (Author: ste...@apache.org):
Big review

* docs reviewed, edited. Added: per-bucket config example, security, more 
troubleshooting.
* moved section on testing into main testing.md file
* javadocs audited
* moved imports *on new files* into the project's preferred order.
* tuned the tests

Other than javadocs, imports and some layout, the only real code change in 
production is to use a switch statement in 
{{S3AFileSystem.checkPathForDirectory()}}.

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Updated] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Attachment: HADOOP-14749-HADOOP-13345-001.patch

Big review

* docs reviewed, edited. Added: per-bucket config example, security, more 
troubleshooting.
* moved section on testing into main testing.md file
* javadocs audited
* moved imports *on new files* into the project's preferred order.
* tuned the tests

Other than javadocs, imports and some layout, the only real code change in 
production is to use a switch statement in 
{{S3AFileSystem.checkPathForDirectory()}}.

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14750) s3guard to provide better diags on ddb init failures

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119100#comment-16119100
 ] 

Steve Loughran commented on HADOOP-14750:
-

This was actually a parallel test run with 10 threads and {{-Ds3guard 
-Ddynamodblocal}}

> s3guard to provide better diags on ddb init failures
> 
>
> Key: HADOOP-14750
> URL: https://issues.apache.org/jira/browse/HADOOP-14750
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Priority: Minor
>
> When you can't connect to DDB you get an Http exception; it'd be good to 
> include more info here (table name & region in particular)
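A minimal sketch of the suggested improvement. This is an assumption about the shape of the fix, not the committed code: {{initTable}} and {{describeTable}} below are simulated stand-ins, not the DynamoDBMetadataStore API, and the table/region names are invented.

```java
import java.io.IOException;

/**
 * Sketch: wrap a bare HTTP failure from table init with the table name
 * and region so the error is diagnosable without reading the config.
 */
class DiagDemo {
    static void initTable(String table, String region) throws IOException {
        try {
            describeTable(table);  // stand-in for the remote describe call
        } catch (IOException e) {
            // Attach the context the original exception lacks.
            throw new IOException("initTable failed for table '" + table
                + "' in region '" + region + "': " + e.getMessage(), e);
        }
    }

    // Simulated remote call failing the way the reporter observed.
    static void describeTable(String table) throws IOException {
        throw new IOException("Unable to execute HTTP request: Read timed out");
    }

    public static void main(String[] args) {
        try {
            initTable("s3guard-metadata", "us-west-2");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```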






[jira] [Updated] (HADOOP-14747) S3AInputStream to implement CanUnbuffer

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14747:

Parent Issue: HADOOP-13204  (was: HADOOP-13345)

> S3AInputStream to implement CanUnbuffer
> ---
>
> Key: HADOOP-14747
> URL: https://issues.apache.org/jira/browse/HADOOP-14747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>
> HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the 
> input stream and relying on lazy seek to reopen it on demand.
> Needs
> * Contract specification of unbuffer. As in "who added a new feature to 
> filesystems but forgot to mention what it should do?"
> * Contract test for filesystems which declare their support. 
> * S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}.
> * Test case
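The close-and-lazily-reopen idea can be sketched in isolation. This is a hedged, self-contained toy: {{LazyReopenStream}} and {{Opener}} are invented names, and the byte-array "connection" stands in for a remote object stream; it is not the S3AInputStream implementation.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * A stream that drops its underlying connection on unbuffer() and
 * lazily reopens at the remembered offset on the next read.
 */
class LazyReopenStream extends InputStream {
    interface Opener { InputStream openAt(long pos) throws IOException; }

    private final Opener opener;
    private InputStream wrapped;  // null == no open connection
    private long pos;

    LazyReopenStream(Opener opener) { this.opener = opener; }

    /** Free the remote connection; the next read reopens on demand. */
    public void unbuffer() throws IOException {
        if (wrapped != null) {
            wrapped.close();
            wrapped = null;
        }
    }

    @Override public int read() throws IOException {
        if (wrapped == null) {
            wrapped = opener.openAt(pos);  // lazy reopen at current offset
        }
        int b = wrapped.read();
        if (b >= 0) {
            pos++;  // track position so unbuffer() loses no state
        }
        return b;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "abcdef".getBytes();
        LazyReopenStream s = new LazyReopenStream(p -> {
            InputStream in = new ByteArrayInputStream(data);
            in.skip(p);
            return in;
        });
        System.out.print((char) s.read());  // reads 'a'
        s.unbuffer();                       // connection released
        System.out.print((char) s.read());  // reopens at pos=1, reads 'b'
    }
}
```

In S3A terms, {{unbuffer()}} would simply delegate to the existing {{closeStream()}}, with lazy seek already providing the reopen path.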






[jira] [Commented] (HADOOP-14750) s3guard to provide better diags on ddb init failures

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119099#comment-16119099
 ] 

Steve Loughran commented on HADOOP-14750:
-

Got a test failure during the run; the laptop was doing other downloads at the time
{code}
testDirectoryBecomesNonEmpty(org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory)  
Time elapsed: 26.451 sec  <<< ERROR!
java.io.InterruptedIOException: initTable: com.amazonaws.SdkClientException: 
Unable to execute HTTP request: Read timed out
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:141)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:831)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:243)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:96)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:292)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3258)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3307)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3275)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: 
Read timed out
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1069)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2089)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2065)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1048)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1024)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:790)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:243)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:96)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:292)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3258)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3307)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3275)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
at 

[jira] [Created] (HADOOP-14750) s3guard to provide better diags on ddb init failures

2017-08-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14750:
---

 Summary: s3guard to provide better diags on ddb init failures
 Key: HADOOP-14750
 URL: https://issues.apache.org/jira/browse/HADOOP-14750
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Steve Loughran
Priority: Minor


When you can't connect to DDB you get an Http exception; it'd be good to 
include more info here (table name & region in particular)
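One possible shape for the improvement, sketched with a hypothetical helper (the name `translateTableInitException` and the message format are illustrative, not the eventual patch): wrap the low-level failure with the table name and region before rethrowing.

```java
import java.io.IOException;

public class DdbDiagnostics {

    // Wraps a low-level failure from table initialization with the table
    // name and region, so the operator can tell which store failed.
    // Hypothetical helper; names and message format are illustrative only.
    static IOException translateTableInitException(
            String tableName, String region, Exception cause) {
        return new IOException("Failed to initialize DynamoDB table '"
                + tableName + "' in region '" + region + "': " + cause, cause);
    }

    public static void main(String[] args) {
        IOException e = translateTableInitException(
                "s3guard-metadata", "us-west-2",
                new RuntimeException("connect timed out"));
        System.out.println(e.getMessage());
    }
}
```

The original cause is kept as the chained exception, so the full HTTP stack trace is still available while the headline message names the table and region.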



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119090#comment-16119090
 ] 

John Zhuge commented on HADOOP-14708:
-

[~cltlfcjin] Added you as a contributor. Assigned the JIRA to you. Thank you 
for the contribution!

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode; it uses {{KERBEROS_SSL}} as 
> its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails to 
> create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Assigned] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-14708:
---

Assignee: Lantao Jin

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode; it uses {{KERBEROS_SSL}} as 
> its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails to 
> create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Commented] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-08-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119087#comment-16119087
 ] 

John Zhuge commented on HADOOP-14691:
-

Can we just add {{out = null}} after {{out.close()}} so that closing an 
already-closed stream is a no-op?
{code:java|title=FSDataOutputStream.PositionCache#close}
@Override
public void close() throws IOException {
  // ensure close works even if a null reference was passed in
  if (out != null) {
out.close();
out = null;  <==
  }
}
{code}

{code:java|title=FSDataOutputStream#close}
  public void close() throws IOException {
    if (out != null) {  <==
      out.close(); // This invokes PositionCache.close()
      out = null;  <==
    }
  }
{code}
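A minimal, self-contained illustration of the null-out pattern proposed above (plain Java with stub stream classes, not the actual FSDataOutputStream code): once the wrapped stream reference is cleared on the first close, further close() calls become no-ops.

```java
import java.io.IOException;
import java.io.OutputStream;

public class IdempotentClose {

    // Counts how many times the underlying stream is actually closed.
    static class CountingStream extends OutputStream {
        int closeCount = 0;
        @Override public void write(int b) {}
        @Override public void close() { closeCount++; }
    }

    // Wrapper mimicking the proposed fix: clear the reference after closing
    // so a second close() never reaches the underlying stream.
    static class Wrapper extends OutputStream {
        private OutputStream out;
        Wrapper(OutputStream out) { this.out = out; }
        @Override public void write(int b) throws IOException { out.write(b); }
        @Override public void close() throws IOException {
            if (out != null) {
                out.close();
                out = null;   // second close() now does nothing
            }
        }
    }

    public static void main(String[] args) throws IOException {
        CountingStream inner = new CountingStream();
        Wrapper w = new Wrapper(inner);
        w.close();
        w.close();   // harmless: underlying stream closed exactly once
        System.out.println(inner.closeCount);  // prints 1
    }
}
```

This matches the {{Closeable}} contract, which already states that closing an already-closed stream should have no effect.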


> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>Assignee: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: CommandWithDestination.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx, IOUtils.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command "hadoop fs -put" is a write operation. In this process, an 
> FSDataOutputStream is created and finally closed. FSDataOutputStream.close() 
> calls the close method in HDFS to end the communication of this write process 
> between the server and the client.
> With the command "hadoop fs -put", FSDataOutputStream.close() is called twice 
> for each created FSDataOutputStream object, which means the close method of 
> the underlying distributed file system is called twice. This is a bug, because 
> the communication channel, for example a socket, might be shut down repeatedly. 
> If the socket has no protection against this, the second close may fail. 
> Further, we think a correct upper file system design should follow a 
> close-once principle: each creation of an underlying distributed file system 
> object should correspond to exactly one close.
> For the command "hadoop fs -put", the double close happens as follows:
> a. The first close process:
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b. The second close process:
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
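The double close described above can be reproduced with a small stand-alone sketch (an assumed simplification of the copyBytes-plus-closeStream pattern, not the actual Hadoop code): the copy helper closes the stream on completion, and the caller's cleanup path closes it again.

```java
public class DoubleCloseDemo {

    // Stand-in for the underlying stream; counts how often it is closed.
    static class CountingStream extends java.io.OutputStream {
        int closeCount = 0;
        @Override public void write(int b) {}
        @Override public void write(byte[] b, int off, int len) {}
        @Override public void close() { closeCount++; }
    }

    // Mimics IOUtils.copyBytes(in, out, conf): closes the stream when done.
    static void copyAndClose(byte[] data, CountingStream out) {
        try {
            out.write(data, 0, data.length);
        } finally {
            out.close();           // first close
        }
    }

    public static void main(String[] args) {
        CountingStream out = new CountingStream();
        try {
            copyAndClose(new byte[]{1, 2, 3}, out);
        } finally {
            out.close();           // cleanup path: second close
        }
        System.out.println(out.closeCount);  // prints 2 -- the reported bug
    }
}
```

With a close that is idempotent (or with the copy helper told not to close), the count would stay at one, which is the close-once principle the report argues for.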

[jira] [Updated] (HADOOP-14726) Remove FileStatus#isDir

2017-08-08 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14726:
---
Attachment: HADOOP-14726.003.patch

Some of the test failures are related to the {{FileStatus::toString}} changes. 
It looks like it's exposing a bug, so I'll revert it for this JIRA.

> Remove FileStatus#isDir
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch, HADOOP-14726.001.patch, 
> HADOOP-14726.002.patch, HADOOP-14726.003.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).






[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14741:
-
Attachment: HADOOP-14741-003.patch

Fixed unit tests.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.






[jira] [Commented] (HADOOP-14598) Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection

2017-08-08 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119055#comment-16119055
 ] 

John Zhuge commented on HADOOP-14598:
-

+1 LGTM. Wrap {{LOG.debug}} with {{if (LOG.isDebugEnabled())}} in fast paths?
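The suggested guard keeps expensive message construction off fast paths; sketched here with a stub logger and an instrumented formatter rather than the real SLF4J API:

```java
public class GuardedDebugLogging {

    // Minimal stand-in for an SLF4J-style logger (illustrative stub only).
    static class Logger {
        final boolean debugEnabled;
        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        boolean isDebugEnabled() { return debugEnabled; }
        void debug(String msg) { System.out.println("DEBUG " + msg); }
    }

    static int formatCalls = 0;

    // Expensive message construction we want to skip on fast paths.
    static String expensiveFormat(Object state) {
        formatCalls++;
        return "state=" + state;
    }

    static void handleRequest(Logger log, Object state) {
        // The guard: expensiveFormat() only runs when debug logging is on.
        if (log.isDebugEnabled()) {
            log.debug(expensiveFormat(state));
        }
    }

    public static void main(String[] args) {
        handleRequest(new Logger(false), "req-1");  // fast path: no formatting
        System.out.println(formatCalls);            // prints 0
        handleRequest(new Logger(true), "req-2");
        System.out.println(formatCalls);            // prints 1
    }
}
```

With real SLF4J, parameterized logging ({{LOG.debug("state={}", state)}}) gets much of the same benefit without the explicit guard; the guard still pays off when building the argument itself is costly.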

> Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection
> 
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-08-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118973#comment-16118973
 ] 

Sean Busbey commented on HADOOP-13917:
--

yeah i think it's needed. I'll try to add it at the same time I chase down 
HADOOP-14089 late this week / early next week.

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.






[jira] [Commented] (HADOOP-14089) Shaded Hadoop client runtime includes non-shaded classes

2017-08-08 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118971#comment-16118971
 ] 

Sean Busbey commented on HADOOP-14089:
--

yeah, the test catches the problem this jira was filed against. I need to do a 
pass on which of them need to be relocated and which are fine


> Shaded Hadoop client runtime includes non-shaded classes
> 
>
> Key: HADOOP-14089
> URL: https://issues.apache.org/jira/browse/HADOOP-14089
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: David Phillips
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-14089.WIP.0.patch
>
>
> The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, 
> {{javax/ws}}, {{mozilla}}, etc.
> An easy way to verify this is to look at the contents of the jar:
> {code}
> jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop'
> {code}
> For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS 
> {{javax.ws}}, it makes sense for those to be normal dependencies in the POM 
> -- they are standard, so version conflicts shouldn't be a problem. The JSR 
> 305 annotations can be marked {{optional}} since they aren't needed 
> at runtime (this is what Guava does).
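The same leak check can be automated in a test; a rough sketch (hypothetical helper name) that lists jar entries outside the relocated {{org/apache/hadoop}} prefix, mirroring the {{jar tf | grep -v}} pipeline above. The demo builds a throwaway jar rather than inspecting the real client-runtime artifact.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;

public class ShadedJarCheck {

    // Returns file entries that are NOT under the relocated prefix -- the
    // same check as `jar tf ... | grep -v '^org/apache/hadoop'`.
    static List<String> unrelocatedEntries(File jar, String prefix) throws IOException {
        List<String> leaks = new ArrayList<>();
        try (JarFile jf = new JarFile(jar)) {
            Enumeration<JarEntry> entries = jf.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                if (!name.startsWith(prefix) && !name.endsWith("/")) {
                    leaks.add(name);
                }
            }
        }
        return leaks;
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway jar with one relocated and one leaked entry.
        File jar = File.createTempFile("client-runtime", ".jar");
        try (JarOutputStream jos = new JarOutputStream(new FileOutputStream(jar))) {
            jos.putNextEntry(new ZipEntry("org/apache/hadoop/Shaded.class"));
            jos.closeEntry();
            jos.putNextEntry(new ZipEntry("okio/Okio.class"));
            jos.closeEntry();
        }
        System.out.println(unrelocatedEntries(jar, "org/apache/hadoop"));
        jar.delete();
    }
}
```

A real enforcement test would additionally whitelist the deliberate exceptions ({{META-INF}}, the standard {{javax}} APIs discussed above) before failing the build.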






[jira] [Updated] (HADOOP-14716) SwiftNativeFileSystem should not eat the exception when rename

2017-08-08 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-14716:
-
Attachment: HADOOP-14716-WIP.patch

WIP patch. 

> SwiftNativeFileSystem should not eat the exception when rename
> --
>
> Key: HADOOP-14716
> URL: https://issues.apache.org/jira/browse/HADOOP-14716
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
> Attachments: HADOOP-14716-WIP.patch
>
>
> Currently, "rename" eats exceptions and returns "false" in 
> SwiftNativeFileSystem. This makes it hard for users to find the root cause of 
> why a rename failed. It should, at least, log these exceptions instead of 
> silently swallowing them.






[jira] [Commented] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2,9, target removal in Hadoop 3.1

2017-08-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118841#comment-16118841
 ] 

Aaron Fabbri commented on HADOOP-14738:
---

I'd vote to remove s3n in 3.0, and make this JIRA a blocker to make sure we 
get it in.

> Deprecate S3N in hadoop 3.0/2,9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf 
> since Hadoop 2.8
> It's now time to kill S3N off, remove the source, the tests, the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we tell people off from using it, and link 
> to a doc page (wiki?) about how to migrate (Change URLs, update config ops).






[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118826#comment-16118826
 ] 

Esfandiar Manii commented on HADOOP-14715:
--

All the tests ran against:
wasb://testcontai...@xhdfs.blob.core.windows.net

*When secure mode is on and authorization caching is enabled in azure-test.xml*
---
 T E S T S
---

Running org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 137.797 sec - 
in org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper

Results :

Tests run: 10, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 02:20 min
[INFO] Finished at: 2017-08-08T18:18:36+00:00
[INFO] Final Memory: 22M/315M
[INFO] 

*When secure mode is on and authorization caching is disabled in azure-test.xml*
---
 T E S T S
---

Running org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.801 sec - 
in org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper

Results :

Tests run: 10, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 02:32 min
[INFO] Finished at: 2017-08-08T18:24:54+00:00
[INFO] Final Memory: 35M/283M
[INFO] 



> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14743) CompositeGroupsMapping should not swallow exceptions

2017-08-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118579#comment-16118579
 ] 

Wei-Chiu Chuang commented on HADOOP-14743:
--

[~drankye] could you help review this patch? Thx

> CompositeGroupsMapping should not swallow exceptions
> 
>
> Key: HADOOP-14743
> URL: https://issues.apache.org/jira/browse/HADOOP-14743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-14743.001.patch, HADOOP-14743.002.patch
>
>
> {code:title=CompositeGroupsMapping#getGroups}
>for (GroupMappingServiceProvider provider : providersList) {
>   try {
> groups = provider.getGroups(user);
>   } catch (Exception e) {
> //LOG.warn("Exception trying to get groups for user " + user, e); 
>  
>   }
>   if (groups != null && ! groups.isEmpty()) {
> groupSet.addAll(groups);
> if (!combined) break;
>   }
> }
> {code}
> If anything fails inside the underlying groups mapping service provider, 
> there's no way to tell what went wrong.
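A stand-alone sketch of the behaviour change this JIRA asks for (stub types; the real fix lives in {{CompositeGroupsMapping}}): the provider loop keeps trying the remaining providers but records each failure instead of discarding it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class CompositeLookupDemo {

    // Stub for GroupMappingServiceProvider (illustrative only).
    interface Provider {
        List<String> getGroups(String user) throws Exception;
    }

    // Stand-in for the logger, so the test can inspect what was recorded.
    static final List<String> LOG = new ArrayList<>();

    // Try each provider in turn; log failures instead of swallowing them.
    static Set<String> getGroups(String user, List<Provider> providers) {
        Set<String> groups = new LinkedHashSet<>();
        for (Provider p : providers) {
            try {
                List<String> result = p.getGroups(user);
                if (result != null && !result.isEmpty()) {
                    groups.addAll(result);
                    break;  // first successful provider wins (combined=false)
                }
            } catch (Exception e) {
                // The fix: surface the failure rather than dropping it.
                LOG.add("Exception trying to get groups for user " + user
                        + ": " + e.getMessage());
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Provider> providers = new ArrayList<>();
        providers.add(u -> { throw new Exception("LDAP unreachable"); });
        providers.add(u -> Arrays.asList("staff"));
        System.out.println(getGroups("alice", providers));  // [staff]
        System.out.println(LOG.size());                     // 1: failure recorded
    }
}
```

The composite still degrades gracefully to the next provider; the only change is that the operator can now see which provider failed and why.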






[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-08 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118545#comment-16118545
 ] 

Andras Bokor commented on HADOOP-14698:
---

TestKDiag is not related.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of the command more 
> complicated from the user's point of view.






[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118538#comment-16118538
 ] 

Hadoop QA commented on HADOOP-14698:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 54 unchanged - 12 fixed = 54 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14698 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880844/HADOOP-14698.05.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux fb2477f7b91c 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9891295 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12985/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12985/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12985/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: 

[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118432#comment-16118432
 ] 

Steve Loughran commented on HADOOP-14749:
-

+
* review javadocs
* arranging imports in roughly the same order as our style requirements.
* review tests

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-08 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14698:
--
Attachment: HADOOP-14698.05.patch

Attaching patch to address check style issues.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of the command more 
> complicated from the user's point of view.






[jira] [Work started] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14749 started by Steve Loughran.
---
> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118429#comment-16118429
 ] 

Lantao Jin commented on HADOOP-14708:
-

Sorry, I don't know why the UGI client in the NN (in the FSCK servlet) uses 
KERBEROS_SSL; I guess it is inherited from JspHelper. But I question the logic: 
any KERBEROS_SSL auth coming over RPC cannot pass through the NEGOTIATE step. 
Returning {{null}} indicates that the client isn't using Kerberos, but 
KERBEROS_SSL is also Kerberos, right? Please correct me if I misunderstand.
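The check under discussion can be sketched outside Hadoop as follows. This is a hedged simplification with hypothetical types (`SaslAuthCheck` and its `AuthMethod` enum are illustrative, not the real `SaslRpcClient`): it models a gate that only lets `KERBEROS` proceed, and the workaround of also accepting `KERBEROS_SSL` since it is Kerberos-based too.

```java
// Hypothetical simplification of the auth-method gate discussed above;
// not Hadoop's actual classes.
public class SaslAuthCheck {
    enum AuthMethod { SIMPLE, KERBEROS, KERBEROS_SSL, TOKEN }

    // Current behaviour: anything other than KERBEROS gets null back,
    // i.e. "client isn't using kerberos".
    static boolean canCreateSaslClient(AuthMethod realAuthMethod) {
        return realAuthMethod == AuthMethod.KERBEROS;
    }

    // Workaround direction from the discussion: treat KERBEROS_SSL as
    // Kerberos as well, so the SaslClient can be created.
    static boolean canCreateSaslClientPatched(AuthMethod realAuthMethod) {
        return realAuthMethod == AuthMethod.KERBEROS
            || realAuthMethod == AuthMethod.KERBEROS_SSL;
    }

    public static void main(String[] args) {
        System.out.println(canCreateSaslClient(AuthMethod.KERBEROS_SSL));        // false
        System.out.println(canCreateSaslClientPatched(AuthMethod.KERBEROS_SSL)); // true
    }
}
```

Whether this is the right fix for the real servlet path is exactly the open question in the thread; the sketch only shows the shape of the check.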

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) fails with the exception message 
> "fsck encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode, which uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails 
> to create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Created] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14749:
---

 Summary: review s3guard docs & code prior to merge
 Key: HADOOP-14749
 URL: https://issues.apache.org/jira/browse/HADOOP-14749
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: HADOOP-13345
Reporter: Steve Loughran
Assignee: Steve Loughran


Pre-merge cleanup while it's still easy to do

* Read through all the docs, tune
* Diff the trunk/branch files to see if we can reduce the delta (and hence the 
changes)
* Review the new tests







[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118414#comment-16118414
 ] 

Lantao Jin commented on HADOOP-14708:
-

Hi [~jojochuang], [^FSCK-2.log] is a new log with some debug code I added. I 
used user lajin to run FSCK from 192.168.1.22. The NameNode, started as user 
hadoop with Kerberos, handles the request on 192.168.1.1. From the debug log, 
the UGI from the DFSClient (in the NN) has no tokens in it, and its 
{{AuthenticationMethod}} is KERBEROS_SSL. I don't know why, but the patch I 
submitted seems to work around it.

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) fails with the exception message 
> "fsck encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode, which uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails 
> to create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118413#comment-16118413
 ] 

Hadoop QA commented on HADOOP-14708:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-14708 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14708 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880841/FSCK-2.log |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12984/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) fails with the exception message 
> "fsck encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode, which uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails 
> to create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Updated] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-08 Thread Lantao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated HADOOP-14708:

Attachment: FSCK-2.log

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) fails with the exception message 
> "fsck encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode, which uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}:
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails 
> to create a SaslClient instance. See {{SaslRpcClient.java}}:
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Commented] (HADOOP-14733) ITestS3GuardConcurrentOps failing with -Ddynamodblocal -Ds3guard

2017-08-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118368#comment-16118368
 ] 

Aaron Fabbri commented on HADOOP-14733:
---

+1 LGTM.

> ITestS3GuardConcurrentOps failing with -Ddynamodblocal -Ds3guard
> 
>
> Key: HADOOP-14733
> URL: https://issues.apache.org/jira/browse/HADOOP-14733
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14733-HADOOP-13345-001.patch
>
>
> Test failure with local ddb server for s3guard
> {code}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 128.876 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
> testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 128.785 sec  <<< ERROR!
> com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Cannot do 
> operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 
> 400; Error Code: ResourceNotFoundException; Request ID: 
> 82dbf479-3ec1-40fa-bd5c-ca0f206685e7)
> {code}






[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118363#comment-16118363
 ] 

Hadoop QA commented on HADOOP-14698:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 55 unchanged - 11 fixed = 57 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  2s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14698 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880827/HADOOP-14698.04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 11f759820cf5 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9891295 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12983/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12983/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12983/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12983/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make copyFromLocal's -t 

[jira] [Commented] (HADOOP-14643) Clean up Test(HDFS|LocalFS)FileContextMainOperations and FileContextMainOperationsBaseTest

2017-08-08 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118360#comment-16118360
 ] 

Andras Bokor commented on HADOOP-14643:
---

The failing JUnit tests are not related.

> Clean up Test(HDFS|LocalFS)FileContextMainOperations and 
> FileContextMainOperationsBaseTest
> --
>
> Key: HADOOP-14643
> URL: https://issues.apache.org/jira/browse/HADOOP-14643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Attachments: HADOOP-14643.01.patch, HADOOP-14643.02.patch, 
> HADOOP-14643.03.patch, HADOOP-14643.04.patch
>
>
> I was working with the classes in the summary; it's a good time to clean 
> them up, per the "Boy Scout Rule".






[jira] [Comment Edited] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-08-08 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118303#comment-16118303
 ] 

Andras Bokor edited comment on HADOOP-14691 at 8/8/17 2:02 PM:
---

It's a very good catch, but the solution seems to make things more complicated, 
and it is not backward-compatible since it changes the signature of a method.
Instead, I suggest fixing HADOOP-5943. Using try-with-resources at the same 
place where the resource is created is better practice and widely used in the 
Java world.
We can introduce new methods without the closing behavior and keep the old ones 
deprecated to preserve compatibility. I am happy to send a patch for 
HADOOP-5943.
Thoughts?

P.s.: I am removing the linked issue since it is not related to the exception 
in HDFS-10429.
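The pattern being suggested can be sketched as follows. This is a hedged, self-contained illustration, not Hadoop code: `copyBytes` here only copies and never closes, so the caller closes each stream exactly once via try-with-resources, at the place where the streams were created.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch of "copy without closing" + try-with-resources;
// not the actual IOUtils/CommandWithDestination code.
public class CopyDemo {
    // Copies bytes but leaves closing entirely to the caller.
    static void copyBytes(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Each stream is closed exactly once, here, where it was created,
        // so a "double close" cannot happen.
        try (InputStream in = new ByteArrayInputStream("hello".getBytes());
             OutputStream out = sink) {
            copyBytes(in, out);
        }
        System.out.println(sink.toString()); // prints "hello"
    }
}
```

The design point is that ownership of the stream's lifetime stays with whoever opened it, rather than being transferred into a copy helper that may close it again.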


was (Author: boky01):
It's a very good catch but the solution seems makes the things more complicated 
and it is not backward-compatible since we change the signature of a method.
Instead, I suggest fixing HADOOP-5943. Using try-with-resources at the same 
place where the resource is created seems a better practice and used widely in 
Java world.
We can introduce the new methods without closing ability and keep the old ones 
as deprecated to keep the compatibility. I am happy to send a patch for 
HADOOP-5943.
Thoughts?

P.s.: I removing the linked issue since it is not related to the exception in 
HDFS-10429.

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>Assignee: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: CommandWithDestination.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx, IOUtils.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command "hadoop fs -put" is a write operation. In this process, an 
> FSDataOutputStream is created and closed at the end. Finally, 
> FSDataOutputStream.close() calls the close method in HDFS to end the 
> communication between server and client for this write.
> With the command "hadoop fs -put", for each FSDataOutputStream object 
> created, FSDataOutputStream.close() is called twice, which means the close 
> method in the underlying distributed file system is called twice. This is a 
> bug, because the communication channel, for example a socket, might be shut 
> down repeatedly; if the socket code has no protection against this, the 
> second close might raise an error. Further, we think a correct upper-layer 
> file system design should follow a close-once principle: each creation of an 
> underlying distributed file system object should correspond to exactly one 
> close.
> For the command "hadoop fs -put", the double close happens as follows:
> a. The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at 

[jira] [Updated] (HADOOP-14680) Azure: IndexOutOfBoundsException in BlockBlobInputStream

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14680:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> Azure: IndexOutOfBoundsException in BlockBlobInputStream
> 
>
> Key: HADOOP-14680
> URL: https://issues.apache.org/jira/browse/HADOOP-14680
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Rajesh Balamohan
>Assignee: Thomas Marquardt
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14680-001.patch, HADOOP-14680-branch-2.01.patch
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobInputStream.java#L361
> Under certain conditions, BlockBlobInputStream can throw an 
> IndexOutOfBoundsException. The following is an example:
> {{length:297898, offset:4194304, buf.len:4492202, writePos:4194304}}
> In this case, {{MemoryOutputStream::capacity()}} would end up returning a 
> negative value, which can cause an {{IndexOutOfBoundsException}}.
> It should be {{return buffer.length - offset;}} to determine the current 
> capacity.
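The arithmetic of the suggested fix can be checked with the values from the report. This is a hedged, standalone sketch (`CapacityDemo` and `capacityFixed` are illustrative names, not the real `MemoryOutputStream`): with `buf.len` 4492202 and `offset` 4194304, `buffer.length - offset` gives exactly the 297898 bytes of remaining capacity seen in the log, and can never go negative while `offset <= buffer.length`.

```java
// Illustrative check of the capacity formula proposed above;
// not the actual BlockBlobInputStream.MemoryOutputStream code.
public class CapacityDemo {
    // Remaining writable space in the buffer from the given offset.
    static int capacityFixed(byte[] buffer, int offset) {
        return buffer.length - offset;
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[4492202]; // buf.len from the report
        int offset = 4194304;              // offset from the report
        System.out.println(capacityFixed(buffer, offset)); // 297898
    }
}
```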






[jira] [Updated] (HADOOP-14748) Wasb input streams to implement CanUnbuffer

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14748:

Affects Version/s: 2.9.0
 Priority: Minor  (was: Major)

> Wasb input streams to implement CanUnbuffer
> ---
>
> Key: HADOOP-14748
> URL: https://issues.apache.org/jira/browse/HADOOP-14748
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> WASB's {{BlockBlobInputStream}} can implement this by closing the stream in 
> {{closeBlobInputStream}}, so it will be re-opened on the next read.






[jira] [Created] (HADOOP-14748) Wasb input streams to implement CanUnbuffer

2017-08-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14748:
---

 Summary: Wasb input streams to implement CanUnbuffer
 Key: HADOOP-14748
 URL: https://issues.apache.org/jira/browse/HADOOP-14748
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Steve Loughran


HBase relies on FileSystems implementing CanUnbuffer.unbuffer() to force input 
streams to free up remote connections (HBASE-9393). This works for HDFS, 
but not elsewhere.

WASB's {{BlockBlobInputStream}} can implement this by closing the stream in 
{{closeBlobInputStream}}, so it will be re-opened on the next read.







[jira] [Commented] (HADOOP-12805) Annotate CanUnbuffer with @InterfaceAudience.Public

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118339#comment-16118339
 ] 

Steve Loughran commented on HADOOP-12805:
-

It would have been nice for a new public API to have had an entry in the FS 
spec markdown files too, and some tests for other FS implementors. Please bear 
that in mind in future. Thanks.

> Annotate CanUnbuffer with @InterfaceAudience.Public
> ---
>
> Key: HADOOP-12805
> URL: https://issues.apache.org/jira/browse/HADOOP-12805
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-12805.v1.patch, HADOOP-12805.v2.patch
>
>
> See comments toward the tail of HBASE-9393.
> The change in HBASE-9393 adds dependency on CanUnbuffer interface which is 
> currently marked @InterfaceAudience.Private
> To facilitate downstream projects such as HBase in using this interface, 
> CanUnbuffer interface should be annotated @LimitedPrivate(\{"HBase", 
> "HDFS"\}).






[jira] [Created] (HADOOP-14747) S3AInputStream to implement CanUnbuffer

2017-08-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14747:
---

 Summary: S3AInputStream to implement CanUnbuffer
 Key: HADOOP-14747
 URL: https://issues.apache.org/jira/browse/HADOOP-14747
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.1
Reporter: Steve Loughran


HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force 
input streams to free up remote connections (HBASE-9393). This works for HDFS, 
but not elsewhere.

S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the input 
stream and relying on lazy seek to reopen it on demand.

Needs
* Contract specification of unbuffer. As in "who added a new feature to 
filesystems but forgot to mention what it should do?"
* Contract test for filesystems which declare their support. 
* S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}.
* Test case
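The close-then-lazily-reopen idea above can be sketched with a toy stream. This is a hedged illustration with hypothetical names (`UnbufferableStream` is not `S3AInputStream`): `unbuffer()` closes the wrapped stream while keeping the position, and the next read reopens at that position, mirroring the lazy-seek reopen.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Toy model of the unbuffer pattern: the "remote connection" is a
// wrapped stream that can be dropped and lazily reopened on demand.
public class UnbufferableStream extends InputStream {
    private final byte[] source;     // stands in for the remote object
    private int pos;                 // read position survives unbuffer()
    private InputStream wrapped;     // null while unbuffered

    public UnbufferableStream(byte[] source) {
        this.source = source;
    }

    // Free the underlying "connection"; only the position is kept.
    public void unbuffer() throws IOException {
        if (wrapped != null) {
            wrapped.close();
            wrapped = null;
        }
    }

    @Override
    public int read() throws IOException {
        if (wrapped == null) {
            // Lazy reopen at the remembered position.
            wrapped = new ByteArrayInputStream(source, pos, source.length - pos);
        }
        int b = wrapped.read();
        if (b >= 0) {
            pos++;
        }
        return b;
    }
}
```

A caller can read, call `unbuffer()`, and keep reading transparently; only the cost of reopening is paid on the next read.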






[jira] [Commented] (HADOOP-14598) Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118327#comment-16118327
 ] 

Steve Loughran commented on HADOOP-14598:
-

HADOOP-14383 was designed to let anyone use an http or https URL as a source 
of data in anything that takes a filesystem for reading. This is good, and 
changing the schema to anything other than http/https doesn't make sense.

All that's problematic is that the bit of code which exports every Hadoop FS 
client as a URL via the JVM mustn't register handlers for the core JVM 
HTTP/HTTPS schemes: the built-in clients work very well, and other bits of 
code (here: the Azure SDK) have assumptions/requirements about the connection 
class returned when they open such URLs.

This patch stops those schemas from being registered, and sets things up for 
future schemas to go in too.
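The shape of that fix can be sketched with a plain `URLStreamHandlerFactory`. This is a hedged illustration, not the actual HADOOP-14598 patch (`FsUrlFactorySketch` is a hypothetical name): returning null for http/https makes the JVM fall back to its built-in handlers, which produce the `HttpURLConnection` that code like the Azure SDK casts to, while other schemes get a custom handler.

```java
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;

// Sketch of a factory that deliberately declines http/https so the
// JVM keeps returning HttpURLConnection for those URLs.
public class FsUrlFactorySketch implements URLStreamHandlerFactory {
    @Override
    public URLStreamHandler createURLStreamHandler(String protocol) {
        if ("http".equals(protocol) || "https".equals(protocol)) {
            return null; // null => JVM uses its default (HttpURLConnection)
        }
        // Placeholder standing in for a Hadoop-FileSystem-backed handler
        // for schemes like hdfs or s3a.
        return new URLStreamHandler() {
            @Override
            protected URLConnection openConnection(URL u) {
                throw new UnsupportedOperationException(
                    "sketch only: a real handler would open a Hadoop FS stream");
            }
        };
    }
}
```

Because `URL.setURLStreamHandlerFactory` can be called only once per JVM, declining the schemes inside the factory (rather than not installing the factory) is the only way to leave http/https untouched while still serving filesystem schemes.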

> Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection
> 
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Commented] (HADOOP-14355) Update maven-war-plugin to 3.1.0

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118320#comment-16118320
 ] 

Steve Loughran commented on HADOOP-14355:
-

+1

> Update maven-war-plugin to 3.1.0
> 
>
> Key: HADOOP-14355
> URL: https://issues.apache.org/jira/browse/HADOOP-14355
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14335.testing.patch, HADOOP-14355.01.patch
>
>
> Due to MWAR-405, build fails with Java 9. The issue is fixed in maven war 
> plugin 3.1.0.






[jira] [Resolved] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14745.
-
Resolution: Invalid

> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> {code}
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> {code}
> If we expect ccc to be a directory, checking with this code:
> {code}
> Path test=new Path("ccc");
> fs.getFileStatus(test);
> {code}
> Actually, it tells us it is a file:
> {code}
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file
> {code}






[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-08 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14698:
--
Attachment: HADOOP-14698.04.patch

Attaching the same patch to kick [~hadoopqa].

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch
>
>
> After HDFS-11786, copyFromLocal and put are no longer identical.
> I do not see any reason not to add the new feature to put as well.
> Being non-identical makes the understanding and usage of the commands more 
> complicated from the user's point of view.






[jira] [Commented] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118315#comment-16118315
 ] 

Steve Loughran commented on HADOOP-14745:
-

Sorry, no. It is possible to create S3 data through other tooling with names 
s3a can't handle:

* data in path/
* data in pathpath2
* probably some characters in path elements we don't allow

Not only that: s3a can and will delete what it thinks are mock directories, 
e.g. /path/, without even checking whether they contain data. 

You're going to have to set up a process that doesn't create paths that 
confuse s3a. Hadoop's s3a makes sure of this itself; for other tools, you 
are going to have to be aware of the limitations and avoid them.

Sorry. Closing as a WONTFIX, for reasons discussed.
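The file/directory collision at issue here can be sketched with a tiny 
in-memory model: an object named {{ccc}} and a directory marker {{ccc/}} can 
coexist in a bucket, and a getFileStatus-style lookup that probes the exact 
key first will always report the file. This is an illustrative simulation 
under that assumption, not the actual S3AFileSystem code; {{ProbeOrderDemo}} 
and its method names are invented.

```java
import java.util.Set;
import java.util.TreeSet;

// Simulates the probe order of a getFileStatus-style lookup: a check on the
// exact key wins before any check for a "key/" directory marker, which is
// why an object named "ccc" shadows the directory "ccc/". Illustrative
// model only -- not the actual S3AFileSystem code.
public class ProbeOrderDemo {
    // In-memory stand-in for the bucket's key listing.
    static final Set<String> KEYS = new TreeSet<>();

    static String getFileStatus(String key) {
        if (KEYS.contains(key)) {
            return "FILE";           // exact-key object found first
        }
        if (KEYS.contains(key + "/")) {
            return "DIRECTORY";      // only reached when no exact object exists
        }
        return "NOT_FOUND";
    }

    public static void main(String[] args) {
        KEYS.add("user/root/ccc/");  // directory marker, e.g. created by s3cmd
        KEYS.add("user/root/ccc");   // zero-byte object with the same name
        System.out.println(getFileStatus("user/root/ccc"));  // prints FILE
    }
}
```

Deleting the zero-byte object makes the same lookup fall through to the 
directory marker, which matches the behavior in the quoted debug log.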

> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> {code}
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> {code}
> If we expect ccc to be a directory, checking with this code:
> {code}
> Path test=new Path("ccc");
> fs.getFileStatus(test);
> {code}
> Actually, it tells us it is a file:
> {code}
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file
> {code}






[jira] [Commented] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-08-08 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118303#comment-16118303
 ] 

Andras Bokor commented on HADOOP-14691:
---

It's a very good catch, but the solution seems to make things more 
complicated, and it is not backward-compatible since we change the signature 
of a method.
Instead, I suggest fixing HADOOP-5943. Using try-with-resources in the same 
place where the resource is created is better practice and widely used in 
the Java world.
We can introduce new methods without the closing ability and keep the old 
ones as deprecated to preserve compatibility. I am happy to send a patch for 
HADOOP-5943.
Thoughts?

P.S.: I am removing the linked issue since it is not related to the 
exception in HDFS-10429.
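The double close described above can be reproduced with a minimal stand-in: 
a copy helper that closes the stream it was handed, followed by the caller's 
own cleanup, closes the same stream twice. This is a sketch of the pattern, 
not Hadoop's actual CommandWithDestination/IOUtils code; all names here are 
illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrates the double-close pattern: a copy helper that closes the
// stream it was handed, followed by the caller's own cleanup, closes the
// same stream twice. Names are illustrative, not Hadoop's actual code.
public class DoubleCloseDemo {
    // Stand-in for FSDataOutputStream that counts close() calls. It does not
    // propagate close to the in-memory buffer (a no-op anyway), so it never
    // throws a checked exception.
    static class CountingStream extends OutputStream {
        final ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int closes = 0;
        @Override public void write(int b) { buf.write(b); }
        @Override public void close() { closes++; }
    }

    // Old-style helper: closes both streams itself (the first close).
    static void copyBytesAndClose(InputStream in, OutputStream out) throws IOException {
        byte[] b = new byte[4096];
        int n;
        while ((n = in.read(b)) != -1) {
            out.write(b, 0, n);
        }
        in.close();
        out.close();
    }

    public static void main(String[] args) throws IOException {
        CountingStream out = new CountingStream();
        copyBytesAndClose(new ByteArrayInputStream("data".getBytes()), out);
        out.close();  // caller's finally-style cleanup: the second close
        System.out.println("close() called " + out.closes + " times");  // 2, not 1
    }
}
```

With try-with-resources owning the stream and a copy helper that never 
closes, the count stays at 1, which is the change suggested above for 
HADOOP-5943.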

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>Assignee: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: CommandWithDestination.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx, IOUtils.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> The shell command “hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is created and finally closed. FSDataOutputStream.close() 
> calls the close method in HDFS to end the communication of this write 
> process between the server and the client.
> With the command “hadoop fs -put”, for each created FSDataOutputStream 
> object, FSDataOutputStream.close() is called twice, which means the close 
> method in the underlying distributed file system is called twice. This is a 
> bug, because the communication channel, for example a socket, might be shut 
> down repeatedly. If the socket has no protection against this, the second 
> close may cause an error. Furthermore, we think a correct upper file system 
> design should follow a close-once principle: each creation of an underlying 
> distributed file system object should correspond to exactly one close. 
> For the command “hadoop fs -put”, the stream is closed twice as follows:
> a.The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> b.The second close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
> at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:261)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:468)

[jira] [Commented] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118295#comment-16118295
 ] 

Steve Loughran commented on HADOOP-14628:
-

OK

+1

> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118292#comment-16118292
 ] 

Steve Loughran commented on HADOOP-14715:
-

OK, checkstyle is happy. I just need the declaration of the endpoint/options 
you've tested against so I can be confident it works.

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118259#comment-16118259
 ] 

Hadoop QA commented on HADOOP-14628:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 32s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880187/HADOOP-14628.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 54918cc87a97 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 55a181f |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12981/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12981/testReport/ |
| modules | C: hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-check-test-invariants . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12981/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, 

[jira] [Assigned] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-08-08 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-5943:


Assignee: Andras Bokor

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>Assignee: Andras Bokor
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.
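The migration path this issue suggests can be sketched as a new copyBytes 
overload that never closes its arguments, with the close-taking variant kept 
only as a deprecated shim. The signatures mirror the quoted IOUtils methods, 
but this is an assumed sketch of such a patch, not its actual content.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the proposed migration: a copyBytes that never closes its
// arguments, plus the old close-taking overload kept as a deprecated shim.
// Signatures mirror the quoted IOUtils methods, but this is illustrative.
public class CopyBytesSketch {
    /** New-style method: the caller owns both streams. */
    public static void copyBytes(InputStream in, OutputStream out, int buffSize)
            throws IOException {
        byte[] buf = new byte[buffSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    /** Old behavior, retained only for compatibility. */
    @Deprecated
    public static void copyBytes(InputStream in, OutputStream out, int buffSize,
                                 boolean close) throws IOException {
        try {
            copyBytes(in, out, buffSize);
        } finally {
            if (close) {
                in.close();   // closing a stream the caller opened: the OBL trigger
                out.close();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // The caller opens and closes the input in the same scope.
        try (InputStream in = new ByteArrayInputStream("hello".getBytes())) {
            copyBytes(in, sink, 2);
        }
        System.out.println(sink);  // prints "hello"
    }
}
```

The deprecated overload keeps old call sites compiling while new code moves 
to the close-free variant plus try-with-resources.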






[jira] [Commented] (HADOOP-14730) Support protobuf FileStatus in AdlFileSystem

2017-08-08 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118212#comment-16118212
 ] 

Vishwajeet Dusane commented on HADOOP-14730:


Thanks a lot [~chris.douglas] and [~jzhuge].

> Support protobuf FileStatus in AdlFileSystem
> 
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch, HADOOP-14730.004.patch, HADOOP-14730.005.patch, 
> HADOOP-14730.006.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}
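The hard-coded {{false}} in the quoted constructor can be shown in 
miniature: a telescoping convenience constructor forces the ACL flag, so 
callers that know the real ACL state lose it. {{AclFlagDemo}} and its fields 
are illustrative, not the actual FileStatus API.

```java
// Miniature of the bug pattern in the quoted constructor: a telescoping
// convenience constructor hard-codes the ACL flag to false, so callers that
// know the real ACL state lose it. Names are illustrative, not the actual
// FileStatus API.
public class AclFlagDemo {
    final long length;
    final boolean isdir;
    final boolean hasAcl;

    // Convenience constructor in the buggy style: hasAcl silently forced false.
    AclFlagDemo(long length, boolean isdir) {
        this(length, isdir, /* hasAcl= */ false);
    }

    // Full constructor: preserves whatever the caller supplies.
    AclFlagDemo(long length, boolean isdir, boolean hasAcl) {
        this.length = length;
        this.isdir = isdir;
        this.hasAcl = hasAcl;
    }

    public static void main(String[] args) {
        AclFlagDemo viaShortCtor = new AclFlagDemo(0L, false);
        AclFlagDemo viaFullCtor = new AclFlagDemo(0L, false, true);
        System.out.println(viaShortCtor.hasAcl);  // false, even for ACL-bearing files
        System.out.println(viaFullCtor.hasAcl);   // true
    }
}
```

The fix is to thread the flag through (or route callers to the full 
constructor) rather than defaulting it in the overload.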






[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118190#comment-16118190
 ] 

Hadoop QA commented on HADOOP-14715:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880720/HADOOP-14715-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 018fa1e5c4dd 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9891295 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12982/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12982/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> 

[jira] [Commented] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2017-08-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118188#comment-16118188
 ] 

Akira Ajisaka commented on HADOOP-14739:


Nice catch, Elek!
My user id is 1000. It is not necessary for me but removing it breaks users who 
are using boot2docker.

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available and it 
> can skip some procedures written in BUILDING.txt.






[jira] [Commented] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118177#comment-16118177
 ] 

Yonger commented on HADOOP-14745:
-

I created that path structure through s3cmd, and yes, we can't do this 
through s3a itself. 
But S3 and S3-compatible storage allow both to exist within the same folder. 
The common workload for Hadoop on Ceph is data analysis, where data is often 
stored into the object store by other tools rather than s3a and only read 
back through s3a, so we can't stop this issue from happening again.  

> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> {code}
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> {code}
> If we expect ccc to be a directory, checking with this code:
> {code}
> Path test=new Path("ccc");
> fs.getFileStatus(test);
> {code}
> Actually, it tells us it is a file:
> {code}
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file
> {code}






[jira] [Updated] (HADOOP-5943) IOUtils#copyBytes methods should not close streams that are passed in as parameters

2017-08-08 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5943:
-
Summary: IOUtils#copyBytes methods should not close streams that are passed 
in as parameters  (was: OUtils#copyBytes methods should not close streams that 
are passed in as parameters)

> IOUtils#copyBytes methods should not close streams that are passed in as 
> parameters
> ---
>
> Key: HADOOP-5943
> URL: https://issues.apache.org/jira/browse/HADOOP-5943
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Hairong Kuang
>
> The following methods in IOUtils close the streams that are passed in as 
> parameters. Calling these methods can easily trigger findbug OBL: Method may 
> fail to clean up stream or resource (OBL_UNSATISFIED_OBLIGATION). A good 
> practice should be to close a stream in the same method where the stream is 
> opened. 
> public static void copyBytes(InputStream in, OutputStream out, int buffSize, 
> boolean close) 
> public static void copyBytes(InputStream in, OutputStream out, Configuration 
> conf, boolean close)
> These methods should be deprecated.






[jira] [Commented] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118156#comment-16118156
 ] 

Steve Loughran commented on HADOOP-14745:
-

You shouldn't have a file and a directory with the same name: we don't allow 
it, and our attempt to make S3 look like a filesystem depends on this.

I don't believe you could have created this path structure through S3A 
itself. Try it and see: create a dir, then create a file with the same name 
minus the trailing /. The getFileStatus() call inside open() should pick up 
the dir and reject the call.

If that doesn't happen, it's a bug in S3A. If it does happen, I'm closing 
this as a WONTFIX. Sorry.

> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> {code}
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> {code}
> If we expect ccc to be a directory, checking with this code:
> {code}
> Path test=new Path("ccc");
> fs.getFileStatus(test);
> {code}
> Actually, it tells us it is a file:
> {code}
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file
> {code}






[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

Status: Patch Available  (was: Open)

Looks reasonable. I've hit the submit button for Yetus to review; I suspect 
it'll want some lines cut down.

As usual, what was your test policy? Presumably you checked with both secure 
and insecure?

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
> Attachments: HADOOP-14715-001.patch, HADOOP-14715-002.patch, 
> HADOOP-14715-003.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Created] (HADOOP-14746) Cut S3AOutputStream

2017-08-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14746:
---

 Summary: Cut S3AOutputStream
 Key: HADOOP-14746
 URL: https://issues.apache.org/jira/browse/HADOOP-14746
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.1
Reporter: Steve Loughran
Priority: Minor


We've been happy with the new S3A BlockOutputStream, with better scale, 
performance, instrumentation & recovery. I propose cutting the 
older {{S3AOutputStream}} code entirely.






[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
Attachment: (was: HADOOP-14628.001-tests.patch)

> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Commented] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118147#comment-16118147
 ] 

Akira Ajisaka commented on HADOOP-14628:


I ran all the tests with {{dev-support/bin/qbt HADOOP-14628.001.patch 
--run-tests}}. Some tests failed, but they are not related to the patch.


> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Commented] (HADOOP-14738) Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118144#comment-16118144
 ] 

Steve Loughran commented on HADOOP-14738:
-

Good Q.

On one hand: we've not given any warning.

On the other: we have a full replacement, so migration is all you need to do. 
It's not like we're cutting S3N out with nothing to replace it.

In that case we could go:
# branch-2, 2.8 -> warn
# trunk -> add that wrapper class which tells the user off, maybe linking to a 
wiki entry on how to migrate. Given the changes in the management of auth 
details (which are much more than just changed key names), I'd not want to rush 
into copying over the old secret names; instead, help people move.
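The trunk wrapper idea above can be sketched as a delegating shim that warns once and otherwise behaves like the replacement. This is a hedged, self-contained toy (invented names; the real patch would wrap Hadoop's FileSystem, not this interface):

```java
// Hedged sketch, not the actual Hadoop patch: a thin deprecated front end that
// logs a one-time "migrate to s3a://" warning, then delegates unchanged to the
// maintained implementation. All names here are invented for illustration.
import java.util.concurrent.atomic.AtomicBoolean;

public class DeprecationShimSketch {
    interface BlobStore {
        String read(String path);
    }

    /** Stands in for the maintained S3A-style implementation. */
    static class NewStore implements BlobStore {
        public String read(String path) { return "data:" + path; }
    }

    /** Deprecated front end: warns once, then behaves exactly like the new store. */
    static class DeprecatedStore implements BlobStore {
        private static final AtomicBoolean WARNED = new AtomicBoolean(false);
        private final BlobStore delegate;

        DeprecatedStore(BlobStore delegate) { this.delegate = delegate; }

        public String read(String path) {
            if (WARNED.compareAndSet(false, true)) {
                System.err.println("WARN: s3n:// is deprecated; migrate to s3a:// "
                    + "(see the migration wiki page).");
            }
            return delegate.read(path);          // behaviour is unchanged
        }
    }

    public static void main(String[] args) {
        BlobStore fs = new DeprecatedStore(new NewStore());
        System.out.println(fs.read("bucket/key")); // warns once on stderr, then works
    }
}
```

The point of the pattern: existing jobs keep running while every user sees exactly one pointer to the migration doc.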

> Deprecate S3N in Hadoop 3.0/2.9, target removal in Hadoop 3.1
> -
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-performance 
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the transitive 
> dependencies.
> I propose that in Hadoop 3.0 beta we tell people off from using it, and link 
> to a doc page (wiki?) about how to migrate (Change URLs, update config ops).






[jira] [Commented] (HADOOP-12143) Add a style guide to the Hadoop documentation

2017-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118136#comment-16118136
 ] 

Steve Loughran commented on HADOOP-12143:
-

Given how controversial some options are, we may want to split it into "must", 
"should" and "may"; ideally, pull the sections on "handling scale" and "writing 
good tests" out into their own docs.

> Add a style guide to the Hadoop documentation
> -
>
> Key: HADOOP-12143
> URL: https://issues.apache.org/jira/browse/HADOOP-12143
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> We don't have a documented style guide for the Hadoop source or its tests 
> other than "use the Java rules with two spaces". 
> That doesn't cover policy like
> # exception handling
> # logging
> # metrics
> # what makes a good test
> # why features that have O\(n\) or worse complexity, or put extra memory load on 
> the NN & RM, are "unwelcome"
> # ... etc
> We have those in our heads, and we reject patches for not following them; but 
> as they aren't written down, how can we expect new submitters to follow them, 
> or back up our vetoes with a policy to point at?
> I propose having an up-to-date style guide which defines the best practices 
> we expect for new code. That can be stricter than the existing codebase: we 
> want things to improve.






[jira] [Commented] (HADOOP-14726) Remove FileStatus#isDir

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118126#comment-16118126
 ] 

Hadoop QA commented on HADOOP-14726:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
31s{color} | {color:green} root generated 0 new + 1355 unchanged - 22 fixed = 
1355 total (was 1377) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} root: The patch generated 0 new + 582 unchanged - 5 
fixed = 582 total (was 587) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 
35s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}287m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | 

[jira] [Updated] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-14745:

Description: 
{code}
[ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
   DIR   s3://test-aws-s3a/user/root/ccc/
2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
{code}
if we expect ccc to be a directory, as in this code:
{code}
Path test = new Path("ccc");
fs.getFileStatus(test);
{code}
actually, it tells us it is a file:
{code}
2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests += 
1  ->  3
2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file
{code}


  was:
[ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
   DIR   s3://test-aws-s3a/user/root/ccc/
2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc

if we expect to ccc is a directory by code :
Path test=new Path("ccc");
fs.getFileStatus(test);

actually, it will tell us it is a file:

2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests += 
1  ->  3
2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file




> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> {code}
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> {code}
> if we expect ccc to be a directory, as in this code:
> {code}
> Path test = new Path("ccc");
> fs.getFileStatus(test);
> {code}
> actually, it tells us it is a file:
> {code}
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file
> {code}






[jira] [Commented] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118026#comment-16118026
 ] 

Yonger commented on HADOOP-14745:
-

On the other hand, I see that the HDFS implementation does not allow a file and 
a directory with the same name to exist under the same parent path.

> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> if we expect ccc to be a directory, as in this code:
> Path test = new Path("ccc");
> fs.getFileStatus(test);
> actually, it tells us it is a file:
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file






[jira] [Commented] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16118009#comment-16118009
 ] 

Yonger commented on HADOOP-14745:
-

So I think we should call getFileStatus with an explicit expectation, e.g.
getFileStatus(path, true)
where true means we believe the path we pass in is a directory.

In the internals of getFileStatus, I think a call like the above could skip the 
first two getObjectMetadata calls and only list the objects under the given 
path, which would also benefit the performance of this network-heavy function.
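A rough sketch of how such a hint could cut the probe count, using invented names and a toy key set rather than the real S3A code (2.8-era probe ordering as described in this thread: two HEAD probes, then a LIST):

```java
// Sketch of the proposed hint, with invented names: when the caller asserts
// the path is a directory, skip the two exact-key HEAD probes (path, then
// path + "/") and go straight to the prefix LIST. The request counting models
// the probe ordering described in this thread, not the actual S3A internals.
import java.util.Set;
import java.util.TreeSet;

public class StatusProbeSketch {
    final Set<String> keys = new TreeSet<>();
    int requests = 0;

    private boolean head(String key) { requests++; return keys.contains(key); }

    private boolean list(String path) {
        requests++;
        return keys.stream().anyMatch(k -> k.startsWith(path + "/"));
    }

    /** Proposed two-argument form: expectDirectory skips the HEAD probes. */
    String status(String path, boolean expectDirectory) {
        if (!expectDirectory) {
            if (head(path)) return "file";            // HEAD #1: exact object
            if (head(path + "/")) return "directory"; // HEAD #2: directory marker
        }
        return list(path) ? "directory" : "absent";   // LIST: any children?
    }

    public static void main(String[] args) {
        StatusProbeSketch s = new StatusProbeSketch();
        s.keys.add("user/root/ccc");       // file object
        s.keys.add("user/root/ccc/child"); // child under the same-named prefix
        System.out.println(s.status("user/root/ccc", false) + ", requests=" + s.requests);
        s.requests = 0;
        System.out.println(s.status("user/root/ccc", true) + ", requests=" + s.requests);
    }
}
```

With the hint, the ambiguous path resolves as a directory in a single LIST request instead of reporting the shadowing zero-byte file.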



> s3a getFileStatus can't return expect result when existing a file and 
> directory with the same name
> --
>
> Key: HADOOP-14745
> URL: https://issues.apache.org/jira/browse/HADOOP-14745
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Yonger
>Assignee: Yonger
>
> [ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
>DIR   s3://test-aws-s3a/user/root/ccc/
> 2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc
> if we expect ccc to be a directory, as in this code:
> Path test = new Path("ccc");
> fs.getFileStatus(test);
> actually, it tells us it is a file:
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
> s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
> 2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests 
> += 1  ->  3
> 2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
> (S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file






[jira] [Created] (HADOOP-14745) s3a getFileStatus can't return expect result when existing a file and directory with the same name

2017-08-08 Thread Yonger (JIRA)
Yonger created HADOOP-14745:
---

 Summary: s3a getFileStatus can't return expect result when 
existing a file and directory with the same name
 Key: HADOOP-14745
 URL: https://issues.apache.org/jira/browse/HADOOP-14745
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Yonger
Assignee: Yonger


[ hadoop-aws]# /root/hadoop/s3cmd/s3cmd ls s3://test-aws-s3a/user/root/
   DIR   s3://test-aws-s3a/user/root/ccc/
2017-08-08 07:04 0   s3://test-aws-s3a/user/root/ccc

if we expect ccc to be a directory, as in this code:
Path test = new Path("ccc");
fs.getFileStatus(test);

actually, it tells us it is a file:

2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1576)) - Getting path status for 
s3a://test-aws-s3a/user/root/ccc  (user/root/ccc)
2017-08-08 15:08:40,566 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - object_metadata_requests += 
1  ->  3
2017-08-08 15:08:40,580 [JUnit-case1] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:getFileStatus(1585)) - Found exact file: normal file








[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117919#comment-16117919
 ] 

Hadoop QA commented on HADOOP-14705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
151 unchanged - 2 fixed = 151 total (was 153) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880786/HADOOP-14705.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 800ff8c0560a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 55a181f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12979/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results |