[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2018-02-26 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378155#comment-16378155
 ] 

Akira Ajisaka commented on HADOOP-12760:


Hi [~tasanuma0829], what command did you run to build Apache Hadoop?
I applied HADOOP-12760 and HDFS-11610 and ran {{mvn clean install -DskipTests}}.

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch
>
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner
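
For reference, a minimal sketch of the dual-path workaround this move implies (an illustrative helper, not the attached patch; class and method names here are assumptions): locate the cleaner reflectively so that neither the Java 8 nor the Java 9+ class is referenced at compile time.

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class DirectBufferCleaner {
  // Frees a direct buffer's native memory on Java 8 (sun.misc.Cleaner)
  // and on Java 9+ (sun.misc.Unsafe#invokeCleaner), via reflection only.
  public static void clean(ByteBuffer buffer) throws Exception {
    if (buffer == null || !buffer.isDirect()) {
      return;
    }
    try {
      // Java 9+: Unsafe.invokeCleaner(ByteBuffer) hides the moved Cleaner.
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
      theUnsafe.setAccessible(true);
      Method invokeCleaner =
          unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
      invokeCleaner.invoke(theUnsafe.get(null), buffer);
    } catch (NoSuchMethodException javaEight) {
      // Java 8: DirectByteBuffer.cleaner() returns a sun.misc.Cleaner.
      Method cleanerMethod = buffer.getClass().getMethod("cleaner");
      cleanerMethod.setAccessible(true);
      Object cleaner = cleanerMethod.invoke(buffer);
      if (cleaner != null) {
        cleaner.getClass().getMethod("clean").invoke(cleaner);
      }
    }
  }
}
{code}

Either path releases the native memory immediately instead of waiting for the buffer to be garbage collected.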



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions

2018-02-26 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378134#comment-16378134
 ] 

Bharat Viswanadham commented on HADOOP-14959:
-

+1 LGTM.

HADOOP-15178 has been created to handle my comment.

 

> DelegationTokenAuthenticator.authenticate() to wrap network exceptions
> --
>
> Key: HADOOP-14959
> URL: https://issues.apache.org/jira/browse/HADOOP-14959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net, security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14959.001.patch, HADOOP-14959.002.patch
>
>
> Network errors raised in {{DelegationTokenAuthenticator.authenticate()}} 
> aren't being wrapped, so they only return the usual limited-value java.net 
> error text. Using {{NetUtils.wrapException()}} can address that.
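
A hedged sketch of what the wrapping could look like at a call site (a hypothetical method, assuming the existing {{NetUtils.wrapException(destHost, destPort, localHost, localPort, exception)}} signature):

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.net.NetUtils;

public final class WrapNetworkErrors {
  // Rethrows low-level java.net failures with destination context so the
  // caller sees which endpoint failed, not just "Connection refused".
  static void connectWithContext(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      conn.connect();
    } catch (IOException e) {
      // The local endpoint is not known at this layer; pass placeholders.
      throw NetUtils.wrapException(url.getHost(), url.getPort(),
          "localhost", 0, e);
    } finally {
      conn.disconnect();
    }
  }
}
{code}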



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15268) Back port HADOOP-13972 to 2.8.1 and 2.8.3

2018-02-26 Thread Omkar Aradhya K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Aradhya K S resolved HADOOP-15268.

  Resolution: Invalid
Target Version/s: 2.8.3, 2.8.1  (was: 2.8.1, 2.8.3)

This is not required.

> Back port HADOOP-13972 to 2.8.1 and 2.8.3
> -
>
> Key: HADOOP-15268
> URL: https://issues.apache.org/jira/browse/HADOOP-15268
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.1, 2.8.3
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
>Priority: Major
>
> Back port the HADOOP-13972 to branch-2.8.1 and branch-2.8.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15268) Back port HADOOP-13972 to 2.8.1 and 2.8.3

2018-02-26 Thread Omkar Aradhya K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378131#comment-16378131
 ] 

Omkar Aradhya K S commented on HADOOP-15268:


Thanks, [~jojochuang]. I will delete this sub-task; it is not required.

> Back port HADOOP-13972 to 2.8.1 and 2.8.3
> -
>
> Key: HADOOP-15268
> URL: https://issues.apache.org/jira/browse/HADOOP-15268
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.1, 2.8.3
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
>Priority: Major
>
> Back port the HADOOP-13972 to branch-2.8.1 and branch-2.8.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15268) Back port HADOOP-13972 to 2.8.1 and 2.8.3

2018-02-26 Thread Omkar Aradhya K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Aradhya K S updated HADOOP-15268:
---
Summary: Back port HADOOP-13972 to 2.8.1 and 2.8.3  (was: Back port to 
2.8.1 and 2.8.3)

> Back port HADOOP-13972 to 2.8.1 and 2.8.3
> -
>
> Key: HADOOP-15268
> URL: https://issues.apache.org/jira/browse/HADOOP-15268
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.1, 2.8.3
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
>Priority: Major
>
> Back port the HADOOP-13972 to branch-2.8.1 and branch-2.8.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378130#comment-16378130
 ] 

ASF GitHub Bot commented on HADOOP-15261:
-

Github user PandaMonkey commented on the issue:

https://github.com/apache/hadoop/pull/347
  
Related issue:  https://issues.apache.org/jira/browse/HADOOP-15261


> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  
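
A quick diagnostic sketch for readers hitting this kind of conflict (plain JVM reflection, nothing Hadoop-specific): print which commons-io jar actually won the resolution on your classpath.

{code:java}
import org.apache.commons.io.IOUtils;

public final class WhichCommonsIo {
  public static void main(String[] args) {
    // Prints the jar that IOUtils was loaded from, e.g.
    // .../commons-io-2.4.jar, revealing which version won the conflict.
    System.out.println(IOUtils.class.getProtectionDomain()
        .getCodeSource().getLocation());
  }
}
{code}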



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-02-26 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378123#comment-16378123
 ] 

Bharat Viswanadham commented on HADOOP-15178:
-

Thank You [~ajayydv] for the patch.

+1 LGTM

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15178.001.patch, HADOOP-15178.002.patch, 
> HADOOP-15178.003.patch
>
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not one of SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or 
> BindException.
> By default, it should always return an instance of the same type (a subclass 
> of IOException) unless a String constructor is not available.
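
A minimal sketch of that generalization as described (an illustration of the idea, not the attached patch):

{code:java}
import java.io.IOException;
import java.lang.reflect.Constructor;

public final class SameTypeWrapping {
  // Rebuilds the same IOException subclass with an augmented message when
  // a (String) constructor exists; otherwise falls back to IOException.
  static IOException wrapWithMessage(IOException e, String message) {
    try {
      Class<? extends IOException> clazz = e.getClass();
      Constructor<? extends IOException> ctor =
          clazz.getConstructor(String.class);
      IOException wrapped = ctor.newInstance(message);
      wrapped.initCause(e);
      return wrapped;
    } catch (ReflectiveOperationException noStringCtor) {
      // No usable (String) constructor: keep the plain-IOException fallback.
      return new IOException(message, e);
    }
  }
}
{code}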



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15268) Back port to 2.8.1 and 2.8.3

2018-02-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378118#comment-16378118
 ] 

Wei-Chiu Chuang commented on HADOOP-15268:
--

Please don't set Fix Versions. 2.8.1 and 2.8.3 are already released. Perhaps 
you mean 2.8.4?

> Back port to 2.8.1 and 2.8.3
> 
>
> Key: HADOOP-15268
> URL: https://issues.apache.org/jira/browse/HADOOP-15268
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.1, 2.8.3
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
>Priority: Major
>
> Back port the HADOOP-13972 to branch-2.8.1 and branch-2.8.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15268) Back port to 2.8.1 and 2.8.3

2018-02-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15268:
-
Fix Version/s: (was: 2.8.3)
   (was: 2.8.1)

> Back port to 2.8.1 and 2.8.3
> 
>
> Key: HADOOP-15268
> URL: https://issues.apache.org/jira/browse/HADOOP-15268
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 2.8.1, 2.8.3
>Reporter: Omkar Aradhya K S
>Assignee: Omkar Aradhya K S
>Priority: Major
>
> Back port the HADOOP-13972 to branch-2.8.1 and branch-2.8.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-02-26 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378116#comment-16378116
 ] 

Rohith Sharma K S commented on HADOOP-15137:


Thanks [~bharatviswa] for working on this jira. The patch, which removes the 
hadoop-yarn-server-common dependencies, looks reasonable to me.

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, 
> YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a hadoop downstream project, but 
> I encounter the following exception when starting the hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}
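
For anyone trying to reproduce this, a minimal sketch under the reporter's setup (assumption: only hadoop-client-minicluster is on the classpath):

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public final class MiniClusterRepro {
  public static void main(String[] args) {
    // serviceInit loads the ResourceManager class, which references
    // DistributedSchedulingAMProtocol; with the shaded minicluster jar
    // alone, that class is absent and the NoClassDefFoundError above fires.
    MiniYARNCluster cluster = new MiniYARNCluster("repro", 1, 1, 1);
    cluster.init(new YarnConfiguration());
    cluster.start();
    cluster.stop();
  }
}
{code}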



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15110) Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds

2018-02-26 Thread LiXin Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LiXin Ge updated HADOOP-15110:
--
Status: Patch Available  (was: Open)

> Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds
> --
>
> Key: HADOOP-15110
> URL: https://issues.apache.org/jira/browse/HADOOP-15110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: metrics, security
>Affects Versions: 3.0.0-alpha2, 2.8.0
>Reporter: Harshakiran Reddy
>Assignee: LiXin Ge
>Priority: Minor
> Attachments: HADOOP-15110.001.patch
>
>
> *scenario*:
> -
> While running the renewal command for a principal, the warning prints the raw 
> gauge objects for *renewalFailures* and *renewalFailuresTotal*:
> {noformat}
> bin> ./hdfs dfs -ls /
> 2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
> encountered while running the renewal command for principal_name. (TGT end 
> time:1513070122000, renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
> ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
> renewing credentials
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
> at org.apache.hadoop.util.Shell.run(Shell.java:887)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *Expected Result*:
> It should be a user-understandable value.
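
The likely shape of the fix, sketched (an assumption drawn from the log output above, not the attached patch): log the gauge values rather than the gauge objects.

{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class GaugeLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(GaugeLogging.class);

  public static void main(String[] args) {
    MetricsRegistry registry = new MetricsRegistry("UgiMetrics");
    MutableGaugeInt renewalFailures =
        registry.newGauge("renewalFailures", "failures since last success", 0);
    MutableGaugeLong renewalFailuresTotal =
        registry.newGauge("renewalFailuresTotal", "total renewal failures", 0L);
    // Bug: concatenating the gauges prints MutableGaugeInt@1bbb43eb.
    // Fix: call value() so the message shows the numbers themselves.
    LOG.warn("Renewal failed (renewalFailures: {}, renewalFailuresTotal: {})",
        renewalFailures.value(), renewalFailuresTotal.value());
  }
}
{code}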



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15268) Back port to 2.8.1 and 2.8.3

2018-02-26 Thread Omkar Aradhya K S (JIRA)
Omkar Aradhya K S created HADOOP-15268:
--

 Summary: Back port to 2.8.1 and 2.8.3
 Key: HADOOP-15268
 URL: https://issues.apache.org/jira/browse/HADOOP-15268
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/adl
Affects Versions: 2.8.3, 2.8.1
Reporter: Omkar Aradhya K S
Assignee: Omkar Aradhya K S
 Fix For: 2.8.3, 2.8.1


Back port the HADOOP-13972 to branch-2.8.1 and branch-2.8.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15222) Refine proxy user authorization to support multiple ACL list

2018-02-26 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-15222:
---
Description: 
This Jira is follow-up work for HADOOP-14077.  The original goal of 
HADOOP-14077 is to have the ability to support multiple ACL lists.  The 
original problem is a separation-of-duty use case where the company hosting the 
Hadoop cluster monitors it through jmx.  Application logs and hdfs contents 
should not be visible to the hosting company's system administrators.  When 
checking proxy user authorization in AuthenticationFilter, there must be a way 
to authorize normal users and admin users using separate proxy user ACL lists.  
HADOOP-14060 suggested configuring AuthenticationFilterWithProxyUser this way:

AuthenticationFilterWithProxyUser->StaticUserWebFilter->AuthenticationFilterWithProxyUser

This enables the second AuthenticationFilterWithProxyUser to validate the 
credentials claimed by both the proxy user and the end user.

However, there is a side effect: unauthorized users are not properly rejected 
with a 403 FORBIDDEN message if there is no other web filter configured to 
handle the required authorization work.

This JIRA is intended to continue the work of HADOOP-14077 by either combining 
StaticUserWebFilter and the second AuthenticationFilterWithProxyUser into an 
AuthorizationFilterWithProxyUser as a final filter to evict unauthorized users, 
or reverting both HADOOP-14077 and HADOOP-13119 to eliminate the false 
positive in user authorization and impersonation.

  was:
This Jira is responding to follow up work for HADOOP-14077.  The original goal 
of HADOOP-14077 is to have ability to support multiple ACL lists.  When 
checking for proxy user authorization in AuthenticationFilter to ensure there 
is a way to authorize normal users and admin users using separate proxy users 
ACL lists.  This was suggested in HADOOP-14060 to configure 
AuthenticationFilterWithProxyUser this way:

AuthenticationFilterWithProxyUser->StaticUserWebFilter->AuthenticationFIlterWithProxyUser

This enables the second AuthenticationFilterWithProxyUser validates both 
credentials claim by proxy user, and end user.

However, there is a side effect that unauthorized users are not properly 
rejected with 403 FORBIDDEN message if there is no other web filter configured 
to handle the required authorization work.

This JIRA is intend to discuss the work of HADOOP-14077 by either combine 
StaticUserWebFilter + second AuthenticationFilterWithProxyUser into a 
AuthorizationFilterWithProxyUser as a final filter to evict unauthorized user, 
or revert both HADOOP-14077 and HADOOP-13119 to eliminate the false positive in 
user authorization.


> Refine proxy user authorization to support multiple ACL list
> 
>
> Key: HADOOP-15222
> URL: https://issues.apache.org/jira/browse/HADOOP-15222
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Eric Yang
>Priority: Major
>
> This Jira is follow-up work for HADOOP-14077.  The original goal of 
> HADOOP-14077 is to have the ability to support multiple ACL lists.  The 
> original problem is a separation-of-duty use case where the company hosting 
> the Hadoop cluster monitors it through jmx.  Application logs and hdfs 
> contents should not be visible to the hosting company's system 
> administrators.  When checking proxy user authorization in 
> AuthenticationFilter, there must be a way to authorize normal users and admin 
> users using separate proxy user ACL lists.  HADOOP-14060 suggested 
> configuring AuthenticationFilterWithProxyUser this way:
> AuthenticationFilterWithProxyUser->StaticUserWebFilter->AuthenticationFilterWithProxyUser
> This enables the second AuthenticationFilterWithProxyUser to validate the 
> credentials claimed by both the proxy user and the end user.
> However, there is a side effect: unauthorized users are not properly 
> rejected with a 403 FORBIDDEN message if there is no other web filter 
> configured to handle the required authorization work.
> This JIRA is intended to continue the work of HADOOP-14077 by either 
> combining StaticUserWebFilter and the second 
> AuthenticationFilterWithProxyUser into an AuthorizationFilterWithProxyUser as 
> a final filter to evict unauthorized users, or reverting both HADOOP-14077 
> and HADOOP-13119 to eliminate the false positive in user authorization and 
> impersonation.
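
As a concrete illustration of the "final filter" option, a hedged servlet-filter sketch (a hypothetical class written for this digest, not Hadoop's actual AuthenticationFilterWithProxyUser code): any request reaching the end of the chain without an authenticated principal is rejected with 403 instead of falling through.

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FinalAuthorizationFilter implements Filter {
  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    if (httpReq.getUserPrincipal() == null) {
      // No authorized principal survived the earlier filters: evict.
      ((HttpServletResponse) resp).sendError(
          HttpServletResponse.SC_FORBIDDEN, "User is not authorized");
      return;
    }
    chain.doFilter(req, resp);
  }

  @Override
  public void init(FilterConfig config) {
  }

  @Override
  public void destroy() {
  }
}
{code}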



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378086#comment-16378086
 ] 

genericqa commented on HADOOP-15253:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 1 new + 207 unchanged 
- 0 fixed = 208 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15253 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912178/HADOOP-15253.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5866f0cbaf14 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae290a4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 

[jira] [Updated] (HADOOP-15251) Backport HADOOP-13514 (surefire upgrade) to branch-2

2018-02-26 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15251:
---
Fix Version/s: 2.10.0

> Backport HADOOP-13514 (surefire upgrade) to branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Major
> Fix For: 2.10.0, 2.9.1
>
> Attachments: HADOOP-15251-branch-2.001.patch, 
> HADOOP-15251-branch-2.002.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15267) S3A fails to store my data when multipart size is set to 5 Mb and SSE-C encryption is enabled

2018-02-26 Thread Harshavardhana (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377961#comment-16377961
 ] 

Harshavardhana commented on HADOOP-15267:
-

This issue can also be observed with [Minio|https://minio.io/] (an AWS 
S3-compatible server) when SSE-C is being used. 

> S3A fails to store my data when multipart size is set to 5 Mb and SSE-C 
> encryption is enabled
> -
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Priority: Critical
> Attachments: hadoop-fix.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set  fs.s3a.multipart.size 
> to 5 Mb, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue which clarifies the 
> problem.
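
For clarity, a sketch of the kind of change the description calls for, using the AWS SDK's part-upload API (parameter names assumed; the attached hadoop-fix.patch is the authoritative change):

{code:java}
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.File;

public final class SseCPartUpload {
  // Each part upload must carry the same SSE-C key that the initiate
  // multipart request used; otherwise AWS rejects the part.
  static UploadPartRequest newPartRequest(String bucket, String key,
      String uploadId, int partNumber, long partSize, File part,
      String base64Key) {
    return new UploadPartRequest()
        .withBucketName(bucket)
        .withKey(key)
        .withUploadId(uploadId)
        .withPartNumber(partNumber)
        .withPartSize(partSize)
        .withFile(part)
        // The previously omitted SSE-C headers for this part.
        .withSSECustomerKey(new SSECustomerKey(base64Key));
  }
}
{code}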



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-02-26 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377919#comment-16377919
 ] 

Bharat Viswanadham edited comment on HADOOP-15137 at 2/27/18 2:45 AM:
--

[~shaneku...@gmail.com]

Uploaded patch v02 to address review comments.

Can you help in reviewing the patch?


was (Author: bharatviswa):
[~shaneku...@gmail.com]

Uploaded pathc v02 to address review comments.

Can you help in reviewing the patch.

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, 
> YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a hadoop downstream project, but 
> I encounter the following exception when starting the hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15137) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2018-02-26 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377919#comment-16377919
 ] 

Bharat Viswanadham commented on HADOOP-15137:
-

[~shaneku...@gmail.com]

Uploaded patch v02 to address review comments.

Can you help in reviewing the patch?

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: HADOOP-15137
> URL: https://issues.apache.org/jira/browse/HADOOP-15137
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15137.01.patch, HADOOP-15137.02.patch, 
> YARN-7673.00.patch
>
>
> I'd like to use hadoop-client-minicluster for a hadoop downstream project, but 
> I encounter the following exception when starting the hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not have this class. Is 
> this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377911#comment-16377911
 ] 

Tao Jie commented on HADOOP-15253:
--

Added a test case for this patch. [~shv] [~xyao], would you give it a quick review?

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When changing the CallQueue instance to FairCallQueue, the length of each 
> queue in FairCallQueue would be 1/priorityLevels of the original length of 
> the DefaultCallQueue, so it would be helpful to be able to set the call 
> queue length to a proper value.
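
A quick worked example of the 1/priorityLevels effect (numbers assumed, not taken from the patch):

{code:java}
public final class QueueCapacityExample {
  public static void main(String[] args) {
    int maxQueueSize = 1000;  // capacity of the original single queue
    int priorityLevels = 4;   // a typical FairCallQueue setting
    // FairCallQueue splits the configured capacity across its sub-queues,
    // so each priority level only holds 250 calls here.
    System.out.println("capacity per priority sub-queue: "
        + (maxQueueSize / priorityLevels));
  }
}
{code}

This is why refreshing into FairCallQueue without also updating maxQueueSize can shrink the effective capacity unexpectedly.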



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Attachment: HADOOP-15253.002.patch

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When changing the CallQueue instance to FairCallQueue, the length of each 
> queue in FairCallQueue would be 1/priorityLevels of the original length of 
> the DefaultCallQueue, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-26 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Affects Version/s: 2.8.2
   Status: Patch Available  (was: Open)

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch, HADOOP-15253.002.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to update the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When changing the CallQueue instance to FairCallQueue, the length of each 
> queue in FairCallQueue would be 1/priorityLevels of the original length of 
> the DefaultCallQueue, so it would be helpful to be able to set the call 
> queue length to a proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread PandaMonkey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PandaMonkey updated HADOOP-15261:
-
Comment: was deleted

(was: @Ajay Kumar Hi, Thx for reviewing my issue. Would appreciate if you can 
check it. :))

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread PandaMonkey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377903#comment-16377903
 ] 

PandaMonkey edited comment on HADOOP-15261 at 2/27/18 2:13 AM:
---

@Ajay Kumar Hi, Thx for reviewing my issue. Would appreciate if you can check 
it. :)


was (Author: pandamonkey):
[ ajayydv ] Hi, Thx for reviewing my issue. Would appreciate if you can check 
it. :)

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread PandaMonkey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377903#comment-16377903
 ] 

PandaMonkey edited comment on HADOOP-15261 at 2/27/18 2:12 AM:
---

[ ajayydv ] Hi, Thx for reviewing my issue. Would appreciate if you can check 
it. :)


was (Author: pandamonkey):
[~ajayk5] Hi, Thx for reviewing my issue. Would appreciate if you can check it. 
:)

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread PandaMonkey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377903#comment-16377903
 ] 

PandaMonkey commented on HADOOP-15261:
--

[~ajayk5] Hi, Thx for reviewing my issue. Would appreciate if you can check it. 
:)

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-02-26 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377894#comment-16377894
 ] 

Konstantin Shvachko commented on HADOOP-15205:
--

I checked on [Nexus|https://repository.apache.org/]. It looks like the releases 
were staged like that. See e.g.
 
[https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/]
 * For 2.7.5 it's very strange, because some sub-projects like hadoop-common 
and hadoop-hdfs do have sources
 
[https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-common/2.7.5/]
 
[https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-hdfs/2.7.5/]
 while others don't.
 * For 2.8.3, 2.9.0, and 3.0.0 sources are consistently not there.
 * The 3.0.1 RC0, which is currently staged, seems to have all sources. I 
didn't check all sub-projects, though:
https://repository.apache.org/content/repositories/orgapachehadoop-1078/org/apache/hadoop/hadoop-mapreduce-client-common/3.0.1/

I will try to debug this when I release 2.7.6. Maybe we need to update the 
[HowToRelease|https://wiki.apache.org/hadoop/HowToRelease] runbook.

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact is not present at Maven Central; the last release which had source 
> attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15261:
---

Assignee: (was: Ajay Kumar)

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377887#comment-16377887
 ] 

ASF GitHub Bot commented on HADOOP-15261:
-

GitHub user PandaMonkey opened a pull request:

https://github.com/apache/hadoop/pull/347

[HADOOP-15261]move commons-io up to 2.5

move commons-io up to 2.5 which introduced by kerb-simplekdc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PandaMonkey/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #347


commit 625a8e94c85a263cd7687fb9dfff4ec4229e9b29
Author: PandaMonkey <36159621+pandamonkey@...>
Date:   2018-02-27T01:34:07Z

move commons-io up to 2.5

move commons-io up to 2.5 which introduced by kerb-simplekdc.




> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduces commons-io 2.5. 
> At the same time, Hadoop directly depends on an older version, commons-io 
> 2.4. Looking further into the source code, these two versions of commons-io 
> have many different features. This dependency conflict brings a high risk of 
> "NoClassDefFoundError" or "NoSuchMethodError" issues at runtime. Please take 
> note of this problem; upgrading commons-io from 2.4 to 2.5 may be a good 
> choice. Hope this report can help you. Thanks!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15267) S3A fails to store my data when multipart size is set to 5 Mb and SSE-C encryption is enabled

2018-02-26 Thread Anis Elleuch (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anis Elleuch updated HADOOP-15267:
--
Description: 
When I enable SSE-C encryption in Hadoop 3.1 and set  fs.s3a.multipart.size to 
5 Mb, storing data in AWS doesn't work anymore. For example, running the 
following code:
{code}
>>> df1 = spark.read.json('/home/user/people.json')
>>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
{code}
shows the following exception:
{code:java}
com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
initiate requested encryption. Subsequent part requests must include the 
appropriate encryption parameters.
{code}

After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
headers in Put Object Part as stated in AWS specification: 
[https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
{code:java}
If you requested server-side encryption using a customer-provided encryption 
key in your initiate multipart upload request, you must provide identical 
encryption information in each part upload using the following headers.
{code}
 
You can find a patch attached to this issue which clarifies the 
problem.



  was:
With Spark with Hadoop 3.1.0, when I enable SSE-C encryption and set  
fs.s3a.multipart.size to 5 Mb, storing data in AWS won't work anymore. For 
example, running the following code:
{code}
>>> df1 = spark.read.json('/home/user/people.json')
>>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
{code}
shows the following exception:
{code:java}
com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
initiate requested encryption. Subsequent part requests must include the 
appropriate encryption parameters.
{code}
After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
headers in Put Object Part as stated in AWS specification: 
[https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
{code:java}
If you requested server-side encryption using a customer-provided encryption 
key in your initiate multipart upload request, you must provide identical 
encryption information in each part upload using the following headers.
{code}
 
You can find a patch attached to this issue for a better clarification of the 
problem.




> S3A fails to store my data when multipart size is set to 5 Mb and SSE-C 
> encryption is enabled
> -
>
> Key: HADOOP-15267
> URL: https://issues.apache.org/jira/browse/HADOOP-15267
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: Hadoop 3.1 Snapshot
>Reporter: Anis Elleuch
>Priority: Critical
> Attachments: hadoop-fix.patch
>
>
> When I enable SSE-C encryption in Hadoop 3.1 and set  fs.s3a.multipart.size 
> to 5 Mb, storing data in AWS doesn't work anymore. For example, running the 
> following code:
> {code}
> >>> df1 = spark.read.json('/home/user/people.json')
> >>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
> {code}
> shows the following exception:
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
> initiate requested encryption. Subsequent part requests must include the 
> appropriate encryption parameters.
> {code}
> After some investigation, I discovered that hadoop-aws doesn't send SSE-C 
> headers in Put Object Part as stated in AWS specification: 
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
> {code:java}
> If you requested server-side encryption using a customer-provided encryption 
> key in your initiate multipart upload request, you must provide identical 
> encryption information in each part upload using the following headers.
> {code}
>  
> You can find a patch attached to this issue which clarifies the 
> problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15267) S3A fails to store my data when multipart size is set to 5 Mb and SSE-C encryption is enabled

2018-02-26 Thread Anis Elleuch (JIRA)
Anis Elleuch created HADOOP-15267:
-

 Summary: S3A fails to store my data when multipart size is set to 
5 Mb and SSE-C encryption is enabled
 Key: HADOOP-15267
 URL: https://issues.apache.org/jira/browse/HADOOP-15267
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.1.0
 Environment: Hadoop 3.1 Snapshot
Reporter: Anis Elleuch
 Attachments: hadoop-fix.patch

When using Spark with Hadoop 3.1.0, if I enable SSE-C encryption and set 
fs.s3a.multipart.size to 5 MB, storing data in AWS no longer works. For 
example, running the following code:
{code}
>>> df1 = spark.read.json('/home/user/people.json')
>>> df1.write.mode("overwrite").json("s3a://testbucket/people.json")
{code}
shows the following exception:
{code:java}
com.amazonaws.services.s3.model.AmazonS3Exception: The multipart upload 
initiate requested encryption. Subsequent part requests must include the 
appropriate encryption parameters.
{code}
After some investigation, I discovered that hadoop-aws doesn't send the SSE-C 
headers in UploadPart requests, as required by the AWS specification: 
[https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html]
{code:java}
If you requested server-side encryption using a customer-provided encryption 
key in your initiate multipart upload request, you must provide identical 
encryption information in each part upload using the following headers.
{code}
 
A patch is attached to this issue to better illustrate the problem.
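
For illustration only (this is not the attached patch, and the key material is 
a placeholder), a minimal sketch of propagating the same customer key to every 
part upload with the AWS SDK v1 API that hadoop-aws builds on:
{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
import com.amazonaws.services.s3.model.PartETag;
import com.amazonaws.services.s3.model.SSECustomerKey;
import com.amazonaws.services.s3.model.UploadPartRequest;
import com.amazonaws.services.s3.model.UploadPartResult;

public class SseCMultipartSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // Placeholder: a base64-encoded 256-bit customer-provided key.
    SSECustomerKey sseKey = new SSECustomerKey("BASE64-ENCODED-AES-256-KEY");

    // The initiate request carries the SSE-C key...
    InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest("testbucket", "people.json")
            .withSSECustomerKey(sseKey));

    // ...and, per the AWS documentation quoted above, every part upload
    // must carry the identical key or S3 rejects the part.
    File part = new File("/tmp/part-0");
    UploadPartResult res = s3.uploadPart(new UploadPartRequest()
        .withBucketName("testbucket")
        .withKey("people.json")
        .withUploadId(init.getUploadId())
        .withPartNumber(1)
        .withFile(part)
        .withPartSize(part.length())
        .withSSECustomerKey(sseKey));  // the headers the report says are missing

    List<PartETag> etags = new ArrayList<>();
    etags.add(res.getPartETag());
    s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        "testbucket", "people.json", init.getUploadId(), etags));
  }
}
{code}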





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15251) Backport HADOOP-13514 (surefire upgrade) to branch-2

2018-02-26 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15251:
---
   Resolution: Fixed
 Assignee: Chris Douglas
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.1
   Status: Resolved  (was: Patch Available)

Thanks, Akira. I committed this.

> Backport HADOOP-13514 (surefire upgrade) to branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Chris Douglas
>Priority: Major
> Fix For: 2.9.1
>
> Attachments: HADOOP-15251-branch-2.001.patch, 
> HADOOP-15251-branch-2.002.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins and, due to 
> SUREFIRE-524, they are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will make the 
> problem easier to address in branch-2.
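
For reference, the backport itself is a version bump in the shared plugin 
management; a hedged sketch of the shape of such a pin in 
hadoop-project/pom.xml (the version number below is illustrative, the real one 
comes from HADOOP-13514):
{code:xml}
<!-- hadoop-project/pom.xml (sketch): pin the surefire version once so every
     module inherits it; 2.20.1 is a placeholder, not the committed value -->
<properties>
  <maven-surefire-plugin.version>2.20.1</maven-surefire-plugin.version>
</properties>

<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${maven-surefire-plugin.version}</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
{code}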



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15251) Backport HADOOP-13514 (surefire upgrade) to branch-2

2018-02-26 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-15251:
---
Summary: Backport HADOOP-13514 (surefire upgrade) to branch-2  (was: 
Upgrade surefire version in branch-2)

> Backport HADOOP-13514 (surefire upgrade) to branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch, 
> HADOOP-15251-branch-2.002.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins and, due to 
> SUREFIRE-524, they are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will make the 
> problem easier to address in branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user

2018-02-26 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377850#comment-16377850
 ] 

Eric Yang commented on HADOOP-14728:


The null user is not introduced by HADOOP-13119 but by HADOOP-14077. An 
AuthorizationException will be thrown when the proxy user's Kerberos ticket is 
not valid.

There are two conditions under which a null user is returned:
1. The guest user is not associated with any group in the proxy user ACL.
2. The guest user is coming from an address that is not allowed in the proxy user ACL.

HADOOP-14077 chose to return null in order to let additional filter chains 
check other proxy ACLs, or let other challenge/response filters take effect. 
The purpose is to channel doAs users through other proxy ACL checks on demand.
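
To make those two conditions concrete, here is a hedged, self-contained sketch 
(not the committed Hadoop code; the class name is hypothetical) of the guard a 
filter needs so that a null user falls through to the next filter instead of 
reaching {{UserGroupInformation.createRemoteUser(null)}}:
{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.apache.hadoop.security.UserGroupInformation;

/** Hypothetical sketch: defer to later filters when the proxy-user ACL
 *  check yields a null remote user, instead of failing with "Null user". */
public class NullUserGuardFilter implements Filter {
  @Override public void init(FilterConfig conf) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    String remoteUser = ((HttpServletRequest) req).getRemoteUser();
    if (remoteUser == null) {
      // One of the two ACL conditions above applied: let the next filter
      // run its own proxy-ACL or challenge/response handling.
      chain.doFilter(req, resp);
      return;
    }
    // Safe now: remoteUser is non-null.
    UserGroupInformation.createRemoteUser(remoteUser);
    chain.doFilter(req, resp);
  }
}
{code}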

> Configuring AuthenticationFilterInitializer throws IllegalArgumentException: 
> Null user
> --
>
> Key: HADOOP-14728
> URL: https://issues.apache.org/jira/browse/HADOOP-14728
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Krishna Pandey
>Priority: Major
> Attachments: HADOOP-14728.01.patch
>
>
> Configured AuthenticationFilterInitializer and started a cluster. When 
> accessing YARN UI using doAs, encountering following error. 
> URL : http://localhost:25005/cluster??doAs=guest
> {noformat}
> org.apache.hadoop.security.authentication.util.SignerException: Invalid 
> signature
> 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error 
> handling URI: /cluster
> java.lang.IllegalArgumentException: Null user
>   at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499)
>   at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486)
>   at 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82)
>   at 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92)
>   at 
> javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207)
>   at 
> javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207)
>   at 
> org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>   at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61)
>   at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>   at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377701#comment-16377701
 ] 

Hudson commented on HADOOP-15265:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13718 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13718/])
HADOOP-15265. Exclude json-smart explicitly in hadoop-auth avoid being (arp: 
rev 78a10029ec5b2ecc7b9448be6dc6a1875196a68f)
* (edit) hadoop-common-project/hadoop-auth/pom.xml
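
For reference, a hedged sketch of the general shape of such an exclusion plus 
an explicit pin in hadoop-auth/pom.xml (the pinned version below is a 
placeholder; the committed patch is authoritative):
{code:xml}
<dependency>
  <groupId>com.nimbusds</groupId>
  <artifactId>nimbus-jose-jwt</artifactId>
  <exclusions>
    <exclusion>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- pin a released json-smart explicitly so 2.3-SNAPSHOT is never resolved -->
<dependency>
  <groupId>net.minidev</groupId>
  <artifactId>json-smart</artifactId>
  <version>2.3</version>
</dependency>
{code}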


> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15265.2.patch, HADOOP-15265.patch
>
>
> This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.
> We need to exclude the dependency explicitly in the hadoop-auth pom.xml and 
> add the correct version so that it is not pulled in transitively. 
> In Druid we use 
> [https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
>  to fetch dependencies transitively, which still pulls in the wrong version 
> of the json-smart jar.
> {code:java}
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
>  
> Full Stack trace 
> {code:java}
>  2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
> io.druid.cli.PullDependencies - Unable to resolve artifacts for 
> [io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> 
> [] < [ (https://repo1.maven.org/maven2/, releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
> releases+snapshots)]].
> 2018/02/26 03:47:22 INFO: 
> org.eclipse.aether.resolution.DependencyResolutionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
>  ~[tesla-aether-0.0.5.jar:0.0.5]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.Main.main(Main.java:108) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.collection.DependencyCollectionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: ... 4 more
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
> artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> 

[jira] [Updated] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15265:
---
Fix Version/s: 3.1.0

> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15265.2.patch, HADOOP-15265.patch
>
>
> This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.
> We need to exclude the dependency explicitly in the hadoop-auth pom.xml and 
> add the correct version so that it is not pulled in transitively. 
> In Druid we use 
> [https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
>  to fetch dependencies transitively, which still pulls in the wrong version 
> of the json-smart jar.
> {code:java}
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
>  
> Full Stack trace 
> {code:java}
>  2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
> io.druid.cli.PullDependencies - Unable to resolve artifacts for 
> [io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> 
> [] < [ (https://repo1.maven.org/maven2/, releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
> releases+snapshots)]].
> 2018/02/26 03:47:22 INFO: 
> org.eclipse.aether.resolution.DependencyResolutionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
>  ~[tesla-aether-0.0.5.jar:0.0.5]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.Main.main(Main.java:108) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.collection.DependencyCollectionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: ... 4 more
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
> artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO:

[jira] [Updated] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15265:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this. Thanks for reporting and fixing this, [~nishantbangarwa].

> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.2.patch, HADOOP-15265.patch
>
>
> This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.
> We need to exclude the dependency explicitly in the hadoop-auth pom.xml and 
> add the correct version so that it is not pulled in transitively. 
> In Druid we use 
> [https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
>  to fetch dependencies transitively, which still pulls in the wrong version 
> of the json-smart jar.
> {code:java}
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
>  
> Full Stack trace 
> {code:java}
>  2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
> io.druid.cli.PullDependencies - Unable to resolve artifacts for 
> [io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> 
> [] < [ (https://repo1.maven.org/maven2/, releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
> releases+snapshots)]].
> 2018/02/26 03:47:22 INFO: 
> org.eclipse.aether.resolution.DependencyResolutionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
>  ~[tesla-aether-0.0.5.jar:0.0.5]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.Main.main(Main.java:108) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.collection.DependencyCollectionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: ... 4 more
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
> artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO: at 
> 

[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15266:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.10.0
   Status: Resolved  (was: Patch Available)

+1. I've committed this. The javac warning is in a file not touched by this 
patch. The UT failure is unrelated.

Thanks [~nandakumar131].

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 2.10.0
>
> Attachments: HADOOP-15266-branch-2.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names 
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377605#comment-16377605
 ] 

genericqa commented on HADOOP-15266:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m  8s{color} 
| {color:red} root generated 1 new + 1434 unchanged - 1 fixed = 1435 total (was 
1435) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-common:1 |
| Timed out junit tests | org.apache.hadoop.log.TestLogLevel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HADOOP-15266 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912107/HADOOP-15266-branch-2.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 4c62e71f085b 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 4b43f2a |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14214/artifact/out/diff-compile-javac-root.txt
 |
| Unreaped Processes Log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14214/artifact/out/patch-unit-hadoop-common-project_hadoop-common-reaper.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14214/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14214/testReport/ |
| Max. process+thread count | 1343 (vs. ulimit of 1) |

[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377567#comment-16377567
 ] 

genericqa commented on HADOOP-15265:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
44m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15265 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912100/HADOOP-15265.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 69b5100f344e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 451265a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14213/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14213/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> 

[jira] [Commented] (HADOOP-15263) hadoop cloud-storage module to mark hadoop-common as provided; add azure-datalake

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377525#comment-16377525
 ] 

genericqa commented on HADOOP-15263:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
49m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-cloud-storage in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15263 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912082/HADOOP-15263-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 659f97a5d95a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 451265a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14210/testReport/ |
| Max. process+thread count | 289 (vs. ulimit of 1) |
| modules | C: hadoop-project hadoop-tools/hadoop-azure-datalake 
hadoop-cloud-storage-project/hadoop-cloud-storage U: . |
| Console output | 

[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Status: Patch Available  (was: Open)

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15266-branch-2.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names 
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Attachment: HADOOP-15266-branch-2.000.patch

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15266-branch-2.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names 
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377478#comment-16377478
 ] 

Arpit Agarwal commented on HADOOP-15265:


+1 for the rebased patch, pending Jenkins.

> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.2.patch, HADOOP-15265.patch
>
>
> This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.
> We need to exclude the dependency explicitly in the hadoop-auth pom.xml and 
> add the correct version so that it is not pulled in transitively. 
> In Druid we use 
> [https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
>  to fetch dependencies transitively, which still pulls in the wrong version 
> of the json-smart jar.
> {code:java}
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
>  
> Full Stack trace 
> {code:java}
>  2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
> io.druid.cli.PullDependencies - Unable to resolve artifacts for 
> [io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> 
> [] < [ (https://repo1.maven.org/maven2/, releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
> releases+snapshots)]].
> 2018/02/26 03:47:22 INFO: 
> org.eclipse.aether.resolution.DependencyResolutionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
>  ~[tesla-aether-0.0.5.jar:0.0.5]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.Main.main(Main.java:108) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.collection.DependencyCollectionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: ... 4 more
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
> artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 

[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Hadoop Flags:   (was: Reviewed)

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names 
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Fix Version/s: (was: 3.1.0)

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names 
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread Nishant Bangarwa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377457#comment-16377457
 ] 

Nishant Bangarwa commented on HADOOP-15265:
---

[~arpitagarwal], rebased from Apache trunk. 

> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.2.patch, HADOOP-15265.patch
>
>
> This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.
> We need to exclude the dependency explicitly in the hadoop-auth pom.xml and 
> add the correct version so that it is not pulled in transitively. 
> In Druid we use 
> [https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
>  to fetch dependencies transitively, which still pulls in the wrong version 
> of the json-smart jar.
> {code:java}
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
>  
> Full Stack trace 
> {code:java}
>  2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
> io.druid.cli.PullDependencies - Unable to resolve artifacts for 
> [io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> 
> [] < [ (https://repo1.maven.org/maven2/, releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
> releases+snapshots)]].
> 2018/02/26 03:47:22 INFO: 
> org.eclipse.aether.resolution.DependencyResolutionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
>  ~[tesla-aether-0.0.5.jar:0.0.5]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.Main.main(Main.java:108) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.collection.DependencyCollectionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: ... 4 more
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
> artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 

[jira] [Updated] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth to avoid being pulled in transitively

2018-02-26 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HADOOP-15265:
--
Attachment: HADOOP-15265.2.patch

> Exclude json-smart explicitly in hadoop-auth to avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.2.patch, HADOOP-15265.patch
>
>
> This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.
> We need to exclude the dependency explicitly in the hadoop-auth pom.xml and 
> add the correct version so that it is not pulled in transitively. 
> In Druid we use 
> [https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
>  to fetch dependencies transitively, which still pulls in the wrong version 
> of the json-smart jar.
> {code:java}
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
>  
> Full Stack trace 
> {code:java}
>  2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
> io.druid.cli.PullDependencies - Unable to resolve artifacts for 
> [io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> 
> [] < [ (https://repo1.maven.org/maven2/, releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (http://nexus-private.hortonworks.com/nexus/content/groups/public, 
> releases+snapshots),  
> (https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
> releases+snapshots)]].
> 2018/02/26 03:47:22 INFO: 
> org.eclipse.aether.resolution.DependencyResolutionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
>  ~[tesla-aether-0.0.5.jar:0.0.5]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: at 
> io.druid.cli.Main.main(Main.java:108) 
> [druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.collection.DependencyCollectionException: Failed to 
> collect dependencies at 
> io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
> org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
> com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
> net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
>  ~[aether-impl-0.9.0.M2.jar:?]
> 2018/02/26 03:47:22 INFO: ... 4 more
> 2018/02/26 03:47:22 INFO: Caused by: 
> org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
> artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO: at 
> org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
>  ~[maven-aether-provider-3.1.1.jar:3.1.1]
> 2018/02/26 03:47:22 INFO: at 
> 

[jira] [Created] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15266:


 Summary:  [branch-2] Upper/Lower case conversion support for group 
names in LdapGroupsMapping
 Key: HADOOP-15266
 URL: https://issues.apache.org/jira/browse/HADOOP-15266
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nanda kumar
Assignee: Nanda kumar
 Fix For: 3.1.0


On most LDAP servers the user and group names are case-insensitive. When we use 
{{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it is 
possible to configure {{SSSD}} to force the group names to be returned in 
lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.

This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
implementation, based on LdapGroupsMapping, which supports forcing group names 
to lower or upper case.
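
As a rough sketch of how this could look in {{core-site.xml}} (the conversion 
property name and its values below are assumptions for illustration, not 
necessarily the final names from the patch):
{code:xml}
<!-- Illustrative sketch only: the conversion property is an assumption. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <!-- hypothetical switch: none | to_lower | to_upper -->
  <name>hadoop.security.group.mapping.ldap.conversion.rule</name>
  <value>to_lower</value>
</property>
{code}
With a {{to_lower}} rule, a group the LDAP server returns as {{HadoopAdmins}} 
would be reported to Hadoop as {{hadoopadmins}}, mirroring what SSSD's 
{{case_sensitive = false}} option already gives the JNI-based mapping.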



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377423#comment-16377423
 ] 

Arpit Agarwal commented on HADOOP-15265:


+1 from me also, pending Jenkins.

[~nishantbangarwa], can you please rebase your patch to Apache trunk?

> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377416#comment-16377416
 ] 

Bharat Viswanadham commented on HADOOP-15265:
-

+1.

LGTM.

> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Assigned] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HADOOP-15265:
--

Assignee: Nishant Bangarwa

> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377417#comment-16377417
 ] 

genericqa commented on HADOOP-15265:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-15265 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15265 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912093/HADOOP-15265.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14212/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Updated] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15265:
---
Status: Patch Available  (was: Open)

> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Commented] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Nishant Bangarwa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377405#comment-16377405
 ] 

Nishant Bangarwa commented on HADOOP-15265:
---

+cc [~arpitagarwal] [~rchiang]

 

> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Updated] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Nishant Bangarwa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HADOOP-15265:
--
Attachment: HADOOP-15265.patch

> Exclude json-smart explicitly in hadoop-auth avoid being pulled in 
> transitively
> ---
>
> Key: HADOOP-15265
> URL: https://issues.apache.org/jira/browse/HADOOP-15265
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Priority: Major
> Attachments: HADOOP-15265.patch
>
>

[jira] [Created] (HADOOP-15265) Exclude json-smart explicitly in hadoop-auth avoid being pulled in transitively

2018-02-26 Thread Nishant Bangarwa (JIRA)
Nishant Bangarwa created HADOOP-15265:
-

 Summary: Exclude json-smart explicitly in hadoop-auth avoid being 
pulled in transitively
 Key: HADOOP-15265
 URL: https://issues.apache.org/jira/browse/HADOOP-15265
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Nishant Bangarwa


This is an extension of https://issues.apache.org/jira/browse/HADOOP-14903.

We need to exclude the dependency explicitly in the hadoop-auth pom.xml and add 
the correct version, so that the wrong one is not pulled in transitively.

In Druid we use 
[https://github.com/tesla/tesla-aether/blob/master/src/main/java/io/tesla/aether/TeslaAether.java]
 to fetch dependencies transitively, and it is still pulling in the wrong 
version of the json-smart jar.
{code:java}
org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
net.minidev:json-smart:jar:2.3-SNAPSHOT{code}
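
As a sketch, the fix amounts to excluding json-smart from nimbus-jose-jwt in 
the hadoop-auth pom.xml and declaring a released version explicitly (the 
version below is illustrative, not necessarily what the patch pins):
{code:xml}
<!-- hadoop-auth/pom.xml: minimal sketch, not the attached patch verbatim -->
<dependency>
  <groupId>com.nimbusds</groupId>
  <artifactId>nimbus-jose-jwt</artifactId>
  <exclusions>
    <exclusion>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- declare a concrete release so resolvers never chase the SNAPSHOT -->
<dependency>
  <groupId>net.minidev</groupId>
  <artifactId>json-smart</artifactId>
  <version>2.3</version>
</dependency>
{code}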
 

Full stack trace:
{code:java}
 2018/02/26 03:47:22 INFO: 2018-02-26T03:47:22,878 ERROR [main] 
io.druid.cli.PullDependencies - Unable to resolve artifacts for 
[io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 (runtime) -> [] 
< [ (https://repo1.maven.org/maven2/, releases+snapshots),  
(http://nexus-private.hortonworks.com/nexus/content/groups/public, 
releases+snapshots),  
(http://nexus-private.hortonworks.com/nexus/content/groups/public, 
releases+snapshots),  
(http://nexus-private.hortonworks.com/nexus/content/groups/public, 
releases+snapshots),  
(https://metamx.artifactoryonline.com/metamx/pub-libs-releases-local, 
releases+snapshots)]].
2018/02/26 03:47:22 INFO: 
org.eclipse.aether.resolution.DependencyResolutionException: Failed to collect 
dependencies at io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 
-> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
net.minidev:json-smart:jar:2.3-SNAPSHOT
2018/02/26 03:47:22 INFO:   at 
org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:380)
 ~[aether-impl-0.9.0.M2.jar:?]
2018/02/26 03:47:22 INFO:   at 
io.tesla.aether.internal.DefaultTeslaAether.resolveArtifacts(DefaultTeslaAether.java:289)
 ~[tesla-aether-0.0.5.jar:0.0.5]
2018/02/26 03:47:22 INFO:   at 
io.druid.cli.PullDependencies.downloadExtension(PullDependencies.java:350) 
[druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
2018/02/26 03:47:22 INFO:   at 
io.druid.cli.PullDependencies.run(PullDependencies.java:249) 
[druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
2018/02/26 03:47:22 INFO:   at io.druid.cli.Main.main(Main.java:108) 
[druid-services-0.10.1.2.6.5.0-129.jar:0.10.1.2.6.5.0-129]
2018/02/26 03:47:22 INFO: Caused by: 
org.eclipse.aether.collection.DependencyCollectionException: Failed to collect 
dependencies at io.druid.extensions:druid-hdfs-storage:jar:0.10.1.2.6.5.0-129 
-> org.apache.hadoop:hadoop-client:jar:2.7.3.2.6.5.0-129 -> 
org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-129 -> 
org.apache.hadoop:hadoop-auth:jar:2.7.3.2.6.5.0-129 -> 
com.nimbusds:nimbus-jose-jwt:jar:4.41.1 -> 
net.minidev:json-smart:jar:2.3-SNAPSHOT
2018/02/26 03:47:22 INFO:   at 
org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:292)
 ~[aether-impl-0.9.0.M2.jar:?]
2018/02/26 03:47:22 INFO:   at 
org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:342)
 ~[aether-impl-0.9.0.M2.jar:?]
2018/02/26 03:47:22 INFO:   ... 4 more
2018/02/26 03:47:22 INFO: Caused by: 
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read 
artifact descriptor for net.minidev:json-smart:jar:2.3-SNAPSHOT
2018/02/26 03:47:22 INFO:   at 
org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
 ~[maven-aether-provider-3.1.1.jar:3.1.1]
2018/02/26 03:47:22 INFO:   at 
org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
 ~[maven-aether-provider-3.1.1.jar:3.1.1]
2018/02/26 03:47:22 INFO:   at 
org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:461)
 ~[aether-impl-0.9.0.M2.jar:?]
2018/02/26 03:47:22 INFO:   at 
org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:573)
 ~[aether-impl-0.9.0.M2.jar:?]
2018/02/26 03:47:22 INFO:   at 
org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:573)
 ~[aether-impl-0.9.0.M2.jar:?]
2018/02/26 03:47:22 INFO:   at 

[jira] [Commented] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377371#comment-16377371
 ] 

Steve Loughran commented on HADOOP-15264:
-

I've got a patch which excludes the JARs; they're only needed 

I'd rather do that than upgrade the AWS SDK to a newer one, in case there are 
other surprises. We've been using 1.11.271 for a few weeks: no NPE stack 
traces, no complaints that we are closing streams in an abort() call, etc. 
Happy.

Patch attached, tested against AWS S3 London; one failure in 
ITestS3GuardToolLocal. Trying S3 Ireland, everything is rejected at 400; I 
think something is up with my S3 binding today, as I've been seeing failures 
elsewhere. Assume unrelated.
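
For reference, the exclusion boils down to something like the following in the 
hadoop-aws pom.xml (a sketch of the approach, not the attached patch verbatim; 
the wildcard form needs Maven 3.2.1 or later):
{code:xml}
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <exclusions>
    <!-- drop the netty artifacts the bundle pom declares; hadoop-aws
         does not need them on its own classpath -->
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}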

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15264-001.patch
>
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15264:

Attachment: HADOOP-15264-001.patch

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15264-001.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377353#comment-16377353
 ] 

genericqa commented on HADOOP-14696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-14696 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887479/HADOOP-14696.07.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14211/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696-002.patch, HADOOP-14696-003.patch, 
> HADOOP-14696.00.patch, HADOOP-14696.01.patch, HADOOP-14696.04.patch, 
> HADOOP-14696.05.patch, HADOOP-14696.06.patch, HADOOP-14696.07.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 
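
The mangled directory name above suggests the backslashes in the Windows path 
are being eaten as escape sequences somewhere between Maven and Ant. Purely as 
an illustrative sketch (not the committed fix, which may need to happen at the 
Maven property layer instead), normalising the path to forward slashes inside 
the antrun target would at least take backslash handling out of the picture:
{code:xml}
<target name="create-parallel-tests-dirs">
  <!-- convert F:\jenkins\... to F:/jenkins/... before creating the dirs -->
  <pathconvert property="test.data.dir" dirsep="/">
    <path location="${test.build.data}"/>
  </pathconvert>
  <mkdir dir="${test.data.dir}/1"/>
</target>
{code}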

[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2018-02-26 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377343#comment-16377343
 ] 

Íñigo Goiri commented on HADOOP-14696:
--

We are hitting this when building on Azure.
[~aw] did you move forward with the yetus plugin?

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696-002.patch, HADOOP-14696-003.patch, 
> HADOOP-14696.00.patch, HADOOP-14696.01.patch, HADOOP-14696.04.patch, 
> HADOOP-14696.05.patch, HADOOP-14696.06.patch, HADOOP-14696.07.patch
>
>

[jira] [Updated] (HADOOP-15263) hadoop cloud-storage module to mark hadoop-common as provided; add azure-datalake

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15263:

Attachment: HADOOP-15263-001.patch

> hadoop cloud-storage module to mark hadoop-common as provided; add 
> azure-datalake
> -
>
> Key: HADOOP-15263
> URL: https://issues.apache.org/jira/browse/HADOOP-15263
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15263-001.patch
>
>
> Reviewing the hadoop-cloud-storage module for use:
> * we should cut out hadoop-common, so that if something downstream is already 
> doing the heavy lifting of excluding it to get jackson & guava in sync, it's 
> not sneaking back in
> * and add azure-datalake
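
In pom terms, that looks roughly like the following in the hadoop-cloud-storage 
pom.xml (a sketch of the description above, not the attached patch):
{code:xml}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <!-- provided: the downstream build supplies (and de-conflicts) hadoop-common -->
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-azure-datalake</artifactId>
</dependency>
{code}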



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15263) hadoop cloud-storage module to mark hadoop-common as provided; add azure-datalake

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15263:

Status: Patch Available  (was: Open)

> hadoop cloud-storage module to mark hadoop-common as provided; add 
> azure-datalake
> -
>
> Key: HADOOP-15263
> URL: https://issues.apache.org/jira/browse/HADOOP-15263
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15263-001.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-26 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377331#comment-16377331
 ] 

Wangda Tan commented on HADOOP-15264:
-

[~ste...@apache.org], from the latest comment on 
https://github.com/aws/aws-sdk-java/issues/1488, Andrew Shore mentioned he will 
add the mapping. So is there anything we need to do on our side? Should we 
still mark this as a blocker for 3.1.0?

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14898) Create official Docker images for development and testing features

2018-02-26 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377272#comment-16377272
 ] 

Anu Engineer commented on HADOOP-14898:
---

[~aw], [~chris.douglas], [~ste...@apache.org] I plan to create these branches - 
HADOOP-15083 / HADOOP-15084 / HADOOP-15256 and file a ticket with INFRA so that 
they can push this image as an official Apache image to DockerHub. This will 
allow INFRA to push these DockerHub images if and when we make the changes. 
That is, we 
will have official Apache base images which can be trusted by end-users and 
updated with various releases if needed.
{quote}HADOOP-15257, HADOOP-15258, HADOOP-15259 should be committed to trunk.
{quote}
Once that is done, I will commit these patches to the trunk.

 

Please let me know if you see any issues or have any concerns.

 

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz, docker_design.pdf
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like to improve the existing documentation with 
> easy-to-use docker-based recipes to start hadoop clusters with various 
> configurations. The images could also be used to test experimental features. 
> For example, ozone could be tested easily with this compose file and 
> configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separate example compose files for federation, HA, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production-quality docker images (multiple 
> types, in a flexible way). If there are no objections, I will create a 
> separate issue to create simplified docker images for rapid prototyping and 
> investigating new features, and register the branch with the dockerhub to 
> create the images automatically.
> MY BACKGROUND: I have been working with docker-based hadoop/spark clusters 
> for quite a while and run them successfully in different environments 
> (kubernetes, docker-swarm, nomad-based scheduling, etc.). My work is 
> available here: https://github.com/flokkr, but those setups handle more 
> complex use cases (e.g. instrumenting java processes with btrace, or 
> reading/reloading configuration from consul).
> And IMHO in the official hadoop documentation it’s better to suggest using 
> official apache docker images rather than external ones (which could change).
> {code}
> The next list enumerates the key decision points regarding docker image 
> creation:
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's easier to create matrix builds, for 
> example).
> The first one has the advantage that we can get an additional flag on the 
> dockerhub showing that the build is automated (and built from the source by 
> the dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image the 
> other one is to use the binary releases.
> I suggest using the second approach, as:
> 1. In that case the hadoop:2.7.3 image could contain exactly the same hadoop 
> distribution as the downloadable one
> 2. We don't need to add development tools to the image, so the image can be 
> smaller (which is important, as the goal for this image is to get started as 
> fast as possible)
> 3. The docker definition will be simpler (and easier to maintain)
> This approach is usually used in other projects (I checked Apache Zeppelin 
> and Apache Nutch)
> C. branch usage
> Another question is the location of the Docker file. It could be on the 
> official source-code branches (branch-2, trunk, etc.) or we can 

[jira] [Commented] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377262#comment-16377262
 ] 

Steve Loughran commented on HADOOP-15264:
-

filed AWS SDK issue [1488|https://github.com/aws/aws-sdk-java/issues/1488]

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.1.17

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15264:

Summary: AWS "shaded" SDK 1.271 is pulling in netty 4.1.17  (was: AWS 
"shaded" SDK 1.271 is pulling in netty 4.2)

> AWS "shaded" SDK 1.271 is pulling in netty 4.1.17
> -
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.2

2018-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377206#comment-16377206
 ] 

Steve Loughran commented on HADOOP-15264:
-

+ [~fabbri] [~wangda]

Going to try to cut them & see what breaks. Assuming it's only some new part of 
the AWS SDK chain, this should be fine.

> AWS "shaded" SDK 1.271 is pulling in netty 4.2
> --
>
> Key: HADOOP-15264
> URL: https://issues.apache.org/jira/browse/HADOOP-15264
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
> 4.1.17
> {code}
> [INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
> [INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
> [INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
> [INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
> [INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
> {code}
> We either exclude these or roll back HADOOP-15040.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15264) AWS "shaded" SDK 1.271 is pulling in netty 4.2

2018-02-26 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15264:
---

 Summary: AWS "shaded" SDK 1.271 is pulling in netty 4.2
 Key: HADOOP-15264
 URL: https://issues.apache.org/jira/browse/HADOOP-15264
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: Steve Loughran


The latest versions of the AWS Shaded SDK are declaring a dependency on netty 
4.1.17
{code}
[INFO] +- org.apache.hadoop:hadoop-aws:jar:3.2.0-SNAPSHOT:compile
[INFO] |  \- com.amazonaws:aws-java-sdk-bundle:jar:1.11.271:compile
[INFO] | +- io.netty:netty-codec-http:jar:4.1.17.Final:compile
[INFO] | +- io.netty:netty-codec:jar:4.1.17.Final:compile
[INFO] | +- io.netty:netty-handler:jar:4.1.17.Final:compile
[INFO] | +- io.netty:netty-buffer:jar:4.1.17.Final:compile
[INFO] | +- io.netty:netty-common:jar:4.1.17.Final:compile
[INFO] | +- io.netty:netty-transport:jar:4.1.17.Final:compile
[INFO] | \- io.netty:netty-resolver:jar:4.1.17.Final:compile
{code}

We either exclude these or roll back HADOOP-15040.
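
For the exclusion route, a rough sketch of the hadoop-aws pom change 
(illustrative only; Maven 3.2.1+ supports the wildcard, and the exact netty 
artifact list should be checked against what the bundle really declares):

{code}
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-bundle</artifactId>
  <version>1.11.271</version>
  <exclusions>
    <!-- Keep the "shaded" bundle from dragging the unshaded netty 4.1.17
         artifacts onto our classpath. -->
    <exclusion>
      <groupId>io.netty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}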



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14898) Create official Docker images for development and testing features

2018-02-26 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377194#comment-16377194
 ] 

Elek, Marton commented on HADOOP-14898:
---

I propose to finish this with the following branch structure:
 * HADOOP-15083 (base image)
 ** should be committed to an empty branch (docker-runner)
 ** and the branch should be registered by INFRA to produce apache/hadoop-runner 
dockerhub images
 * HADOOP-15084 (hadoop2 image)
 ** should be committed to an empty branch (docker-hadoop-2)
 ** and the branch should be registered by INFRA to produce apache/hadoop:2 
dockerhub images
 * HADOOP-15256 (hadoop3 image)
 ** should be committed to an empty branch (docker-hadoop-3) and
 ** the branch should be registered by INFRA to produce apache/hadoop:3 and 
apache/hadoop-runner:latest dockerhub images

HADOOP-15257, HADOOP-15258, HADOOP-15259 should be committed to trunk.

> Create official Docker images for development and testing features 
> ---
>
> Key: HADOOP-14898
> URL: https://issues.apache.org/jira/browse/HADOOP-14898
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-14898.001.tar.gz, HADOOP-14898.002.tar.gz, 
> HADOOP-14898.003.tgz, docker_design.pdf
>
>
> This is the original mail from the mailing list:
> {code}
> TL;DR: I propose to create official hadoop images and upload them to the 
> dockerhub.
> GOAL/SCOPE: I would like to improve the existing documentation with easy-to-use 
> docker based recipes to start hadoop clusters with various configurations.
> The images also could be used to test experimental features. For example 
> ozone could be tested easily with these compose file and configuration:
> https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6
> Or even the configuration could be included in the compose file:
> https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml
> I would like to create separate example compose files for federation, ha, 
> metrics usage, etc. to make it easier to try out and understand the features.
> CONTEXT: There is an existing Jira 
> https://issues.apache.org/jira/browse/HADOOP-13397
> But it’s about a tool to generate production quality docker images (multiple 
> types, in a flexible way). If no objections, I will create a separate issue 
> to create simplified docker images for rapid prototyping and investigating 
> new features. And register the branch to the dockerhub to create the images 
> automatically.
> MY BACKGROUND: I have been working with docker based hadoop/spark clusters for 
> quite a while and run them successfully in different environments (kubernetes, 
> docker-swarm, nomad-based scheduling, etc.) My work is available from here: 
> https://github.com/flokkr but they could handle more complex use cases (eg. 
> instrumenting java processes with btrace, or read/reload configuration from 
> consul).
>  And IMHO in the official hadoop documentation it’s better to suggest using 
> official apache docker images and not external ones (which could be changed).
> {code}
> The next list enumerates the key decision points regarding docker 
> image creation.
> A. automated dockerhub build  / jenkins build
> Docker images could be built on the dockerhub (a branch pattern should be 
> defined for a github repository and the location of the Docker files) or 
> could be built on a CI server and pushed.
> The second one is more flexible (it's easier to create a matrix build, for 
> example).
> The first one has the advantage that we can get an additional flag on the 
> dockerhub that the build is automated (and built from the source by the 
> dockerhub).
> The decision is easy as ASF supports the first approach: (see 
> https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)
> B. source: binary distribution or source build
> The second question is about creating the docker image. One option is to 
> build the software on the fly during the creation of the docker image; the 
> other one is to use the binary releases.
> I suggest using the second approach because:
> 1. In that case the hadoop:2.7.3 image could contain exactly the same hadoop 
> distribution as the downloadable one
> 2. We don't need to add development tools to the image, so the image could be 
> smaller (which is important, as the goal for this image is getting 
> started as fast as possible)
> 3. The docker definition will be simpler (and easier to maintain)
> Usually this approach is used in other projects (I checked Apache Zeppelin 
> and Apache Nutch)
> C. branch usage
> Another question is the location of the Docker file. It could be on the 
> official 

[jira] [Created] (HADOOP-15263) hadoop cloud-storage module to mark hadoop-common as provided; add azure-datalake

2018-02-26 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15263:
---

 Summary: hadoop cloud-storage module to mark hadoop-common as 
provided; add azure-datalake
 Key: HADOOP-15263
 URL: https://issues.apache.org/jira/browse/HADOOP-15263
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Reviewing hadoop-cloud-storage module for use

* we should cut out hadoop-common so that, if something downstream is already 
doing the heavy lifting of excluding it to get jackson & guava in sync, it 
doesn't sneak back in.
* and add azure-datalake
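
A minimal sketch of what that could look like in the module's pom (assuming 
hadoop-common is declared directly, with versions inherited from the parent):

{code}
<!-- Sketch only: provided scope keeps hadoop-common off the transitive
     classpath of downstream builds that already exclude it. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <scope>provided</scope>
</dependency>
<!-- ...and pull in the azure-datalake connector: -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-azure-datalake</artifactId>
</dependency>
{code}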



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15261) Upgrade commons-io from 2.4 to 2.5

2018-02-26 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15261:
---

Assignee: Ajay Kumar

> Upgrade commons-io from 2.4 to 2.5
> --
>
> Key: HADOOP-15261
> URL: https://issues.apache.org/jira/browse/HADOOP-15261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-alpha3
>Reporter: PandaMonkey
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: hadoop.txt
>
>
> Hi, after analyzing hadoop-common-project\hadoop-minikdc\pom.xml, we found 
> that Hadoop depends on org.apache.kerby:kerb-simplekdc 1.0.1, which 
> transitively introduced commons-io:2.5. 
> At the same time, hadoop directly depends on an older version of 
> commons-io:2.4. Looking further into the source code, these two versions of 
> commons-io have many different features. The dependency conflict problem 
> brings high risks of "NoClassDefFoundError" or "NoSuchMethodError" issues 
> at runtime. Please notice this problem. Maybe upgrading commons-io from 2.4 
> to 2.5 is a good choice. Hope this report can help you. Thanks!
>  
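
If the upgrade route is taken, a minimal sketch of the change (assuming the 
version is managed centrally in hadoop-project/pom.xml, as Hadoop does for 
most dependencies):

{code}
<!-- Pin a single commons-io version so the direct 2.4 dependency and the
     transitive 2.5 one can no longer diverge at runtime. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}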



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2018-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16377168#comment-16377168
 ] 

Steve Loughran commented on HADOOP-14693:
-

How ready are we to start playing with this?

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> JUnit 4 does not support Java 9. We need to upgrade this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Status: Open  (was: Patch Available)

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
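
As an illustration of the configuration parameters the description asks for, 
something along these lines could work (property names and values here are 
hypothetical, not what any patch actually adds):

{code}
<!-- Hypothetical retry-policy knobs; names and values are illustrative only. -->
<property>
  <name>fs.s3a.retry.limit</name>
  <value>7</value>
  <description>Maximum number of attempts per operation.</description>
</property>
<property>
  <name>fs.s3a.retry.interval</name>
  <value>500ms</value>
  <description>Delay between attempts; applications that would rather fail
  fast than wait indefinitely can shrink both bounds.</description>
</property>
{code}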



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376983#comment-16376983
 ] 

Steve Loughran commented on HADOOP-13761:
-

-1

I'd committed this locally and was doing the cherry-pick to branch-3.1 when I got 
a test timeout in {{ITestS3AFailureHandling.testReadFileChanged}} on that branch:

{code}
java.lang.Exception: test timed out after 60 milliseconds
at java.lang.Thread.sleep(Native Method)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:344)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:231)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:181)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:327)
at 
org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$23/570183744.execute(Unknown 
Source)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$2(Invoker.java:190)
at 
org.apache.hadoop.fs.s3a.Invoker$$Lambda$24/1791082625.execute(Unknown Source)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
at 
org.apache.hadoop.fs.s3a.Invoker$$Lambda$13/1380113967.execute(Unknown Source)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:188)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:210)
at 
org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:320)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:423)
at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
at 
org.apache.hadoop.fs.s3a.ITestS3AFailureHandling.testReadFileChanged(ITestS3AFailureHandling.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

This is the bit where read failures are being retried: are EOF exceptions being 
over-retried? 

Switching back to my dev terminal and trunk, I managed to get a 400 on all 
those failure tests, which makes me think maybe S3 Ireland has started 
playing up: switching to London fixes it.

Anyway, assuming there is a problem with S3 in a region, is this recovery code 
going to keep trying too often? That is: are we overdoing the retry-on-retry, as 
lazySeek does a retry with the chosen retryInvoker, and reopen does its own 
retry too? With retry on retry, things take so long to fail in a read that 
tests time out.

I think what needs to be done is to not have that double retry, or have the 
outer retry policy only handle FNFEs, and even then, only on s3guard.
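
A stripped-down sketch of the compounding effect (plain Java, not the actual 
Invoker/S3AInputStream code; the limits and exception type are made up for 
illustration):

{code}
import java.io.EOFException;
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class NestedRetryDemo {
  // Minimal stand-in for an Invoker-style retry loop: up to 'limit' attempts.
  static <T> T retry(int limit, Callable<T> op) throws Exception {
    Exception last = null;
    for (int i = 0; i < limit; i++) {
      try {
        return op.call();
      } catch (Exception e) {
        last = e;  // swallow and try again
      }
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    AtomicInteger attempts = new AtomicInteger();
    try {
      // Outer loop (think lazySeek) wrapping an inner loop (think reopen):
      // a persistently failing read is attempted limit * limit times.
      retry(3, () -> retry(3, () -> {
        attempts.incrementAndGet();
        throw new EOFException("simulated read failure");
      }));
    } catch (Exception expected) {
      System.out.println("total attempts: " + attempts.get());  // prints 9
    }
  }
}
{code}

Whatever the real limits and backoff are, the attempts multiply, which is 
consistent with a failing read taking long enough to blow the test timeout.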

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.

[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376781#comment-16376781
 ] 

genericqa commented on HADOOP-13761:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-tools/hadoop-aws: The patch generated 0 new + 
13 unchanged - 1 fixed = 13 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
3s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-13761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912023/HADOOP-13761-011.patch
 |
| Optional Tests |  asflicense  findbugs  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  checkstyle  |
| uname | Linux 73bfc4656ad6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fa7963 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14209/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14209/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2018-02-26 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376714#comment-16376714
 ] 

Takanobu Asanuma commented on HADOOP-12760:
---

Hi [~ajisakaa], thanks for your work.

I ran into errors while building it.

{noformat}
[ERROR] 
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java:359:
 Undefined reference: jdk.internal.ref.Cleaner
[ERROR] 
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java:65:
 Undefined reference: jdk.internal.ref.Cleaner
{noformat}

Should we ignore {{jdk.internal.ref.Cleaner}} in 
animal-sniffer-maven-plugin?
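
If so, the plugin does have a per-class ignore mechanism; a sketch of the 
configuration (to be merged into the existing animal-sniffer setup, which I 
assume lives in hadoop-project):

{code}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <configuration>
    <ignores>
      <!-- Don't flag references to the relocated Cleaner class, which is
           absent from the Java 8 signature animal-sniffer checks against. -->
      <ignore>jdk.internal.ref.Cleaner</ignore>
    </ignores>
  </configuration>
</plugin>
{code}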


> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch
>
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓=sun.misc.Cleaner



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Status: Patch Available  (was: Open)

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376698#comment-16376698
 ] 

Steve Loughran commented on HADOOP-13761:
-

* ...forgot to include the wrapped -> private change in patch 010; here it is in 
patch 011
* w.r.t. annotations, I see findbugs has some. I was wondering if they'd allow 
us to annotate lambdas

I was thinking findbugs should be able to conclude that "a closure declared 
and executed within a synchronized env is itself synchronized", but I now think 
that fb can't be sure. It doesn't know whether Invoker.once() executes the closure 
in that method or queues it for async use. So findbugs is right to warn.

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Attachment: HADOOP-13761-011.patch

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13761:

Status: Open  (was: Patch Available)

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761-010.patch, HADOOP-13761-010.patch, 
> HADOOP-13761-011.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually rather than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential in URL

2018-02-26 Thread wujinhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376518#comment-16376518
 ] 

wujinhu commented on HADOOP-15158:
--

Thanks [~ste...@apache.org] for the review. I have looked at the code in 
HADOOP-15141 and improved my tests. I added a simple implementation in 
*TestAliyunCredentials*. We will add tests & docs just as HADOOP-15141 did when 
we bring in the actual implementations.

> AliyunOSS: Supports role based credential in URL
> 
>
> Key: HADOOP-15158
> URL: https://issues.apache.org/jira/browse/HADOOP-15158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.0.0
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 3.1.0, 2.9.1, 3.0.1
>
> Attachments: HADOOP-15158.001.patch, HADOOP-15158.002.patch, 
> HADOOP-15158.003.patch, HADOOP-15158.004.patch, HADOOP-15158.005.patch
>
>
> Currently, AliyunCredentialsProvider supports credentials via 
> configuration (core-site.xml). Sometimes an admin wants to create different 
> temporary credentials (key/secret/token) for different roles so that one role 
> cannot read data that belongs to another role.
> So our code should support passing in the URI when creating an 
> XXXCredentialsProvider, so that we can get the user info (role) from the URI.
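
The shape of the idea, as a self-contained sketch (the interface below is a 
stand-in for the real Aliyun SDK CredentialsProvider, and the resolution logic 
is invented for illustration):

{code}
import java.net.URI;

// Stand-in for the SDK's CredentialsProvider; a real provider would
// implement the Aliyun interface instead.
interface CredentialsProvider {
  String getAccessKeyId();
}

public class UriAwareCredentialsProvider implements CredentialsProvider {
  private final String role;

  // The point of the change: the provider is handed the filesystem URI,
  // so per-role user info can be read from it, e.g. oss://admin@bucket/path.
  public UriAwareCredentialsProvider(URI uri) {
    this.role = uri.getUserInfo();  // null when no role is encoded
  }

  @Override
  public String getAccessKeyId() {
    // Hypothetical resolution: look up a role-specific temporary credential.
    return "key-for-" + (role == null ? "default" : role);
  }

  public static void main(String[] args) {
    CredentialsProvider p =
        new UriAwareCredentialsProvider(URI.create("oss://admin@bucket/data"));
    System.out.println(p.getAccessKeyId());  // key-for-admin
  }
}
{code}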



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15158) AliyunOSS: Supports role based credential in URL

2018-02-26 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376511#comment-16376511
 ] 

genericqa commented on HADOOP-15158:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911996/HADOOP-15158.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06b51ac30501 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fa7963 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14208/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14208/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14208/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |