[jira] [Updated] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-02-05 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15007:

Attachment: (was: HADOOP-15007.000.patch)

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HADOOP-15007.000.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-02-05 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15007:

Attachment: HADOOP-15007.000.patch

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HADOOP-15007.000.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.






[jira] [Updated] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-02-05 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15203:

Attachment: HADOOP-15203.001.patch

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15203.000.patch, HADOOP-15203.001.patch
>
>
> support composite trusted channel resolver that supports both whitelist and 
> blacklist
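The idea above can be sketched as follows. This is a hedged illustration only: the interface and class names are simplified stand-ins, not the actual HDFS TrustedChannelResolver API, and peers are identified by address strings for brevity. The composite trusts a channel only when the whitelist resolver accepts it and the blacklist resolver does not reject it.

```java
public class CompositeResolverDemo {
    /** Simplified stand-in for a trusted-channel resolver (hypothetical API). */
    interface ChannelResolver {
        boolean isTrusted(String peerAddress);
    }

    /** Composite: a peer is trusted iff whitelisted AND not blacklisted. */
    static class CompositeResolver implements ChannelResolver {
        private final ChannelResolver whitelist;
        private final ChannelResolver notBlacklisted;

        CompositeResolver(ChannelResolver whitelist, ChannelResolver notBlacklisted) {
            this.whitelist = whitelist;
            this.notBlacklisted = notBlacklisted;
        }

        @Override
        public boolean isTrusted(String peerAddress) {
            // Both delegates must agree: accepted by the whitelist and
            // not rejected by the blacklist.
            return whitelist.isTrusted(peerAddress)
                && notBlacklisted.isTrusted(peerAddress);
        }
    }
}
```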






[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-02-02 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351209#comment-16351209
 ] 

Ajay Kumar commented on HADOOP-15170:
-

[~jlowe], thanks for the review and commit.

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch, 
> HADOOP-15170.003.patch, HADOOP-15170.004.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.
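The description above can be sketched roughly as follows. This is an illustrative helper, not the actual FileUtil code (names are hypothetical): it shows how a symlink tar entry could be materialized with java.nio.file.Files.createSymbolicLink, preserving the link target exactly as recorded in the archive, whether relative or absolute.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkUntarDemo {
    /**
     * Create a symlink tar entry under outputDir. The link target is taken
     * verbatim from the archive entry; it is not resolved or rewritten.
     */
    public static Path createSymlinkEntry(Path outputDir, String entryName,
                                          String linkTarget) throws IOException {
        Path link = outputDir.resolve(entryName);
        // Ensure intermediate directories for nested entries exist.
        Files.createDirectories(link.getParent());
        return Files.createSymbolicLink(link, Paths.get(linkTarget));
    }
}
```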






[jira] [Updated] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-02-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15203:

Status: Patch Available  (was: Open)

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15203.000.patch
>
>
> support composite trusted channel resolver that supports both whitelist and 
> blacklist






[jira] [Updated] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-02-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15203:

Attachment: HADOOP-15203.000.patch

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15203.000.patch
>
>
> support composite trusted channel resolver that supports both whitelist and 
> blacklist






[jira] [Updated] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-02-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15203:

Attachment: (was: HADOOP-15203.000.patch)

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15203.000.patch
>
>
> support composite trusted channel resolver that supports both whitelist and 
> blacklist






[jira] [Updated] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-02-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15203:

Attachment: HADOOP-15203.000.patch

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15203.000.patch
>
>
> support composite trusted channel resolver that supports both whitelist and 
> blacklist






[jira] [Assigned] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-02-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-14775:
---

Assignee: (was: Ajay Kumar)

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Priority: Major
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
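A hedged sketch of what such a parent-pom arrangement might look like (artifact versions are illustrative, not taken from the patches): the JUnit 5 API is added for new tests, alongside the vintage engine so existing JUnit 4 tests keep running unchanged.

```xml
<dependencies>
  <!-- JUnit 5 API for newly written tests -->
  <dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>5.1.0</version>
    <scope>test</scope>
  </dependency>
  <!-- Vintage engine runs existing JUnit 4 tests on the JUnit 5 platform -->
  <dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <version>5.1.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```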






[jira] [Comment Edited] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-02-02 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339717#comment-16339717
 ] 

Ajay Kumar edited comment on HADOOP-15178 at 2/2/18 4:25 PM:
-

[~ste...@apache.org], I don't understand the proposal clearly. Could you please 
elaborate on the two points you mentioned? We have a test case in 
{{TestIOUtils#testWrapException}} which tests an IOE with no String 
constructor. Shall we add one to {{TestNetUtils}} as well?


was (Author: ajayydv):
[~ste...@apache.org], Sorry, i don't understand the proposal clearly. Could you 
please elaborate 2 points you mentioned. We have test case in 
{{TestIOUtils#testWrapException}} which tests IOE with no String constructor.  
Shall we add one to {{TestNetUtils}} as well?

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15178.001.patch
>
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or 
> BindException.
> By default, it should return an instance of the same type (a subclass of 
> IOException) unless a String constructor is not available.
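A minimal sketch of the proposed generalization (a hypothetical helper, not the actual NetUtils implementation): reconstruct the same IOException subclass via its String constructor when one exists, otherwise fall back to a plain IOException.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

public class WrapDemo {
    /**
     * Rewrap an IOException with extra detail, preserving its concrete type
     * when the subclass exposes a public (String) constructor.
     */
    public static IOException wrap(IOException e, String detail) {
        try {
            Constructor<? extends IOException> ctor =
                e.getClass().getConstructor(String.class);
            IOException wrapped = ctor.newInstance(detail + ": " + e.getMessage());
            wrapped.initCause(e);
            return wrapped;
        } catch (ReflectiveOperationException noStringCtor) {
            // No usable (String) constructor: fall back to plain IOException.
            return new IOException(detail + ": " + e.getMessage(), e);
        }
    }
}
```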






[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-02-02 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350601#comment-16350601
 ] 

Ajay Kumar commented on HADOOP-12897:
-

[~ste...@apache.org],[~arpitagarwal] Requesting review of patch v5 when 
possible.

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch, HADOOP-12897.005.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.
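A minimal sketch of the kind of local wrapping the issue asks for (the helper name and message format are hypothetical): rethrow the failure with the endpoint URL included in the message, so the stack trace names the endpoint it was talking to.

```java
import java.io.IOException;

public class AuthWrapDemo {
    /**
     * Rewrap an IOException so its message names the authentication endpoint.
     * A local helper like this avoids depending on NetUtils, which is not
     * available to the hadoop-auth module.
     */
    public static IOException wrapWithUrl(String url, IOException e) {
        return new IOException(
            "Error while authenticating with endpoint " + url + ": "
                + e.getMessage(), e);
    }
}
```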






[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-02-01 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349030#comment-16349030
 ] 

Ajay Kumar commented on HADOOP-15170:
-

[~jlowe], thanks for testing; updated the patch to keep symlinks as-is.

{code}
$ mkdir testdir
$ cd testdir/
$ ln -s a b
$ ls -ltr
total 0
lrwxr-xr-x  1 user  group  1 Feb  1 09:33 b -> a
$ ln -s /tmp/foo c
$ ls -ltr
total 0
lrwxr-xr-x  1 user  group  1 Feb  1 09:33 b -> a
lrwxr-xr-x  1 user  group  8 Feb  1 09:34 c -> /tmp/foo
$ cd ..
$  tar zcf testdir.tgz testdir
$ ls -ltr testdir2
total 0
drwxr-xr-x  4 user  group  128 Feb  1 09:36 testdir
$ ls -ltr testdir2/testdir/
total 0
lrwxr-xr-x  1 user  group  8 Feb  1 09:36 c -> /tmp/foo
lrwxr-xr-x  1 user  group  1 Feb  1 09:36 b -> a
{code}

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch, 
> HADOOP-15170.003.patch, HADOOP-15170.004.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-02-01 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.004.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch, 
> HADOOP-15170.003.patch, HADOOP-15170.004.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Assigned] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15007:
---

Assignee: Ajay Kumar  (was: Anu Engineer)

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.






[jira] [Commented] (HADOOP-15202) Deprecate CombinedIPWhiteList to use CombinedIPList

2018-01-31 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347623#comment-16347623
 ] 

Ajay Kumar commented on HADOOP-15202:
-

Original suggestion from [~xyao]: 
[HDFS-13060|https://issues.apache.org/jira/browse/HDFS-13060?focusedCommentId=16347397=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347397]

>  Deprecate CombinedIPWhiteList to use CombinedIPList 
> -
>
> Key: HADOOP-15202
> URL: https://issues.apache.org/jira/browse/HADOOP-15202
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
>  Deprecate CombinedIPWhiteList to use CombinedIPList. 
> Original suggestion from [~xyao]






[jira] [Updated] (HADOOP-15202) Deprecate CombinedIPWhiteList to use CombinedIPList

2018-01-31 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15202:

Description: 
 Deprecate CombinedIPWhiteList to use CombinedIPList. 
Original suggestion from [~xyao]

  was: Deprecate CombinedIPWhiteList to use CombinedIPList.


>  Deprecate CombinedIPWhiteList to use CombinedIPList 
> -
>
> Key: HADOOP-15202
> URL: https://issues.apache.org/jira/browse/HADOOP-15202
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
>  Deprecate CombinedIPWhiteList to use CombinedIPList. 
> Original suggestion from [~xyao]






[jira] [Moved] (HADOOP-15203) Support composite trusted channel resolver that supports both whitelist and blacklist

2018-01-31 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar moved HDFS-13090 to HADOOP-15203:


Key: HADOOP-15203  (was: HDFS-13090)
Project: Hadoop Common  (was: Hadoop HDFS)

> Support composite trusted channel resolver that supports both whitelist and 
> blacklist
> -
>
> Key: HADOOP-15203
> URL: https://issues.apache.org/jira/browse/HADOOP-15203
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: security
>
> support composite trusted channel resolver that supports both whitelist and 
> blacklist






[jira] [Created] (HADOOP-15202) Deprecate CombinedIPWhiteList to use CombinedIPList

2018-01-31 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-15202:
---

 Summary:  Deprecate CombinedIPWhiteList to use CombinedIPList 
 Key: HADOOP-15202
 URL: https://issues.apache.org/jira/browse/HADOOP-15202
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Ajay Kumar
Assignee: Ajay Kumar


 Deprecate CombinedIPWhiteList to use CombinedIPList.






[jira] [Comment Edited] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347587#comment-16347587
 ] 

Ajay Kumar edited comment on HADOOP-15007 at 1/31/18 8:55 PM:
--

[~ste...@apache.org], since excessive logging is the issue, will it be OK if we 
log a single line at debug level (something like "Invalid tag 'secret' found 
for property: test.fs.s3a.name Source") without a stack trace? This debug line 
is applicable even if we change tags to String instead of an enum.
Below are responses to some of the other questions raised:
{quote}Is this expected to be a stable change? Or at least: when do we expect 
it to be stable?{quote}
We have tested it with older versions (XML configs with no tags as well). We 
are ready to make any changes required to make it more stable.
{quote}What is going to happen when an older version of Hadoop encounters an 
XML file with a new field in it?{quote}
Tags will be left unprocessed.
{quote}we need some tests which actually set out to break the mechanism. 
Invalid tags etc.{quote}
Will add a test case for this.
{quote}Javadocs and end-user docs to cover what can and cannot be done with 
tags, how to use. Configuration's own javadocs are the de facto documentation 
of the XML format: they need to be updated with the change.{quote}
Already added javadocs for the changes in this functionality. Will add/update 
javadocs if anything was missed.
{quote}what's going to happen when existing code which serializes/deserializes 
configs using Hadoop writables encounters configs with tags? Can an old 
hadoop-common lib deserialize, say, core-default.xml with tags added? I don't 
see any tests for that, and I'm assuming "no" unless it can be 
demonstrated.{quote}
Will add a test for this.
{quote}Can I add tags to a property retrieved with getPassword()?{quote}
Only if getPassword() falls back to the config. The current change is not 
applicable to {{CredentialProviderFactory}}, so 
{{HADOOP_SECURITY_CREDENTIAL_PROVIDER_PATH}} can be tagged but not the 
properties retrieved from it.
{quote}If I override a -default property in a file, does it inherit the tags 
from the parent?{quote}
Yes.
{quote}If I add tags to an overridden property, do the tags override or 
replace existing ones?{quote}
A new tag will not replace the old one but will be added to it. This can 
result in the same property being returned for two different tags by 
{{getAllPropertiesByTag}}, e.g. SECURITY and CLIENT.
{quote}Configuration.readTagFromConfig. If there's an invalid tag in a 
-default file, is it going to fill the log files with info messages on every 
load of every single configuration file? If so, it's too noisy. One warning 
per (file/property) the way we do for deprecated tags.{quote}
Will fix this.
{quote}Configuration.getPropertyTag returns a PropertyTag enum, which is 
marked as Private/Evolving. But Configuration is Public/Stable. Either we need 
to mark PropertyTag as Public/{Evolving/Unstable}, or getPropertyTag is marked 
as Private.{quote}
Open to both suggestions on this one.
{quote}Could there have been a way to allow PropertyTag enums to be registered 
dynamically/via classname strings, so that they can be kept in the specific 
modules? We've now tainted hadoop-common with details about yarn, hdfs, 
ozone.{quote}
I see your point, but how will we register a class which is not in the 
classpath of common? Another option is to create a common class for property 
tags and use it for everything, i.e. hdfs, yarn, etc.



was (Author: ajayydv):
[~ste...@apache.org], Since excessive logging is the issue, will it be ok if we 
log a single line at debug level.(something like " Invalid tag 'secret' found 
for property:test.fs.s3a.name Source")  without stacktrace? This debug line is 
i think will be applicable even if change tags to String instead if enum.

{quote}Is this expected to be a stable change? Or at least: when do we expect 
it to be stable?{quote}
We have tested it with older versions (xml config with no tags as well.). Ready 
to make any changes required to make it more stable.
{quote}What is going to happen when an older version of Hadoop encounters an 
XML file with a new field in it?{quote}
Tags will be left unprocessed. 
{quote}we need some tests which actually set out to break the mechanism. 
Invalid tags etc.{quote}
Will add a test case for this.
{quote}Javadocs and end-user docs to cover what can and cannot be done with 
tags, how to use. Configuration's own javadocs are the defacto documentation of 
the XML format: they need to be updated with the change.{quote}
Already added javadocs for changes in this functionality. Will add/update 
javadocs if missed. 
{quote}what's going to to happen when existing code which 
serializes/deserializes configs using Hadoop writables encounters configs with 
tags? Can an old hadoop-common lib deserialize, say core-default.xml with tags 
added? I don't see any tests for that, and I'm assuming 

[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-01-31 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347587#comment-16347587
 ] 

Ajay Kumar commented on HADOOP-15007:
-

[~ste...@apache.org], since excessive logging is the issue, will it be OK if we 
log a single line at debug level (something like "Invalid tag 'secret' found 
for property: test.fs.s3a.name Source") without a stack trace? I think this 
debug line will be applicable even if we change tags to String instead of an 
enum.

{quote}Is this expected to be a stable change? Or at least: when do we expect 
it to be stable?{quote}
We have tested it with older versions (XML configs with no tags as well). We 
are ready to make any changes required to make it more stable.
{quote}What is going to happen when an older version of Hadoop encounters an 
XML file with a new field in it?{quote}
Tags will be left unprocessed.
{quote}we need some tests which actually set out to break the mechanism. 
Invalid tags etc.{quote}
Will add a test case for this.
{quote}Javadocs and end-user docs to cover what can and cannot be done with 
tags, how to use. Configuration's own javadocs are the de facto documentation 
of the XML format: they need to be updated with the change.{quote}
Already added javadocs for the changes in this functionality. Will add/update 
javadocs if anything was missed.
{quote}what's going to happen when existing code which serializes/deserializes 
configs using Hadoop writables encounters configs with tags? Can an old 
hadoop-common lib deserialize, say, core-default.xml with tags added? I don't 
see any tests for that, and I'm assuming "no" unless it can be 
demonstrated.{quote}
Will add a test for this.
{quote}Can I add tags to a property retrieved with getPassword()?{quote}
Only if getPassword() falls back to the config. The current change is not 
applicable to {{CredentialProviderFactory}}, so 
{{HADOOP_SECURITY_CREDENTIAL_PROVIDER_PATH}} can be tagged but not the 
properties retrieved from it.
{quote}If I override a -default property in a file, does it inherit the tags 
from the parent?{quote}
Yes.
{quote}If I add tags to an overridden property, do the tags override or 
replace existing ones?{quote}
A new tag will not replace the old one but will be added to it. This can 
result in the same property being returned for two different tags by 
{{getAllPropertiesByTag}}, e.g. SECURITY and CLIENT.
{quote}Configuration.readTagFromConfig. If there's an invalid tag in a 
-default file, is it going to fill the log files with info messages on every 
load of every single configuration file? If so, it's too noisy. One warning 
per (file/property) the way we do for deprecated tags.{quote}
Will fix this.
{quote}Configuration.getPropertyTag returns a PropertyTag enum, which is 
marked as Private/Evolving. But Configuration is Public/Stable. Either we need 
to mark PropertyTag as Public/{Evolving/Unstable}, or getPropertyTag is marked 
as Private.{quote}
Open to both suggestions on this one.
{quote}Could there have been a way to allow PropertyTag enums to be registered 
dynamically/via classname strings, so that they can be kept in the specific 
modules? We've now tainted hadoop-common with details about yarn, hdfs, 
ozone.{quote}
I see your point, but how will we register a class which is not in the 
classpath of common? Another option is to create a common class for property 
tags and use it for everything, i.e. hdfs, yarn, etc.
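For reference, the tagging discussed above looks roughly like this in a configuration file. This is a hedged sketch: the property name and tag names are illustrative, and the exact accepted tag set comes from the PropertyTag enums discussed above.

```xml
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
    <!-- Tags attached to this property; properties can later be grouped
         and retrieved per tag via getAllPropertiesByTag, as noted above -->
    <tag>SECURITY,REQUIRED</tag>
  </property>
</configuration>
```

Per the answer above, overriding this property in a later file with an additional <tag> adds to the existing tags rather than replacing them.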


> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Anu Engineer
>Priority: Blocker
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.






[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-30 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345506#comment-16345506
 ] 

Ajay Kumar commented on HADOOP-12897:
-

Updated patch v5 to fix a checkstyle issue.

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch, HADOOP-12897.005.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-30 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Attachment: HADOOP-12897.005.patch

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch, HADOOP-12897.005.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-29 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Attachment: HADOOP-12897.004.patch

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-29 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Attachment: (was: HADOOP-12897.004.patch)

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-29 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Attachment: HADOOP-12897.004.patch

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch, HADOOP-12897.004.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-29 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344165#comment-16344165
 ] 

Ajay Kumar commented on HADOOP-12897:
-

Patch v4 adds a log message at DEBUG level to address [~ste...@apache.org]'s 
suggestion on {{wrapExceptionWithMessage}}.

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Comment Edited] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-29 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344139#comment-16344139
 ] 

Ajay Kumar edited comment on HADOOP-12897 at 1/29/18 10:46 PM:
---

[~arpitagarwal], thanks for the suggestion. Using multi-catch in this case gives 
another compile-time error, which basically expects the throws clause of the 
function to be Exception. Seems like a bug in the JDK.


was (Author: ajayydv):
[~arpitagarwal], thanks for suggestion. Using multi-catch in this case gives 
another compile time error which basically expects throws clause in function to 
Exception. Seems like a bug in JDK.

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-29 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344139#comment-16344139
 ] 

Ajay Kumar commented on HADOOP-12897:
-

[~arpitagarwal], thanks for the suggestion. Using multi-catch in this case gives 
another compile-time error, which basically expects the throws clause of the 
function to be Exception. Seems like a bug in the JDK.

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Commented] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-01-29 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16344037#comment-16344037
 ] 

Ajay Kumar commented on HADOOP-15129:
-

{quote}local host is: (unknown); {quote}
Can we pass "localhost" to {{NetUtils.wrapException}} instead of null? The 
above message in the logs is a little misleading.

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> {code}
> Unfortunately, the log doesn't contain the exception that triggered it, but 
> the culprit is actually in IPC Client: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444.
> This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 
> to give a clear error message when somebody mispells an address.
> However, the fix in HADOOP-7472 doesn't apply here, because that code happens 
> in Client#getConnection after the Connection is constructed.
> My proposed fix (will attach a patch) is to move this exception out of the 
> constructor and into a place that will trigger HADOOP-7472's logic to 
> re-resolve addresses. If the DNS failure was temporary, this will allow the 
> connection to succeed. If not, the connection will fail after the IPC client 
> retries (by default, 10 seconds' worth of retries).
> I want to fix this in the IPC client rather than just in Datanode startup, as 
> this fixes temporary DNS issues for all of Hadoop.
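The resolve-once behavior described above is easy to reproduce (the host names here are illustrative; the `.invalid` TLD is reserved and never resolves):

```java
import java.net.InetSocketAddress;

public class ResolveOnceDemo {
    // DNS lookup happens in the InetSocketAddress constructor and is never
    // retried on the same instance; re-creating the address is the only way
    // to re-resolve, which is why HADOOP-7472's logic rebuilds it.
    public static InetSocketAddress resolve(String host, int port) {
        return new InetSocketAddress(host, port);
    }

    public static void main(String[] args) {
        InetSocketAddress bad = resolve("namenode.invalid", 8020);
        // Stays unresolved for the lifetime of this instance, even if DNS
        // later starts working.
        System.out.println(bad.isUnresolved());   // true
        InetSocketAddress ok = resolve("localhost", 8020);
        System.out.println(ok.isUnresolved());    // false
    }
}
```

This is why moving the failure out of the Connection constructor matters: a fresh {{InetSocketAddress}} per retry gives DNS a chance to recover.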






[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-29 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343828#comment-16343828
 ] 

Ajay Kumar commented on HADOOP-15170:
-

[~jlowe], thanks for the review. Updated patch v3 with the suggested changes.

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch, 
> HADOOP-15170.003.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.
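Assuming JDK 7+, the symlink step could look roughly like this (a sketch under stated assumptions, not the actual {{FileUtil}} change; the entry names are made up):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class UnTarSymlinkDemo {
    // Sketch of handling a tar symlink entry: create the link under the
    // extraction directory, pointing at the target recorded in the entry
    // (typically a path relative to the link's own directory).
    public static Path extractSymlink(Path destDir, String entryName,
            String linkTarget) throws java.io.IOException {
        Path link = destDir.resolve(entryName);
        Files.createDirectories(link.getParent());
        return Files.createSymbolicLink(link, Paths.get(linkTarget));
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempDirectory("untar");
        Files.write(tmp.resolve("real.txt"), "hello".getBytes());
        Path link = extractSymlink(tmp, "sub/link.txt", "../real.txt");
        System.out.println(Files.isSymbolicLink(link));
        System.out.println(new String(Files.readAllBytes(link.toRealPath())));
    }
}
```

Note that {{Files.createSymbolicLink}} throws {{UnsupportedOperationException}} on filesystems without symlink support, so a real implementation would need a fallback or a clear error.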






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-29 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.003.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch, 
> HADOOP-15170.003.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-29 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343727#comment-16343727
 ] 

Ajay Kumar commented on HADOOP-14969:
-

The failed test is unrelated; it passes locally.

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch, HADOOP-14969.005.patch, 
> HADOOP-14969.006.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also when starting secure DN with resources then the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.
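One way to make the failure more specific (a hedged sketch with illustrative names and simplified conditions, not the patch itself) is to report which of the mutually exclusive options is misconfigured rather than throwing one catch-all message:

```java
public class SecureDnCheck {
    // Illustrative diagnostics only: the real checkSecureConfig inspects
    // DNConf and SecureResources; here booleans stand in for those checks.
    // Returns null when the configuration is acceptable, otherwise a
    // message naming the specific problem.
    public static String diagnose(boolean privilegedResources,
            boolean saslProtection, boolean httpsOnly) {
        if (privilegedResources && saslProtection) {
            return "Both privileged resources and SASL RPC data transfer "
                + "protection are configured; that combination is unsupported.";
        }
        if (saslProtection && !httpsOnly) {
            return "SASL RPC data transfer protection requires "
                + "dfs.http.policy = HTTPS_ONLY.";
        }
        if (!privilegedResources && !saslProtection) {
            return "Neither privileged resources (start via "
                + "SecureDataNodeStarter) nor SASL RPC data transfer "
                + "protection is configured.";
        }
        return null;
    }
}
```

Each branch tells the operator which knob to turn, instead of listing every possibility in one RuntimeException.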






[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-26 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341924#comment-16341924
 ] 

Ajay Kumar commented on HADOOP-14969:
-

Addressed the test failure and the findbugs warning in patch v6. 

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch, HADOOP-14969.005.patch, 
> HADOOP-14969.006.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also when starting secure DN with resources then the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.






[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-26 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.006.patch

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch, HADOOP-14969.005.patch, 
> HADOOP-14969.006.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also when starting secure DN with resources then the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.






[jira] [Comment Edited] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-26 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341847#comment-16341847
 ] 

Ajay Kumar edited comment on HADOOP-12897 at 1/27/18 12:27 AM:
---


https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/22816/console

-1 overall

| Vote |Subsystem |  Runtime   | Comment

|   0  |  reexec  |   0m 17s   | Docker mode activated. 
|  |  || Prechecks 
|  +1  | @author  |   0m  0s   | The patch does not contain any @author 
|  |  || tags.
|  +1  |  test4tests  |   0m  0s   | The patch appears to include 2 new or 
|  |  || modified test files.
|  |  || trunk Compile Tests 
|  +1  |  mvninstall  |  15m 24s   | trunk passed 
|  +1  | compile  |   0m 50s   | trunk passed 
|  +1  |  checkstyle  |   0m 37s   | trunk passed 
|  +1  | mvnsite  |   0m 56s   | trunk passed 
|  +1  |shadedclient  |  10m 59s   | branch has no errors when building and 
|  |  || testing our client artifacts.
|  +1  |findbugs  |   1m 46s   | trunk passed 
|  +1  | javadoc  |   0m 53s   | trunk passed 
|  |  || Patch Compile Tests 
|  +1  |  mvninstall  |   0m 54s   | the patch passed 
|  +1  | compile  |   0m 47s   | the patch passed 
|  +1  |   javac  |   0m 47s   | the patch passed 
|  +1  |  checkstyle  |   0m 33s   | the patch passed 
|  +1  | mvnsite  |   0m 53s   | the patch passed 
|  +1  |  whitespace  |   0m  0s   | The patch has no whitespace issues. 
|  +1  |shadedclient  |  10m 21s   | patch has no errors when building and 
|  |  || testing our client artifacts.
|  +1  |findbugs  |   1m 50s   | the patch passed 
|  +1  | javadoc  |   0m 50s   | the patch passed 
|  |  || Other Tests 
|  -1  |unit  | 124m 26s   | hadoop-hdfs in the patch failed. 
|  +1  |  asflicense  |   0m 18s   | The patch does not generate ASF 
|  |  || License warnings.
|  |  | 172m 33s   | 


  Reason | Tests
 Failed junit tests  |  hadoop.hdfs.web.TestWebHdfsTimeouts 
 |  hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 


|| Subsystem || Report/Notes ||

| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12897 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905833/HDFS-12897.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfb52449b016 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82cc6f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22816/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22816/testReport/ |
| Max. process+thread count | 3425 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22816/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


was (Author: ajayydv):
-1 overall

| Vote |Subsystem |  Runtime   | Comment

|   0  |  reexec  |   0m 17s   | Docker mode activated. 
|  |  || Prechecks 
|  +1  | @author  |   0m  0s   | The patch does not contain any @author 
|  |  || tags.
|  +1  |  test4tests  |   0m  0s   | The patch appears to include 2 new or 
|  |  || modified test files.
|  |  || trunk Compile Tests 
|  +1  |  mvninstall  |  15m 24s   | trunk passed 
|  +1  | compile  |   0m 50s   | trunk passed 
|  +1  |  checkstyle  |   0m 37s   | trunk passed 
|  +1  | mvnsite  |   0m 56s   | trunk passed 
|  +1  |shadedclient  |  10m 59s   | branch has no errors when building and 
|  |  || testing our client artifacts.
|  +1  |findbugs  |   1m 46s   | trunk passed 
|  +1  | javadoc 

[jira] [Commented] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-26 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16341847#comment-16341847
 ] 

Ajay Kumar commented on HADOOP-12897:
-

-1 overall

| Vote |Subsystem |  Runtime   | Comment

|   0  |  reexec  |   0m 17s   | Docker mode activated. 
|  |  || Prechecks 
|  +1  | @author  |   0m  0s   | The patch does not contain any @author 
|  |  || tags.
|  +1  |  test4tests  |   0m  0s   | The patch appears to include 2 new or 
|  |  || modified test files.
|  |  || trunk Compile Tests 
|  +1  |  mvninstall  |  15m 24s   | trunk passed 
|  +1  | compile  |   0m 50s   | trunk passed 
|  +1  |  checkstyle  |   0m 37s   | trunk passed 
|  +1  | mvnsite  |   0m 56s   | trunk passed 
|  +1  |shadedclient  |  10m 59s   | branch has no errors when building and 
|  |  || testing our client artifacts.
|  +1  |findbugs  |   1m 46s   | trunk passed 
|  +1  | javadoc  |   0m 53s   | trunk passed 
|  |  || Patch Compile Tests 
|  +1  |  mvninstall  |   0m 54s   | the patch passed 
|  +1  | compile  |   0m 47s   | the patch passed 
|  +1  |   javac  |   0m 47s   | the patch passed 
|  +1  |  checkstyle  |   0m 33s   | the patch passed 
|  +1  | mvnsite  |   0m 53s   | the patch passed 
|  +1  |  whitespace  |   0m  0s   | The patch has no whitespace issues. 
|  +1  |shadedclient  |  10m 21s   | patch has no errors when building and 
|  |  || testing our client artifacts.
|  +1  |findbugs  |   1m 50s   | the patch passed 
|  +1  | javadoc  |   0m 50s   | the patch passed 
|  |  || Other Tests 
|  -1  |unit  | 124m 26s   | hadoop-hdfs in the patch failed. 
|  +1  |  asflicense  |   0m 18s   | The patch does not generate ASF 
|  |  || License warnings.
|  |  | 172m 33s   | 


  Reason | Tests
 Failed junit tests  |  hadoop.hdfs.web.TestWebHdfsTimeouts 
 |  hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 


|| Subsystem || Report/Notes ||

| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12897 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905833/HDFS-12897.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfb52449b016 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 82cc6f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22816/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22816/testReport/ |
| Max. process+thread count | 3425 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22816/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-25 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16340451#comment-16340451
 ] 

Ajay Kumar commented on HADOOP-14969:
-

[~xyao], thanks for the review. Addressed your suggestions in patch v5.
[~ste...@apache.org], shall we address the RuntimeException-to-IOException 
conversion in a separate jira, as this jira is directed towards improving 
diagnostics?

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch, HADOOP-14969.005.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also when starting secure DN with resources then the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.






[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-25 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.005.patch

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch, HADOOP-14969.005.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
> "combination with SASL RPC data transfer protection is not supported.");
>   }
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup scripts 
> should launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, we could mention that too.
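To make the requested improvement concrete, here is a minimal sketch of how the check could name the specific precondition that failed instead of emitting one generic message. The class and method names (`SecureDnDiagnostics`, `diagnose`) are invented for illustration; this is not the actual patch.

```java
/**
 * Hypothetical sketch: instead of one generic RuntimeException message,
 * report which secure-DataNode precondition actually failed.
 * All names here are invented for illustration.
 */
public class SecureDnDiagnostics {

  public static String diagnose(boolean hasPrivilegedResources,
                                boolean saslConfigured,
                                boolean httpsOnly) {
    StringBuilder sb = new StringBuilder("Cannot start secure DataNode: ");
    if (hasPrivilegedResources && saslConfigured) {
      sb.append("privileged resources and SASL RPC data transfer protection "
          + "are both configured; this combination is not supported.");
    } else if (!hasPrivilegedResources && !saslConfigured) {
      sb.append("neither privileged resources nor SASL RPC data transfer "
          + "protection is configured.");
    } else if (saslConfigured && !httpsOnly) {
      sb.append("SASL is configured but the HTTP policy is not HTTPS_ONLY.");
    } else {
      sb.append("configuration looks inconsistent; check resources, SASL, "
          + "and the HTTP policy.");
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // A DN started without resources or SASL gets a specific hint.
    System.out.println(diagnose(false, false, false));
  }
}
```

Each branch corresponds to one of the conditions the single message above lumps together.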






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-25 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Status: Open  (was: Patch Available)

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.
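A minimal sketch of the kind of wrapping the issue asks for, assuming we simply rewrap the IOException with the target URL in its message. The class and method names (`AuthExceptionWrapper`, `wrapWithUrl`) are invented for illustration.

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.URL;

/** Hypothetical sketch: carry the target URL in the exception message. */
public class AuthExceptionWrapper {

  public static IOException wrapWithUrl(URL url, IOException e) {
    IOException wrapped =
        new IOException("Error connecting to " + url + ": " + e.getMessage());
    wrapped.initCause(e);  // preserve the original stack trace
    return wrapped;
  }

  public static void main(String[] args) throws Exception {
    IOException w = wrapWithUrl(new URL("http://nn.example:50070/"),
        new ConnectException("Connection refused"));
    System.out.println(w.getMessage());
  }
}
```

Note that the caller still loses the concrete exception class with this naive rewrap; preserving the subclass is the separate generalization discussed under HADOOP-15178.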






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-25 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Status: Patch Available  (was: Open)

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-25 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339859#comment-16339859
 ] 

Ajay Kumar commented on HADOOP-15170:
-

[~jlowe], could you please review the attached patch?

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.
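The JDK API the issue refers to can be exercised on its own. Below is a minimal sketch of creating a link the way an untar loop could when it meets a symlink entry; the class and method names (`SymlinkDemo`, `createLink`) are invented for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hypothetical sketch: when the Java untar loop meets a tar entry flagged
 * as a symbolic link, it can create the link instead of copying bytes.
 * Symlink creation may fail on platforms or filesystems without support
 * (e.g. Windows without the required privilege).
 */
public class SymlinkDemo {

  public static Path createLink(Path linkPath, Path target) throws IOException {
    Files.deleteIfExists(linkPath);  // archives may overwrite existing entries
    return Files.createSymbolicLink(linkPath, target);
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("untar-demo");
    Path target = Files.createFile(dir.resolve("data.txt"));
    Path link = createLink(dir.resolve("data-link"), target);
    System.out.println(Files.isSymbolicLink(link));
  }
}
```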






[jira] [Commented] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-25 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339717#comment-16339717
 ] 

Ajay Kumar commented on HADOOP-15178:
-

[~ste...@apache.org], sorry, I don't understand the proposal clearly. Could you 
please elaborate on the two points you mentioned? We have a test case in 
{{TestIOUtils#testWrapException}} which tests an IOE with no String constructor. 
Shall we add one to {{TestNetUtils}} as well?

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15178.001.patch
>
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or 
> BindException.
> By default, it should always return an instance (a subclass of IOException) of 
> the same type unless a String constructor is not available.
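A minimal sketch of the proposed generalization, assuming reflection over a (String) constructor. The class and method names (`GeneralWrap`, `wrap`) are invented for illustration; this is not the actual patch.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

/**
 * Hypothetical sketch: keep the concrete IOException subclass when it has
 * a (String) constructor, otherwise fall back to a plain IOException.
 */
public class GeneralWrap {

  public static IOException wrap(IOException e, String extraInfo) {
    try {
      Constructor<? extends IOException> ctor =
          e.getClass().getConstructor(String.class);
      IOException wrapped = ctor.newInstance(e.getMessage() + "; " + extraInfo);
      wrapped.initCause(e);  // keep the original stack trace
      return wrapped;
    } catch (ReflectiveOperationException noStringCtor) {
      // No usable (String) constructor: fall back to the base class.
      return new IOException(e.getMessage() + "; " + extraInfo, e);
    }
  }

  public static void main(String[] args) {
    IOException w = wrap(new java.net.NoRouteToHostException("no route"),
        "host=nn1 port=8020");
    System.out.println(w.getClass().getSimpleName());
  }
}
```

Here NoRouteToHostException has a (String) constructor, so the wrapped exception keeps its concrete type; something like ClosedChannelException (no-arg constructor only) would fall back to IOException.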






[jira] [Comment Edited] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-23 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16336257#comment-16336257
 ] 

Ajay Kumar edited comment on HADOOP-12897 at 1/23/18 7:10 PM:
--

[~arpitagarwal], thanks for the review. Updated patch v3 to include the 
suggested changes. Also updated the test case to include a check for 
AuthenticationException.


was (Author: ajayydv):
[~arpitagarwal] thanks for review. Updated patch to include suggested changes. 
Also updated test case to include check for AuthenticationException.

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Updated] (HADOOP-12897) KerberosAuthenticator.authenticate to include URL on IO failures

2018-01-23 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-12897:

Attachment: HADOOP-12897.003.patch

> KerberosAuthenticator.authenticate to include URL on IO failures
> 
>
> Key: HADOOP-12897
> URL: https://issues.apache.org/jira/browse/HADOOP-12897
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-12897.001.patch, HADOOP-12897.002.patch, 
> HADOOP-12897.003.patch
>
>
> If {{KerberosAuthenticator.authenticate}} can't connect to the endpoint, you 
> get a stack trace, but without the URL it is trying to talk to.
> That is: it doesn't have any equivalent of the {{NetUtils.wrapException}} 
> handler, which can't be called here as it's not in the {{hadoop-auth}} module.






[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-19 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332739#comment-16332739
 ] 

Ajay Kumar commented on HADOOP-14788:
-

[~ste...@apache.org], [~hanishakoneru], thanks for the review and commit.

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, 
> HADOOP-14788.006.patch, HADOOP-14788.007.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.






[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-19 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332737#comment-16332737
 ] 

Ajay Kumar commented on HADOOP-15114:
-

Thanks [~ste...@apache.org] and [~arpitagarwal] for the review and commit, and 
[~brahmareddy] for reporting.

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
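A minimal sketch of what such a varargs helper typically looks like. The class name (`CloseHelper`) is invented for illustration and the signature is not necessarily the committed one.

```java
import java.io.Closeable;
import java.io.IOException;

/**
 * Hypothetical sketch of a closeStreams(...) helper: close every stream,
 * tolerate nulls, and swallow IOExceptions so one failing close() does
 * not prevent the remaining streams from being closed.
 */
public class CloseHelper {

  public static void closeStreams(Closeable... streams) {
    if (streams == null) {
      return;
    }
    for (Closeable c : streams) {
      if (c == null) {
        continue;  // callers may pass streams that were never opened
      }
      try {
        c.close();
      } catch (IOException ignored) {
        // best-effort close; production code would log this at debug level
      }
    }
  }

  public static void main(String[] args) {
    java.io.ByteArrayInputStream in =
        new java.io.ByteArrayInputStream(new byte[0]);
    closeStreams(in, null);  // null entries are tolerated
    System.out.println("closed");
  }
}
```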






[jira] [Commented] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-19 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332639#comment-16332639
 ] 

Ajay Kumar commented on HADOOP-15170:
-

Fixed checkstyle issues.

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-19 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.002.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch, HADOOP-15170.002.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Commented] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-18 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331548#comment-16331548
 ] 

Ajay Kumar commented on HADOOP-15178:
-

The asflicense warning and unit test failures are unrelated.

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15178.001.patch
>
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or 
> BindException.
> By default, it should always return an instance (a subclass of IOException) of 
> the same type unless a String constructor is not available.






[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: (was: HADOOP-15114.addendum1.patch)

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].






[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.addendum1.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].






[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.addendum1.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].






[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: (was: HADOOP-15114.addendum1.patch)

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].






[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-18 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.addendum1.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch, 
> HADOOP-15114.addendum.patch, HADOOP-15114.addendum1.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].






[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329919#comment-16329919
 ] 

Ajay Kumar commented on HADOOP-14788:
-

Fixed checkstyle issues in patch v6.

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, 
> HADOOP-14788.006.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.






[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Attachment: HADOOP-14788.006.patch

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch, 
> HADOOP-14788.006.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: (was: HADOOP-15170.001.patch)

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Status: Patch Available  (was: Open)

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.001.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Updated] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15170:

Attachment: HADOOP-15170.001.patch

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15170.001.patch
>
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.






[jira] [Commented] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions

2018-01-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329452#comment-16329452
 ] 

Ajay Kumar commented on HADOOP-14959:
-

[~bharatviswa], thanks for the review. Filed [HADOOP-15178] to make the changes 
in NetUtils#wrapException.

> DelegationTokenAuthenticator.authenticate() to wrap network exceptions
> --
>
> Key: HADOOP-14959
> URL: https://issues.apache.org/jira/browse/HADOOP-14959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net, security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14959.001.patch, HADOOP-14959.002.patch
>
>
> Network errors raised in {{DelegationTokenAuthenticator.authenticate()}} 
> aren't being wrapped, so they only return the usual limited-value java.net 
> error text. Using {{NetUtils.wrapException()}} can address that.






[jira] [Updated] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15178:

Description: 
NetUtils#wrapException returns a plain IOException if the exception passed to it 
is not of type SocketException, EOFException, NoRouteToHostException, 
SocketTimeoutException, UnknownHostException, ConnectException, or BindException.
By default, it should always return an instance (a subclass of IOException) of 
the same type unless a String constructor is not available.

  was:
NetUtils#wrapException returns an IOException if exception passed to it is not 
of type 
SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
# By default, it  should always return instance (subclass of IOException) of 
same type unless a String constructor is not available.


> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> NetUtils#wrapException returns a plain IOException if the exception passed to 
> it is not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or 
> BindException.
> By default, it should always return an instance (a subclass of IOException) of 
> the same type unless a String constructor is not available.






[jira] [Updated] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15178:

Environment: (was: NetUtils#wrapException returns an IOException if 
exception passed to it is not of type 
SocketException,EOFException,NoRouteToHostException,SocketTimeoutException,UnknownHostException,ConnectException,BindException.
By default, it  should always return instance (subclass of IOException) of same 
type unless a String constructor is not available.)

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15178:

Description: 
NetUtils#wrapException returns an IOException if the exception passed to it is not 
of type SocketException, EOFException, NoRouteToHostException, 
SocketTimeoutException, UnknownHostException, ConnectException, or BindException.
By default, it should always return an instance (a subclass of IOException) of 
the same type, unless a String constructor is not available.

> Generalize NetUtils#wrapException to handle other subclasses with String 
> Constructor.
> -
>
> Key: HADOOP-15178
> URL: https://issues.apache.org/jira/browse/HADOOP-15178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> NetUtils#wrapException returns an IOException if the exception passed to it is 
> not of type SocketException, EOFException, NoRouteToHostException, 
> SocketTimeoutException, UnknownHostException, ConnectException, or BindException.
> By default, it should always return an instance (a subclass of IOException) of 
> the same type, unless a String constructor is not available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15178) Generalize NetUtils#wrapException to handle other subclasses with String Constructor.

2018-01-17 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-15178:
---

 Summary: Generalize NetUtils#wrapException to handle other 
subclasses with String Constructor.
 Key: HADOOP-15178
 URL: https://issues.apache.org/jira/browse/HADOOP-15178
 Project: Hadoop Common
  Issue Type: Bug
 Environment: NetUtils#wrapException returns an IOException if the 
exception passed to it is not of type 
SocketException, EOFException, NoRouteToHostException, SocketTimeoutException, UnknownHostException, ConnectException, or BindException.
By default, it should always return an instance (a subclass of IOException) of the 
same type, unless a String constructor is not available.
Reporter: Ajay Kumar
Assignee: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-12 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324900#comment-16324900
 ] 

Ajay Kumar commented on HADOOP-14788:
-

[~hanishakoneru], thanks for the review and comments. [~ste...@apache.org], 
thanks for the clarification. Updated the test case with calls to 
{{LambdaTestUtils.intercept}} in patch v5.
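For readers unfamiliar with the pattern, a self-contained stand-in for what {{LambdaTestUtils.intercept}} does (the real utility lives in Hadoop's test module and has more overloads) looks roughly like:

```java
import java.util.concurrent.Callable;

public class InterceptSketch {
    // Minimal stand-in for LambdaTestUtils.intercept: run the callable and
    // require that it throws the expected exception type, returning the caught
    // exception so the test can make further assertions on it.
    static <E extends Throwable> E intercept(Class<E> expected, Callable<?> call)
            throws Exception {
        try {
            call.call();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("Wrong exception type: " + t, t);
        }
        throw new AssertionError("Expected " + expected.getSimpleName()
            + " but nothing was thrown");
    }

    public static void main(String[] args) throws Exception {
        java.io.EOFException caught = intercept(java.io.EOFException.class,
            () -> { throw new java.io.EOFException("truncated file"); });
        System.out.println(caught.getMessage());
    }
}
```

Returning the caught exception is what makes it easy to assert on the exception class and message in one step, which is the point of switching the test to intercept-style calls.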

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-12 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14788:

Attachment: HADOOP-14788.005.patch

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch, HADOOP-14788.005.patch
>
>
> When {{Credentials readTokenStorageFile}} gets an IOE, it catches & wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetUtils.wrapException()}} which handles the broader set of IOEs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15167) [viewfs] ls will fail when user doesn't exist

2018-01-12 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324785#comment-16324785
 ] 

Ajay Kumar commented on HADOOP-15167:
-

Personally I think that if the group can't be determined, it is better to keep it 
blank. There might be valid business scenarios where some operations are done 
based on a user's group, and having no group at all is a valid case there. I 
welcome any comments/suggestions from others.

> [viewfs] ls will fail when user doesn't exist
> -
>
> Key: HADOOP-15167
> URL: https://issues.apache.org/jira/browse/HADOOP-15167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-15167.patch
>
>
> Have a secure federated cluster with at least two nameservices.
> Configure the viewfs related configs.
> When we run the {{ls}} cmd in the HDFS client, we call the method 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.InternalDirOfViewFs#getFileStatus;
> it tries to get the group of the kerberos user. If the node does not have this 
> user, it fails.
> UserGroupInformation#getPrimaryGroupName throws the following and exits:
> {code}
> if (groups.isEmpty()) {
>   throw new IOException("There is no primary group for UGI " + this);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-01-12 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14775:

Status: Open  (was: Patch Available)

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-12 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324488#comment-16324488
 ] 

Ajay Kumar commented on HADOOP-14969:
-

Hi [~ste...@apache.org], thanks for the feedback. In this particular case the 
configuration passed to the DataNode is inconsistent, which warrants stopping the 
DataNode without further processing. At least to me, a RuntimeException seems 
more suitable than an IOE in this case. Do you still think we should change it 
to an IOE?

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-12 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324251#comment-16324251
 ] 

Ajay Kumar commented on HADOOP-15114:
-

[~arpitagarwal], thanks for the review and commit. [~ste...@apache.org], thanks 
for the review. [~jlowe], thanks for the suggestion.

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
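As a hedged sketch of what such a varargs {{closeStreams(...)}} helper could look like (names and logging behavior are illustrative, not the committed implementation):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseStreamsSketch {
    // Hypothetical varargs helper in the spirit of the requested
    // IOUtils.closeStreams(...): close every stream, tolerating nulls and
    // swallowing per-stream IOExceptions so one failure does not stop the rest.
    static void closeStreams(Closeable... streams) {
        if (streams == null) {
            return;
        }
        for (Closeable c : streams) {
            if (c == null) {
                continue;
            }
            try {
                c.close();
            } catch (IOException ignored) {
                // Best-effort close; production code would log this at debug.
            }
        }
    }

    public static void main(String[] args) {
        boolean[] closed = {false};
        // A null entry is tolerated and the real stream is still closed.
        closeStreams(null, () -> closed[0] = true);
        System.out.println(closed[0]);
    }
}
```

The swallow-and-continue loop is the design point: cleanup paths should close every resource they were handed, not abort on the first failure.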



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-12 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15170:
---

Assignee: Ajay Kumar

> Add symlink support to FileUtil#unTarUsingJava 
> ---
>
> Key: HADOOP-15170
> URL: https://issues.apache.org/jira/browse/HADOOP-15170
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Minor
>
> Now that JDK7 or later is required, we can leverage 
> java.nio.file.Files.createSymbolicLink in FileUtil.unTarUsingJava to support 
> archives that contain symbolic links.
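As a rough illustration of that idea (tar-entry parsing is omitted, and {{extractSymlink}} is a hypothetical helper, not the actual FileUtil code), handling a symlink entry with java.nio could look like:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkSketch {
    // Hypothetical helper: what FileUtil.unTarUsingJava could do for a tar
    // entry flagged as a symbolic link, once the entry's name and link target
    // have been read from the archive.
    static void extractSymlink(Path destDir, String entryName, String linkTarget)
            throws IOException {
        Path link = destDir.resolve(entryName);
        Files.createDirectories(link.getParent());
        // Create the symlink instead of copying file bytes; the target is kept
        // relative, as recorded in the archive entry.
        Files.createSymbolicLink(link, Path.of(linkTarget));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("untar");
        Files.writeString(tmp.resolve("real.txt"), "data");
        extractSymlink(tmp, "alias.txt", "real.txt");
        System.out.println(Files.isSymbolicLink(tmp.resolve("alias.txt")));
    }
}
```

Note that Files.createSymbolicLink can throw UnsupportedOperationException on platforms or filesystems without symlink support, which a real implementation would need to handle.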



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-11 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323273#comment-16323273
 ] 

Ajay Kumar commented on HADOOP-15114:
-

[~arpitagarwal], [~ste...@apache.org], could you please review the patch?

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-11 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.004.patch

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-11 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323172#comment-16323172
 ] 

Ajay Kumar commented on HADOOP-14969:
-

[~arpitagarwal], thanks for the review. Updated patch v4 with the new URL.

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch, HADOOP-14969.004.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-11 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322544#comment-16322544
 ] 

Ajay Kumar commented on HADOOP-15121:
-

LGTM. There is one minor checkstyle issue for line 684 in DecayRpcScheduler. +1 
with that addressed (non-binding).

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch, HADOOP-15121.004.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the NameNode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method.
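As an illustration of that suggestion (all names here are invented, not the actual DecayRpcScheduler code), a proxy that guards against an unset delegate avoids the NPE when JMX polls metrics during registration:

```java
import java.lang.ref.WeakReference;

public class MetricsProxySketch {
    // Stand-in for the scheduler that actually produces metrics.
    interface Source {
        String getMetrics();
    }

    // The proxy holds a weak reference so the scheduler can be collected; the
    // NPE in the stack trace above arises when getMetrics() runs before the
    // delegate has been set.
    static class MetricsProxy {
        private volatile WeakReference<Source> delegate;

        void setDelegate(Source s) {
            this.delegate = new WeakReference<>(s);
        }

        String getMetrics() {
            WeakReference<Source> ref = delegate;
            Source s = (ref == null) ? null : ref.get();
            // Guarding against an unset (or collected) delegate keeps
            // registration-time JMX polling from throwing.
            return (s == null) ? "" : s.getMetrics();
        }
    }

    public static void main(String[] args) {
        MetricsProxy proxy = new MetricsProxy();
        System.out.println("[" + proxy.getMetrics() + "]"); // safe before init
        Source scheduler = () -> "callCounts=42";
        proxy.setDelegate(scheduler);
        System.out.println(proxy.getMetrics());
    }
}
```

Whether the fix sets the delegate earlier or null-guards as above, the invariant is the same: getMetrics() must be safe to call at any point after the proxy is registered.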



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15167) [viewfs] ls will fail when user doesn't exist

2018-01-11 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322540#comment-16322540
 ] 

Ajay Kumar commented on HADOOP-15167:
-

[~brahmareddy], thanks for working on this. A client with no group should be 
allowed to retrieve the FileStatus; the group field in FileStatus is not 
mandatory. Since in this scenario we don't get a group for the client, we can 
pass null for the group. The FileStatus constructor handles it without 
complaining:
{code}this.owner = (owner == null) ? "" : owner;{code}


> [viewfs] ls will fail when user doesn't exist
> -
>
> Key: HADOOP-15167
> URL: https://issues.apache.org/jira/browse/HADOOP-15167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-15167.patch
>
>
> Have a secure federated cluster with at least two nameservices.
> Configure the viewfs related configs.
> When we run the {{ls}} cmd in the HDFS client, we call the method 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.InternalDirOfViewFs#getFileStatus;
> it tries to get the group of the kerberos user. If the node does not have this 
> user, it fails.
> UserGroupInformation#getPrimaryGroupName throws the following and exits:
> {code}
> if (groups.isEmpty()) {
>   throw new IOException("There is no primary group for UGI " + this);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.003.patch

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: (was: HADOOP-14969.003.patch)

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317191#comment-16317191
 ] 

Ajay Kumar edited comment on HADOOP-14969 at 1/8/18 10:13 PM:
--

[~arpitagarwal], Thanks for review. Improved the error message for mentioned 
case in patch v3.


was (Author: ajayydv):
[~arpitagarwal], Thanks for review. Improved the error message for mentioned 
case in pacth v3.

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317191#comment-16317191
 ] 

Ajay Kumar edited comment on HADOOP-14969 at 1/8/18 10:12 PM:
--

[~arpitagarwal], Thanks for review. Improved the error message for mentioned 
case in pacth v3.


was (Author: ajayydv):
[~arpitagarwal], Thanks for review. Improved the error message for mentioned 
case.

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with resources, the startup scripts should 
> launch the SecureDataNodeStarter class. If no SASL is configured and 
> SecureDataNodeStarter is not used, then we could mention that too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.003.patch

[~arpitagarwal], thanks for the review. Improved the error message for the 
mentioned case.

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch, 
> HADOOP-14969.003.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup
> scripts should launch the SecureDataNodeStarter class. If no SASL is
> configured and SecureDataNodeStarter is not used, we could mention that too.






[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.002.patch

Added patch v2 to link the documentation from the new Confluence wiki, and 
added the corresponding page in the new wiki.
([https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+in+Secure+Mode#HadoopinSecureMode-SecureDataNode])

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch, HADOOP-14969.002.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup
> scripts should launch the SecureDataNodeStarter class. If no SASL is
> configured and SecureDataNodeStarter is not used, we could mention that too.






[jira] [Comment Edited] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-08 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312079#comment-16312079
 ] 

Ajay Kumar edited comment on HADOOP-14969 at 1/8/18 6:25 PM:
-

Added an initial patch for review. [~arpitagarwal], [~ste...@apache.org], could 
you please have a look? The error message now points to the existing secure 
DataNode documentation (i.e. 
[https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html#Secure_DataNode]).


was (Author: ajayydv):
Added an initial patch for review. [~arpitagarwal], [~ste...@apache.org], could 
you please have a look? The error message now points to the existing secure 
DataNode documentation (i.e. 
https://hadoop.apache.org/docs/current/hadoop-project-dist/ ).

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup
> scripts should launch the SecureDataNodeStarter class. If no SASL is
> configured and SecureDataNodeStarter is not used, we could mention that too.






[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs

2018-01-08 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16316721#comment-16316721
 ] 

Ajay Kumar commented on HADOOP-14788:
-

[~hanishakoneru], [~ste...@apache.org], could you please have a look at the 
latest patch?

> Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
> --
>
> Key: HADOOP-14788
> URL: https://issues.apache.org/jira/browse/HADOOP-14788
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, 
> HADOOP-14788.003.patch, HADOOP-14788.004.patch
>
>
> When {{Credentials.readTokenStorageFile}} gets an IOE, it catches and wraps it 
> with the filename, losing the exception class information.
> Is this needed, or can it pass everything up?
> If it is needed, well, it's a common pattern: wrapping the exception with the 
> path & operation. Maybe it's time to add an IOE version of 
> {{NetworkUtils.wrapException()}} which handles the broader set of IOEs
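One way to keep the exception class while still adding the path, in the spirit of the {{NetUtils.wrapException()}} suggestion (a sketch under stated assumptions, not the actual Hadoop implementation): rebuild the same exception class via reflection with the path prepended to the message, falling back to a plain IOException when the class has no {{(String)}} constructor.

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Constructor;

public class IOEWrapper {
  /**
   * Sketch: wrap an IOException with the offending path while preserving
   * its concrete class, so callers can still catch the subclass.
   */
  static IOException wrap(String path, IOException cause) {
    String msg = path + ": " + cause.getMessage();
    try {
      Constructor<? extends IOException> ctor =
          cause.getClass().getConstructor(String.class);
      IOException wrapped = ctor.newInstance(msg);
      wrapped.initCause(cause);
      return wrapped;
    } catch (ReflectiveOperationException e) {
      // No (String) constructor: class info is lost but message survives.
      return new IOException(msg, cause);
    }
  }

  public static void main(String[] args) {
    IOException w = wrap("/tokens/file",
        new FileNotFoundException("no such file"));
    System.out.println(w.getClass().getSimpleName() + ": " + w.getMessage());
  }
}
```

The reflective reconstruction is what lets a caller's `catch (FileNotFoundException e)` keep working after the wrap.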






[jira] [Commented] (HADOOP-15160) Confusing text in http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html

2018-01-05 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16314068#comment-16314068
 ] 

Ajay Kumar commented on HADOOP-15160:
-

Agreed; I think those two sections of incompatible changes should be merged. 
Thoughts on {{Delete an optional field as long as the optional field has 
reasonable defaults to allow deletions}}? To me this looks like a compatible 
change.

> Confusing text in 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
> 
>
> Key: HADOOP-15160
> URL: https://issues.apache.org/jira/browse/HADOOP-15160
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Jim Showalter
>Priority: Minor
>
> The text in the wire formats policy section is confusing.
> First, there are two subsections with the same heading:
> The following changes to a .proto file SHALL be considered incompatible:
> The following changes to a .proto file SHALL be considered incompatible:
> Second, one of the items listed under the first of those two headings seems 
> like it is a compatible change, not an incompatible change:
> Delete an optional field as long as the optional field has reasonable 
> defaults to allow deletions






[jira] [Commented] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-04 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312079#comment-16312079
 ] 

Ajay Kumar commented on HADOOP-14969:
-

Added an initial patch for review. [~arpitagarwal], [~ste...@apache.org], could 
you please have a look? The error message points to the existing secure 
DataNode documentation (i.e. 
https://hadoop.apache.org/docs/current/hadoop-project-dist/ )

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup
> scripts should launch the SecureDataNodeStarter class. If no SASL is
> configured and SecureDataNodeStarter is not used, we could mention that too.






[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-04 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Status: Patch Available  (was: Open)

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup
> scripts should launch the SecureDataNodeStarter class. If no SASL is
> configured and SecureDataNodeStarter is not used, we could mention that too.






[jira] [Updated] (HADOOP-14969) Improve diagnostics in secure DataNode startup

2018-01-04 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14969:

Attachment: HADOOP-14969.001.patch

> Improve diagnostics in secure DataNode startup
> --
>
> Key: HADOOP-14969
> URL: https://issues.apache.org/jira/browse/HADOOP-14969
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-14969.001.patch
>
>
> When DN secure mode configuration is incorrect, it throws the following 
> exception from Datanode#checkSecureConfig
> {code}
>   private static void checkSecureConfig(DNConf dnConf, Configuration conf,
>   SecureResources resources) throws RuntimeException {
> if (!UserGroupInformation.isSecurityEnabled()) {
>   return;
> }
> ...
> throw new RuntimeException("Cannot start secure DataNode without " +
>   "configuring either privileged resources or SASL RPC data transfer " +
>   "protection and SSL for HTTP.  Using privileged resources in " +
>   "combination with SASL RPC data transfer protection is not supported.");
> {code}
> The DN should print more useful diagnostics as to what exactly went wrong.
> Also, when starting a secure DN with privileged resources, the startup
> scripts should launch the SecureDataNodeStarter class. If no SASL is
> configured and SecureDataNodeStarter is not used, we could mention that too.






[jira] [Commented] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310515#comment-16310515
 ] 

Ajay Kumar commented on HADOOP-15114:
-

Fixed checkstyle issues in patch v5.

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].
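A minimal sketch of the proposed {{closeStreams(...)}} semantics (illustrative only; the committed method lives in {{org.apache.hadoop.io.IOUtils}} and may differ in details such as logging): close every argument, tolerating nulls and swallowing IOExceptions so one failing stream does not prevent the others from being closed.

```java
import java.io.Closeable;
import java.io.IOException;

public class IOUtilsSketch {
  /**
   * Quiet close of a varargs list of streams: nulls are skipped and
   * IOExceptions are swallowed so every stream gets a close attempt.
   */
  static void closeStreams(Closeable... streams) {
    if (streams == null) {
      return;
    }
    for (Closeable c : streams) {
      if (c != null) {
        try {
          c.close();
        } catch (IOException ignored) {
          // best-effort close; real code would log this at debug level
        }
      }
    }
  }

  public static void main(String[] args) {
    final boolean[] closed = {false};
    Closeable ok = () -> closed[0] = true;
    Closeable bad = () -> { throw new IOException("disk gone"); };
    // Neither the exception nor the null stops the loop.
    closeStreams(bad, null, ok);
    System.out.println("ok closed: " + closed[0]);
  }
}
```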






[jira] [Updated] (HADOOP-15114) Add closeStreams(...) to IOUtils

2018-01-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15114:

Attachment: HADOOP-15114.005.patch

> Add closeStreams(...) to IOUtils
> 
>
> Key: HADOOP-15114
> URL: https://issues.apache.org/jira/browse/HADOOP-15114
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HADOOP-15114.001.patch, HADOOP-15114.002.patch, 
> HADOOP-15114.003.patch, HADOOP-15114.004.patch, HADOOP-15114.005.patch
>
>
> Add closeStreams(...) in IOUtils. Originally suggested by [Jason 
> Lowe|https://issues.apache.org/jira/browse/HDFS-12881?focusedCommentId=16288320=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288320].






[jira] [Commented] (HADOOP-15155) Error in javadoc of ReconfigurableBase#reconfigureProperty

2018-01-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310392#comment-16310392
 ] 

Ajay Kumar commented on HADOOP-15155:
-

[~arpitagarwal], thanks for review and commit.

> Error in javadoc of ReconfigurableBase#reconfigureProperty
> --
>
> Key: HADOOP-15155
> URL: https://issues.apache.org/jira/browse/HADOOP-15155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0
>
> Attachments: HADOOP-15155.001.patch
>
>
> There is an error in the javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigureProperty.
> {code}
> should change to
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigurePropertyImpl.
> {code}






[jira] [Commented] (HADOOP-15093) Deprecation of yarn.resourcemanager.zk-address is undocumented

2018-01-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310387#comment-16310387
 ] 

Ajay Kumar commented on HADOOP-15093:
-

[~arpitagarwal],[~djp] thanks for review. 

> Deprecation of yarn.resourcemanager.zk-address is undocumented
> --
>
> Key: HADOOP-15093
> URL: https://issues.apache.org/jira/browse/HADOOP-15093
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.9.0, 3.0.0, 3.1.0
>Reporter: Eric Wohlstadter
>Assignee: Ajay Kumar
>  Labels: documentation
> Attachments: HADOOP-15093.001.patch
>
>
> "yarn.resourcemanager.zk-address" was deprecated in 2.9.x and moved to 
> "hadoop.zk.address". However, this doesn't appear in the Deprecated Properties 
> documentation. 
> Additionally, the Configuration base class doesn't auto-translate from 
> "yarn.resourcemanager.zk-address" to "hadoop.zk.address". Only the sub-class 
> YarnConfiguration does the translation. 
> Also, the 2.9+ Resource Manager HA documentation still refers to the use of 
> "yarn.resourcemanager.zk-address".
> https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html
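The missing auto-translation amounts to a deprecation-table lookup in the Configuration base class. A toy model of that mapping (a sketch only, not Hadoop's actual {{Configuration.addDeprecations}} machinery) shows what the base class would need for this key:

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecationSketch {
  // Minimal model of a deprecation table: old key -> new key.
  // In Hadoop this lives in Configuration; here only the single mapping
  // discussed in the issue is registered.
  static final Map<String, String> DEPRECATIONS = new HashMap<>();
  static {
    DEPRECATIONS.put("yarn.resourcemanager.zk-address", "hadoop.zk.address");
  }

  /** Translate a deprecated key to its replacement; pass others through. */
  static String translate(String key) {
    return DEPRECATIONS.getOrDefault(key, key);
  }

  public static void main(String[] args) {
    System.out.println(translate("yarn.resourcemanager.zk-address"));
  }
}
```

The point of doing this in the base class rather than only in YarnConfiguration is that every Configuration consumer would then see the translated key.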






[jira] [Updated] (HADOOP-15155) Typo in javadoc of ReconfigurableBase#reconfigureProperty

2018-01-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15155:

Labels: newbie  (was: )

> Typo in javadoc of ReconfigurableBase#reconfigureProperty
> -
>
> Key: HADOOP-15155
> URL: https://issues.apache.org/jira/browse/HADOOP-15155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15155.001.patch
>
>
> There is a typo in the javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigureProperty.
> {code}
> should change to
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigurePropertyImpl.
> {code}






[jira] [Updated] (HADOOP-15155) Typo in javadoc of ReconfigurableBase#reconfigureProperty

2018-01-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15155:

Labels:   (was: newbie)

> Typo in javadoc of ReconfigurableBase#reconfigureProperty
> -
>
> Key: HADOOP-15155
> URL: https://issues.apache.org/jira/browse/HADOOP-15155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15155.001.patch
>
>
> There is a typo in the javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigureProperty.
> {code}
> should change to
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigurePropertyImpl.
> {code}






[jira] [Updated] (HADOOP-15155) Typo in javadoc of ReconfigurableBase#reconfigureProperty

2018-01-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15155:

Status: Patch Available  (was: Open)

> Typo in javadoc of ReconfigurableBase#reconfigureProperty
> -
>
> Key: HADOOP-15155
> URL: https://issues.apache.org/jira/browse/HADOOP-15155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15155.001.patch
>
>
> There is a typo in the javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigureProperty.
> {code}
> should change to
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigurePropertyImpl.
> {code}






[jira] [Updated] (HADOOP-15155) Typo in javadoc of ReconfigurableBase#reconfigureProperty

2018-01-02 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15155:

Attachment: HADOOP-15155.001.patch

> Typo in javadoc of ReconfigurableBase#reconfigureProperty
> -
>
> Key: HADOOP-15155
> URL: https://issues.apache.org/jira/browse/HADOOP-15155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15155.001.patch
>
>
> There is a typo in the javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigureProperty.
> {code}
> should change to
> {code}
>* This method cannot be overridden, subclasses should instead override
>* reconfigurePropertyImpl.
> {code}






[jira] [Created] (HADOOP-15155) Typo in javadoc of ReconfigurableBase#reconfigureProperty

2018-01-02 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-15155:
---

 Summary: Typo in javadoc of ReconfigurableBase#reconfigureProperty
 Key: HADOOP-15155
 URL: https://issues.apache.org/jira/browse/HADOOP-15155
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ajay Kumar
Assignee: Ajay Kumar
Priority: Minor


There is a typo in the javadoc of ReconfigurableBase#reconfigurePropertyImpl

{code}
   * This method cannot be overridden, subclasses should instead override
   * reconfigureProperty.
{code}
should change to
{code}
   * This method cannot be overridden, subclasses should instead override
   * reconfigurePropertyImpl.
{code}






[jira] [Comment Edited] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-02 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16308633#comment-16308633
 ] 

Ajay Kumar edited comment on HADOOP-15121 at 1/2/18 11:13 PM:
--

[~Tao Jie] thanks for updating the patch. While testing the patch on my local 
machine it timed out occasionally. We can increase the timeout to 4000 or 6000 
to be on the safer side. Also, I think we can simplify the test case a little 
by getting rid of the catch block; any exception will result in failure, and 
the stack trace will be visible in the logs.


was (Author: ajayydv):
[~Tao Jie] thanks for updating the patch. While testing the patch on my local 
machine it timed out occasionally. I think we can increase the timeout to 4000 
or 6000 to be on the safer side. Also, I think we can simplify the test case a 
little by getting rid of the catch block; any exception will result in failure, 
and the stack trace will be visible in the logs.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> 

[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler

2018-01-02 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16308633#comment-16308633
 ] 

Ajay Kumar commented on HADOOP-15121:
-

[~Tao Jie] thanks for updating the patch. While testing the patch on my local 
machine it timed out occasionally. I think we can increase the timeout to 4000 
or 6000 to be on the safer side. Also, I think we can simplify the test case a 
little by getting rid of the catch block; any exception will result in failure, 
and the stack trace will be visible in the logs.

> Encounter NullPointerException when using DecayRpcScheduler
> ---
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Tao Jie
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, 
> HADOOP-15121.003.patch
>
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but 
> got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter 
> (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from 
> source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
> at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
> at 
> org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
> at 
> org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its 
> {{delegate}} field in its initialization method
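A sketch of the suspected bug and its fix (heavily simplified; the real MetricsProxy uses a weak reference to the scheduler and registers itself as a JMX metrics source, and the class and method names below are only a model): the proxy can receive a `getMetrics()` call before its delegate reference is set, so the delegate must be assigned during initialization, before registration, and the getter must null-check.

```java
import java.lang.ref.WeakReference;

public class MetricsProxySketch {
  interface Scheduler { int callVolume(); }

  static class MetricsProxy {
    private WeakReference<Scheduler> delegate;

    /** Fix: set the delegate BEFORE the proxy is registered as a source. */
    void init(Scheduler s) {
      delegate = new WeakReference<>(s);
    }

    /** Defensive getter: an early call returns a default instead of NPE. */
    int getMetrics() {
      Scheduler s = (delegate == null) ? null : delegate.get();
      return (s == null) ? 0 : s.callVolume();
    }
  }

  public static void main(String[] args) {
    MetricsProxy proxy = new MetricsProxy();
    System.out.println(proxy.getMetrics());  // safe even before init
    proxy.init(() -> 42);
    System.out.println(proxy.getMetrics());
  }
}
```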


