[jira] [Commented] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-07 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505686#comment-16505686
 ] 

Bibin A Chundatt commented on HADOOP-15523:
---

[~BilwaST]

Please handle the following as part of the patch:

# Javadoc update for Shell#getTimeoutInterval (a possible update is sketched after this list)
{code}
/**
 * Returns the timeout value set for the executor's sub-commands.
 *
 * @return The timeout value in seconds
 */
@VisibleForTesting
public long getTimeoutInterval() {
  return timeOutInterval;
}
{code}
# Also add/update the test case that was added as part of HADOOP-13817.
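
A possible shape for the updated Javadoc, assuming the patch ends up keeping {{timeOutInterval}} in milliseconds once the configured seconds are converted (sketch only, not the committed change):
{code}
/**
 * Returns the timeout value set for the executor's sub-commands.
 *
 * @return The timeout value in milliseconds
 */
@VisibleForTesting
public long getTimeoutInterval() {
  return timeOutInterval;
}
{code}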

> Shell command timeout given is in seconds whereas it is taken as millisec 
> while scheduling
> --
>
> Key: HADOOP-15523
> URL: https://issues.apache.org/jira/browse/HADOOP-15523
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> ShellBasedUnixGroupsMapping has a property 
> {{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
> for the fetch-groups command, which can be configured in seconds. However, the 
> value is treated as milliseconds when scheduling, so a configured value of 60s 
> is currently taken as 60ms.
> {code:java}
> timeout = conf.getTimeDuration(
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
> TimeUnit.SECONDS);{code}
> The time unit passed is seconds, but it should be milliseconds.
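
One possible shape for the fix, sketched under the assumption that an explicit conversion is acceptable (the actual patch may instead change the unit passed to getTimeDuration): keep reading the configured value in seconds and convert it before it is used for scheduling.
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

class TimeoutReadSketch {
  // Sketch: read the configured value in seconds (as today), then convert
  // explicitly to the milliseconds that the scheduling code expects.
  static long readShellTimeoutMillis(Configuration conf) {
    long timeoutSecs = conf.getTimeDuration(
        CommonConfigurationKeys.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
        CommonConfigurationKeys.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
        TimeUnit.SECONDS);
    return TimeUnit.SECONDS.toMillis(timeoutSecs);
  }
}
{code}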



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505644#comment-16505644
 ] 

genericqa commented on HADOOP-15522:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HADOOP-15461 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
34s{color} | {color:green} HADOOP-15461 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
5s{color} | {color:green} HADOOP-15461 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-15461 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} HADOOP-15461 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} HADOOP-15461 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} HADOOP-15461 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15522 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926999/HADOOP-15522-HADOOP-15461.v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c1f18de71673 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15461 / b59400d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14740/testReport/ |
| Max. process+thread count | 1355 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14740/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505642#comment-16505642
 ] 

genericqa commented on HADOOP-15520:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926996/HADOOP-15520.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9b4b7ad23b3e 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3b88fe2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14741/testReport/ |
| Max. process+thread count | 1357 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14741/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520

[jira] [Commented] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-07 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505610#comment-16505610
 ] 

Brahma Reddy Battula commented on HADOOP-15523:
---

Thanks for reporting. I have added you as a hadoop-common contributor; you can 
assign issues to yourself from now on.

Nice finding. I wonder why this *timeout* is not converted, since the following 
expects *ms* (java.util.Timer#schedule(java.util.TimerTask, long) also takes 
milliseconds).

 

{code:java}
/**
 * Create a new instance of the ShellCommandExecutor to execute a command.
 *
 * @param execString The command to execute with arguments
 * @param dir If not-null, specifies the directory which should be set
 *            as the current working directory for the command.
 *            If null, the current working directory is not modified.
 * @param env If not-null, environment of the command will include the
 *            key-value pairs specified in the map. If null, the current
 *            environment is not modified.
 * @param timeout Specifies the time in milliseconds, after which the
 *                command will be killed and the status marked as timed-out.
 *                If 0, the command will not be timed out.
 * @param inheritParentEnv Indicates if the process should inherit the env
 *                         vars from the parent process or not.
 */
public ShellCommandExecutor(String[] execString, File dir,
    Map<String, String> env, long timeout, boolean inheritParentEnv) {
{code}
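
For context, here is a minimal standalone illustration of the unit mismatch (made-up class and values, not Hadoop code): java.util.Timer#schedule interprets its delay argument as milliseconds, so a value meant as 60 seconds but passed unconverted fires after roughly 60 ms.
{code:java}
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.TimeUnit;

public class TimerUnitDemo {
  public static void main(String[] args) throws InterruptedException {
    long configuredSeconds = 60L;            // what the user configured: 60 seconds
    long unconverted = configuredSeconds;    // the bug: 60 is read as 60 milliseconds
    long converted = TimeUnit.SECONDS.toMillis(configuredSeconds);  // 60000 ms

    Timer timer = new Timer(true);
    timer.schedule(new TimerTask() {
      @Override
      public void run() {
        System.out.println("timer fired");   // with 'unconverted' this prints after ~60 ms
      }
    }, unconverted);

    System.out.println("unconverted=" + unconverted + " ms, converted=" + converted + " ms");
    Thread.sleep(200);                       // give the daemon timer a chance to fire
  }
}
{code}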

> Shell command timeout given is in seconds whereas it is taken as millisec 
> while scheduling
> --
>
> Key: HADOOP-15523
> URL: https://issues.apache.org/jira/browse/HADOOP-15523
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> ShellBasedUnixGroupsMapping has a property 
> {{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
> for the fetch-groups command, which can be configured in seconds. However, the 
> value is treated as milliseconds when scheduling, so a configured value of 60s 
> is currently taken as 60ms.
> {code:java}
> timeout = conf.getTimeDuration(
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
> TimeUnit.SECONDS);{code}
> The time unit passed is seconds, but it should be milliseconds.






[jira] [Assigned] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-07 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-15523:
-

Assignee: Bilwa S T

> Shell command timeout given is in seconds whereas it is taken as millisec 
> while scheduling
> --
>
> Key: HADOOP-15523
> URL: https://issues.apache.org/jira/browse/HADOOP-15523
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> ShellBasedUnixGroupsMapping has a property 
> {{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
> for the fetch-groups command, which can be configured in seconds. However, the 
> value is treated as milliseconds when scheduling, so a configured value of 60s 
> is currently taken as 60ms.
> {code:java}
> timeout = conf.getTimeDuration(
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
> TimeUnit.SECONDS);{code}
> The time unit passed is seconds, but it should be milliseconds.






[jira] [Commented] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-07 Thread Bilwa S T (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505597#comment-16505597
 ] 

Bilwa S T commented on HADOOP-15523:


Can someone assign this issue to me?

> Shell command timeout given is in seconds whereas it is taken as millisec 
> while scheduling
> --
>
> Key: HADOOP-15523
> URL: https://issues.apache.org/jira/browse/HADOOP-15523
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Priority: Major
>
> ShellBasedUnixGroupsMapping has a property 
> {{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
> for the fetch-groups command, which can be configured in seconds. However, the 
> value is treated as milliseconds when scheduling, so a configured value of 60s 
> is currently taken as 60ms.
> {code:java}
> timeout = conf.getTimeDuration(
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
> TimeUnit.SECONDS);{code}
> The time unit passed is seconds, but it should be milliseconds.






[jira] [Created] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-07 Thread Bilwa S T (JIRA)
Bilwa S T created HADOOP-15523:
--

 Summary: Shell command timeout given is in seconds whereas it is 
taken as millisec while scheduling
 Key: HADOOP-15523
 URL: https://issues.apache.org/jira/browse/HADOOP-15523
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bilwa S T


ShellBasedUnixGroupsMapping has a property 
{{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
for the fetch-groups command, which can be configured in seconds. However, the 
value is treated as milliseconds when scheduling, so a configured value of 60s 
is currently taken as 60ms.

{code:java}
timeout = conf.getTimeDuration(
CommonConfigurationKeys.
HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
CommonConfigurationKeys.
HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
TimeUnit.SECONDS);{code}

The time unit passed is seconds, but it should be milliseconds.








[jira] [Commented] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505577#comment-16505577
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15522:
---

Tested on Windows and Linux.

> Deprecate Shell#ReadLink by using native java code
> --
>
> Key: HADOOP-15522
> URL: https://issues.apache.org/jira/browse/HADOOP-15522
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15522-HADOOP-15461.v1.patch
>
>
> Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java 
> 7+, we can deprecate all the shell code and rely on the Java APIs.
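
For reference, a minimal sketch of what a Java-native replacement could look like (an illustration only, not the attached patch; it assumes the same return-empty-string-on-failure behaviour that callers of the shell-based helper typically rely on), using java.nio.file.Files#readSymbolicLink:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class ReadLinkSketch {
  /**
   * Returns the target of the given symbolic link, or an empty string if the
   * path is not a symlink or cannot be read.
   */
  static String readLink(String file) {
    Path path = Paths.get(file);
    try {
      return Files.readSymbolicLink(path).toString();
    } catch (UnsupportedOperationException | IOException e) {
      return "";
    }
  }

  public static void main(String[] args) {
    // Hypothetical path, for illustration only.
    System.out.println(readLink("/tmp/example-link"));
  }
}
{code}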






[jira] [Updated] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15522:
--
Status: Patch Available  (was: Open)

> Deprecate Shell#ReadLink by using native java code
> --
>
> Key: HADOOP-15522
> URL: https://issues.apache.org/jira/browse/HADOOP-15522
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15522-HADOOP-15461.v1.patch
>
>
> Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java 
> 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Updated] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15522:
--
Attachment: HADOOP-15522-HADOOP-15461.v1.patch

> Deprecate Shell#ReadLink by using native java code
> --
>
> Key: HADOOP-15522
> URL: https://issues.apache.org/jira/browse/HADOOP-15522
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15522-HADOOP-15461.v1.patch
>
>
> Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java 
> 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Assigned] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned HADOOP-15522:
-

Assignee: Giovanni Matteo Fumarola

> Deprecate Shell#ReadLink by using native java code
> --
>
> Key: HADOOP-15522
> URL: https://issues.apache.org/jira/browse/HADOOP-15522
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java 
> 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Updated] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15522:
--
Description: Hadoop uses the shell to read symbolic links. Now that Hadoop 
relies on Java 7+, we can deprecate all the shell code and rely on the Java 
APIs.

> Deprecate Shell#ReadLink by using native java code
> --
>
> Key: HADOOP-15522
> URL: https://issues.apache.org/jira/browse/HADOOP-15522
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> Hadoop uses the shell to read symbolic links. Now that Hadoop relies on Java 
> 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Created] (HADOOP-15522) Deprecate Shell#ReadLink by using native java code

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created HADOOP-15522:
-

 Summary: Deprecate Shell#ReadLink by using native java code
 Key: HADOOP-15522
 URL: https://issues.apache.org/jira/browse/HADOOP-15522
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola









[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: HADOOP-15520.002.patch
Status: Patch Available  (was: Open)

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: (was: HADOOP-15520.001.patch)

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Status: Open  (was: Patch Available)

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.001.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: (was: HADOOP-15520.002.patch)

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.001.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: HADOOP-15520.002.patch

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.001.patch, HADOOP-15520.002.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15465:
-
Affects Version/s: 3.1.0

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Fix For: HADOOP-15461
>
> Attachments: HADOOP-15465-HADOOP-15461.v1.patch, 
> HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, HADOOP-15465.v1.patch, 
> HADOOP-15465.v2.patch, HADOOP-15465.v3.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.
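
As a rough sketch of the Java-native direction (illustration only, not the attached patch), symlink creation maps to java.nio.file.Files#createSymbolicLink; note that on Windows the calling process needs the symbolic-link creation privilege:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class CreateSymlinkSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical link and target paths, for illustration only.
    Files.createSymbolicLink(Paths.get("/tmp/example-link"),
        Paths.get("/tmp/example-target"));
  }
}
{code}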






[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15465:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-15461
   Status: Resolved  (was: Patch Available)

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Fix For: HADOOP-15461
>
> Attachments: HADOOP-15465-HADOOP-15461.v1.patch, 
> HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, HADOOP-15465.v1.patch, 
> HADOOP-15465.v2.patch, HADOOP-15465.v3.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Comment Edited] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505521#comment-16505521
 ] 

Íñigo Goiri edited comment on HADOOP-15465 at 6/8/18 12:03 AM:
---

Thanks [~giovanni.fumarola] for the patch.
As you still cannot commit, I committed to the HADOOP-15461 branch.


was (Author: elgoiri):
Thanks [~giovanni.fumarola] for the patch.
As you still cannot committed, I committed to the HADOOP-15461 branch.

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465-HADOOP-15461.v1.patch, 
> HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, HADOOP-15465.v1.patch, 
> HADOOP-15465.v2.patch, HADOOP-15465.v3.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505521#comment-16505521
 ] 

Íñigo Goiri commented on HADOOP-15465:
--

Thanks [~giovanni.fumarola] for the patch.
As you still cannot commit, I committed to the HADOOP-15461 branch.

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465-HADOOP-15461.v1.patch, 
> HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, HADOOP-15465.v1.patch, 
> HADOOP-15465.v2.patch, HADOOP-15465.v3.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.






[jira] [Commented] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505516#comment-16505516
 ] 

genericqa commented on HADOOP-15520:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 34 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926975/HADOOP-15520.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 023bb3a832af 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba303b1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14739/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14739/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14739/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1521 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-07 Thread Esfandiar Manii (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505507#comment-16505507
 ] 

Esfandiar Manii commented on HADOOP-15521:
--


{code:java}
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-azure ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
[INFO] Running org.apache.hadoop.fs.azure.TestWasbFsck
[INFO] Running org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
[INFO] Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 s 
- in org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
[WARNING] Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.703 
s - in org.apache.hadoop.fs.azure.TestWasbFsck
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
[INFO] Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.321 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
[INFO] Running org.apache.hadoop.fs.azure.TestBlobMetadata
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.124 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.817 s 
- in org.apache.hadoop.fs.azure.TestBlobMetadata
[INFO] Running org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
[WARNING] Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.065 
s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.752 s 
- in org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
[INFO] Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemBlockCompaction
[INFO] Running org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.151 s 
- in org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.698 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
[INFO] Running 
org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.626 s 
- in org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
[INFO] Running org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.332 s 
- in org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
[INFO] Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
[INFO] Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.364 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
[INFO] Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
[INFO] Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.197 
s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.129 s 
- in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemBlockCompaction
[INFO] Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.754 s 
- in org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
[INFO] Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 231.325 
s - in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
[INFO] 
[INFO] Results:
[INFO] 
[WARNING] Tests run: 232, Failures: 0, Errors: 0, Skipped: 4
[INFO] 
[INFO] 
[INFO] --- maven-surefire-plugin:2.21.0:test (serialized-test) @ hadoop-azure 
---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.fs.azure.metrics.TestRollingWindowAverage
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.203 s 
- in org.apache.hadoop.fs.azure.metrics.TestRollingWindowAverage
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-azure ---
[INFO] Deleting /home/esmanii/trunk-2/hadoop/hadoop-tools/hadoop-azure/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-azure ---
[INFO] 

[jira] [Updated] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-07 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-15521:
-
Attachment: HADOOP-15521-001.patch

> Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code 
> blocks
> ---
>
> Key: HADOOP-15521
> URL: https://issues.apache.org/jira/browse/HADOOP-15521
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Minor
> Attachments: HADOOP-15521-001.patch
>
>
> Upgraded the Azure Storage SDK to 7.0.0.
> Fixed code issues and a couple of tests.






[jira] [Created] (HADOOP-15521) Upgrading Azure Storage Sdk version to 7.0.0 and updating corresponding code blocks

2018-06-07 Thread Esfandiar Manii (JIRA)
Esfandiar Manii created HADOOP-15521:


 Summary: Upgrading Azure Storage Sdk version to 7.0.0 and updating 
corresponding code blocks
 Key: HADOOP-15521
 URL: https://issues.apache.org/jira/browse/HADOOP-15521
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Reporter: Esfandiar Manii
Assignee: Esfandiar Manii


Upgraded the Azure Storage SDK to 7.0.0.
Fixed code issues and a couple of tests.







[jira] [Assigned] (HADOOP-15464) [C] Create a JNI interface to interact with Windows

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned HADOOP-15464:
-

Assignee: (was: Giovanni Matteo Fumarola)

> [C] Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15464
> URL: https://issues.apache.org/jira/browse/HADOOP-15464
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>







[jira] [Assigned] (HADOOP-15463) [Java] Create a JNI interface to interact with Windows

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned HADOOP-15463:
-

Assignee: (was: Giovanni Matteo Fumarola)

> [Java] Create a JNI interface to interact with Windows
> --
>
> Key: HADOOP-15463
> URL: https://issues.apache.org/jira/browse/HADOOP-15463
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> This JIRA tracks the design/implementation of the Java layer for the JNI 
> interface to interact with Windows.






[jira] [Assigned] (HADOOP-15462) Create a JNI interface to interact with Windows

2018-06-07 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned HADOOP-15462:
-

Assignee: (was: Giovanni Matteo Fumarola)

> Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15462
> URL: https://issues.apache.org/jira/browse/HADOOP-15462
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: WinUtils-Functions.pdf, WinUtils.CSV
>
>
> I did a quick investigation of the performance of WinUtils in YARN. On 
> average, the NM calls WinUtils 4.76 times per second and 65.51 times per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means roughly *666.58* IO ops/second due to WinUtils (4.761 executions per 
> second x ~140 IO ops per execution).
> We should start considering removing WinUtils from Hadoop and creating a JNI 
> interface.






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: HADOOP-15520.001.patch
Status: Patch Available  (was: Open)

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.001.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: (was: HADOOP-15520.001.patch)

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.001.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Updated] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arash Nabili updated HADOOP-15520:
--
Attachment: HADOOP-15520.001.patch

> Add new JUnit test cases
> 
>
> Key: HADOOP-15520
> URL: https://issues.apache.org/jira/browse/HADOOP-15520
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.2.0
> Environment: CentOS 7 - amd64
> Oracle JDK 8u172
> Maven 3.5.3
> hadoop trunk
>Reporter: Arash Nabili
>Priority: Minor
> Attachments: HADOOP-15520.001.patch
>
>
> Created new JUnit test classes for the following classes:
>  * org.apache.hadoop.util.CloseableReferenceCount
>  * org.apache.hadoop.util.IntrusiveCollection
>  * org.apache.hadoop.util.LimitInputStream
>  * org.apache.hadoop.util.UTF8ByteArrayUtils
> Added new JUnit test cases to the following test classes:
>  * org.apache.hadoop.util.TestShell
>  * org.apache.hadoop.util.TestStringUtils






[jira] [Created] (HADOOP-15520) Add new JUnit test cases

2018-06-07 Thread Arash Nabili (JIRA)
Arash Nabili created HADOOP-15520:
-

 Summary: Add new JUnit test cases
 Key: HADOOP-15520
 URL: https://issues.apache.org/jira/browse/HADOOP-15520
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.2.0
 Environment: CentOS 7 - amd64

Oracle JDK 8u172

Maven 3.5.3

hadoop trunk
Reporter: Arash Nabili


Created new JUnit test classes for the following classes:
 * org.apache.hadoop.util.CloseableReferenceCount
 * org.apache.hadoop.util.IntrusiveCollection
 * org.apache.hadoop.util.LimitInputStream
 * org.apache.hadoop.util.UTF8ByteArrayUtils

Added new JUnit test cases to the following test classes:
 * org.apache.hadoop.util.TestShell
 * org.apache.hadoop.util.TestStringUtils






[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-07 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505302#comment-16505302
 ] 

genericqa commented on HADOOP-14199:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-14199 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926957/HADOOP-14199.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2572a709a3ab 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b79ae5d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14738/testReport/ |
| Max. process+thread count | 1524 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14738/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: 

[jira] [Commented] (HADOOP-15516) Add test cases to cover FileUtil#readLink

2018-06-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505271#comment-16505271
 ] 

Hudson commented on HADOOP-15516:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14383 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14383/])
HADOOP-15516. Add test cases to cover FileUtil#readLink. Contributed by 
(inigoiri: rev 12be8bad7debd67c9ea72b979a39c8cf42c5f37d)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
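
For context, a rough sketch of the kind of coverage such a test adds; this is 
not the committed TestFileUtil change, and it assumes a POSIX system where 
{{FileUtil#readLink}} returns the symlink target as a String and an empty 
string for a regular file:

{code:java}
import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.fs.FileUtil;
import org.junit.Assert;
import org.junit.Test;

public class TestReadLinkSketch {

  @Test
  public void testReadLinkReturnsTarget() throws Exception {
    File dir = Files.createTempDirectory("readlink").toFile();
    File target = new File(dir, "target.txt");
    Assert.assertTrue(target.createNewFile());
    File link = new File(dir, "link");
    Files.createSymbolicLink(link.toPath(), target.toPath());

    // The link resolves to the target path; a regular file yields "".
    Assert.assertEquals(target.getPath(), FileUtil.readLink(link));
    Assert.assertEquals("", FileUtil.readLink(target));
  }
}
{code}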


> Add test cases to cover FileUtil#readLink
> -
>
> Key: HADOOP-15516
> URL: https://issues.apache.org/jira/browse/HADOOP-15516
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15516.v1.patch
>
>
> Currently, FileUtil#readLink has no unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15516) Add test cases to cover FileUtil#readLink

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15516:
-
Affects Version/s: 3.1.0

> Add test cases to cover FileUtil#readLink
> -
>
> Key: HADOOP-15516
> URL: https://issues.apache.org/jira/browse/HADOOP-15516
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15516.v1.patch
>
>
> Currently, FileUtil#readLink has no unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15516) Add test cases to cover FileUtil#readLink

2018-06-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15516:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Thanks [~giovanni.fumarola] for adding the test cases.
Committed to trunk and branch-3.1.

> Add test cases to cover FileUtil#readLink
> -
>
> Key: HADOOP-15516
> URL: https://issues.apache.org/jira/browse/HADOOP-15516
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15516.v1.patch
>
>
> Currently, FileUtil#readLink has no unit tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-07 Thread Anbang Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505124#comment-16505124
 ] 

Anbang Hu commented on HADOOP-14199:


Uploaded  [^HADOOP-14199.000.patch] in response to [~elgoiri]'s suggestion. 
[~ste...@apache.org] Thanks for the useful input. However, my understanding is 
that the purpose of {{TestFsShellList#testList}} is to test {{-ls}} rather than 
special characters in filenames. What's your take on this?
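
As a purely illustrative sketch of that direction (not the attached patch), the 
test setup could pick names that are legal on the local OS, since the goal is 
to exercise {{-ls}} rather than special characters; {{Shell.WINDOWS}} is the 
existing flag in {{org.apache.hadoop.util.Shell}}, and the file names below are 
made up:

{code:java}
import org.apache.hadoop.util.Shell;

public final class ListingTestNames {

  private ListingTestNames() {
  }

  /** File names for the listing test that are legal on the local OS. */
  public static String[] namesForListingTest() {
    return Shell.WINDOWS
        // NTFS forbids characters such as '"' and treats '\' as a separator.
        ? new String[] {"simple", "with space", "with-dash"}
        // On POSIX, keep the shell-hostile names the test originally used.
        : new String[] {"simple", "quoted\"name", "back\\slash"};
  }
}
{code}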

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-14199.000.patch, HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-07 Thread Anbang Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HADOOP-14199:
---
Attachment: HADOOP-14199.000.patch

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-14199.000.patch, HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505081#comment-16505081
 ] 

Eric Yang commented on HADOOP-15518:


[~kminder] The race condition is not a criticism of this patch.  It is a 
byproduct of having multiple instances of AuthenticationFilter: the authenticate 
method is called more than once because there is no check that the current 
request is already authenticated.  My comment above shows the states prior to 
this patch.  Thank you for the patch.
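
For illustration, a minimal sketch of the kind of guard being discussed (this 
is not the HADOOP-15518 patch itself; the class and method names are made up): 
skip the handler when an earlier filter in the chain has already authenticated 
the request.

{code:java}
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public final class AlreadyAuthenticatedGuard {

  private AlreadyAuthenticatedGuard() {
  }

  /**
   * If the request already carries an authenticated principal, forward it
   * down the chain and return true so the caller skips its handler.
   */
  public static boolean skipIfAuthenticated(ServletRequest request,
      ServletResponse response, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest httpRequest = (HttpServletRequest) request;
    if (httpRequest.getUserPrincipal() != null
        || httpRequest.getRemoteUser() != null) {
      chain.doFilter(request, response); // already authenticated upstream
      return true;
    }
    return false;
  }
}
{code}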

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanism has been 
> configured.  For example when core-site.xml's has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-07 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505033#comment-16505033
 ] 

genericqa commented on HADOOP-15307:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15307 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926924/HADOOP-15307.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cf8648a7c3ac 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 377ea1b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14737/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-nfs U: 
hadoop-common-project/hadoop-nfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14737/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve NFS error handling: 

[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505025#comment-16505025
 ] 

Kevin Minder commented on HADOOP-15518:
---

[~eyang] - Can you please clarify whether your comment about a race condition is 
a criticism of the patch?  I'm having trouble reconciling your +1 with your 
other comments.  Are you saying that the patch solves the race 
condition?

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanism has been 
> configured.  For example when core-site.xml's has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-06-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16505018#comment-16505018
 ] 

Steve Loughran commented on HADOOP-15407:
-

h1. First attempt at testing

I found it very hard to get up and running. As in: I have one of the contract 
tests going, but nothing else yet. 

The testing docs will need to explain how to get started. The easier the setup 
process, the easier writing those docs becomes.

I'd make writing that doc a priority, as without that, getting the tests working 
will be a blocker to reviews.

Key things I had trouble with:

* the difference between the wasb & dfs accounts
* what's needed in terms of pre-test store container setup. I think it's 
happening
automatically, but that's probably repeating the same problem we see with wasb: 
container leakage & the
need to periodically purge them all.  If that's the case, a new version 
of {{org.apache.hadoop.fs.azure.integration.CleanupTestContainers}} is needed, 
and
again, the docs.
* Lack of meaningful details on why a test setup failed other than "skipped". 
The attached patch
addresses that by including a message in the Assume clause. (side-note: I 
expect meaningful messages in
*all* Assume.assume clauses, as I try to do in my own contribs). 


I tried to get {{ITestAzureBlobFileSystemMkDir}} up and working and didn't get 
that far: timeouts.


Every test needs a timeout. This is to avoid messages like

{code}
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.21.0:test
   (default-test) on project hadoop-azure: There was a timeout or other error 
in the fork -> [Help 1]
{code}

When maven kills a test, all the output is lost, and, as all test teardown is 
skipped, things on remote stores are left in a mess.

I've added one to {{DependencyInjectedTest}} where it will be found everywhere
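
Roughly what that looks like, as a sketch rather than the attached patch (the 
value is illustrative): a JUnit 4 {{Timeout}} rule in the shared base class 
applies to every subclass, so a stuck test fails with a stack trace instead of 
being killed by surefire.

{code:java}
import org.junit.Rule;
import org.junit.rules.Timeout;

public abstract class DependencyInjectedTestSketch {

  // Ten minutes per test method; pick whatever the suite actually needs.
  @Rule
  public Timeout perTestTimeout = Timeout.millis(10 * 60 * 1000);
}
{code}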

This shows me what's hanging. I'm assuming it's still test-setup related, so I
will look at my config options more. But the fact that things time out
when the tests are misconfigured is a problem on its own.


{code}
"Thread-0" #13 prio=5 os_prio=31 tid=0x7f97061b3000 nid=0x5803 waiting on 
condition [0x7f8ac000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:255)
at 
com.microsoft.azure.storage.blob.CloudBlobContainer.exists(CloudBlobContainer.java:769)
at 
com.microsoft.azure.storage.blob.CloudBlobContainer.exists(CloudBlobContainer.java:756)
at 
org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobContainerWrapperImpl.exists(StorageInterfaceImpl.java:233)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.connectUsingAnonymousCredentials(AzureNativeFileSystemStore.java:856)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.createAzureStorageSession(AzureNativeFileSystemStore.java:1081)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.initialize(AzureNativeFileSystemStore.java:538)
at 
org.apache.hadoop.fs.azurebfs.DependencyInjectedTest.initialize(DependencyInjectedTest.java:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}


# I don't think falling back to anon should happen, at least with tests.
# I absolutely don't think that login failures should be something you retry on.

I see there's a call to {{suppressRetryPolicyInClientIfNeeded();}}, so
the tests need to make sure that's running. I think production-side code
needs to look at the auth codepath and make sure that its operations are all
fail fast.

Proposed: add a test for this. Create a config, remove the auth, try to do
anon access to your test containers. Expect it to fail fast.
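
One way that proposed test could be written, sketched under assumptions (the 
account/container URI and the config key are placeholders, and the expected 
exception is deliberately broad):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

public class ITestAnonymousAccessFailsFastSketch {

  @Test
  public void testAnonymousAccessFailsFast() throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical: remove whatever credential key the test config normally sets.
    conf.unset("fs.azure.account.key.EXAMPLE.dfs.core.windows.net");

    // intercept() returns the caught exception, so the message can be checked too.
    LambdaTestUtils.intercept(Exception.class, () ->
        FileSystem.get(new URI("abfs://container@EXAMPLE.dfs.core.windows.net/"),
            conf).getFileStatus(new Path("/")));
  }
}
{code}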

Other aspects of that test: LambdaTestUtils.intercept() loves closures which 
return things other than void: the string value of the response is used in 

[jira] [Updated] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-06-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15407:

Attachment: HADOOP-15407-patch-atop-patch-007.patch

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch, HADOOP-15407-HADOOP-15407.007.patch, 
> HADOOP-15407-patch-atop-patch-007.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<filesystem>@<account>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}          .         This avoids the need for using 
> temporary/intermediate files, increasing the cost (and framework complexity 
> around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects to the version of the code 
> tested and used in our production environment.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-06-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15407:

Status: Open  (was: Patch Available)

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch, HADOOP-15407-HADOOP-15407.007.patch, 
> HADOOP-15407-patch-atop-patch-007.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<filesystem>@<account>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}          .         This avoids the need for using 
> temporary/intermediate files, increasing the cost (and framework complexity 
> around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects to the version of the code 
> tested and used in our production environment.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504999#comment-16504999
 ] 

Eric Yang commented on HADOOP-15518:


[~lmccay] {quote}it is unclear to me where the race condition is. My assumption 
is that the filters are invoked linearly so a second filter shouldn't be 
invoked until the request state is set properly.{quote}

Your assumption is correct, and the diagram might help to explain the race 
conditions.

High-level sequence of events in a normal setup:

| Time | Browser | HttpRequest | HttpResponse |
| 1 | Send WWW-Authenticate 1 | | |
| 2 | | AuthenticationFilter checks WWW-Authenticate 1 |  |
| 3 | | Call authenticate to verify WWW-Authenticate 1 ticket with KDC | |
| 4 | | Set User principal and remote user via Java security callbacks | |
| 5 | | | AuthenticationFilter writes WWW-Authenticate 2 |
| 6 | | | Business logic |
| 7 | Received WWW-Authenticate 2 | | |

The sequence of events with duplicated AuthenticationFilters is:

| Time | Browser | HttpRequest | HttpResponse |
| 1 | WWW-Authenticate 1 | | |
| 2 | | AuthenticationFilter Instance 1 checks WWW-Authenticate 1 |  |
| 3 | | Call authenticate to verify WWW-Authenticate 1 ticket with KDC | |
| 4 | | Set User principal and remote user via Java security callbacks | |
| 5 | | | AuthenticationFilter Instance 1 writes WWW-Authenticate 2 |
| 6 | | AuthenticationFilter Instance 2 checks WWW-Authenticate 1 | 
AuthenticationFilter Instance 2 rewrites HTTP status with 403 |

The browser never retrieves the WWW-Authenticate 2 header because the 
HttpResponse is still buffered on the server side.  The race condition is that 
the HttpRequest at time 6 is still carrying the existing ticket 1 rather than 
the new ticket 2 issued at time 5, so the second filter is invoked at time 6 
with outdated data.

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanism has been 
> configured.  For example when core-site.xml's has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504893#comment-16504893
 ] 

Larry McCay commented on HADOOP-15518:
--

[~eyang] - it is unclear to me where the race condition is. My assumption is 
that the filters are invoked linearly so a second filter shouldn't be invoked 
until the request state is set properly.

[~kminder] - this seems an appropriate change to me and worth adding tests for.

[~owen.omalley] - are we missing anything in this implementation and 
assumptions about the HTTP client only being authenticated once by 
AuthenticationFilter?

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanism has been 
> configured.  For example when core-site.xml's has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: 
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred in the current request.  This primarily 
affects situations where multiple authentication mechanism has been configured. 
 For example when core-site.xml's has hadoop.http.authentication.type=kerberos 
and yarn-site.xml has yarn.timeline-service.http-authentication.type=kerberos 
the result is an attempt to perform two Kerberos authentications for the same 
request.  This in turn results in Kerberos triggering a replay attack 
detection.  The javadocs for AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 

  was:
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred in the current request.  This primarily 
affects situations where multiple  authentication mechanism has been 
configured.  For example when core-site.xml's has 
hadoop.http.authentication.type=kerberos and yarn-site.xml has 
yarn.timeline-service.http-authentication.type=kerberos the result is an 
attempt to perform two Kerberos authentications for the same request.  This in 
turn results in Kerberos triggering a replay attack detection.  The javadocs 
for AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 


> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple authentication mechanism has been 
> configured.  For example when core-site.xml's has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: 
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred in the current request.  This primarily 
affects situations where multiple  authentication mechanism has been 
configured.  For example when core-site.xml's has 
hadoop.http.authentication.type=kerberos and yarn-site.xml has 
yarn.timeline-service.http-authentication.type=kerberos the result is an 
attempt to perform two Kerberos authentications for the same request.  This in 
turn results in Kerberos triggering a replay attack detection.  The javadocs 
for AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 

  was:
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multiple  authentication mechanism has been configured.  For example when 
core-site.xml's has hadoop.http.authentication.type=kerberos and yarn-site.xml 
has yarn.timeline-service.http-authentication.type=kerberos the result is an 
attempt to perform two Kerberos authentications for the same request.  This in 
turn results in Kerberos triggering a replay attack detection.  The javadocs 
for AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 


> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred in the current request.  This 
> primarily affects situations where multiple  authentication mechanism has 
> been configured.  For example when core-site.xml's has 
> hadoop.http.authentication.type=kerberos and yarn-site.xml has 
> yarn.timeline-service.http-authentication.type=kerberos the result is an 
> attempt to perform two Kerberos authentications for the same request.  This 
> in turn results in Kerberos triggering a replay attack detection.  The 
> javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: 
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multiple  authentication mechanism has been configured.  For example when 
core-site.xml's has hadoop.http.authentication.type=kerberos and yarn-site.xml 
has yarn.timeline-service.http-authentication.type=kerberos the result is an 
attempt to perform two Kerberos authentications for the same request.  This in 
turn results in Kerberos triggering a replay attack detection.  The javadocs 
for AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 

  was:
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multipole authentication mechanism has been configured.  For example when 
core-site.xml's has hadoop.http.authentication.type and yarn-site.xml has 
yarn.timeline-service.http-authentication.type the result is an attempt to 
perform two Kerberos authentications for the same request.  This in turn 
results in Kerberos triggering a replay attack detection.  The javadocs for 
AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 


> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple  authentication mechanism has been configured.  For example 
> when core-site.xml's has hadoop.http.authentication.type=kerberos and 
> yarn-site.xml has yarn.timeline-service.http-authentication.type=kerberos the 
> result is an attempt to perform two Kerberos authentications for the same 
> request.  This in turn results in Kerberos triggering a replay attack 
> detection.  The javadocs for AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've create a patch and tested on a limited number of functional use cases 
> (e.g. the timeline-service issue noted above).  If there is general agreement 
> that the change is valid I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: 
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multipole authentication mechanism has been configured.  For example when 
core-site.xml's has hadoop.http.authentication.type and yarn-site.xml has 
yarn.timeline-service.http-authentication.type the result is an attempt to 
perform two Kerberos authentications for the same request.  This in turn 
results in Kerberos triggering a replay attack detection.  The javadocs for 
AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've create a patch and tested on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid I'll add unit tests to the patch.

 

  was:
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multipole authentication mechanism has been configured.  For example when 
core-site.xml's has hadoop.http.authentication.type and yarn-site.xml has 
yarn.timeline-service.http-authentication.type the result is an attempt to 
perform two Kerberos authentications for the same request.  This in turn 
results in Kerberos triggering a replay attack detection.  The javadocs for 
AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the cause in practice.

I've created a patch and tested it on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid, I'll add unit tests to the patch.

 


> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use 
> cases (e.g. the timeline-service issue noted above).  If there is general 
> agreement that the change is valid, I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Larry McCay (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay reassigned HADOOP-15518:


Assignee: Kevin Minder

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Assignee: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use 
> cases (e.g. the timeline-service issue noted above).  If there is general 
> agreement that the change is valid, I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504881#comment-16504881
 ] 

Eric Yang commented on HADOOP-15518:


+1 The patch looks good to me.

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use 
> cases (e.g. the timeline-service issue noted above).  If there is general 
> agreement that the change is valid, I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504877#comment-16504877
 ] 

Eric Yang commented on HADOOP-15518:


This is a race condition that occurs when multiple instances of 
AuthenticationFilter are chained together by accident: the token has already 
been checked once by the first instance of AuthenticationFilter but has not yet 
been committed to the HttpResponse, so the "authenticate" method gets called 
twice.  The javadoc is not wrong, but validating that the current request is 
already authenticated is not trivial when the token has not yet been received 
by the browser.  It might be possible to prevent the race condition by checking 
either HttpRequest.getRemoteUser() or the HttpResponse header commit state to 
see whether authentication has already occurred.
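
For illustration only, here is a minimal sketch of the kind of guard being 
discussed, assuming a standard servlet filter chain. It is not the attached 
HADOOP-15518 patch; the class name and the wrapping approach are made up for 
this sketch, and only the javax.servlet API is used.
{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Hypothetical wrapper that skips re-authentication when the request already
 * carries a remote user or the response has been committed, so a second
 * Kerberos exchange (and the resulting replay detection) is never triggered.
 */
public class SkipIfAuthenticatedFilter implements Filter {
  private final Filter delegate; // e.g. an AuthenticationFilter instance

  public SkipIfAuthenticatedFilter(Filter delegate) {
    this.delegate = delegate;
  }

  @Override
  public void init(FilterConfig filterConfig) throws ServletException {
    delegate.init(filterConfig);
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpRequest = (HttpServletRequest) request;
    HttpServletResponse httpResponse = (HttpServletResponse) response;
    if (httpRequest.getRemoteUser() != null || httpResponse.isCommitted()) {
      // Already authenticated upstream: do not invoke the handler again.
      chain.doFilter(request, response);
      return;
    }
    delegate.doFilter(request, response, chain); // normal authentication path
  }

  @Override
  public void destroy() {
    delegate.destroy();
  }
}
{code}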

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use 
> cases (e.g. the timeline-service issue noted above).  If there is general 
> agreement that the change is valid, I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-07 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504875#comment-16504875
 ] 

Gabor Bota commented on HADOOP-15307:
-

Thanks [~templedf], that's totally reasonable. Added the comment in the v004 
patch.

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), the 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should also handle AUTH_SYS.
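
For illustration, a minimal sketch of how readFlavorAndVerifier could accept 
AUTH_SYS instead of throwing. This is a sketch, not the attached patch: it 
assumes the existing hadoop-nfs XDR, AuthFlavor, VerifierNone and VerifierGSS 
types, and treating the AUTH_SYS verifier body like a "none" verifier is an 
assumption made here only for discussion.
{code:java}
// Hypothetical variant of Verifier#readFlavorAndVerifier, for discussion only.
public static Verifier readFlavorAndVerifier(XDR xdr) {
  AuthFlavor flavor = AuthFlavor.fromValue(xdr.readInt());
  final Verifier verifier;
  if (flavor == AuthFlavor.AUTH_NONE || flavor == AuthFlavor.AUTH_SYS) {
    // Assumption: an AUTH_SYS verifier in a reply carries no data we need,
    // so it can be consumed the same way as an AUTH_NONE verifier.
    verifier = new VerifierNone();
  } else if (flavor == AuthFlavor.RPCSEC_GSS) {
    verifier = new VerifierGSS();
  } else {
    throw new UnsupportedOperationException(
        "Unsupported verifier flavor: " + flavor);
  }
  verifier.read(xdr);
  return verifier;
}
{code}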



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Attachment: HADOOP-15307.004.patch

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch, HADOOP-15307.004.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not include localhost), the 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should also handle AUTH_SYS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15519) KMS fails to read the existing key metadata after upgrading to JDK 1.8u171

2018-06-07 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504866#comment-16504866
 ] 

Wei-Chiu Chuang commented on HADOOP-15519:
--

Hi [~vrathor-hw], thanks for filing the jira. I suspect this is fixed by 
HADOOP-15473? It's a known issue with JDK 8u171.

> KMS fails to read the existing key metadata after upgrading to JDK 1.8u171 
> ---
>
> Key: HADOOP-15519
> URL: https://issues.apache.org/jira/browse/HADOOP-15519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.3
>Reporter: Vipin Rathor
>Priority: Critical
>
> Steps to reproduce are:
>  a. Set up a KMS with any OpenJDK 1.8 release before u171 and create a few 
> KMS keys.
>  b. Update the KMS to run on OpenJDK 1.8u171 and the keys can no longer be 
> read, as can be seen below
> {code:java}
> hadoop key list -metadata
>  : null
> {code}
> c. Going back to the earlier JDK version fixes the issue.
>  
> There is no direct error / stacktrace in kms.log when it fails to read the 
> key metadata. Only Java serialization INFO messages are printed, followed by 
> this one truncated ERROR line in the log, which just says:
> {code:java}
> ERROR RangerKeyStore - 
> {code}
> In some cases, kms.log can also have these lines:
> {code:java}
> 2018-05-18 10:40:46,438 DEBUG RangerKmsAuthorizer - <== 
> RangerKmsAuthorizer.assertAccess(null, rangerkms/node1.host@env.com 
> (auth:KERBEROS), GET_METADATA) 
> 2018-05-18 10:40:46,598 INFO serialization - ObjectInputFilter REJECTED: 
> class org.apache.hadoop.crypto.key.RangerKeyStoreProvider$KeyMetadata, array 
> length: -1, nRefs: 1, depth: 1, bytes: 147, ex: n/a
> 2018-05-18 10:40:46,598 ERROR RangerKeyStore - 
> {code}
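
For illustration, assuming the cause is the JCEKS key serialization filter 
introduced by JDK-8189997 in 8u171 (the behavior HADOOP-15473 works around), 
the REJECTED line above would go away once the filter admits the provider's 
metadata class. The sketch below is not the HADOOP-15473 patch: the property 
name is the JDK's, but the class list is illustrative only and would have to 
be applied before the keystore is first loaded.
{code:java}
/** Hypothetical workaround sketch; the filter value is illustrative, not a default. */
public final class JceksSerialFilterWorkaround {
  private static final String JCEKS_KEY_SERIALFILTER = "jceks.key.serialFilter";

  public static void apply() {
    // Only set the filter if the operator has not already configured one.
    if (System.getProperty(JCEKS_KEY_SERIALFILTER) == null) {
      System.setProperty(JCEKS_KEY_SERIALFILTER,
          "java.lang.Enum;"
          + "java.security.KeyRep;"
          + "java.security.KeyRep$Type;"
          + "javax.crypto.spec.SecretKeySpec;"
          + "org.apache.hadoop.crypto.key.RangerKeyStoreProvider$KeyMetadata;"
          + "!*"); // reject everything else
    }
  }

  private JceksSerialFilterWorkaround() {
  }
}
{code}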



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: 
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multiple authentication mechanisms are configured.  For example, when 
core-site.xml has hadoop.http.authentication.type and yarn-site.xml has 
yarn.timeline-service.http-authentication.type, the result is an attempt to 
perform two Kerberos authentications for the same request.  This in turn 
results in Kerberos triggering a replay attack detection.  The javadocs for 
AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

I've created a patch and tested it on a limited number of functional use cases 
(e.g. the timeline-service issue noted above).  If there is general agreement 
that the change is valid, I'll add unit tests to the patch.

 

  was:
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multiple authentication mechanisms are configured.  For example, when 
core-site.xml has hadoop.http.authentication.type and yarn-site.xml has 
yarn.timeline-service.http-authentication.type, the result is an attempt to 
perform two Kerberos authentications for the same request.  This in turn 
results in Kerberos triggering a replay attack detection.  The javadocs for 
AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

 


> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
> I've created a patch and tested it on a limited number of functional use 
> cases (e.g. the timeline-service issue noted above).  If there is general 
> agreement that the change is valid, I'll add unit tests to the patch.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Attachment: HADOOP-15518-001.patch

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Priority: Major
> Attachments: HADOOP-15518-001.patch
>
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: 
The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
successful authentication has occurred.  This primarily affects situations 
where multiple authentication mechanisms are configured.  For example, when 
core-site.xml has hadoop.http.authentication.type and yarn-site.xml has 
yarn.timeline-service.http-authentication.type, the result is an attempt to 
perform two Kerberos authentications for the same request.  This in turn 
results in Kerberos triggering a replay attack detection.  The javadocs for 
AuthenticationHandler 
([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
 indicate for the authenticate method that
{quote}This method is invoked by the AuthenticationFilter only if the HTTP 
client request is not yet authenticated.
{quote}
This does not appear to be the case in practice.

 

  was:The hadoop-auth AuthenticationFilter will invoke its handler even if a 
prior successful authentication has occurred.  This primarily affects 
situations where multiple authentication mechanisms are configured.  For 
example, when core-site.xml has hadoop.http.authentication.type and 
yarn-site.xml has yarn.timeline-service.http-authentication.type, the result is 
an attempt to perform two Kerberos authentications for the same request.  This 
in turn results in Kerberos triggering a replay attack detection.  


> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Priority: Major
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  The javadocs for 
> AuthenticationHandler 
> ([https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationHandler.java)]
>  indicate for the authenticate method that
> {quote}This method is invoked by the AuthenticationFilter only if the HTTP 
> client request is not yet authenticated.
> {quote}
> This does not appear to be the case in practice.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15519) KMS fails to read the existing key metadata after upgrading to JDK 1.8u171

2018-06-07 Thread Vipin Rathor (JIRA)
Vipin Rathor created HADOOP-15519:
-

 Summary: KMS fails to read the existing key metadata after 
upgrading to JDK 1.8u171 
 Key: HADOOP-15519
 URL: https://issues.apache.org/jira/browse/HADOOP-15519
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.7.3
Reporter: Vipin Rathor


Steps to reproduce are:
 a. Set up a KMS with any OpenJDK 1.8 release before u171 and create a few KMS keys.
 b. Update the KMS to run on OpenJDK 1.8u171 and the keys can no longer be read, 
as can be seen below
{code:java}
hadoop key list -metadata
 : null
{code}
c. Going back to the earlier JDK version fixes the issue.

 

There is no direct error / stacktrace in kms.log when it fails to read the key 
metadata. Only Java serialization INFO messages are printed, followed by this 
one truncated ERROR line in the log, which just says:
{code:java}
ERROR RangerKeyStore - 
{code}
In some cases, kms.log can also have these lines:
{code:java}
2018-05-18 10:40:46,438 DEBUG RangerKmsAuthorizer - <== 
RangerKmsAuthorizer.assertAccess(null, rangerkms/node1.host@env.com 
(auth:KERBEROS), GET_METADATA) 
2018-05-18 10:40:46,598 INFO serialization - ObjectInputFilter REJECTED: class 
org.apache.hadoop.crypto.key.RangerKeyStoreProvider$KeyMetadata, array length: 
-1, nRefs: 1, depth: 1, bytes: 147, ex: n/a
2018-05-18 10:40:46,598 ERROR RangerKeyStore - 
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Minder updated HADOOP-15518:
--
Description: The hadoop-auth AuthenticationFilter will invoke its handler 
even if a prior successful authentication has occurred.  This primarily affects 
situations where multiple authentication mechanisms are configured.  For 
example, when core-site.xml has hadoop.http.authentication.type and 
yarn-site.xml has yarn.timeline-service.http-authentication.type, the result is 
an attempt to perform two Kerberos authentications for the same request.  This 
in turn results in Kerberos triggering a replay attack detection.  

> Authentication filter calling handler after request already authenticated
> -
>
> Key: HADOOP-15518
> URL: https://issues.apache.org/jira/browse/HADOOP-15518
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Kevin Minder
>Priority: Major
>
> The hadoop-auth AuthenticationFilter will invoke its handler even if a prior 
> successful authentication has occurred.  This primarily affects situations 
> where multiple authentication mechanisms are configured.  For example 
> when core-site.xml has hadoop.http.authentication.type and yarn-site.xml 
> has yarn.timeline-service.http-authentication.type, the result is an attempt 
> to perform two Kerberos authentications for the same request.  This in turn 
> results in Kerberos triggering a replay attack detection.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15518) Authentication filter calling handler after request already authenticated

2018-06-07 Thread Kevin Minder (JIRA)
Kevin Minder created HADOOP-15518:
-

 Summary: Authentication filter calling handler after request 
already authenticated
 Key: HADOOP-15518
 URL: https://issues.apache.org/jira/browse/HADOOP-15518
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.1
Reporter: Kevin Minder






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15512) clean up Shell from JDK7 workarounds

2018-06-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504799#comment-16504799
 ] 

Hudson commented on HADOOP-15512:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14377 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14377/])
HADOOP-15512. Clean up Shell from JDK7 workarounds. Contributed by Zsolt 
(stevel: rev f494f0b8968a61bf3aa32b7ca0851b8c744aa70f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java


> clean up Shell from JDK7 workarounds
> 
>
> Key: HADOOP-15512
> URL: https://issues.apache.org/jira/browse/HADOOP-15512
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Zsolt Venczel
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15512.01.patch
>
>
> there are some comments in {{Shell}} about JDK7-specific issues (especially 
> {{runCommand()}}). These workarounds no longer matter, so they can be purged



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15512) clean up Shell from JDK7 workarounds

2018-06-07 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15512:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

LGTM +1

 

thanks for this; it's the little ones that get neglected but that keep the 
overall code quality up

> clean up Shell from JDK7 workarounds
> 
>
> Key: HADOOP-15512
> URL: https://issues.apache.org/jira/browse/HADOOP-15512
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Zsolt Venczel
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15512.01.patch
>
>
> there are some comments in {{Shell}} about JDK7-specific issues (especially 
> {{runCommand()}}). These workarounds no longer matter, so they can be purged



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15517) NPE on MapReduce AM leaves the job in an inconsistent state

2018-06-07 Thread Gonzalo Herreros (JIRA)
Gonzalo Herreros created HADOOP-15517:
-

 Summary: NPE on MapReduce AM leaves the job in an inconsistent 
state
 Key: HADOOP-15517
 URL: https://issues.apache.org/jira/browse/HADOOP-15517
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.3
Reporter: Gonzalo Herreros


On AWS, running a MapReduce job, one of the nodes died and was decommissioned.

However, the AM doesn't seem to handle that well, and from that point the job 
no longer completed mappers correctly.


{code:java}
2018-06-07 14:29:08,686 ERROR [RMCommunicator Allocator] 
org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. 
java.lang.NullPointerException
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleUpdatedNodes(RMContainerAllocator.java:879)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:779)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:259)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:281)
at java.lang.Thread.run(Thread.java:748)
2018-06-07 14:29:08,686 INFO [IPC Server handler 4 on 46577] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request 
from attempt_1528378746527_0011_r_000553_0. startIndex 13112 maxEvents 1453
2018-06-07 14:29:08,686 ERROR [AsyncDispatcher event handler] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.NullPointerException
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$UpdatedNodesTransition.transition(JobImpl.java:2162)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$UpdatedNodesTransition.transition(JobImpl.java:2155)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:997)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:139)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1346)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1342)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:748)
2018-06-07 14:29:08,688 INFO [AsyncDispatcher ShutDown handler] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-06-07 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16504683#comment-16504683
 ] 

Steve Loughran commented on HADOOP-15407:
-

[~fabbri], [~esmanii] the hadoop project needs to think about what to do w.r.t. 
htrace in future, given it's not going to leave incubation. HDFS uses it, so I'm 
not worried about it being added as a dependency here, but it's not going to 
get any maintenance unless we think about co-opting its client-side tracing 
into our own codebase, leaving log collection to other tools. With Todd and 
Colin Patrick McCabe on our committer list, we could make a case for that. 

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch, HADOOP-15407-002.patch, 
> HADOOP-15407-003.patch, HADOOP-15407-004.patch, 
> HADOOP-15407-HADOOP-15407.006.patch, HADOOP-15407-HADOOP-15407.007.patch
>
>
> *{color:#212121}Description{color}*
>  This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver with a heavy focus on optimizing both performance 
> and cost.
>  {color:#212121} {color}
>  *{color:#212121}High level design{color}*
>  At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing over HTTPS. The following 
> URI scheme is used to address individual paths:
>  {color:#212121} {color}
>  
> {color:#212121}abfs[s]://<filesystem>@<accountname>.dfs.core.windows.net/<path>{color}
>  {color:#212121} {color}
>  {color:#212121}ABFS is intended as a replacement to WASB. WASB is not 
> deprecated but is in pure maintenance mode and customers should upgrade to 
> ABFS once it hits General Availability later in CY18.{color}
>  {color:#212121}Benefits of ABFS include:{color}
>  {color:#212121}· Higher scale (capacity, throughput, and IOPS) for Big 
> Data and Analytics workloads by allowing higher limits on storage 
> accounts{color}
>  {color:#212121}· Removing any ramp up time with Storage backend 
> partitioning; blocks are now automatically sharded across partitions in the 
> Storage backend{color}
> {color:#212121}          .         This avoids the need for using 
> temporary/intermediate files, increasing the cost (and framework complexity 
> around committing jobs/tasks){color}
>  {color:#212121}· Enabling much higher read and write throughput on 
> single files (tens of Gbps by default){color}
>  {color:#212121}· Still retaining all of the Azure Blob features 
> customers are familiar with and expect, and gaining the benefits of future 
> Blob features as well{color}
>  {color:#212121}ABFS incorporates Hadoop Filesystem metrics to monitor the 
> file system throughput and operations. Ambari metrics are not currently 
> implemented for ABFS, but will be available soon.{color}
>  {color:#212121} {color}
>  *{color:#212121}Credits and history{color}*
>  Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> {color:#212121}Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar 
> Manii, Amit Singh, Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, 
> and James Baker. {color}
>  {color:#212121} {color}
>  *Test*
>  ABFS has gone through many test procedures including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> Junit tests provided with the driver are capable of running in both 
> sequential/parallel fashion in order to reduce the testing time.
>  {color:#212121}Besides unit tests, we have used ABFS as the default file 
> system in Azure HDInsight. Azure HDInsight will very soon offer ABFS as a 
> storage option. (HDFS is also used but not as default file system.) Various 
> different customer and test workloads have been run against clusters with 
> such configurations for quite some time. Benchmarks such as Tera*, TPC-DS, 
> Spark Streaming and Spark SQL, and others have been run to do scenario, 
> performance, and functional testing. Third parties and customers have also 
> done various testing of ABFS.{color}
>  {color:#212121}The current version reflects to the version of the code 
> tested and used in our production environment.{color}
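
As a usage illustration of the URI scheme described in the quoted description 
above (this is not part of the attached patches; the account, container and 
path names are made up, and the abfs scheme registration is assumed to come 
from the new file system implementation):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // abfs:// for HTTP, abfss:// for HTTPS, as described in the design notes.
    Path path = new Path(
        "abfs://mycontainer@myaccount.dfs.core.windows.net/data/sample.txt");
    FileSystem fs = path.getFileSystem(conf); // resolves the abfs scheme
    try (FSDataInputStream in = fs.open(path)) {
      byte[] buffer = new byte[4096];
      int read = in.read(buffer);
      System.out.println("Read " + read + " bytes from " + path);
    }
  }
}
{code}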



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To