[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383394#comment-14383394
 ] 

Kai Zheng commented on HADOOP-11754:


The logic looks good. It's smart to tell whether security/Kerberos is enabled or 
not. I'm not sure why we changed the tests, which caused the failures. Do we 
need an update, or just a re-trigger, since the dependency HADOOP-11748 was 
already in ?
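
For context, here is a minimal sketch of the kind of check being discussed: deciding whether Kerberos security is enabled before wiring up the authentication filter (which is what reads the signature secret file). This is an illustration only, not the patch; a plain map stands in for Hadoop's Configuration, and only the well-known "hadoop.security.authentication" key is assumed.

```java
import java.util.Map;

public class AuthFilterGate {
    // Kerberos security is considered enabled only when the
    // "hadoop.security.authentication" key is set to "kerberos";
    // the default value is "simple" (non-secure).
    static boolean isSecurityEnabled(Map<String, String> conf) {
        return "kerberos".equalsIgnoreCase(
            conf.getOrDefault("hadoop.security.authentication", "simple"));
    }

    public static void main(String[] args) {
        // Non-secure cluster: no key set, so the filter should be skipped.
        System.out.println(isSecurityEnabled(Map.of()));                // false
        // Secure cluster: key explicitly set to kerberos.
        System.out.println(isSecurityEnabled(
            Map.of("hadoop.security.authentication", "kerberos")));     // true
    }
}
```

With a gate like this, the RM would only initialize the secret-file-backed authentication filter when security is actually on, avoiding the startup failure in non-secure mode.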

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.servic

[jira] [Comment Edited] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383372#comment-14383372
 ] 

Allen Wittenauer edited comment on HADOOP-11746 at 3/27/15 5:56 AM:


-02: 
* still very little new functionality, except that --dirty-workspace is now 
working for me.
* console output cleanup
* fix the /tmp default for the patch scratch space to be 
/tmp/(projname)-test-patch/pid to allow for simultaneous test patch runs
* add in a timer to show how long various steps take (as requested by 
[~raviprak] )
* Fix more "we should have a var rather than hard code this binary" issues
* just skip the findbugs test if findbugs isn't installed
* send the mvn install run to a log file rather than dumping it to the screen
* fix some of the backslash indentation problems generated by the autoformatter


Current console output now looks like:

{noformat}
w$ dev-support/test-patch.sh --dirty-workspace /tmp/H1
Running in developer mode
/tmp/Hadoop-test-patch/54448 has been created


===
===
Testing patch for H1.
===
===

-1 overall

| Vote |   Subsystem | Comment
|  +1  |@author  |  00m 00s  | The patch does not contain any 
 | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
 | any new or modified tests. Please
 | justify why no new tests are needed
 | for this patch. Also please list what
 | manual steps were performed to verify
 | this patch.
|  +1  |  javac  |  04m 30s  | There were no new javac warning 
 | messages.
|  +1  |javadoc  |  06m 15s  | There were no new javadoc warning 
 | messages.
|  +1  |eclipse:eclipse  |  00m 24s  | The patch built with eclipse:eclipse.
|  -1  |  release audit  |  00m 05s  | The applied patch generated 1 
release 
 | audit warnings.


===
===
   Finished build.
===
===
{noformat}

Note the always centered text and the column wrap on the output. :D


was (Author: aw):
-02: 
* still very little new functionality, except that --dirty-workspace is now 
working for me.
* console output cleanup
* fix the /tmp default for the patch scratch space to be 
/tmp/(projname)-test-patch/pid to allow for simultaneous test patch runs
* add in a timer to show how long various steps take (as requested by 
[~raviprak] )
* Fix more "we should have a var rather than hard code this binary" issues
* just skip the findbugs test if findbugs isn't installed
* send the mvn install run to a log file rather than dumping it to the screen
* fix some of the backslash indentation problems generated by the autoformatter


Current console output now looks like:

{code}
w$ dev-support/test-patch.sh --dirty-workspace /tmp/H1
Running in developer mode
/tmp/Hadoop-test-patch/54448 has been created


===
===
Testing patch for H1.
===
===

-1 overall

| Vote |   Subsystem | Comment
|  +1  |@author  |  00m 00s  | The patch does not contain any 
 | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
 | any new or modified tests. Please
 | justify why no new tests are needed
 | for this patch. Also please list what
 | manual steps were performed to verify
 | this patch.
|  +1  |  javac  |  04m 30s  | There were no new javac warning 
 | messages.
|  +1  |javadoc  |  06m 15s  | There were no new javado

[jira] [Issue Comment Deleted] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Comment: was deleted

(was: (and, of course, JIRA's formatting screwed it up. lol.  but it should 
look good in email.  The JIRA formatting is different, so shouldn't suffer like 
that.))

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383374#comment-14383374
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

(and, of course, JIRA's formatting screwed it up. lol.  but it should look good 
in email.  The JIRA formatting is different, so shouldn't suffer like that.)

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch
>
>
> This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-02.patch

-02: 
* still very little new functionality, except that --dirty-workspace is now 
working for me.
* console output cleanup
* fix the /tmp default for the patch scratch space to be 
/tmp/(projname)-test-patch/pid to allow for simultaneous test patch runs
* add in a timer to show how long various steps take (as requested by 
[~raviprak] )
* Fix more "we should have a var rather than hard code this binary" issues
* just skip the findbugs test if findbugs isn't installed
* send the mvn install run to a log file rather than dumping it to the screen
* fix some of the backslash indentation problems generated by the autoformatter


Current console output now looks like:

{code}
w$ dev-support/test-patch.sh --dirty-workspace /tmp/H1
Running in developer mode
/tmp/Hadoop-test-patch/54448 has been created


===
===
Testing patch for H1.
===
===

-1 overall

| Vote |   Subsystem | Comment
|  +1  |@author  |  00m 00s  | The patch does not contain any 
 | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
 | any new or modified tests. Please
 | justify why no new tests are needed
 | for this patch. Also please list what
 | manual steps were performed to verify
 | this patch.
|  +1  |  javac  |  04m 30s  | There were no new javac warning 
 | messages.
|  +1  |javadoc  |  06m 15s  | There were no new javadoc warning 
 | messages.
|  +1  |eclipse:eclipse  |  00m 24s  | The patch built with eclipse:eclipse.
|  -1  |  release audit  |  00m 05s  | The applied patch generated 1 
release 
 | audit warnings.


===
===
   Finished build.
===
===
{code}

Note the always centered text and the column wrap on the output. :D

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch
>
>
> This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-03-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10392:
---
Target Version/s: 2.8.0  (was: 2.7.0)

> Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
> 
>
> Key: HADOOP-10392
> URL: https://issues.apache.org/jira/browse/HADOOP-10392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
> HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
> HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.7.patch, 
> HADOOP-10392.8.patch, HADOOP-10392.patch
>
>
> There're some methods calling Path.makeQualified(FileSystem), which causes 
> javac warning.





[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-03-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10392:
---
Attachment: HADOOP-10392.8.patch

Rebased for the latest trunk.
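
The change this issue asks for is a call-direction swap: qualify a path via FileSystem#makeQualified(Path) instead of the deprecated Path#makeQualified(FileSystem). A stand-in sketch (not Hadoop's real classes; the types here only mirror the API shape):

```java
import java.net.URI;

class Path {
    final String p;
    Path(String p) { this.p = p; }
    // Deprecated direction: the Path asks the FileSystem to qualify it.
    @Deprecated
    Path makeQualified(FileSystem fs) { return fs.makeQualified(this); }
}

class FileSystem {
    final URI uri;
    FileSystem(URI uri) { this.uri = uri; }
    // Preferred direction: the FileSystem qualifies a Path against its URI.
    Path makeQualified(Path path) { return new Path(uri.toString() + path.p); }
}

public class MakeQualifiedDemo {
    public static void main(String[] args) {
        FileSystem fs = new FileSystem(URI.create("hdfs://nn:8020"));
        Path rel = new Path("/user/alice");
        // Old (triggers a deprecation warning in real Hadoop):
        //   rel.makeQualified(fs);
        // New:
        System.out.println(fs.makeQualified(rel).p);
    }
}
```

Both calls produce the same qualified path; the patch simply switches callers to the non-deprecated direction, which is what silences the javac warnings.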

> Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
> 
>
> Key: HADOOP-10392
> URL: https://issues.apache.org/jira/browse/HADOOP-10392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
> HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
> HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.7.patch, 
> HADOOP-10392.8.patch, HADOOP-10392.patch
>
>
> There're some methods calling Path.makeQualified(FileSystem), which causes 
> javac warning.





[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383347#comment-14383347
 ] 

Hudson commented on HADOOP-11691:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #7445 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7445/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props


> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 2.7.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
> in error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}





[jira] [Updated] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11691:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I committed this to trunk, branch-2 and branch-2.7.  Kiran, 
thank you for the patch.  Remus and Chuan, thank you for helping with code 
review and testing.

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 2.7.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
> in error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}





[jira] [Resolved] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-11257.

  Resolution: Fixed
Hadoop Flags: Reviewed

I committed the addendum patch to branch-2 and branch-2.7.  [~iwasakims], thank 
you for acting so quickly to provide the patch.

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
> HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
> HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.





[jira] [Commented] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383256#comment-14383256
 ] 

Hadoop QA commented on HADOOP-11762:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707672/HADOOP-11762.000.patch
  against trunk revision 47782cb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-openstack.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6009//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6009//console

This message is automatically generated.

> Enable swift distcp to secure HDFS
> --
>
> Key: HADOOP-11762
> URL: https://issues.apache.org/jira/browse/HADOOP-11762
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HADOOP-11762.000.patch
>
>
> Even we can use "dfs -put" or "dfs -cp" to move data between swift and 
> secured HDFS, it will be impractical for moving huge amount of data like 10TB 
> or larger.
> Current Hadoop code will result in :"java.lang.IllegalArgumentException: 
> java.net.UnknownHostException: container.swiftdomain" 
> Since it does not support token feature in SwiftNativeFileSystem right now, 
> it will be reasonable that we override the "getCanonicalServiceName" method 
> like other filesystem extensions (S3FileSystem, S3AFileSystem)





[jira] [Updated] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11762:
-
Attachment: HADOOP-11762.000.patch

> Enable swift distcp to secure HDFS
> --
>
> Key: HADOOP-11762
> URL: https://issues.apache.org/jira/browse/HADOOP-11762
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HADOOP-11762.000.patch
>
>
> Even we can use "dfs -put" or "dfs -cp" to move data between swift and 
> secured HDFS, it will be impractical for moving huge amount of data like 10TB 
> or larger.
> Current Hadoop code will result in :"java.lang.IllegalArgumentException: 
> java.net.UnknownHostException: container.swiftdomain" 
> Since it does not support token feature in SwiftNativeFileSystem right now, 
> it will be reasonable that we override the "getCanonicalServiceName" method 
> like other filesystem extensions (S3FileSystem, S3AFileSystem)





[jira] [Updated] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11762:
-
Attachment: (was: HADOOP-11762.000.patch)

> Enable swift distcp to secure HDFS
> --
>
> Key: HADOOP-11762
> URL: https://issues.apache.org/jira/browse/HADOOP-11762
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HADOOP-11762.000.patch
>
>
> Even we can use "dfs -put" or "dfs -cp" to move data between swift and 
> secured HDFS, it will be impractical for moving huge amount of data like 10TB 
> or larger.
> Current Hadoop code will result in :"java.lang.IllegalArgumentException: 
> java.net.UnknownHostException: container.swiftdomain" 
> Since it does not support token feature in SwiftNativeFileSystem right now, 
> it will be reasonable that we override the "getCanonicalServiceName" method 
> like other filesystem extensions (S3FileSystem, S3AFileSystem)





[jira] [Updated] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11762:
-
Status: Patch Available  (was: Open)

> Enable swift distcp to secure HDFS
> --
>
> Key: HADOOP-11762
> URL: https://issues.apache.org/jira/browse/HADOOP-11762
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.4.0, 2.3.0
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HADOOP-11762.000.patch
>
>
> Even we can use "dfs -put" or "dfs -cp" to move data between swift and 
> secured HDFS, it will be impractical for moving huge amount of data like 10TB 
> or larger.
> Current Hadoop code will result in :"java.lang.IllegalArgumentException: 
> java.net.UnknownHostException: container.swiftdomain" 
> Since it does not support token feature in SwiftNativeFileSystem right now, 
> it will be reasonable that we override the "getCanonicalServiceName" method 
> like other filesystem extensions (S3FileSystem, S3AFileSystem)





[jira] [Updated] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11762:
-
Attachment: HADOOP-11762.000.patch

> Enable swift distcp to secure HDFS
> --
>
> Key: HADOOP-11762
> URL: https://issues.apache.org/jira/browse/HADOOP-11762
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/swift
>Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.6.0, 2.5.1
>Reporter: Chen He
>Assignee: Chen He
> Attachments: HADOOP-11762.000.patch
>
>
> Even we can use "dfs -put" or "dfs -cp" to move data between swift and 
> secured HDFS, it will be impractical for moving huge amount of data like 10TB 
> or larger.
> Current Hadoop code will result in :"java.lang.IllegalArgumentException: 
> java.net.UnknownHostException: container.swiftdomain" 
> Since it does not support token feature in SwiftNativeFileSystem right now, 
> it will be reasonable that we override the "getCanonicalServiceName" method 
> like other filesystem extensions (S3FileSystem, S3AFileSystem)





[jira] [Created] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11762:


 Summary: Enable swift distcp to secure HDFS
 Key: HADOOP-11762
 URL: https://issues.apache.org/jira/browse/HADOOP-11762
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/swift
Affects Versions: 2.5.1, 2.6.0, 2.4.1, 2.5.0, 2.4.0, 2.3.0
Reporter: Chen He
Assignee: Chen He


Even though we can use "dfs -put" or "dfs -cp" to move data between Swift and 
secured HDFS, it is impractical for moving huge amounts of data, such as 10TB or 
larger.

The current Hadoop code results in: "java.lang.IllegalArgumentException: 
java.net.UnknownHostException: container.swiftdomain" 

Since SwiftNativeFileSystem does not support the token feature right now, it is 
reasonable to override the "getCanonicalServiceName" method as other filesystem 
extensions do (S3FileSystem, S3AFileSystem)
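
A hedged sketch of the proposed fix, using stand-in types rather than Hadoop's real classes: a filesystem that does not support delegation tokens can return null from getCanonicalServiceName(), so the token-collection machinery skips it instead of trying to resolve the swift container host.

```java
abstract class FileSystem {
    // Default behaviour derives a service name from the filesystem authority;
    // for swift:// URIs this is what leads to the UnknownHostException on
    // "container.swiftdomain" during distcp token collection.
    String getCanonicalServiceName() { return "container.swiftdomain:0"; }
}

class SwiftNativeFileSystem extends FileSystem {
    // Returning null tells the delegation-token machinery "no tokens here",
    // mirroring what S3FileSystem / S3AFileSystem do.
    @Override
    String getCanonicalServiceName() { return null; }
}

public class SwiftServiceNameDemo {
    public static void main(String[] args) {
        FileSystem fs = new SwiftNativeFileSystem();
        System.out.println(fs.getCanonicalServiceName());   // prints null
    }
}
```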






[jira] [Updated] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11759:
-
Labels: newbie++  (was: )

> TockenCache doc has minor problem
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Updated] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11760:
-
Labels: newbie++  (was: )

> Typo in DistCp.java
> ---
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Commented] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383188#comment-14383188
 ] 

Chen He commented on HADOOP-11760:
--

No, "EXception"

> Typo in DistCp.java
> ---
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383184#comment-14383184
 ] 

Kai Zheng commented on HADOOP-11664:


Hi Zhe, thanks for your review.
I agree with you that it doesn't have to be configurable. It just follows the 
convention in Hadoop, because some people prefer to find configurable items in a 
file like core-default.xml to see what needs to be prepared. Hard-coded values 
are sometimes just not easy to discover.

If you agree we can keep the configurable item, I will have to change the 
property key ! Thanks for pointing it out.
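
The convention under discussion boils down to a key-with-default lookup, which this small sketch illustrates. The key name here is hypothetical (that is the property key being renamed); only the default file name "ec-schema.xml" comes from the issue description.

```java
import java.util.Map;

public class EcSchemaConf {
    // Hypothetical property key; the actual key is being changed per review.
    static final String SCHEMA_FILE_KEY = "io.erasurecode.schema.file";
    // Default from the issue description: ec-schema.xml in the conf folder.
    static final String SCHEMA_FILE_DEFAULT = "ec-schema.xml";

    // Look up the schema file location, falling back to the default so the
    // value stays discoverable in *-default.xml without being mandatory.
    static String schemaFile(Map<String, String> conf) {
        return conf.getOrDefault(SCHEMA_FILE_KEY, SCHEMA_FILE_DEFAULT);
    }

    public static void main(String[] args) {
        System.out.println(schemaFile(Map.of()));   // prints ec-schema.xml
    }
}
```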

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
> HDFS-7371_v1.patch
>
>
> System administrator can configure multiple EC codecs in hdfs-site.xml file, 
> and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be utilized and specified by 
> the schema name for a folder or EC ZONE to enforce EC. Once a schema is used 
> to define an EC ZONE, then its associated parameter values will be stored as 
> xattributes and respected thereafter.





[jira] [Commented] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383174#comment-14383174
 ] 

Hadoop QA commented on HADOOP-11761:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707659/HADOOP-11761-032615.patch
  against trunk revision 47782cb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6008//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6008//console

This message is automatically generated.

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 





[jira] [Commented] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383161#comment-14383161
 ] 

Kai Zheng commented on HADOOP-11740:


Thanks [~zhz] for the good thoughts.
bq.We can get rid of ErasureCoder since it has a single subclass now 
(AbstractErasureCoder)
Yes, interface versus abstract class is always a good question in Java. My 
feeling is that in a framework, where a type is meant to be pluggable and 
implemented by users, an interface is good to have. So I suggest we keep the 
{{ErasureCoder}} interface and convert the {{ErasureCodingStep}} interface to a 
class.
bq.If ECBlockGroup can provide erased indices, we can further combine encoding 
and decoding classes
I'm not sure about that: erased indices have to be computed from the input and 
output blocks only for decoders, and encoders have no such logic. The decoding 
in the current RS coder is rather simple, but I believe it will be much more 
complicated for codes like LRC and Hitchhiker, so keeping the encoding and 
decoding classes separate is desirable.
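
A rough sketch of the design above (class and method names are simplified 
stand-ins for discussion, not the actual Hadoop APIs): keep the coder type an 
interface so custom codecs stay pluggable, share common wiring in an abstract 
base class, and keep encoder and decoder classes separate because only the 
decoder deals with erased indices.

```java
// Hypothetical sketch, not the real Hadoop classes: ErasureCoder stays an
// interface so user-supplied codecs remain pluggable.
interface ErasureCoder {
  int getNumDataUnits();
  int getNumParityUnits();
}

// Common wiring lives in an abstract base class.
abstract class AbstractErasureCoder implements ErasureCoder {
  private final int numDataUnits;
  private final int numParityUnits;

  AbstractErasureCoder(int numDataUnits, int numParityUnits) {
    this.numDataUnits = numDataUnits;
    this.numParityUnits = numParityUnits;
  }

  @Override
  public int getNumDataUnits() { return numDataUnits; }

  @Override
  public int getNumParityUnits() { return numParityUnits; }
}

// Encoding and decoding remain separate classes: only the decoder needs the
// erased indices, computed from its input and output blocks.
class RSEncoderSketch extends AbstractErasureCoder {
  RSEncoderSketch() { super(6, 3); }  // RS(6,3): 6 data units, 3 parity units
}

class RSDecoderSketch extends AbstractErasureCoder {
  private final int[] erasedIndexes;

  RSDecoderSketch(int[] erasedIndexes) {
    super(6, 3);
    this.erasedIndexes = erasedIndexes.clone();
  }

  int[] getErasedIndexes() { return erasedIndexes.clone(); }
}
```

More complex codes (LRC, Hitchhiker) would then only grow the decoder side 
without complicating the encoder.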

> Combine erasure encoder and decoder interfaces
> --
>
> Key: HADOOP-11740
> URL: https://issues.apache.org/jira/browse/HADOOP-11740
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-11740-000.patch
>
>
> Rationale [discussed | 
> https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
>  under HDFS-7337.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383156#comment-14383156
 ] 

Hadoop QA commented on HADOOP-11754:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707657/HADOOP-11754.001.patch
  against trunk revision 47782cb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth:

  
org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6007//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6007//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6007//console

This message is automatically generated.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(

[jira] [Commented] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383139#comment-14383139
 ] 

Brahma Reddy Battula commented on HADOOP-11759:
---

Thanks for reporting. I will remove the stray {{@param p}} tag.
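
For reference, a sketch of the corrected javadoc: the stray tag is dropped so 
the doc matches the actual signature (the parameter descriptions here are 
illustrative additions, not the committed wording):

```java
/**
 * Get delegation token for a specific FS.
 * @param fs the FileSystem to obtain tokens for
 * @param credentials the Credentials object the tokens are added to
 * @param conf the Configuration in use
 * @throws IOException if token retrieval fails
 */
static void obtainTokensForNamenodesInternal(FileSystem fs,
    Credentials credentials, Configuration conf) throws IOException {
  // ... unchanged method body ...
}
```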

> TockenCache doc has minor problem
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Assigned] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-11759:
-

Assignee: Brahma Reddy Battula

> TockenCache doc has minor problem
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Updated] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11759:
--
Labels:   (was: newbie++)

> TockenCache doc has minor problem
> -
>
> Key: HADOOP-11759
> URL: https://issues.apache.org/jira/browse/HADOOP-11759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>
> /**
>* get delegation token for a specific FS
>* @param fs
>* @param credentials
>* @param p
>* @param conf
>* @throws IOException
>*/
>   static void obtainTokensForNamenodesInternal(FileSystem fs, 
>   Credentials credentials, Configuration conf) throws IOException {





[jira] [Commented] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383136#comment-14383136
 ] 

Brahma Reddy Battula commented on HADOOP-11760:
---

Thanks for reporting. I assume you mean "staging"?

> Typo in DistCp.java
> ---
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Updated] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11760:
--
Labels:   (was: newbie++)

> Typo in DistCp.java
> ---
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Assigned] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-11760:
-

Assignee: Brahma Reddy Battula

> Typo in DistCp.java
> ---
>
> Key: HADOOP-11760
> URL: https://issues.apache.org/jira/browse/HADOOP-11760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
>
> /**
>* Create a default working folder for the job, under the
>* job staging directory
>*
>* @return Returns the working folder information
>* @throws Exception - EXception if any
>*/
>   private Path createMetaFolderPath() throws Exception {





[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Labels: findbugs  (was: )

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 





[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Attachment: HADOOP-11761-032615.patch

This issue looks really odd, since I had cleaned up all findbugs warnings in 
HADOOP-11379. After looking into it, it seems the fix in HADOOP-10670 
introduced the warning at its current location. In the findbugs log of 
HADOOP-10670, I noticed the following lines:
{code}
==
==
Determining number of patched Findbugs warnings.
==
==


  Running findbugs in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
/home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
-DskipTests -DHadoopPatchProcess < /dev/null > 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/../patchprocess/patchFindBugsOutputhadoop-mapreduce-client-core.txt
 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build
  Running findbugs in hadoop-tools/hadoop-archives
/home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs 
-DskipTests -DHadoopPatchProcess < /dev/null > 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/../patchprocess/patchFindBugsOutputhadoop-archives.txt
 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build
Found 0 Findbugs warnings 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-tools/hadoop-archives/target/findbugsXml.xml)
Found 0 Findbugs warnings 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/findbugsXml.xml)
{code}
So apparently Jenkins ran findbugs against the wrong modules for HADOOP-10670. 
I reran findbugs locally against hadoop-auth, and the warning is gone after 
this quick fix.
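
The local rerun can be reproduced roughly as follows (run from the root of a 
Hadoop source checkout; the mvn goal mirrors the Jenkins invocation shown 
above):

```shell
# Run findbugs against only the hadoop-auth module, where the warning lives.
cd hadoop-common-project/hadoop-auth
mvn clean test findbugs:findbugs -DskipTests
# Inspect the report; with the patch applied it should contain no
# BugInstance entries.
less target/findbugsXml.xml
```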

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: findbugs
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 





[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Status: Patch Available  (was: Open)

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
> Attachments: HADOOP-11761-032615.patch
>
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 





[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Status: Patch Available  (was: Open)

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java

[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Attachment: HADOOP-11754.001.patch

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch, HADOOP-11754.001.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.i

[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383084#comment-14383084
 ] 

Hudson commented on HADOOP-11748:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7444 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7444/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
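
As a concrete sketch of the file-based flow that replaces the inline secret 
(the property names are the standard Hadoop ones; the secret file path is an 
example):

```xml
<!-- core-site.xml sketch: the secret lives in a file readable only by the
     service user and never appears in the configuration in clear text. -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>/etc/hadoop/http-auth-signature-secret</value>
</property>
```

The initializer reads the secret file at startup and fails fast if it cannot, 
which is the behavior visible in the HADOOP-11754 stack traces elsewhere in 
this thread.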


> The secrets of auth cookies should not be specified in configuration in clear 
> text
> --
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exists. The initializer will set the 
> property if it read the file correctly.
> *There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Priority: Minor  (was: Major)

> Fix findbugs warnings in org.apache.hadoop.security.authentication
> --
>
> Key: HADOOP-11761
> URL: https://issues.apache.org/jira/browse/HADOOP-11761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>
> As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
> org.apache.hadoop.security.authentication. 





[jira] [Updated] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk, branch-2 and branch-2.7. Thanks 
[~gtCarrera9] for the contribution.

> The secrets of auth cookies should not be specified in configuration in clear 
> text
> --
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out of the box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Updated] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

Summary: The secrets of auth cookies should not be specified in 
configuration in clear text  (was: Secrets for auth cookies can be specified in 
clear text)

> The secrets of auth cookies should not be specified in configuration in clear 
> text
> --
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out of the box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383056#comment-14383056
 ] 

Jing Zhao commented on HADOOP-11748:


+1

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out of the box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Created] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)
Li Lu created HADOOP-11761:
--

 Summary: Fix findbugs warnings in 
org.apache.hadoop.security.authentication
 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
org.apache.hadoop.security.authentication. 





[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383047#comment-14383047
 ] 

Haohui Mai commented on HADOOP-11748:
-

The findbugs warning seems to originate from HADOOP-10670. I'll file 
another jira to fix it.

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out of the box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383033#comment-14383033
 ] 

Hadoop QA commented on HADOOP-11748:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707639/HADOOP-11748.001.patch
  against trunk revision 5695c7a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth hadoop-hdfs-project/hadoop-hdfs-httpfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6006//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6006//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6006//console

This message is automatically generated.

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out of the box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Created] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11760:


 Summary: Typo in DistCp.java
 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Priority: Trivial


/**
   * Create a default working folder for the job, under the
   * job staging directory
   *
   * @return Returns the working folder information
   * @throws Exception - EXception if any
   */
  private Path createMetaFolderPath() throws Exception {





[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383027#comment-14383027
 ] 

Zhe Zhang commented on HADOOP-11664:


Thanks Kai for the patch! The main logic looks good. Just 1 minor comment:

Is it necessary to configure the name of the XML file? I suggest we just 
hard-code the file name to simplify the code.
{code}
+  public static final String IO_ERASURECODE_SCHEMA_FILE_KEY =
+  "hadoop.io.erasurecode.";
+  public static final String IO_ERASURECODE_SCHEMA_FILE_DEFAULT =
+  "ecschema-def.xml";
{code}
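If the file name were hard-coded as suggested, the loading path could be sketched as below. This is illustrative only; the class and method names are hypothetical and not part of the actual patch.

```java
// Illustrative sketch: hard-coding the schema definition file name
// instead of exposing it through a configuration key.
import java.io.File;

public class ECSchemaFileResolver {
    // Fixed name; no "hadoop.io.erasurecode.*" key required.
    public static final String SCHEMA_FILE_NAME = "ecschema-def.xml";

    // Resolve the schema file relative to the Hadoop conf directory.
    public static File resolveSchemaFile(String confDir) {
        return new File(confDir, SCHEMA_FILE_NAME);
    }

    public static void main(String[] args) {
        System.out.println(resolveSchemaFile("/etc/hadoop/conf").getPath());
    }
}
```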

> Loading predefined EC schemas from configuration
> 
>
> Key: HADOOP-11664
> URL: https://issues.apache.org/jira/browse/HADOOP-11664
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
> HDFS-7371_v1.patch
>
>
> System administrator can configure multiple EC codecs in hdfs-site.xml file, 
> and codec instances or schemas in a new configuration file named 
> ec-schema.xml in the conf folder. A codec can be referenced by its instance 
> or schema using the codec name, and a schema can be utilized and specified by 
> the schema name for a folder or EC ZONE to enforce EC. Once a schema is used 
> to define an EC ZONE, then its associated parameter values will be stored as 
> xattributes and respected thereafter.





[jira] [Created] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11759:


 Summary: TockenCache doc has minor problem
 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0, 3.0.0
Reporter: Chen He
Priority: Trivial


/**
   * get delegation token for a specific FS
   * @param fs
   * @param credentials
   * @param p
   * @param conf
   * @throws IOException
   */
  static void obtainTokensForNamenodesInternal(FileSystem fs, 
  Credentials credentials, Configuration conf) throws IOException {





[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Target Version/s: 2.7.0

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.io.IOException: Problem in starting 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383016#comment-14383016
 ] 

Haohui Mai commented on HADOOP-11754:
-

Uploaded a patch to implement the second approach. In insecure mode, the 
{{AuthenticationFilterInitializer}} will fall back to 
{{RandomSignerSecretProvider}} when the secret file is unavailable. Note that 
the patch is based on HADOOP-11748.
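The fallback described above can be sketched roughly as follows. The class and method names here are hypothetical; the real logic lives in AuthenticationFilterInitializer.

```java
// Illustrative sketch: in insecure mode, fall back to a random signer
// secret when the secret file is unavailable; in secure mode, keep failing.
import java.io.File;

public class SecretProviderChooser {
    static String chooseProvider(boolean securityEnabled, File secretFile) {
        if (secretFile.exists() && secretFile.canRead()) {
            return "file";      // like FileSignerSecretProvider
        }
        if (!securityEnabled) {
            return "random";    // like RandomSignerSecretProvider
        }
        // Secure mode still requires a readable secret file.
        throw new RuntimeException("Could not read signature secret file: "
            + secretFile.getPath());
    }

    public static void main(String[] args) {
        File missing = new File("/nonexistent/secret");
        System.out.println(chooseProvider(false, missing)); // insecure mode
    }
}
```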

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(Reso

[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Attachment: HADOOP-11754.000.patch

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
> HADOOP-11754.000.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.io.IOException: Problem i

[jira] [Commented] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383005#comment-14383005
 ] 

Chris Nauroth commented on HADOOP-11257:


On further thought, I'm also +1 for the addendum patch that sends the warning 
message to stderr.  This is exactly how it works on trunk via the 
{{hadoop_error}} function.

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
> HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
> HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.





[jira] [Updated] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-11740:
---
Attachment: HADOOP-11740-000.patch

This initial patch simply removes {{ErasureEncoder}} and {{ErasureDecoder}}. I 
think the following further simplifications are possible:
# We can get rid of {{ErasureCoder}} since it has a single subclass now 
({{AbstractErasureCoder}}
# Similarly, maybe we can get rid of {{ErasureCodingStep}} since 
{{AbstractErasureCodingStep}} provides enough abstraction anyway
# If {{ECBlockGroup}} can provide erased indices, we can further combine 
encoding and decoding classes
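The idea in point 3 – one coder whose behavior is selected by the erased indices – can be illustrated with a toy XOR parity coder. This is a hypothetical sketch, not the actual Hadoop EC framework API.

```java
// Toy sketch: a single coder class covering both encode and decode paths,
// using simple XOR parity. Hypothetical API for illustration only.
public class XorCoder {
    // Encode: compute the parity of all data units.
    static int encode(int[] data) {
        int parity = 0;
        for (int d : data) parity ^= d;
        return parity;
    }

    // Decode: recover the single erased unit from survivors plus parity.
    static int decode(int[] survivors, int parity) {
        int recovered = parity;
        for (int s : survivors) recovered ^= s;
        return recovered;
    }

    public static void main(String[] args) {
        int[] data = {5, 9, 12};
        int parity = encode(data);
        // Pretend data[1] (= 9) was lost; recover it from the rest.
        System.out.println(decode(new int[]{5, 12}, parity)); // prints 9
    }
}
```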

> Combine erasure encoder and decoder interfaces
> --
>
> Key: HADOOP-11740
> URL: https://issues.apache.org/jira/browse/HADOOP-11740
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-11740-000.patch
>
>
> Rationale [discussed | 
> https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
>  under HDFS-7337.





[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382953#comment-14382953
 ] 

Li Lu commented on HADOOP-11748:


Thanks [~wheat9] for continuing on this. The fix on TestAuthenticationFilter 
looks good to me. 

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property if it reads the file correctly.
> * There is no way to specify the secret in the configuration out of the box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382936#comment-14382936
 ] 

Haohui Mai commented on HADOOP-11748:
-

Continuing [~gtCarrera]'s work to fix the unit tests.

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property only if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

Status: Patch Available  (was: Open)

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exist. The initializer will set the 
> property only if it reads the file correctly.
> * There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

Attachment: HADOOP-11748.001.patch

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exists. The initializer will set the 
> property if it read the file correctly.
> *There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Work started] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11740 started by Zhe Zhang.
--
> Combine erasure encoder and decoder interfaces
> --
>
> Key: HADOOP-11740
> URL: https://issues.apache.org/jira/browse/HADOOP-11740
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> Rationale [discussed | 
> https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
>  under HDFS-7337.





[jira] [Assigned] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reassigned HADOOP-11740:
--

Assignee: Zhe Zhang

> Combine erasure encoder and decoder interfaces
> --
>
> Key: HADOOP-11740
> URL: https://issues.apache.org/jira/browse/HADOOP-11740
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> Rationale [discussed | 
> https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
>  under HDFS-7337.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382895#comment-14382895
 ] 

Kai Zheng commented on HADOOP-11754:


You're right. To be safer, we may also need to check whether the file is empty 
when deciding to unset the property. It is kind of dirty.

Maybe we can use the 2nd way as a workaround for this release to keep the 
original behavior, and the 1st way in the next release to finally clean up?

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.R

[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11748:
---
Attachment: HADOOP-11748-032615-poc.patch

Did some work to change the {{StringSecretProvider}} class to be test-only. 
Most of the work is done, but TestAuthenticationFilter is failing because we're 
changing the default filters. For a comprehensive fix we need to change the 
mockito settings in TestAuthenticationFilter to create {{StringSecretProvider}}s 
in {{config}} objects. 

> Secrets for auth cookies can be specified in clear text
> ---
>
> Key: HADOOP-11748
> URL: https://issues.apache.org/jira/browse/HADOOP-11748
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>Priority: Critical
> Attachments: HADOOP-11748-032615-poc.patch
>
>
> Based on the discussion on HADOOP-10670, this jira proposes to remove 
> {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
> and security vulnerabilities.
> {quote}
> My understanding is that the use case of inlining the secret is never 
> supported. The property is used to pass the secret internally. The way it 
> works before HADOOP-10868 is the following:
> * Users specify the initializer of the authentication filter in the 
> configuration.
> * AuthenticationFilterInitializer reads the secret file. The server will not 
> start if the secret file does not exists. The initializer will set the 
> property if it read the file correctly.
> *There is no way to specify the secret in the configuration out-of-the-box – 
> the secret is always overwritten by AuthenticationFilterInitializer.
> {quote}





[jira] [Updated] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11257:
-
Priority: Blocker  (was: Major)

Marked as a blocker for 2.7. I think we should get in the patch that prints it 
to stderr.

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
> HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
> HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382879#comment-14382879
 ] 

Zhijie Shen commented on HADOOP-11754:
--

bq. There are two ways to do this

Prefer the second way. We still want to load the auth filter with the pseudo 
auth handler to accept "user.name=blah blah". Moreover, before HADOOP-10670, the 
semantics were to fall back to a random secret if no customized secret was given, 
whether it came from the config directly or was read from a configured secret 
file. After that jira, the semantics changed to also fail when an error happens 
while reading the secret file. So previously an empty secret file would work. 
Now, even though no read failure happens, I'm afraid an empty secret file will 
still bring down the auth filter with a null secret object.
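The old-versus-new semantics described above can be sketched in a few lines of plain JDK code. This is a hypothetical model, not the actual `FileSignerSecretProvider` code: `secretOrRandom` mimics the pre-HADOOP-10670 fallback, while `readSecret` mimics the post-HADOOP-10670 behavior where a read error aborts startup and an empty file "succeeds" with a zero-length secret.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;

// Hypothetical sketch of the two secret-loading semantics under discussion.
public class SecretLoader {
    // Pre-HADOOP-10670 (sketch): any missing or empty secret falls back to a
    // random one, so an empty secret file still produced a working filter.
    public static byte[] secretOrRandom(byte[] fromFile) {
        return (fromFile == null || fromFile.length == 0) ? randomSecret() : fromFile;
    }

    // Post-HADOOP-10670 (sketch): a read error aborts startup, while an empty
    // file reads "successfully" and yields a zero-length secret that later
    // breaks the filter.
    public static byte[] readSecret(Path secretFile) {
        try {
            return Files.readAllBytes(secretFile);
        } catch (IOException e) {
            throw new RuntimeException(
                "Could not read signature secret file: " + secretFile, e);
        }
    }

    public static byte[] randomSecret() {
        byte[] b = new byte[32];
        new SecureRandom().nextBytes(b);
        return b;
    }
}
```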



> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> Resource

[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382815#comment-14382815
 ] 

Hudson commented on HADOOP-11553:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7443/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-project/src/site/site.xml
* dev-support/shelldocs.py
* hadoop-common-project/hadoop-common/pom.xml
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Updated] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11758:
--
Attachment: testWriteTraceHooks.html

See e.g. DFSOutputStream#writeChunk in the attached testWriteTraceHooks.html.

> Add options to filter out too much granular tracing spans
> -
>
> Key: HADOOP-11758
> URL: https://issues.apache.org/jira/browse/HADOOP-11758
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tracing
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: testWriteTraceHooks.html
>
>
> in order to avoid the span receiver's queue spilling over





[jira] [Created] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-26 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-11758:
-

 Summary: Add options to filter out too much granular tracing spans
 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


in order to avoid the span receiver's queue spilling over





[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11553:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

This has been committed to trunk.

Thanks for the review!

> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11553:
--
Issue Type: New Feature  (was: Improvement)

> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382779#comment-14382779
 ] 

Kai Zheng commented on HADOOP-11754:


It sounds complete! Just a note for the 2nd way: to allow RM to fall back to the 
RandomSigner, we can unset or remove the file property. The current 
AuthenticationFilter already performs the fallback when it does not see the file 
property, so we don't have to bring back the original RM-specific code.
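The workaround Kai describes can be sketched with plain JDK collections. This is a hypothetical model of the selection logic, not the actual AuthenticationFilter code; the `signature.secret.file` key name and the `SignerSelector` class are illustrative: removing the secret-file property makes the filter pick the random signer instead of the file-based one.

```java
import java.util.Map;

// Hypothetical sketch: the filter chooses its signer based on whether the
// secret-file property is present, so unsetting the property triggers the
// random-signer fallback without any RM-specific code.
public class SignerSelector {
    public static final String SECRET_FILE_KEY = "signature.secret.file";

    // Mirrors the fallback: no file property configured -> random signer.
    public static String chooseSigner(Map<String, String> conf) {
        String file = conf.get(SECRET_FILE_KEY);
        return (file == null || file.isEmpty()) ? "RandomSigner" : "FileSigner";
    }

    // The proposed non-secure-mode workaround: if the configured secret file
    // does not exist, remove the property so the filter falls back.
    public static void unsetIfMissing(Map<String, String> conf, boolean fileExists) {
        if (!fileExists) {
            conf.remove(SECRET_FILE_KEY);
        }
    }
}
```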

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.s

[jira] [Updated] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11257:
--
Attachment: HADOOP-11257-branch-2.addendum.001.patch

I attached a patch that echoes the warning message to stderr.

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
> HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
> HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382748#comment-14382748
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11754:
--

Okay, then how about this?
 - In secure mode, we always had the signature file configured by default in 
the default config files, and if that file didn't exist, we failed the daemons 
(in HDFS as well as YARN). We should keep this behavior here.
 - In non-secure mode, before HADOOP-10670, RM didn't fail the daemon if the 
default signature file didn't exist, but it starts failing after HADOOP-10670. 
We should fix this so that RM does not fail. There are two ways to do this:
 -- Not use the filter at all for the ResourceManager in non-secure mode - 
other daemons already do this. No cookies would be sent to RM clients, which 
should be okay in non-secure mode.
 -- Use the filter in RM in non-secure mode too, but fall back to the 
RandomSigner-signed cookie as it is today. This can be done by keeping the 
signer choice code in each of the individual filter-initializers.

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.h

[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382743#comment-14382743
 ] 

Hadoop QA commented on HADOOP-11757:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707589/HDFS-7989.002.patch
  against trunk revision 61df1b2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6005//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6005//console

This message is automatically generated.

> NFS gateway should shutdown when it can't start UDP or TCP server
> -
>
> Key: HADOOP-11757
> URL: https://issues.apache.org/jira/browse/HADOOP-11757
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch
>
>
> Unlike the Portmap, the Nfs3 class doesn't shut down when the service can't start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11553:
---
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

+1 for patch v06.  Thank you for the documentation, Allen.

> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Commented] (HADOOP-11660) Add support for hardware crc on ARM aarch64 architecture

2015-03-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382721#comment-14382721
 ] 

Colin Patrick McCabe commented on HADOOP-11660:
---

Thank you for your patience, [~enevill].  I think this is almost ready to 
commit.

{code}
168 ELSEIF (CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64")
169   set(BULK_CRC_ARCH_SOURCE_FIlE "${D}/util/bulk_crc32_aarch64.c")
170 ENDIF()
{code}
Can you put in a {{MESSAGE}} here that explains that the architecture is 
unsupported in the ELSE case?  We certainly don't want to be losing hardware 
acceleration without being aware of it.

{{bulk_crc32_aarch64.c}}: you should include {{stdint.h}} here for {{uint8_t}}, 
etc.  Even though some other header is probably pulling it in now by accident.

{{bulk_crc32_x86.c}}: I would really prefer not to wrap this in a giant {{#if 
defined(__GNUC__) && !defined(__FreeBSD__)}}, especially since we're not 
wrapping the ARM version like that.  If people want this to be compiler- and 
OS-specific, it would be better to do it at the CMake level.  I would just 
take that out and let people fix it if it becomes a problem for them.

Can you post before / after performance numbers for x86_64?  Maybe you can 
instrument test_bulk_crc32.c to produce those numbers.

It looks like when this was done previously, the test code was not checked in.  
See:
https://issues.apache.org/jira/browse/HADOOP-7446?focusedCommentId=13084519&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13084519

I'm sorry to dump another task on you, but my co-workers will kill me if I 
regress checksum performance.

Thanks again for working on this.  As soon as we verify that we haven't 
regressed perf and have made those minor changes, we should be good to go.

> Add support for hardware crc on ARM aarch64 architecture
> 
>
> Key: HADOOP-11660
> URL: https://issues.apache.org/jira/browse/HADOOP-11660
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
> Environment: ARM aarch64 development platform
>Reporter: Edward Nevill
>Assignee: Edward Nevill
>Priority: Minor
>  Labels: performance
> Attachments: jira-11660.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> This patch adds support for hardware CRC on ARM's new 64-bit architecture.
> The patch is completely conditionalized on __aarch64__.
> I have only added support for the non-pipelined version, as I benchmarked the 
> pipelined version on aarch64 and it showed no performance improvement.
> The aarch64 version supports both Castagnoli and Zlib CRCs, as both of these 
> are supported on ARM aarch64 hardware.
> To benchmark this I modified the test_bulk_crc32 test to print out the time 
> taken to CRC a 1MB dataset 1000 times.
> Before:
> CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 2.55
> CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 2.55
> After:
> CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 0.57
> CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 0.57
> So this represents a 5X performance improvement on raw CRC calculation.





[jira] [Commented] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382718#comment-14382718
 ] 

Chris Nauroth commented on HADOOP-11257:


Shall we just revert the script changes from branch-2 and branch-2.7 since this 
has proven to be a backwards-incompatible change?  We can still make the script 
changes in trunk, and the documentation part of the change is still good for 
all branches.

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
> HADOOP-11257.2.patch, HADOOP-11257.3.patch, HADOOP-11257.4.patch, 
> HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382700#comment-14382700
 ] 

Kai Zheng commented on HADOOP-11754:


Sorry, let me correct myself.
bq.we can have some logic in RM like this:...
I mean, the fix logic could be: if 1) it's not in secure mode, 2) **the 
signature file property is set but the file is absent**, and optionally 3) the 
property still has its default value (not set explicitly by the user), then we 
may remove the property.
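That three-part check can be sketched as follows (hypothetical names; real RM code would consult Hadoop's Configuration and security settings rather than take plain booleans and strings):

```java
import java.io.File;

// Hypothetical sketch of the fix logic above: drop the signature-secret-file
// property only when we are in non-secure mode, the configured file is
// missing on disk, and the value is still the out-of-the-box default.
public class SecretFilePropertySketch {
    public static boolean shouldRemoveProperty(boolean secureMode,
                                               String configuredPath,
                                               boolean isDefaultValue) {
        if (secureMode || configuredPath == null) {
            return false;                     // keep the property as-is
        }
        boolean fileAbsent = !new File(configuredPath).exists();
        // conditions 1) non-secure, 2) file absent, 3) default value
        return fileAbsent && isDefaultValue;
    }
}
```

With the property removed, the filter would fall back to the random-signer behavior instead of failing at startup.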


[jira] [Commented] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382693#comment-14382693
 ] 

Colin Patrick McCabe commented on HADOOP-11257:
---

I have no objection to using stderr rather than stdout, but I also think Hive 
should be using "yarn jar" to launch YARN jars.  If you post a patch to send 
this to stderr, I will review it.






[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382692#comment-14382692
 ] 

Kai Zheng commented on HADOOP-11754:


In the long term, ideally, as desired and done in HADOOP-10670, the 
signature-secret-file handling should be taken care of in the 
{{AuthenticationFilter}}, so that all the Hadoop web UIs (HDFS, YARN) can 
easily share the same common configuration and logic, and some advanced SSO 
effect can be achieved. The messy details can then all be handled in one 
common place instead of in every filter. That would be ideal.

For now and for this release, to keep the original behavior, as I said before, 
we can have some logic in the RM like this: if it's not in secure mode and the 
signature file property is set, then we may remove the property. If we want to 
be more careful, we can even check whether the specified file is the default 
file or not. As such a change is only in the RM and the Timeline server, it 
won't affect other places.



[jira] [Moved] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li moved HDFS-7989 to HADOOP-11757:
---

  Component/s: (was: nfs)
   nfs
Affects Version/s: (was: 2.2.0)
   2.2.0
  Key: HADOOP-11757  (was: HDFS-7989)
  Project: Hadoop Common  (was: Hadoop HDFS)






[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382678#comment-14382678
 ] 

Brandon Li commented on HADOOP-11757:
-

Moved this JIRA from HDFS to COMMON since we only changed the common code.






[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382676#comment-14382676
 ] 

Allen Wittenauer commented on HADOOP-11754:
---

bq.  If all we want in HADOOP-10670 is for the webHDFS auth filter to be able 
to use a file-based signer, why don't we implement that functionality there, 
similar to RMAuthenticationFilterInitializer, instead of changing 
AuthenticationFilter? That should get what you want but avoid this breakage, 
even if it isn't ideal.

... which is pretty much what [~rsasson]'s patch in HDFS-5796 does. :)




[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382673#comment-14382673
 ] 

Sangjin Lee commented on HADOOP-11754:
--

The current state of things is that SIGNATURE_SECRET_FILE is set by default in 
core-default.xml, so it's always set unless the user explicitly unsets it.
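For reference, the shipped default looks roughly like the following core-default.xml entry (property name as used by the Hadoop HTTP authentication filter; treat the exact value shown as illustrative):

```xml
<!-- Illustrative core-default.xml entry; the default value resolves to a
     file under the user's home directory, which typically does not exist,
     triggering the "Could not read signature secret file" failure above. -->
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>${user.home}/hadoop-http-auth-signature-secret</value>
</property>
```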


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382665#comment-14382665
 ] 

Chuan Liu commented on HADOOP-11691:


+1 I have verified the CPU feature is present when building with the latest 
patch. Thanks for fixing the build! 

> X86 build of libwinutils is broken
> --
>
> Key: HADOOP-11691
> URL: https://issues.apache.org/jira/browse/HADOOP-11691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native
>Affects Versions: 2.7.0
>Reporter: Remus Rusanu
>Assignee: Kiran Kumar M R
>Priority: Critical
> Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
> HADOOP-11691-003.patch
>
>
> Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
> in error:
> {code}
> (Link target) ->
>   
> E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
>  : fatal error LNK1112: module machine type 'x64' conflicts with target 
> machine type 'X86' 
> [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382660#comment-14382660
 ] 

Colin Patrick McCabe commented on HADOOP-11731:
---

Thank you for tackling this, Allen.  It looks good.

{code}
1   #!/usr/bin/python
{code}

Should be {{#!/usr/bin/env python}}?

{code}
2   #   Licensed under the Apache License, Version 2.0 (the "License");
3   #   you may not use this file except in compliance with the License.
4   #   You may obtain a copy of the License at
5   #
6   #   http://www.apache.org/licenses/LICENSE-2.0
7   #
8   #   Unless required by applicable law or agreed to in writing, software
9   #   distributed under the License is distributed on an "AS IS" BASIS,
10  #   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.
11  #   See the License for the specific language governing permissions and
12  #   limitations under the License.
{code}
I realize that you are just copying this from the previous {{relnotes.py}}, but 
we should fix this to match our other license headers.  If you look at 
{{determine-flaky-tests-hadoop.py}}, you can see its header is:

{code}
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{code}
The text about the {{NOTICE}} file is missing from {{releasedocmaker.py}}.

{code}
296 def main():
297   parser = OptionParser(usage="usage: %prog --version VERSION 
[--version VERSION2 ...]")
298   parser.add_option("-v", "--version", dest="versions",
299  action="append", type="string",
300  help="versions in JIRA to include in releasenotes", 
metavar="VERSION")
301   parser.add_option("-m","--master", dest="master", action="store_true",
302  help="only create the master files")
303   parser.add_option("-i","--index", dest="index", action="store_true",
304  help="build an index file")
{code}
Can you add a note to the usage message about which files are generated by this 
script, and what their names will be?  Also, where the files will be generated.

{code}
80  found = re.match('^((\d+)(\.\d+)*).*$', data)
81  if (found):
82self.parts = [ int(p) for p in found.group(1).split('.') ]
83  else:
84self.parts = []
{code}
Should we throw an exception if we can't parse the version?
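For illustration, a failing-fast variant might look like the following sketch (the function name and error type are hypothetical, not taken from the patch):

```python
import re

def parse_version(data):
    """Parse a leading dotted version such as '2.7.0' into [2, 7, 0].

    Hypothetical variant of the patch's logic that raises ValueError on
    an unparseable version instead of silently returning an empty list.
    """
    found = re.match(r'^((\d+)(\.\d+)*).*$', data)
    if not found:
        raise ValueError("unparseable version string: %r" % data)
    return [int(p) for p in found.group(1).split('.')]
```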

{code}
28  def clean(str):
29return clean(re.sub(namePattern, "", str))
30  
31  def formatComponents(str):
32str = re.sub(namePattern, '', str).replace("'", "")
33if str != "":
34  ret = str
35else:
36  # some markdown parsers don't like empty tables
37  ret = "."
38return clean(ret)
39  
40  def lessclean(str):
41str=str.encode('utf-8')
42str=str.replace("_","\_")
43str=str.replace("\r","")
44str=str.rstrip()
45return str
46  
47  def clean(str):
48str=lessclean(str)
49str=str.replace("|","\|")
50str=str.rstrip()
{code}

I find this a bit confusing.  Can we call the first function something other 
than "clean", to avoid having two different functions named "clean" that do 
different things?  When would I use {{lessclean}} rather than {{clean}}?  It 
seems like only the release notes get the lessclean treatment.  It would be 
helpful to have a comment before the lessclean function explaining when it is 
useful.
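To make the suggestion concrete, the pair could be renamed along these lines (the names are hypothetical, and the namePattern stripping from the patch is omitted for brevity):

```python
# Hypothetical renaming of the patch's lessclean/clean pair, to show the
# intent: one light escaping pass for release-note bodies, and a stricter
# pass for changelog table cells. Names are illustrative only.

def escape_for_relnotes(text):
    """Light escaping ('lessclean'): markdown underscores and CRs."""
    text = text.replace("_", r"\_")
    text = text.replace("\r", "")
    return text.rstrip()

def escape_for_table_cell(text):
    """Strict escaping ('clean'): everything the release notes need,
    plus '|' so markdown table columns are not broken."""
    text = escape_for_relnotes(text)
    return text.replace("|", r"\|").rstrip()
```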

> Rework the changelog and releasenotes
> -
>
> Key: HADOOP-11731
> URL: https://issues.apache.org/jira/browse/HADOOP-11731
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
> HADOOP-11731-03.patch, HADOOP-11731-04.patch
>
>
> The current way we generate these build artifacts is awful.  Plus they are 
> ugly and, in the case of release notes, very hard to pick out what is 
> important.




[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382659#comment-14382659
 ] 

Zhijie Shen commented on HADOOP-11754:
--

BTW, SIGNATURE_SECRET_FILE being unset and SIGNATURE_SECRET_FILE pointing to a 
non-existing file mean different things: the former indicates using the default 
random secret, while the latter is regarded as a misconfiguration.
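The distinction can be sketched as follows (a Python sketch of the intended behavior only; the names and config key are illustrative, not the actual AuthenticationFilter code):

```python
import os

class RandomSignerSecretProvider:
    """Stand-in for a provider that generates a random secret."""

class FileSignerSecretProvider:
    """Stand-in for a provider backed by a secret file."""
    def __init__(self, path):
        self.path = path

def choose_secret_provider(conf):
    """Unset secret file -> default random secret; a configured but
    unreadable file -> treated as a misconfiguration, so fail fast."""
    path = conf.get("signature.secret.file")
    if path is None:
        # Not configured at all: fall back to a random secret.
        return RandomSignerSecretProvider()
    if not os.path.isfile(path):
        # Configured but unreadable: wrong configuration, do not fall back.
        raise RuntimeError("Could not read signature secret file: " + path)
    return FileSignerSecretProvider(path)
```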


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382648#comment-14382648
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11754:
--

[~drankye] / [~wheat9],

I may still not have the full picture, but how about this? If all we want in 
HADOOP-10670 is for the webHDFS auth filter to be able to use a file-based 
signer, why don't we implement that functionality there, similar to 
RMAuthenticationFilterInitializer, instead of changing AuthenticationFilter? 
That should get what you want but avoid this breakage, even if it isn't ideal.


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382640#comment-14382640
 ] 

Zhijie Shen commented on HADOOP-11754:
--

bq. That's a definite change in behavior. If a secret wasn't configured, the 
2.6 and previous filters generated a random one since it was assumed that the 
serving system was a single host.

Agree. This change is incompatible. It will break the current timeline server 
secure deployment.


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382635#comment-14382635
 ] 

Hadoop QA commented on HADOOP-11553:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707572/HADOOP-11553-06.patch
  against trunk revision 87130bf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6004//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6004//console

This message is automatically generated.

> Formalize the shell API
> ---
>
> Key: HADOOP-11553
> URL: https://issues.apache.org/jira/browse/HADOOP-11553
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
> HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
> HADOOP-11553-05.patch, HADOOP-11553-06.patch
>
>
> After HADOOP-11485, we need to formally document functions and environment 
> variables that 3rd parties can expect to be able to exist/use.





[jira] [Created] (HADOOP-11756) Warning "yarn jar" instead of "hadoop jar" in hadoop 2.7.0

2015-03-26 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HADOOP-11756:
---

 Summary: Warning "yarn jar" instead of "hadoop jar" in hadoop 2.7.0
 Key: HADOOP-11756
 URL: https://issues.apache.org/jira/browse/HADOOP-11756
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Gunther Hagleitner


HADOOP-11257 adds a warning to stdout

{noformat}
WARNING: Use "yarn jar" to launch YARN applications.
{noformat}

which, if untreated, will cause issues for folks that programmatically parse 
stdout for query results (e.g. CLI, silent mode, etc.).





[jira] [Reopened] (HADOOP-11257) Update "hadoop jar" documentation to warn against using it for launching yarn jars

2015-03-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner reopened HADOOP-11257:
-

This is causing issues in Hive. Can we at least have the warning go to stderr 
instead of stdout? In Hive, anything printed to stdout is considered part of the 
"query result", and now that includes this warning message.

> Update "hadoop jar" documentation to warn against using it for launching yarn 
> jars
> --
>
> Key: HADOOP-11257
> URL: https://issues.apache.org/jira/browse/HADOOP-11257
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Masatake Iwasaki
> Fix For: 2.7.0
>
> Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
> HADOOP-11257.2.patch, HADOOP-11257.3.patch, HADOOP-11257.4.patch, 
> HADOOP-11257.4.patch
>
>
> We should update the "hadoop jar" documentation to warn against using it for 
> launching yarn jars.





[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11553:
--
Attachment: HADOOP-11553-06.patch

-06:
* Fixed those spelling errors

Thanks for the reviews, btw. :)






[jira] [Updated] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11691:
---
 Priority: Critical  (was: Major)
 Target Version/s: 2.7.0  (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   2.7.0

[~chuanliu], could you please try to verify Kiran's newest patch to see if this 
has resolved the earlier problem that you saw?  From my side, I was able to 
build successfully for both 64-bit and 32-bit using Windows SDK 7.1.

I'm retargeting this to 2.7.0, which was the original goal for HADOOP-9922.  
I'm bumping priority to critical, because we expect to start cutting 2.7.0 
release candidates in a few days.






[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382282#comment-14382282
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. How about reverting this change for now (at least for branch-2)? This is a 
blocker for the 2.7 release. Is there a strong reason that HADOOP-10670 must be 
part of the 2.7 release? If not, it may not be a bad idea to revert this for 
now and revise the patch for a later release. Thoughts?

I pulled this into 2.7 because it is a building block for HDFS-5796, which is a 
blocker for 2.7 as well. :-( I'll take care of this today.


[jira] [Assigned] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned HADOOP-11754:
---

Assignee: Haohui Mai


[jira] [Resolved] (HADOOP-11752) Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: p

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11752.
---
  Resolution: Cannot Reproduce
Target Version/s: 2.6.0, 2.4.0  (was: 2.4.0, 2.6.0)

Let me reiterate what Brahma said:

Please direct questions like this to the mailing list. JIRA is the bug tracker.

> Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
> (compile-protoc) on project hadoop-common: 
> org.apache.maven.plugin.MojoExecutionException: protoc failure -> [Help 1]
> 
>
> Key: HADOOP-11752
> URL: https://issues.apache.org/jira/browse/HADOOP-11752
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.0, 2.6.0
> Environment: Operating System: Windows 8.1 64Bit
> Cygwin 64Bit
> protobuf-2.5.0
> protoc 2.5.0
> hadoop-2.4.0-src
> apache-maven-3.3.1
>Reporter: Venkata Sravan Kumar Talasila
>  Labels: build, maven
>
> while build of Hadoop, I am facing the below error
> Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
> (compile-protoc) on project hadoop-common: 
> org.apache.maven.plugin.MojoExecutionException: protoc failure -> [Help 1]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.4.0
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-os) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.4.0:protoc (compile-protoc) @ hadoop-common 
> ---
> [WARNING] [C:\cygwin64\usr\local\bin\protoc.exe, 
> --java_out=C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\target\generated-sources\java,
> -IC:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\GetUserMappingsProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\HAServiceProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\IpcConnectionContext.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtobufRpcEngine.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtocolInfo.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshAuthorizationPolicyProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshCallQueueProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshUserMappingsProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RpcHeader.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\Security.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ZKFCProtocol.proto]
>  failed with error code 1
> [ERROR] protoc compiler error
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main . SUCCESS [  6.080 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  2.140 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  2.691 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.250 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.453 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  6.932 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [01:59 
> min]
> [INFO] Apache Hadoop Auth . SUCCESS [11:02 
> min]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  3.697 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [  4.067 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ..

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382076#comment-14382076
 ] 

Allen Wittenauer commented on HADOOP-11754:
---

bq.  If security is enabled, the absence of the file should be a failure. 

That's a definite change in behavior. If a secret wasn't configured, the filters in 2.6 
and earlier generated a random one, since it was assumed that the serving system was a 
single host.

bq. Wouldn't that break users that were simply relying on the default value for 
this property in the secure mode? I'm not sure if that qualifies as not 
backward compatible, but does sound like there will be user impact if that 
change is made.

... which means that, yes, changing this behavior is most definitely not 
backward compatible.
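The trade-off being debated can be sketched as a small decision function. This is not the actual patch; `chooseProvider` is a hypothetical helper illustrating the two positions: the pre-2.7 fallback to a random in-memory secret when no secret file is configured, versus the proposed (behavior-changing) hard failure in secure mode.

```java
// Sketch only: illustrates the secret-provider selection being discussed,
// not the code in AuthenticationFilter. "file" stands for
// FileSignerSecretProvider, "random" for the random-secret fallback.
public class SecretProviderSketch {

    static String chooseProvider(boolean kerberosEnabled, boolean secretFileReadable) {
        if (secretFileReadable) {
            // A configured, readable secret file always wins.
            return "file";
        }
        if (kerberosEnabled) {
            // The proposed behavior: in secure mode, a missing secret file
            // is a hard failure (this is the backward-incompatible part).
            throw new IllegalStateException(
                "signature secret file required in secure mode");
        }
        // The 2.6-and-earlier behavior: fall back to a random secret,
        // assuming a single serving host.
        return "random";
    }

    public static void main(String[] args) {
        // Non-secure mode with no secret file: the fallback applies.
        System.out.println(chooseProvider(false, false));
    }
}
```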


[jira] [Reopened] (HADOOP-11752) Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: p

2015-03-26 Thread Venkata Sravan Kumar Talasila (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Sravan Kumar Talasila reopened HADOOP-11752:


As per your instructions, I have followed the procedure and the build is 
successful, but how do I run Hadoop on Windows 8?
When I try to run hdfs namenode -format, I get the error below:
C:\Users\..\hadoop>hdfs namenode -format
'hdfs' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\..\hadoop>start-dfs
'start-dfs' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\..\hadoop\hadoop-dist\target\hadoop-3.0.0-SNAPSHOT\sbin>hdfs namenode -format
'hdfs' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\..\hadoop\hadoop-dist\target\hadoop-3.0.0-SNAPSHOT\sbin>start-dfs
The system cannot find the file hadoop.
The system cannot find the file hadoop.
Can you please let me know how to start DFS and YARN and run Hadoop on Windows 
8?



[jira] [Commented] (HADOOP-11750) distcp fails if we copy data from swift to secure HDFS

2015-03-26 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382068#comment-14382068
 ] 

Chen He commented on HADOOP-11750:
--

Hi [~ste...@apache.org], thank you for reviewing this issue.

If we use "dfs -cp", it is a single-JVM serial copy, right? What if users 
want to copy 10 TB of data from Swift to HDFS? A serial copy is impractical.

IMHO, DistCp is a tool that helps people copy data across different 
filesystems in parallel, and it is not limited to HDFS. Our team is working on 
resolving this problem; please allow at least one or two days for further 
discussion before closing it.
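The parallel copy described above would be a single DistCp invocation. A hedged sketch follows; the Swift container name is borrowed from the log below, and the paths and NameNode address are hypothetical placeholders. The script only composes and prints the command rather than executing it, since it assumes a configured Hadoop client.

```shell
# Illustrative only: a DistCp run copying a Swift container into HDFS.
# The swift:// URI and the HDFS address are hypothetical placeholders.
SRC="swift://babynames.main/data"
DST="hdfs://namenode:8020/user/data"
# -m bounds the number of parallel map tasks performing the copy.
CMD="hadoop distcp -m 20 $SRC $DST"
echo "$CMD"
```

Unlike `dfs -cp`, this launches a MapReduce job, so the copy is spread across the cluster instead of running in one JVM.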

> distcp fails if we copy data from swift to secure HDFS
> --
>
> Key: HADOOP-11750
> URL: https://issues.apache.org/jira/browse/HADOOP-11750
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Chen He
>Assignee: Chen He
>
> ERROR tools.DistCp: Exception encountered
> java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> babynames.main
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
> at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
> at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:301)
> at 
> org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:523)
> at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:133)
> at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:83)
> at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
> at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
> at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:353)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:160)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:401)
> Caused by: java.net.UnknownHostException: babynames.main
> ... 17 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HADOOP-11752) Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException:

2015-03-26 Thread Venkata Sravan Kumar Talasila (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382064#comment-14382064
 ] 

Venkata Sravan Kumar Talasila commented on HADOOP-11752:


As per your instructions i have followed the procedure and the build is 
successful, but how to Run the Hadoop on windows 8.

When i try to do hdfs namenode -format, i am getting the below error: 

C:\Users\..\hadoop>hdfs namenode -format
'hdfs' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\..\hadoop>start-dfs
'start-dfs' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\..\hadoop\hadoop-dist\target\hadoop-3.0.0-SNAPSHOT\sbin>hdfs name
node -format
'hdfs' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\..\hadoop\hadoop-dist\target\hadoop-3.0.0-SNAPSHOT\sbin>start-dfs

The system cannot find the file hadoop.
The system cannot find the file hadoop.

Can you please let me know how to start DFS, YARN and run the hadoop on windows 
8.

> Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
> (compile-protoc) on project hadoop-common: 
> org.apache.maven.plugin.MojoExecutionException: protoc failure -> [Help 1]
> 
>
> Key: HADOOP-11752
> URL: https://issues.apache.org/jira/browse/HADOOP-11752
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.4.0, 2.6.0
> Environment: Operating System: Windows 8.1 64Bit
> Cygwin 64Bit
> protobuf-2.5.0
> protoc 2.5.0
> hadoop-2.4.0-src
> apache-maven-3.3.1
>Reporter: Venkata Sravan Kumar Talasila
>  Labels: build, maven
>
> While building Hadoop, I am facing the below error:
> Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
> (compile-protoc) on project hadoop-common: 
> org.apache.maven.plugin.MojoExecutionException: protoc failure -> [Help 1]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.4.0
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [INFO] Executed tasks
> [INFO]
> [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-os) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.4.0:protoc (compile-protoc) @ hadoop-common 
> ---
> [WARNING] [C:\cygwin64\usr\local\bin\protoc.exe, 
> --java_out=C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\target\generated-sources\java,
> -IC:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\GetUserMappingsProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\HAServiceProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\IpcConnectionContext.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtobufRpcEngine.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtocolInfo.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshAuthorizationPolicyProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshCallQueueProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshUserMappingsProtocol.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RpcHeader.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\Security.proto,
>  
> C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ZKFCProtocol.proto]
>  failed with error code 1
> [ERROR] protoc compiler error
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main . SUCCESS [  6.080 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  2.140 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  2.691 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.250 
> s]
> [INFO] Ap

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382040#comment-14382040
 ] 

Sangjin Lee commented on HADOOP-11754:
--

cc [~vinodkv]

bq. Maybe we should avoid that at all, no default value for the property ? 
Indeed it should be explicitly prepared and configured.

Wouldn't that break users who were simply relying on the default value for 
this property in secure mode? I'm not sure whether that qualifies as backward 
incompatible, but it does sound like there will be user impact if that change 
is made.

How about reverting this change for now (at least for branch-2)? This is a 
blocker for the 2.7 release. Is there a strong reason that HADOOP-10670 must be 
part of the 2.7 release? If not, it may not be a bad idea to revert this for 
now and revise the patch for a later release. Thoughts?
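
For anyone hitting this before a fix or revert lands, a workaround sketch: the 
filter reads the property hadoop.http.authentication.signature.secret.file 
(whose default resolves under the user's home directory, matching the path in 
the stack trace), so creating the file and pointing the property at it should 
let the RM start. The path below is an example, not a recommendation:

```xml
<!-- core-site.xml: point the HTTP auth filter at a secret file that exists.
     Create the file first and restrict its permissions to the RM user. -->
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>/etc/hadoop/conf/http-auth-signature-secret</value>
</property>
```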

> RM fails to start in non-secure mode due to authentication filter failure
> -
>
> Key: HADOOP-11754
> URL: https://issues.apache.org/jira/browse/HADOOP-11754
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Priority: Blocker
> Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch
>
>
> RM fails to start in the non-secure mode with the following exception:
> {noformat}
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
> 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
> org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
> javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
> signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
>   at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
>   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
>   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
>   at 
> org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
>   at 
> org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
>   at org.mortbay.jetty.Server.doStart(Server.java:224)
>   at 
> org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
>   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
> Caused by: java.lang.RuntimeException: Could not read signature secret file: 
> /Users/sjlee/hadoop-http-auth-signature-secret
>   at 
> org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
>   ... 23 more
> ...
> 2015-03-25 22:02:42,538 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> 

[jira] [Commented] (HADOOP-9774) RawLocalFileSystem.listStatus() return absolute paths when input path is relative on Windows

2015-03-26 Thread Ha Son Hai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381982#comment-14381982
 ] 

Ha Son Hai commented on HADOOP-9774:


I'm sorry, I posted on the wrong thread. I should have posted it on the Mahout list.

> RawLocalFileSystem.listStatus() return absolute paths when input path is 
> relative on Windows
> 
>
> Key: HADOOP-9774
> URL: https://issues.apache.org/jira/browse/HADOOP-9774
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Fix For: 2.1.1-beta
>
> Attachments: HADOOP-9774-2.patch, HADOOP-9774-3.patch, 
> HADOOP-9774-4.patch, HADOOP-9774-5.patch, HADOOP-9774.patch
>
>
> On Windows, when using RawLocalFileSystem.listStatus() to enumerate a 
> relative path (without drive spec), e.g., "file:///mydata", the resulting 
> paths become absolute paths, e.g., ["file://E:/mydata/t1.txt", 
> "file://E:/mydata/t2.txt"...].
> Note that if we use it to enumerate an absolute path, e.g., 
> "file://E:/mydata", then we get the same results as above.
> This breaks some Hive unit tests that use the local file system to simulate 
> HDFS when testing, which is why the drive spec is removed. After 
> listStatus() the path becomes absolute, and Hive fails to find the path in 
> its map reduce job.
> You'll see the following exception:
> [junit] java.io.IOException: cannot find dir = 
> pfile:/E:/GitHub/hive-monarch/build/ql/test/data/warehouse/src/kv1.txt in 
> pathToPartitionInfo: 
> [pfile:/GitHub/hive-monarch/build/ql/test/data/warehouse/src]
> [junit]   at 
> org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:298)
> This problem is introduced by this JIRA:
> HADOOP-8962
> Prior to the fix for HADOOP-8962 (merged in 0.23.5), the resulting paths are 
> relative paths if the parent paths are relative, e.g., 
> ["file:///mydata/t1.txt", "file:///mydata/t2.txt"...]
> This behavior change is a side effect of the fix in HADOOP-8962, not an 
> intended change. The resulting behavior, even though legitimate from a 
> functional point of view, breaks consistency from the caller's point of 
> view. When the caller uses a relative path (without drive spec) to do 
> listStatus(), the resulting paths should be relative. Therefore, I think 
> this should be fixed.
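
The expectation described above can be illustrated with plain java.nio (a 
standalone sketch, not Hadoop's RawLocalFileSystem API): when the caller's 
input path was relative, a listing routine can relativize the absolute 
children back against the working directory; the class and method names below 
are hypothetical.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativeListingSketch {
    // If the caller's input path was relative, return the child relative to
    // the current working directory; otherwise keep it absolute.
    static Path normalizeChild(Path input, Path absoluteChild) {
        if (input.isAbsolute()) {
            return absoluteChild;
        }
        return Paths.get("").toAbsolutePath().relativize(absoluteChild);
    }

    public static void main(String[] args) {
        Path cwd = Paths.get("").toAbsolutePath();
        // What a lister typically hands back: an absolute child path.
        Path child = cwd.resolve("mydata").resolve("t1.txt");

        // Relative input: the result stays relative (the behavior requested here).
        System.out.println(normalizeChild(Paths.get("mydata"), child));

        // Absolute input: the result stays absolute.
        System.out.println(normalizeChild(cwd.resolve("mydata"), child).isAbsolute()); // true
    }
}
```

The same relativize-on-return idea is what keeps the drive spec out of results 
when the caller never supplied one.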



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

