[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-07 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484799#comment-14484799
 ] 

Yi Liu commented on HADOOP-11789:
-

Colin, if {{-Pnative}} is set but the OS doesn't have a correct version of 
OpenSSL, should we make the test fail? Could we test the crypto streams 
with OpensslAesCtrCryptoCodec only when a correct OpenSSL is loaded? Otherwise 
people will still see the failure in environments without a correct OpenSSL.
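
For illustration, one way to express that guard is a JUnit assumption in the 
test setup; a minimal sketch (not the attached patch), assuming 
{{OpensslCipher.getLoadingFailureReason()}} is available to report why the 
native codec did not load:

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.crypto.OpensslCipher;
import org.junit.Before;

public class OpensslCodecGuardSketch {
  @Before
  public void checkOpensslLoaded() {
    // Skip every test in the class (instead of failing with an NPE) when the
    // native OpenSSL codec could not be loaded on this machine.
    String reason = OpensslCipher.getLoadingFailureReason();
    assumeTrue(reason == null); // 'reason' explains the failure when non-null
  }
}
{code}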

> NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
> -
>
> Key: HADOOP-11789
> URL: https://issues.apache.org/jira/browse/HADOOP-11789
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.8.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Yi Liu
> Attachments: HADOOP-11789.001.patch
>
>
> NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484703#comment-14484703
 ] 

Hadoop QA commented on HADOOP-11746:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723822/HADOOP-11746-11.patch
  against trunk revision 4be648b.

{color:red}-1 @author{color}.  The patch appears to contain 13 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6073//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6073//console

This message is automatically generated.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Release Note: 
* test-patch.sh now has new output that is different from previous versions
* test-patch.sh is now pluggable via the test-patch.d directory, with 
checkstyle and shellcheck tests included
* JIRA comments now use much more markup to improve readability
* test-patch.sh now supports either a file name, a URL, or a JIRA issue as 
input in developer mode
* If part of the patch testing code is changed, test-patch.sh will now attempt 
to re-execute itself using those new versions.
* Added logic to reduce the number of unnecessary tests. For example, patches 
that only modify markdown should not run the Java compilation tests.
* Plugins for checkstyle, shellcheck, and whitespace now execute as necessary.
* New test code for mvn site

  was:
* test-patch.sh now has new output that is different from previous versions
* test-patch.sh is now pluggable via the test-patch.d directory, with 
checkstyle and shellcheck tests included
* JIRA comments now use much more markup to improve readability
* test-patch.sh now supports either a file name, a URL, or a JIRA issue as 
input in developer mode
* If part of the patch testing code is changed, test-patch.sh will now 
re-execute itself using those new versions.

  Status: Patch Available  (was: Open)

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Status: Open  (was: Patch Available)

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484682#comment-14484682
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

Testing this patch takes less than a minute, BTW. That's a huge improvement over 
the 20+ minutes it takes to run tests that don't make a bit of difference. It 
pulls in the shellcheck tests and completely skips the Java-related tests.  
Testing other patches seems to indicate that the heuristics produce more false 
positives (more tests than needed), but I haven't seen a false negative (a 
missing test that should really be run) yet. They are likely there, but we'll 
see. The system does report which tests it ran, so we should be able to watch 
for that.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-11.patch

-11:
* cleaned up the test selector API so that plugins can use it easily; checkstyle 
and (sometimes) shellcheck now only kick in when necessary
* added an mvn install check when the javac or javadoc tests run; I suspect it 
really only needs javadoc, but let's be conservative for now
* fixed the regexes in the test selector
* added CMakeLists.txt as a trigger for the native code checks



> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch, HADOOP-11746-11.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484509#comment-14484509
 ] 

Colin Patrick McCabe commented on HADOOP-11802:
---

Hi Eric,

You should never be in "the main finally block" of DomainSocketWatcher unless 
you are in a unit test.  If you are in this finally block in the actual 
DataNode, something is wrong.  You should see a string like "terminating on 
InterruptedException" or "terminating on IOException" explaining why you ended 
up in this finally block in the first place.  This should be the root cause.  
Do you have a log line like that?
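
To illustrate the structure being described, here is a hedged, simplified 
sketch of the watcher thread's shape (illustrative only, not the actual 
org.apache.hadoop.net.unix.DomainSocketWatcher source): the termination reason 
is logged in a catch block before control reaches the finally block, so the 
root cause should appear in the log just above the IllegalStateException.

{code}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Simplified, illustrative sketch of the watcher thread under discussion.
public class WatcherThreadSketch implements Runnable {
  private static final Log LOG = LogFactory.getLog(WatcherThreadSketch.class);

  @Override
  public void run() {
    try {
      pollLoop(); // the normal poll loop over the watched domain sockets
    } catch (InterruptedException e) {
      LOG.info(this + " terminating on InterruptedException");   // root cause
    } catch (IOException e) {
      LOG.error(this + " terminating on IOException", e);        // root cause
    } finally {
      // The cleanup quoted in the issue description runs here. If
      // sendCallback("close", ...) throws IllegalStateException, the later
      // entries.clear() and fdSet.close() calls are skipped, but the log line
      // from the catch block above still records why the thread reached this
      // finally block in the first place.
      cleanup();
    }
  }

  private void pollLoop() throws InterruptedException, IOException { /* ... */ }

  private void cleanup() { /* ... */ }
}
{code}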

> DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
> finally block when calling sendCallback
> 
>
> Key: HADOOP-11802
> URL: https://issues.apache.org/jira/browse/HADOOP-11802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
> call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
> leave some cleanup tasks undone.
> {code}
>   } finally {
> lock.lock();
> try {
>   kick(); // allow the handler for notificationSockets[0] to read a 
> byte
>   for (Entry entry : entries.values()) {
> // We do not remove from entries as we iterate, because that can
> // cause a ConcurrentModificationException.
> sendCallback("close", entries, fdSet, entry.getDomainSocket().fd);
>   }
>   entries.clear();
>   fdSet.close();
> } finally {
>   lock.unlock();
> }
>   }
> {code}
> The exception causes {{watcherThread}} to skip the calls to 
> {{entries.clear()}} and {{fdSet.close()}}.
> {code}
> 2015-04-02 11:48:09,941 [DataXceiver for client 
> unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
> DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
> 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
> e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
> 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
> Thread[Thread-14,5,main] terminating on unexpected exception
> java.lang.IllegalStateException: failed to remove 
> b845649551b6b1eab5c17f630e42489d
> at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:145)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
> at java.lang.Thread.run(Thread.java:722)
> {code}
> Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
> HADOOP-10404. The cluster installation is running code with all of these 
> fixes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484500#comment-14484500
 ] 

Hudson commented on HADOOP-11801:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7526 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7526/])
HADOOP-11801. Update BUILDING.txt for Ubuntu. (Contributed by Gabor Liptak) 
(arp: rev 5449adc9e5fa0607b27caacd0f7aafc18c100975)
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt


> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484490#comment-14484490
 ] 

Hadoop QA commented on HADOOP-11801:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723795/HADOOP-11801.patch
  against trunk revision 5b8a3ae.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6072//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6072//console

This message is automatically generated.

> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11801:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed it to trunk through branch-2.7.

Jenkins +1 is not needed for the v2 patch since it's a trivial update to 
BUILDING.txt.

Thanks for the contribution, [~gliptak]. FYI, next time you don't need to delete 
the older version of the patch; you can attach newer patch versions like 
HADOOP-11801.02.patch, etc.

> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11801:
---
Target Version/s:   (was: 2.6.1)

> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt for Ubuntu

2015-04-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11801:
---
Summary: Update BUILDING.txt for Ubuntu  (was: Update BUILDING.txt)

> Update BUILDING.txt for Ubuntu
> --
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11811) Fix typos in hadoop-project/pom.xml

2015-04-07 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484449#comment-14484449
 ] 

Brahma Reddy Battula commented on HADOOP-11811:
---

Thanks for reporting this. It's better to address a collection of typos at once; 
let me check if I can find some more...

> Fix typos in hadoop-project/pom.xml
> ---
>
> Key: HADOOP-11811
> URL: https://issues.apache.org/jira/browse/HADOOP-11811
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
>
> 
> 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-07 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1448#comment-1448
 ] 

Arpit Agarwal commented on HADOOP-11801:


Thanks. +1.

I will commit it later today.

> Update BUILDING.txt
> ---
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-07 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484442#comment-14484442
 ] 

Gabor Liptak commented on HADOOP-11801:
---

Arpit, updated patch attached.

> Update BUILDING.txt
> ---
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11811) Fix typos in hadoop-project/pom.xml

2015-04-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-11811:
-

Assignee: Brahma Reddy Battula

> Fix typos in hadoop-project/pom.xml
> ---
>
> Key: HADOOP-11811
> URL: https://issues.apache.org/jira/browse/HADOOP-11811
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chen He
>Assignee: Brahma Reddy Battula
>Priority: Trivial
>  Labels: newbie++
>
> 
> 
> etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-07 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Attachment: (was: HADOOP-11801.patch)

> Update BUILDING.txt
> ---
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-07 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Attachment: HADOOP-11801.patch

> Update BUILDING.txt
> ---
>
> Key: HADOOP-11801
> URL: https://issues.apache.org/jira/browse/HADOOP-11801
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
>Priority: Minor
> Attachments: HADOOP-11801.patch
>
>
> ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484337#comment-14484337
 ] 

Hadoop QA commented on HADOOP-11746:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723763/HADOOP-11746-10.patch
  against trunk revision bd77a7c.

{color:red}-1 @author{color}.  The patch appears to contain 13 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6071//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6071//console

This message is automatically generated.

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Status: Open  (was: Patch Available)

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-10.patch

-10:
* fix the RAT error
* add better heuristics to determine the minimal set of long-running tests to 
execute: significantly faster run times for certain classes of patches
* add a site compilation test; doc patches actually get tested
* fix a few bugs in the checkstyle plugin
* fix a few bugs in the whitespace plugin



> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Status: Patch Available  (was: Open)

> rewrite test-patch.sh
> -
>
> Key: HADOOP-11746
> URL: https://issues.apache.org/jira/browse/HADOOP-11746
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
> HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
> HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
> HADOOP-11746-09.patch, HADOOP-11746-10.patch
>
>
> This code is bad and you should feel bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484248#comment-14484248
 ] 

Hudson commented on HADOOP-11796:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7524 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7524/])
HADOOP-11796. Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
bd77a7c4d94fe8a74b36deb50e19396c98b8908e)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestShellBasedIdMapping.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}
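
For reference, the usual way Hadoop test code handles this is a one-line JUnit 
assumption at the top of the test; a hedged sketch of the pattern (not 
necessarily what the attached patch does):

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.Shell;
import org.junit.Test;

public class StaticMapUpdateSketch {
  @Test
  public void testStaticMapUpdate() throws Exception {
    // Skip (rather than fail) on Windows, where the static UID/GID mapping
    // path cannot be exercised; Shell.WINDOWS is Hadoop's platform flag.
    assumeTrue(!Shell.WINDOWS);
    // ... the original test body runs only on non-Windows platforms ...
  }
}
{code}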



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11796:
---
Component/s: test

> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11796:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I committed this to trunk, branch-2 and branch-2.7.  Xiaoyu, 
thank you for contributing the patch.

> Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
> ---
>
> Key: HADOOP-11796
> URL: https://issues.apache.org/jira/browse/HADOOP-11796
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch
>
>
> The test should be skipped on Windows.
> {code}
> Stacktrace
> java.util.NoSuchElementException: null
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
>   at 
> com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
>   at 
> org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
> Standard Output
> 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:(113)) - User configured user account update 
> time is less than 1 minute. Use 1 minute instead.
> 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
> UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
> not exist.
> 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
> (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
> supported:Windows Server 2008 R2. Can't update user map and group map and 
> 'nobody' will be used for any user and group.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11811) Fix typos in hadoop-project/pom.xml

2015-04-07 Thread Chen He (JIRA)
Chen He created HADOOP-11811:


 Summary: Fix typos in hadoop-project/pom.xml
 Key: HADOOP-11811
 URL: https://issues.apache.org/jira/browse/HADOOP-11811
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Priority: Trivial





etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11795) Fix Hadoop unit test failures on Windows

2015-04-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HADOOP-11795:
---

Assignee: Xiaoyu Yao

> Fix Hadoop unit test failures on Windows
> 
>
> Key: HADOOP-11795
> URL: https://issues.apache.org/jira/browse/HADOOP-11795
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11783) Build failed with JVM IBM JAVA on TestSecureLogins

2015-04-07 Thread pascal oliva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pascal oliva updated HADOOP-11783:
--
Fix Version/s: (was: 3.0.0)

> Build failed with JVM IBM JAVA on TestSecureLogins
> --
>
> Key: HADOOP-11783
> URL: https://issues.apache.org/jira/browse/HADOOP-11783
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: $ mvn -version
> Apache Maven 3.3.1 (cab6659f9874fa96462afef40fcf6bc033d58c1c; 
> 2015-03-13T16:10:27-04:00)
> Maven home: /opt/apache-maven-3.3.1
> Java version: 1.7.0, vendor: IBM Corporation
> Java home: /usr/lib/jvm/ibm-java-x86_64-71/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-229.el7.x86_64", arch: "amd64", family: 
> "unix"
>Reporter: pascal oliva
> Attachments: HADOOP-11783-1.patch
>
>
> Hadoop build failed with JVM IBM due to the use of com.sun.security.* classes 
> : 
> [ERROR] 
> /home/ibmadmin/TRUNK/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java:[23,36]
>  package com.sun.security.auth.module does not exist
> [ERROR] 
> /home/ibmadmin/TRUNK/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java:[138,11]
>  cannot find symbol
> [ERROR] symbol:   class Krb5LoginModule
> [ERROR] location: class org.apache.hadoop.registry.secure.TestSecureLogins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-07 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483923#comment-14483923
 ] 

Gopal V commented on HADOOP-11772:
--

[~ajisakaa]: added to today's builds.

> RPC Invoker relies on static ClientCache which has synchronized(this) blocks
> 
>
> Key: HADOOP-11772
> URL: https://issues.apache.org/jira/browse/HADOOP-11772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc, performance
>Reporter: Gopal V
>Assignee: Akira AJISAKA
> Attachments: HADOOP-11772-001.patch, HADOOP-11772-wip-001.patch, 
> HADOOP-11772-wip-002.patch, dfs-sync-ipc.png, sync-client-bt.png, 
> sync-client-threads.png
>
>
> {code}
>   private static ClientCache CLIENTS=new ClientCache();
> ...
> this.client = CLIENTS.getClient(conf, factory);
> {code}
> Meanwhile in ClientCache
> {code}
> public synchronized Client getClient(Configuration conf,
>   SocketFactory factory, Class valueClass) {
> ...
>Client client = clients.get(factory);
> if (client == null) {
>   client = new Client(valueClass, conf, factory);
>   clients.put(factory, client);
> } else {
>   client.incCount();
> }
> {code}
> All invokers end up calling these methods, resulting in IPC clients choking 
> up.
> !sync-client-threads.png!
> !sync-client-bt.png!
> !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11810) Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM

2015-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483679#comment-14483679
 ] 

Hadoop QA commented on HADOOP-11810:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723668/HADOOP-11810-1.patch
  against trunk revision d27e924.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6070//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6070//console

This message is automatically generated.

> Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM
> 
>
> Key: HADOOP-11810
> URL: https://issues.apache.org/jira/browse/HADOOP-11810
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.6.0
> Environment: $ mvn -version
> Apache Maven 3.2.1 (ea8b2b07643dbb1b84b6d16e1f08391b666bc1e9; 
> 2014-02-14T11:37:52-06:00)
> Maven home: /opt/apache-maven-3.2.1
> Java version: 1.7.0, vendor: IBM Corporation
> Java home: /usr/lib/jvm/ibm-java-ppc64le-71/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-229.ael7b.ppc64le", arch: "ppc64le", 
> family: "unix"
>Reporter: pascal oliva
> Fix For: 3.0.0
>
> Attachments: HADOOP-11810-1.patch
>
>
> TestSecureRMRegistryOperations failed with the IBM JAVA JVM
> mvn test -X 
> -Dtest=org.apache.hadoop.registry.secure.TestSecureRMRegistryOperations
> Module                 Total  Failure  Error  Skipped
> -----------------------------------------------------
> hadoop-yarn-registry   12     0        12     0
> -----------------------------------------------------
> Total                  12     0        12     0
> With 
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> and 
> Bad JAAS configuration: unrecognized option: storeKey



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11809) Building hadoop on windows 64 bit, windows 7.1 SDK : \hadoop-common\target\findbugsXml.xml does not exist

2015-04-07 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HADOOP-11809.

Resolution: Invalid

Hi [~kantum...@yahoo.com], please use the dev mailing list for questions.

Resolving as Invalid.

> Building hadoop on windows 64 bit, windows 7.1 SDK : 
> \hadoop-common\target\findbugsXml.xml does not exist
> -
>
> Key: HADOOP-11809
> URL: https://issues.apache.org/jira/browse/HADOOP-11809
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Umesh Kant
>
> I am trying to build hadoop 2.6.0 on Windows 7 64 bit, Windows 7.1 SDK. I 
> have gone through Build.txt file and have did follow all the pre-requisites 
> for build on windows. Still when I try to build, I am getting following error:
> Maven command: mvn package -X -Pdist -Pdocs -Psrc -Dtar -DskipTests 
> -Pnative-win findbugs:findbugs
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 04:35 min
> [INFO] Finished at: 2015-04-03T23:16:57-04:00
> [INFO] Final Memory: 123M/1435M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:
> run (site) on project hadoop-common: An Ant BuildException has occured: input 
> fi
> le 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\findbugsXml.
> xml does not exist
> [ERROR] around Ant part ... in="C:\H\hadoop-2.6.0-src\hadoop-common-project
> \hadoop-common\target/findbugsXml.xml" 
> style="C:\findbugs-3.0.1/src/xsl/default.
> xsl" 
> out="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/
> findbugs.html"/>... @ 44:232 in 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hado
> op-common\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal o
> rg.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
> hadoop-com
> mon: An Ant BuildException has occured: input file 
> C:\H\hadoop-2.6.0-src\hadoop-
> common-project\hadoop-common\target\findbugsXml.xml does not exist
> around Ant part ... in="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-
> common\target/findbugsXml.xml" style="C:\findbugs-3.0.1/src/xsl/default.xsl" 
> out
> ="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/findbugs
> .html"/>... @ 44:232 in 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-commo
> n\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:116)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:80)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThre
> adedBuilder.build(SingleThreadedBuilder.java:51)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(Lifecycl
> eStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Laun
> cher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.jav
> a:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(La
> uncher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:
> 356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException
>  has occured: input file 
> C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-comm
> on\target\findbugsXml.xml does not exi

[jira] [Updated] (HADOOP-11783) Build failed with JVM IBM JAVA on TestSecureLogins

2015-04-07 Thread pascal oliva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pascal oliva updated HADOOP-11783:
--
Fix Version/s: 3.0.0

> Build failed with JVM IBM JAVA on TestSecureLogins
> --
>
> Key: HADOOP-11783
> URL: https://issues.apache.org/jira/browse/HADOOP-11783
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: $ mvn -version
> Apache Maven 3.3.1 (cab6659f9874fa96462afef40fcf6bc033d58c1c; 
> 2015-03-13T16:10:27-04:00)
> Maven home: /opt/apache-maven-3.3.1
> Java version: 1.7.0, vendor: IBM Corporation
> Java home: /usr/lib/jvm/ibm-java-x86_64-71/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-229.el7.x86_64", arch: "amd64", family: 
> "unix"
>Reporter: pascal oliva
> Fix For: 3.0.0
>
> Attachments: HADOOP-11783-1.patch
>
>
> Hadoop build failed with JVM IBM due to the use of com.sun.security.* classes 
> : 
> [ERROR] 
> /home/ibmadmin/TRUNK/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java:[23,36]
>  package com.sun.security.auth.module does not exist
> [ERROR] 
> /home/ibmadmin/TRUNK/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java:[138,11]
>  cannot find symbol
> [ERROR] symbol:   class Krb5LoginModule
> [ERROR] location: class org.apache.hadoop.registry.secure.TestSecureLogins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11810) Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM

2015-04-07 Thread pascal oliva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pascal oliva updated HADOOP-11810:
--
Attachment: HADOOP-11810-1.patch

Patch generated with: git diff --no-prefix trunk > ../HADOOP-11810-1.patch
Tested with: mvn test -X 
-Dtest=org.apache.hadoop.registry.secure.TestSecureRMRegistryOperations
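
The "unrecognized option" failures quoted in the description below come from 
passing Oracle/OpenJDK-only JAAS options (such as isInitiator and storeKey) to 
IBM's Krb5LoginModule, which uses different option names. A hedged, 
illustrative sketch of the kind of conditional the fix needs (not the attached 
patch; the IBM option names shown are assumptions to be checked against IBM's 
JGSS documentation):

{code}
import static org.apache.hadoop.util.PlatformName.IBM_JAVA;

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: pick the Krb5LoginModule class and its options by JVM
// vendor, since IBM's module rejects Oracle-only options.
public class JaasOptionsSketch {

  public static String krb5LoginModuleName() {
    return IBM_JAVA
        ? "com.ibm.security.auth.module.Krb5LoginModule"
        : "com.sun.security.auth.module.Krb5LoginModule";
  }

  public static Map<String, String> keytabOptions(String principal, String keytab) {
    Map<String, String> options = new HashMap<String, String>();
    options.put("principal", principal);
    if (IBM_JAVA) {
      // IBM JVM option names (assumed here).
      options.put("useKeytab",
          keytab.startsWith("file://") ? keytab : "file://" + keytab);
      options.put("credsType", "both");
    } else {
      // Oracle/OpenJDK option names.
      options.put("useKeyTab", "true");
      options.put("keyTab", keytab);
      options.put("storeKey", "true");
      options.put("isInitiator", "true");
    }
    return options;
  }
}
{code}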



> Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM
> 
>
> Key: HADOOP-11810
> URL: https://issues.apache.org/jira/browse/HADOOP-11810
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.6.0
> Environment: $ mvn -version
> Apache Maven 3.2.1 (ea8b2b07643dbb1b84b6d16e1f08391b666bc1e9; 
> 2014-02-14T11:37:52-06:00)
> Maven home: /opt/apache-maven-3.2.1
> Java version: 1.7.0, vendor: IBM Corporation
> Java home: /usr/lib/jvm/ibm-java-ppc64le-71/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-229.ael7b.ppc64le", arch: "ppc64le", 
> family: "unix"
>Reporter: pascal oliva
> Fix For: 3.0.0
>
> Attachments: HADOOP-11810-1.patch
>
>
> TestSecureRMRegistryOperations failed with the IBM JAVA JVM
> mvn test -X 
> -Dtest=org.apache.hadoop.registry.secure.TestSecureRMRegistryOperations
> Module                 Total  Failure  Error  Skipped
> -----------------------------------------------------
> hadoop-yarn-registry   12     0        12     0
> -----------------------------------------------------
> Total                  12     0        12     0
> With 
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> and 
> Bad JAAS configuration: unrecognized option: storeKey



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11810) Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM

2015-04-07 Thread pascal oliva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pascal oliva updated HADOOP-11810:
--
Fix Version/s: 3.0.0
Affects Version/s: 3.0.0
   2.6.0
   Status: Patch Available  (was: Open)

Tested with success with both IBM JAVA and OpenJDK.



> Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM
> 
>
> Key: HADOOP-11810
> URL: https://issues.apache.org/jira/browse/HADOOP-11810
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.6.0, 3.0.0
> Environment: $ mvn -version
> Apache Maven 3.2.1 (ea8b2b07643dbb1b84b6d16e1f08391b666bc1e9; 
> 2014-02-14T11:37:52-06:00)
> Maven home: /opt/apache-maven-3.2.1
> Java version: 1.7.0, vendor: IBM Corporation
> Java home: /usr/lib/jvm/ibm-java-ppc64le-71/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.10.0-229.ael7b.ppc64le", arch: "ppc64le", 
> family: "unix"
>Reporter: pascal oliva
> Fix For: 3.0.0
>
>
> TestSecureRMRegistryOperations failed with the IBM JAVA JVM
> mvn test -X 
> -Dtest=org.apache.hadoop.registry.secure.TestSecureRMRegistryOperations
> Module                 Total  Failure  Error  Skipped
> -----------------------------------------------------
> hadoop-yarn-registry   12     0        12     0
> -----------------------------------------------------
> Total                  12     0        12     0
> With 
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> and 
> Bad JAAS configuration: unrecognized option: storeKey



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11810) Test TestSecureRMRegistryOperations failed with IBM_JAVA JVM

2015-04-07 Thread pascal oliva (JIRA)
pascal oliva created HADOOP-11810:
-

 Summary: Test TestSecureRMRegistryOperations failed with IBM_JAVA 
JVM
 Key: HADOOP-11810
 URL: https://issues.apache.org/jira/browse/HADOOP-11810
 Project: Hadoop Common
  Issue Type: Test
 Environment: $ mvn -version
Apache Maven 3.2.1 (ea8b2b07643dbb1b84b6d16e1f08391b666bc1e9; 
2014-02-14T11:37:52-06:00)
Maven home: /opt/apache-maven-3.2.1
Java version: 1.7.0, vendor: IBM Corporation
Java home: /usr/lib/jvm/ibm-java-ppc64le-71/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-229.ael7b.ppc64le", arch: "ppc64le", family: 
"unix"

Reporter: pascal oliva


TestSecureRMRegistryOperations failed with the IBM JAVA JVM

mvn test -X 
-Dtest=org.apache.hadoop.registry.secure.TestSecureRMRegistryOperations

Module                 Total  Failure  Error  Skipped
-----------------------------------------------------
hadoop-yarn-registry   12     0        12     0
-----------------------------------------------------
Total                  12     0        12     0

With 
javax.security.auth.login.LoginException: Bad JAAS configuration: unrecognized 
option: isInitiator

and 

Bad JAAS configuration: unrecognized option: storeKey










--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11745) Incorporate ShellCheck static analysis into Jenkins pre-commit builds.

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11745:
--
Hadoop Flags:   (was: Reviewed)

> Incorporate ShellCheck static analysis into Jenkins pre-commit builds.
> --
>
> Key: HADOOP-11745
> URL: https://issues.apache.org/jira/browse/HADOOP-11745
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Reporter: Chris Nauroth
>Assignee: Allen Wittenauer
>Priority: Minor
>
> During the shell script rewrite on trunk, we've been using ShellCheck as a 
> static analysis tool to catch common errors.  We can incorporate this 
> directly into Jenkins pre-commit builds.  Jenkins can reply with a -1 on 
> shell script patches that introduce new ShellCheck warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-11745) Incorporate ShellCheck static analysis into Jenkins pre-commit builds.

2015-04-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-11745:
---
  Assignee: Allen Wittenauer  (was: aaaliu)

> Incorporate ShellCheck static analysis into Jenkins pre-commit builds.
> --
>
> Key: HADOOP-11745
> URL: https://issues.apache.org/jira/browse/HADOOP-11745
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Reporter: Chris Nauroth
>Assignee: Allen Wittenauer
>Priority: Minor
>
> During the shell script rewrite on trunk, we've been using ShellCheck as a 
> static analysis tool to catch common errors.  We can incorporate this 
> directly into Jenkins pre-commit builds.  Jenkins can reply with a -1 on 
> shell script patches that introduce new ShellCheck warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-07 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483372#comment-14483372
 ] 

Larry McCay commented on HADOOP-11717:
--

There may very well be usecases where encryption is necessary. I didn't mean to 
say that it is never needed.
This handler is not trying to do anymore than it does.

Keep in mind that, as a pluggable handler, this mechanism is completely 
replaceable with some other implementation that fits the needs of a given 
cluster deployment better. There is no precedent being set here that can't be 
replaced.

At the same time, furthering the work done in this patch with follow-up 
improvements is a great plan to move it forward. It is much easier than trying 
to do everything at once.

As for the SSO behavior:

Yes, I have configured the signer secrets to match, the cookie domain to work 
across the UIs, and the expiry of the JWT token to work in various ways across 
the UIs, with a single redirect for authentication.
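
As a minimal sketch of that kind of shared configuration (the handler class 
placement, secret file path and domain are illustrative assumptions, not a 
verified recipe):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: the standard hadoop.http.authentication.* settings that let one
// hadoop.auth cookie be accepted across the different web UIs - every daemon
// verifies the same signer secret and scopes the cookie to a shared domain.
public class SsoFilterConfigSketch {
  public static Configuration ssoConfig() {
    Configuration conf = new Configuration();
    // Custom handler class name (the JWT redirect handler from this patch).
    conf.set("hadoop.http.authentication.type",
        "org.apache.hadoop.security.authentication.server."
            + "JWTRedirectAuthenticationHandler");
    // Same secret file on every host, so all UIs sign/verify the same cookie.
    conf.set("hadoop.http.authentication.signature.secret.file",
        "/etc/hadoop/conf/http-secret");                 // illustrative path
    // Shared parent domain, so the browser presents the cookie to every UI.
    conf.set("hadoop.http.authentication.cookie.domain", "example.com");
    return conf;
  }
}
{code}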

The fact that webhdfs has a completely different authentication filter means 
that REST requests work as expected - in this case they will require SPNEGO.

{quote} 
I thought you agreed to have general token stuff in some time in future even 
not now, so why won't we use more general configuration name here right now? 
{quote}

I have no problem with a general token API. The use of a handler-specific 
configuration element shouldn't impact this at all. It is up to the handler to 
pass the appropriate parameters to the API.

Thank you for your insights and discussion here, [~drankye].
We will continue to evolve this work to meet as many use cases as appropriate 
and have a truly useful feature set here.
Having it align with future work will also be great.


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.
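
For readers new to the flow in the description above, here is a hypothetical 
sketch of the "no hadoop.auth cookie and no JWT token, so redirect" decision. 
The cookie name "hadoop-jwt", the query parameter name and the provider URL 
are assumptions for illustration, not the handler's actual defaults.

{code:java}
import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: if neither auth cookie is present, bounce the browser to the
// external authentication service with the original URL so it can return.
public class WebSsoRedirectSketch {
  static final String AUTH_PROVIDER_URL = "https://sso.example.com/login"; // assumed

  static boolean hasCookie(HttpServletRequest req, String name) {
    Cookie[] cookies = req.getCookies();
    if (cookies == null) {
      return false;
    }
    for (Cookie c : cookies) {
      if (name.equals(c.getName())) {
        return true;
      }
    }
    return false;
  }

  static boolean redirectIfUnauthenticated(HttpServletRequest req,
      HttpServletResponse resp) throws IOException {
    if (hasCookie(req, "hadoop.auth") || hasCookie(req, "hadoop-jwt")) {
      return false;                       // already authenticated; carry on
    }
    String originalUrl = req.getRequestURL().toString();
    resp.sendRedirect(AUTH_PROVIDER_URL + "?originalUrl="
        + URLEncoder.encode(originalUrl, "UTF-8"));
    return true;                          // sent to the SSO provider
  }
}
{code}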



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483355#comment-14483355
 ] 

Hudson commented on HADOOP-11717:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #7518 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7518/])
HADOOP-11717. Support JWT tokens for web single sign on to the Hadoop (omalley: 
rev ce635733144456bce6bcf8664c5850ef6b60aa49)
* hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestCertificateUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestJWTRedirectAuthentictionHandler.java
* hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/JWTRedirectAuthenticationHandler.java
* hadoop-common-project/hadoop-auth/pom.xml
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-07 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483341#comment-14483341
 ] 

Owen O'Malley commented on HADOOP-11717:


I think this is good to go. If we want to generalize it further when we have 
additional use cases to support, we can do that. This just provides a plugin 
for web SSO that is useful to users who don't want to use SPNEGO for the web UI.

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-07 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HADOOP-11717:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks, Larry!

> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483330#comment-14483330
 ] 

Kai Zheng commented on HADOOP-11717:


bq.Encryption is great where it is required. It isn't required here ...
This work should not just solve your case; we should not assume all JWT tokens 
are issued the way you expect. If a JWT token carries encrypted attributes, how 
would this handler deal with it? As I said, this can be followed up later, but 
you seemed to disagree entirely.
bq.The reason that it extends AltKerberosAuthenticationHandler is to 
accommodate non-browser clients...
I see no reason we have to couple with AltKerberosAuthenticationHandler. It 
looks rather complicated; even so, why wouldn't we have a dedicated handler to 
handle all the cases? Wouldn't that be easier? If I missed anything please 
correct me. Thanks.
bq.As I answered previously, there is no need to pull the JWT code into a 
generic token handling utility at this point...
I agree. I'm not saying we should do this right now; I will follow up in other 
issues. Agreed?
bq.This handler already works for HDFS and YARN UIs - I have tested them.
Sounds good. Did you get the SSO effect across all the UIs, i.e. only ONE 
redirect to the authentication provider URL in a reasonable time as you move 
between them? How about WebHDFS?
bq.I see little value in the configuration element changes...
I think it's worth paying attention when introducing new configuration items; 
once an item is in use, we'll need to maintain it. Renaming public.key.pem to 
token.signature.publickey would make it easy to add another key later, such as 
token.encryption.privatekey.
bq.Replacing JWT with token does make it more general but this handler really 
is about JWT support
I thought you agreed to add general token support at some point in the future, 
even if not now, so why wouldn't we use a more general configuration name here 
right now?


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-07 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483251#comment-14483251
 ] 

Larry McCay commented on HADOOP-11717:
--

[~drankye] - Encryption is great where it is required. It isn't required here, 
as the cookie should be set HttpOnly, which will not allow access by JS inside 
of pages, and Secure, which will require it to be sent over secure channels - 
it is otherwise managed by the browser.
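
For illustration only, a sketch of how such a cookie could be set with the 
Servlet 3.0 API - an assumption for explanation, not the handler's actual code, 
which may write the Set-Cookie header differently:

{code:java}
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Sketch: mark the auth cookie HttpOnly (no access from page JavaScript) and
// Secure (only sent over HTTPS); the browser handles it from there.
public class AuthCookieSketch {
  static void addAuthCookie(HttpServletResponse response, String signedToken) {
    Cookie cookie = new Cookie("hadoop.auth", signedToken);
    cookie.setHttpOnly(true);        // not readable by JS in the page
    cookie.setSecure(true);          // only transmitted over TLS
    cookie.setPath("/");
    cookie.setDomain("example.com"); // illustrative shared cookie domain
    response.addCookie(cookie);
  }
}
{code}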

The reason that it extends AltKerberosAuthenticationHandler is to accommodate 
non-browser clients - of which there are a few. Requiring all clients of the 
same endpoints to be able to handle a redirect and challenge - typically with a 
form - will not work. Also, changing all these clients to acquire a token that 
is more appropriate for their usage pattern is outside the scope of this patch. 
That usage pattern will be introduced for such clients in a later effort.

As I answered previously, there is no need to pull the JWT code into a generic 
token handling utility at this point, and there is no value in doing so 
prematurely. Slowing progress here in order to do this now - when no other 
consumers need it - would be artificial and unnecessary.

This handler already works for HDFS and YARN UIs - I have tested them.

I see little value in the configuration element changes that you propose:
* Adding token to authentication.provider.url doesn't make it more general.
* Changing public.key.pem to token.signature.publickey loses the 
self-descriptive nature of it being a PEM representation.
* Replacing JWT with token does make it more general, but this handler really 
is about JWT support.

I will consider changing these names a bit more but don't see any reason that 
they can't go in the way they are.
We will certainly want to have them nailed down before backporting the patch to 
another branch.


> Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
> -
>
> Key: HADOOP-11717
> URL: https://issues.apache.org/jira/browse/HADOOP-11717
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
> HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
> HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
> RedirectingWebSSOwithJWTforHadoopWebUIs.pdf
>
>
> Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
> The actual authentication is done by some external service that the handler 
> will redirect to when there is no hadoop.auth cookie and no JWT token found 
> in the incoming request.
> Using JWT provides a number of benefits:
> * It is not tied to any specific authentication mechanism - so buys us many 
> SSO integrations
> * It is cryptographically verifiable for determining whether it can be trusted
> * Checking for expiration allows for a limited lifetime and window for 
> compromised use
> This will introduce the use of nimbus-jose-jwt library for processing, 
> validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-07 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482969#comment-14482969
 ] 

Vinayakumar B commented on HADOOP-11645:


+1. committed to HDFS-7285 branch.

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch, 
> HADOOP-11645-v3.patch
>
>
> This is to define the even higher level API *ErasureCodec* to possibly 
> consider all the essential aspects of an erasure code, as discussed in 
> HDFS-7337 in detail. Generally, it will cover the necessary configuration 
> about which *RawErasureCoder* to use for the code scheme, how to form and 
> lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
> will be used in both client and DataNode, in all the supported modes related 
> to EC.
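
As a purely illustrative sketch of the layering described above (the names and 
methods are assumptions for explanation, not the interfaces committed to the 
HDFS-7285 branch):

{code:java}
// Sketch only: a codec that knows its code scheme and block-group layout and
// hands out raw coders (pure Java or native/ISA-L) to do the actual math.
public interface ErasureCodecSketch {
  interface SchemaSketch {
    int numDataUnits();
    int numParityUnits();
  }
  interface BlockGroupSketch { }    // how blocks are grouped and laid out
  interface RawEncoderSketch {
    void encode(byte[][] dataUnits, byte[][] parityUnits);
  }
  interface RawDecoderSketch {
    void decode(byte[][] inputs, int[] erasedIndexes, byte[][] outputs);
  }

  SchemaSketch getSchema();            // e.g. 6 data units + 3 parity units
  BlockGroupSketch createBlockGroup(); // layout of one block group
  RawEncoderSketch createRawEncoder(); // implementation chosen by configuration
  RawDecoderSketch createRawDecoder();
}
{code}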



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-07 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HADOOP-11645.

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

Thanks [~drankye].

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch, 
> HADOOP-11645-v3.patch
>
>
> This is to define the even higher level API *ErasureCodec* to possibly 
> consider all the essential aspects of an erasure code, as discussed in 
> HDFS-7337 in detail. Generally, it will cover the necessary configuration 
> about which *RawErasureCoder* to use for the code scheme, how to form and 
> lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
> will be used in both client and DataNode, in all the supported modes related 
> to EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11645:
---
Attachment: HADOOP-11645-v3.patch

Uploaded a new patch rebased on the branch.

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch, 
> HADOOP-11645-v3.patch
>
>
> This is to define the even higher level API *ErasureCodec* to possibly 
> consider all the essential aspects of an erasure code, as discussed in 
> HDFS-7337 in detail. Generally, it will cover the necessary configuration 
> about which *RawErasureCoder* to use for the code scheme, how to form and 
> lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
> will be used in both client and DataNode, in all the supported modes related 
> to EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482920#comment-14482920
 ] 

Kai Zheng commented on HADOOP-11645:


Thanks Vinay for the review. I will refine the patch based on the latest code.

> Erasure Codec API covering the essential aspects for an erasure code
> 
>
> Key: HADOOP-11645
> URL: https://issues.apache.org/jira/browse/HADOOP-11645
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch
>
>
> This is to define the even higher level API *ErasureCodec* to possibly 
> consider all the essential aspects of an erasure code, as discussed in 
> HDFS-7337 in detail. Generally, it will cover the necessary configuration 
> about which *RawErasureCoder* to use for the code scheme, how to form and 
> lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
> will be used in both client and DataNode, in all the supported modes related 
> to EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11649) Allow to configure multiple erasure codecs

2015-04-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482916#comment-14482916
 ] 

Kai Zheng commented on HADOOP-11649:


Thanks [~vinayrpet] for the review and great thoughts! I will update the patch 
accordingly.

> Allow to configure multiple erasure codecs
> --
>
> Key: HADOOP-11649
> URL: https://issues.apache.org/jira/browse/HADOOP-11649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11649-v1.patch
>
>
> This is to allow configuring the erasure codec and coder in the core-site 
> configuration file.
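
A hypothetical sketch of what such wiring could look like; the property names 
are invented placeholders to illustrate the idea, not keys defined by this 
JIRA:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: declare codecs and map each one to the raw coder implementation
// it should use, driven by (hypothetical) core-site keys.
public class CodecConfigSketch {
  public static Configuration withCodecs() {
    Configuration conf = new Configuration();
    conf.set("io.erasurecode.codecs", "rs,xor");  // hypothetical key and values
    conf.set("io.erasurecode.codec.rs.rawcoder", "NativeRSRawErasureCoder");
    conf.set("io.erasurecode.codec.xor.rawcoder", "XORRawErasureCoder");
    return conf;
  }
}
{code}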



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11805:
---
Hadoop Flags: Reviewed

> Better to rename some raw erasure coders
> 
>
> Key: HADOOP-11805
> URL: https://issues.apache.org/jira/browse/HADOOP-11805
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11805-v1.patch
>
>
> While working on more coders, it was found better to rename some existing raw 
> coders for consistency and clarity. As a result, we may have:
> XORRawErasureCoder, in Java
> NativeXORRawErasureCoder, in native
> RSRawErasureCoder, in Java
> NativeRSRawErasureCoder, in native and using ISA-L
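
For illustration only (a hypothetical helper, not code from the patch), the 
symmetric names make a native-vs-Java fallback read naturally:

{code:java}
// Sketch: with the proposed naming, picking the native (ISA-L backed) coder
// when it is available and falling back to pure Java is a one-line decision.
public final class RawCoderNameSketch {
  private RawCoderNameSketch() { }

  // Stand-in for a real native-library availability check.
  static boolean nativeCoderAvailable() {
    return Boolean.getBoolean("sketch.native.ec.available");
  }

  static String rsCoderName() {
    return nativeCoderAvailable() ? "NativeRSRawErasureCoder" : "RSRawErasureCoder";
  }

  static String xorCoderName() {
    return nativeCoderAvailable() ? "NativeXORRawErasureCoder" : "XORRawErasureCoder";
  }
}
{code}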



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-07 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482912#comment-14482912
 ] 

Kai Zheng commented on HADOOP-11805:


Thanks [~zhz] for the review. I just committed it in the branch.

> Better to rename some raw erasure coders
> 
>
> Key: HADOOP-11805
> URL: https://issues.apache.org/jira/browse/HADOOP-11805
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11805-v1.patch
>
>
> While working on more coders, it was found better to rename some existing raw 
> coders for consistency and clarity. As a result, we may have:
> XORRawErasureCoder, in Java
> NativeXORRawErasureCoder, in native
> RSRawErasureCoder, in Java
> NativeRSRawErasureCoder, in native and using ISA-L



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-07 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HADOOP-11805.

   Resolution: Fixed
Fix Version/s: HDFS-7285

> Better to rename some raw erasure coders
> 
>
> Key: HADOOP-11805
> URL: https://issues.apache.org/jira/browse/HADOOP-11805
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11805-v1.patch
>
>
> While working on more coders, it was found better to rename some existing raw 
> coders for consistency and clarity. As a result, we may have:
> XORRawErasureCoder, in Java
> NativeXORRawErasureCoder, in native
> RSRawErasureCoder, in Java
> NativeRSRawErasureCoder, in native and using ISA-L



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11745) Incorporate ShellCheck static analysis into Jenkins pre-commit builds.

2015-04-07 Thread aaaliu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aaaliu resolved HADOOP-11745.
-
  Resolution: Fixed
Assignee: aaaliu
Hadoop Flags: Reviewed

> Incorporate ShellCheck static analysis into Jenkins pre-commit builds.
> --
>
> Key: HADOOP-11745
> URL: https://issues.apache.org/jira/browse/HADOOP-11745
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, scripts
>Reporter: Chris Nauroth
>Assignee: aaaliu
>Priority: Minor
>
> During the shell script rewrite on trunk, we've been using ShellCheck as a 
> static analysis tool to catch common errors.  We can incorporate this 
> directly into Jenkins pre-commit builds.  Jenkins can reply with a -1 on 
> shell script patches that introduce new ShellCheck warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)