[jira] [Updated] (HADOOP-9559) When metrics system is restarted MBean names get incorrectly flagged as dupes

2014-06-19 Thread Mike Liddell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Liddell updated HADOOP-9559:
-

Attachment: (was: HADOOP-9559.2.txt)

 When metrics system is restarted MBean names get incorrectly flagged as dupes
 -

 Key: HADOOP-9559
 URL: https://issues.apache.org/jira/browse/HADOOP-9559
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mostafa Elhemali
 Attachments: HADOOP-9559.2.patch, HADOOP-9559.patch


 In the Metrics2 system, every source gets registered as an MBean name, which 
 gets put into a unique name pool in the singleton DefaultMetricsSystem 
 object. The problem is that when the metrics system is shut down (which 
 unregisters the MBeans), this unique name pool is left as is, so if the 
 metrics system is started again, every attempt to register the same MBean 
 names fails (the exception is eaten and a warning is logged).
 I think the fix here is to remove the name from the unique name pool if an 
 MBean is unregistered, since it's OK at this point to add it again.
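 A minimal sketch of the proposed direction (illustrative only; the class and method names below are hypothetical, not the actual DefaultMetricsSystem code):
{code:java}
// Sketch only: a unique-name pool that forgets a name when its MBean is
// unregistered, so a restart of the metrics system can register it again.
import java.util.HashSet;
import java.util.Set;

public class UniqueNamePool {
  private final Set<String> names = new HashSet<String>();

  // Called on registration; today a duplicate name triggers the warning described above.
  public synchronized String newName(String name) {
    if (!names.add(name)) {
      throw new IllegalStateException("Metrics source " + name + " already exists!");
    }
    return name;
  }

  // Proposed fix: drop the name when the MBean is unregistered,
  // making it legal to register the same name again after a restart.
  public synchronized void removeName(String name) {
    names.remove(name);
  }
}
{code}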



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10710) hadoop.auth cookie is not properly constructed according to RFC2109

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038209#comment-14038209
 ] 

Hadoop QA commented on HADOOP-10710:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12651552/HADOOP-10710.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4115//console

This message is automatically generated.

 hadoop.auth cookie is not properly constructed according to RFC2109
 ---

 Key: HADOOP-10710
 URL: https://issues.apache.org/jira/browse/HADOOP-10710
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur
Assignee: Juan Yu
 Attachments: HADOOP-10710.001.patch, HADOOP-10710.002.patch, 
 HADOOP-10710.003.patch


 It seems that HADOOP-10379 introduced a bug on how hadoop.auth cookies are 
 being constructed.
 Before HADOOP-10379, cookies were constructed using Servlet's {{Cookie}} 
 class and corresponding {{HttpServletResponse}} methods. This was taking care 
 of setting attributes like 'Version=1' and double-quoting the cookie value if 
 necessary.
 HADOOP-10379 changed the cookie creation to use a {{StringBuilder}}, setting 
 values and attributes by hand. This does not take care of setting required 
 attributes like Version, nor of escaping the cookie value.
 While this does not break HadoopAuth {{AuthenticatedURL}} access, it does break 
 access done using {{HttpClient}}. E.g., Solr uses HttpClient and its access has 
 been broken since this change.
 It seems that HADOOP-10379's main objective was to set the 'secure' attribute. 
 Note this can be done using the {{Cookie}} API.
 We should revert the cookie creation logic to use the {{Cookie}} API and take 
 care of the security flag via {{setSecure(boolean)}}.
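 A minimal sketch of what reverting to the {{Cookie}} API could look like (illustrative only; attribute values shown are assumptions, not the actual patch):
{code:java}
// Sketch only: construct the hadoop.auth cookie with the Servlet Cookie API so the
// container handles Version=1, value quoting and the secure flag.
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class AuthCookieSketch {
  public static void addAuthCookie(HttpServletResponse resp, String token, boolean isSecure) {
    Cookie cookie = new Cookie("hadoop.auth", token);
    cookie.setVersion(1);       // RFC 2109 Version attribute
    cookie.setPath("/");
    cookie.setSecure(isSecure); // the attribute HADOOP-10379 set by hand
    resp.addCookie(cookie);     // the container quotes the value if necessary
  }
}
{code}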



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10728) Metrics system for Windows Azure Storage Filesystem

2014-06-19 Thread Mike Liddell (JIRA)
Mike Liddell created HADOOP-10728:
-

 Summary: Metrics system for Windows Azure Storage Filesystem
 Key: HADOOP-10728
 URL: https://issues.apache.org/jira/browse/HADOOP-10728
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Mike Liddell
Assignee: Mike Liddell


Add a metrics2 source for the Windows Azure Filesystem driver that was 
introduced with HADOOP-9629.

AzureFileSystemInstrumentation is the new MetricsSource.  

AzureNativeFilesystemStore and NativeAzureFilesystem have been modified to 
record statistics, and some machinery has been added for the accumulation of 
'rolling average' statistics.

The primary new code appears in the org.apache.hadoop.fs.azure.metrics namespace.
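A minimal, self-contained sketch of what 'rolling average' accumulation machinery can look like (illustrative only; these are not the classes in the attached patch):
{code:java}
// Illustrative only: a fixed-window rolling average of the kind used for
// latency-style statistics.
import java.util.ArrayDeque;
import java.util.Deque;

public class RollingAverageSketch {
  private final int window;                       // number of samples to average over
  private final Deque<Long> samples = new ArrayDeque<Long>();
  private long sum;

  public RollingAverageSketch(int window) {
    this.window = window;
  }

  public synchronized void addSample(long value) {
    samples.addLast(value);
    sum += value;
    if (samples.size() > window) {
      sum -= samples.removeFirst();               // evict the oldest sample
    }
  }

  public synchronized double getAverage() {
    return samples.isEmpty() ? 0.0 : (double) sum / samples.size();
  }
}
{code}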

h2. Credits and history
Credit for this work goes to the early team: [~minwei], [~davidlao], 
[~lengningliu] and [~stojanovic] as well as multiple people who have taken over 
this work since then (hope I don't forget anyone): [~dexterb], Johannes Klein, 
[~ivanmi], Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and 
[~chuanliu].




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10674) Rewrite the PureJavaCrc32 loop for performance improvement

2014-06-19 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10674:
-

Status: Patch Available  (was: Open)

I have not yet changed PureJavaCrc32C since (1) the native CPU support is much 
faster than the software implementation and (2) I do not have time to play with 
it at the moment.

 Rewrite the PureJavaCrc32 loop for performance improvement
 --

 Key: HADOOP-10674
 URL: https://issues.apache.org/jira/browse/HADOOP-10674
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10674_20140609.patch, c10674_20140609b.patch, 
 c10674_20140610.patch, c10674_20140612.patch, c10674_20140619.patch


 Below are some performance improvement opportunities in PureJavaCrc32.
 - eliminate off += 8; len -= 8;
 - replace T8_x_start with hard coded constants
 - eliminate c0 - c7 local variables
 On my machine, there is a 30% to 50% improvement for most of the cases.
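 The sketch below is not the CRC code itself; it only illustrates the kind of loop restructuring listed above (dropping the off/len bookkeeping and the intermediate c0..c7 locals in favour of direct indexing) on a trivial checksum:
{code:java}
// Illustration only: the same structural change applied to a trivial byte sum,
// not the actual table-driven PureJavaCrc32 code.
public class LoopRestructureSketch {
  // Original shape: per-iteration c0..c7 locals plus off/len bookkeeping.
  static int sumBefore(byte[] b, int off, int len) {
    int sum = 0;
    while (len > 7) {
      int c0 = b[off] & 0xff,     c1 = b[off + 1] & 0xff;
      int c2 = b[off + 2] & 0xff, c3 = b[off + 3] & 0xff;
      int c4 = b[off + 4] & 0xff, c5 = b[off + 5] & 0xff;
      int c6 = b[off + 6] & 0xff, c7 = b[off + 7] & 0xff;
      sum += c0 + c1 + c2 + c3 + c4 + c5 + c6 + c7;
      off += 8;
      len -= 8;
    }
    while (len-- > 0) {
      sum += b[off++] & 0xff;
    }
    return sum;
  }

  // Rewritten shape: a single index, no off/len updates, no intermediate locals.
  static int sumAfter(byte[] b, int off, int len) {
    int sum = 0;
    final int end = off + (len & ~7);
    for (int i = off; i < end; i += 8) {
      sum += (b[i] & 0xff) + (b[i + 1] & 0xff) + (b[i + 2] & 0xff) + (b[i + 3] & 0xff)
           + (b[i + 4] & 0xff) + (b[i + 5] & 0xff) + (b[i + 6] & 0xff) + (b[i + 7] & 0xff);
    }
    for (int i = end; i < off + len; i++) {
      sum += b[i] & 0xff;
    }
    return sum;
  }
}
{code}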



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10674) Rewrite the PureJavaCrc32 loop for performance improvement

2014-06-19 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10674:
-

Attachment: c10674_20140619.patch

c10674_20140619.patch: patch for commit.

 Rewrite the PureJavaCrc32 loop for performance improvement
 --

 Key: HADOOP-10674
 URL: https://issues.apache.org/jira/browse/HADOOP-10674
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10674_20140609.patch, c10674_20140609b.patch, 
 c10674_20140610.patch, c10674_20140612.patch, c10674_20140619.patch


 Below are some performance improvement opportunities in PureJavaCrc32.
 - eliminate off += 8; len -= 8;
 - replace T8_x_start with hard coded constants
 - eliminate c0 - c7 local variables
 On my machine, there is a 30% to 50% improvement for most of the cases.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10728) Metrics system for Windows Azure Storage Filesystem

2014-06-19 Thread Mike Liddell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Liddell updated HADOOP-10728:
--

Attachment: HADOOP-10728.1.patch

 Metrics system for Windows Azure Storage Filesystem
 ---

 Key: HADOOP-10728
 URL: https://issues.apache.org/jira/browse/HADOOP-10728
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Mike Liddell
Assignee: Mike Liddell
 Attachments: HADOOP-10728.1.patch


 Add a metrics2 source for the Windows Azure Filesystem driver that was 
 introduced with HADOOP-9629.
 AzureFileSystemInstrumentation is the new MetricsSource.  
 AzureNativeFilesystemStore and NativeAzureFilesystem have been modified to 
 record statistics, and some machinery has been added for the accumulation of 
 'rolling average' statistics.
 The primary new code appears in the org.apache.hadoop.fs.azure.metrics namespace.
 h2. Credits and history
 Credit for this work goes to the early team: [~minwei], [~davidlao], 
 [~lengningliu] and [~stojanovic] as well as multiple people who have taken 
 over this work since then (hope I don't forget anyone): [~dexterb], Johannes 
 Klein, [~ivanmi], Michael Rys, [~mostafae], [~brian_swan], [~mikelid], 
 [~xifang], and [~chuanliu].



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10719) Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider

2014-06-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038237#comment-14038237
 ] 

Andrew Wang commented on HADOOP-10719:
--

I must have missed this in an earlier review, but a very minor nit: the 
Preconditions checks in KeyProvider are longer than 80 chars.

+1 pending that, though; we should re-kick Jenkins too. This compiles for me 
locally.

 Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider
 ---

 Key: HADOOP-10719
 URL: https://issues.apache.org/jira/browse/HADOOP-10719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10719.patch, HADOOP-10719.patch, 
 HADOOP-10719.patch, HADOOP-10719.patch


 This is a follow up on 
 [HDFS-6134|https://issues.apache.org/jira/browse/HDFS-6134?focusedCommentId=14036044page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14036044]
 KeyProvider API should have 2 new methods:
 * KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
 * KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)
 The implementation would do a known transformation on the IV (i.e., XOR the 
 original IV with 0xff).
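 A minimal sketch of the IV transformation described above (illustrative only; the surrounding method signatures may differ in the final patch):
{code:java}
// Sketch only: the "known transformation" on the IV, i.e. XOR every byte with 0xff.
public class EncryptedKeyIvSketch {
  static byte[] flipIV(byte[] iv) {
    byte[] flipped = new byte[iv.length];
    for (int i = 0; i < iv.length; i++) {
      flipped[i] = (byte) (iv[i] ^ 0xff);
    }
    return flipped;
  }

  public static void main(String[] args) {
    byte[] iv = {0x00, 0x0f, (byte) 0xff};
    // prints [-1, -16, 0]: each byte XOR-ed with 0xff
    System.out.println(java.util.Arrays.toString(flipIV(iv)));
  }
}
{code}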



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10719) Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider

2014-06-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038242#comment-14038242
 ] 

Andrew Wang commented on HADOOP-10719:
--

Whoops, there are some longer-than-80-char lines in other places too; please fix 
those up as well while you're at it.

 Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider
 ---

 Key: HADOOP-10719
 URL: https://issues.apache.org/jira/browse/HADOOP-10719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10719.patch, HADOOP-10719.patch, 
 HADOOP-10719.patch, HADOOP-10719.patch


 This is a follow up on 
 [HDFS-6134|https://issues.apache.org/jira/browse/HDFS-6134?focusedCommentId=14036044page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14036044]
 KeyProvider API should have 2 new methods:
 * KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
 * KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)
 The implementation would do a known transformation on the IV (i.e., XOR the 
 original IV with 0xff).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10695) KMSClientProvider should respect a configurable timeout.

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038240#comment-14038240
 ] 

Hadoop QA commented on HADOOP-10695:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651551/HADOOP-10695.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4114//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4114//console

This message is automatically generated.

 KMSClientProvider should respect a configurable timeout.
 

 Key: HADOOP-10695
 URL: https://issues.apache.org/jira/browse/HADOOP-10695
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Mike Yoder
 Attachments: HADOOP-10695.patch


 It'd be good if KMSClientProvider used a timeout, so it doesn't hang forever 
 if the KMServer is down.
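 A minimal sketch of the kind of change being asked for (illustrative only; the configuration key and default value below are assumptions, not necessarily what the patch uses):
{code:java}
// Sketch only: apply connect/read timeouts to the HTTP connection the client opens,
// so a call against a down KMS fails instead of hanging forever.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class KmsTimeoutSketch {
  static final String TIMEOUT_KEY = "hadoop.kms.client.timeout";  // hypothetical key
  static final int DEFAULT_TIMEOUT_MS = 60 * 1000;                // hypothetical default

  static HttpURLConnection open(URL url, int timeoutMs) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setConnectTimeout(timeoutMs);  // fail fast if the KMS is unreachable
    conn.setReadTimeout(timeoutMs);     // fail if the KMS stops responding mid-request
    return conn;
  }
}
{code}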



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2014-06-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038243#comment-14038243
 ] 

Andrew Wang commented on HADOOP-10720:
--

+1 pending HADOOP-10719, thanks tucu

 KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
 ---

 Key: HADOOP-10720
 URL: https://issues.apache.org/jira/browse/HADOOP-10720
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch


 KMS client/server should implement support for generating encrypted keys and 
 decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10713) Refactor CryptoCodec#generateSecureRandom to take a byte[]

2014-06-19 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038261#comment-14038261
 ] 

Charles Lamb commented on HADOOP-10713:
---

+1


 Refactor CryptoCodec#generateSecureRandom to take a byte[]
 --

 Key: HADOOP-10713
 URL: https://issues.apache.org/jira/browse/HADOOP-10713
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-10713.001.patch, HADOOP-10713.002.patch


 Following suit with the Java Random implementations, it'd be better if we 
 switched CryptoCodec#generateSecureRandom to take a byte[] for parity.
 Also, let's document that this method needs to be thread-safe, which is an 
 important consideration for CryptoCodec implementations.
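 A minimal sketch of the refactored shape (illustrative only; modelled on {{java.security.SecureRandom#nextBytes(byte[])}}, not the attached patch):
{code:java}
// Sketch only: the callee fills a caller-supplied buffer, mirroring
// java.util.Random#nextBytes(byte[]) / java.security.SecureRandom#nextBytes(byte[]).
import java.security.SecureRandom;

public class SecureRandomSketch {
  // SecureRandom instances are safe for use by multiple threads.
  private final SecureRandom random = new SecureRandom();

  /** Fill {@code bytes} with secure random data; must be thread-safe. */
  public void generateSecureRandom(byte[] bytes) {
    random.nextBytes(bytes);
  }
}
{code}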



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server

2014-06-19 Thread Junping Du (JIRA)
Junping Du created HADOOP-10729:
---

 Summary: Add tests for PB RPC in case version mismatch of client 
and server
 Key: HADOOP-10729
 URL: https://issues.apache.org/jira/browse/HADOOP-10729
 Project: Hadoop Common
  Issue Type: Test
  Components: ipc
Affects Versions: 2.4.0
Reporter: Junping Du
Assignee: Junping Du


We have ProtocolInfo specified in the protocol interface with version info, but we 
don't have a unit test to verify if/how it works. We should have tests to verify 
that this annotation works as expected.
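A minimal sketch of the kind of test fixture being proposed (illustrative only; the interfaces below are hypothetical and the attached patch may differ):
{code:java}
// Sketch only: two versions of the same protocol distinguished via @ProtocolInfo.
// A test would expose the server-side implementation for one version and check
// how a client built against the other version behaves.
import java.io.IOException;
import org.apache.hadoop.ipc.ProtocolInfo;

@ProtocolInfo(protocolName = "test.FooProtocol", protocolVersion = 1)
interface FooProtocolV1 {
  String ping() throws IOException;
}

@ProtocolInfo(protocolName = "test.FooProtocol", protocolVersion = 2)
interface FooProtocolV2 {
  String ping() throws IOException;
  String echo(String msg) throws IOException; // method added in version 2
}
{code}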



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10312) Shell.ExitCodeException to have more useful toString

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038265#comment-14038265
 ] 

Hadoop QA commented on HADOOP-10312:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12651557/HADOOP-10312-002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4116//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4116//console

This message is automatically generated.

 Shell.ExitCodeException to have more useful toString
 

 Key: HADOOP-10312
 URL: https://issues.apache.org/jira/browse/HADOOP-10312
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: HADOOP-10312-001.patch, HADOOP-10312-002.patch


 Shell's ExitCodeException doesn't include the exit code in the toString 
 value, so isn't that useful in diagnosing container start failures in YARN
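 A minimal sketch of the kind of change being suggested (illustrative only, not the attached patch):
{code:java}
// Sketch only: carry the exit code in the exception and surface it in toString(),
// so YARN container start failures show the code directly in logs.
import java.io.IOException;

public class ExitCodeExceptionSketch extends IOException {
  private final int exitCode;

  public ExitCodeExceptionSketch(int exitCode, String message) {
    super(message);
    this.exitCode = exitCode;
  }

  public int getExitCode() {
    return exitCode;
  }

  @Override
  public String toString() {
    return "ExitCodeException exitCode=" + exitCode + ": " + getMessage();
  }
}
{code}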



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038267#comment-14038267
 ] 

Hadoop QA commented on HADOOP-10720:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651562/COMBO.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4117//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4117//console

This message is automatically generated.

 KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
 ---

 Key: HADOOP-10720
 URL: https://issues.apache.org/jira/browse/HADOOP-10720
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch


 KMS client/server should implement support for generating encrypted keys and 
 decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server

2014-06-19 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-10729:


Attachment: HADOOP-10729.patch

Uploaded a patch.

 Add tests for PB RPC in case version mismatch of client and server
 --

 Key: HADOOP-10729
 URL: https://issues.apache.org/jira/browse/HADOOP-10729
 Project: Hadoop Common
  Issue Type: Test
  Components: ipc
Affects Versions: 2.4.0
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-10729.patch


 We have ProtocolInfo specified in the protocol interface with version info, but 
 we don't have a unit test to verify if/how it works. We should have tests to 
 verify that this annotation works as expected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server

2014-06-19 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-10729:


Status: Patch Available  (was: Open)

 Add tests for PB RPC in case version mismatch of client and server
 --

 Key: HADOOP-10729
 URL: https://issues.apache.org/jira/browse/HADOOP-10729
 Project: Hadoop Common
  Issue Type: Test
  Components: ipc
Affects Versions: 2.4.0
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-10729.patch


 We have ProtocolInfo specified in the protocol interface with version info, but 
 we don't have a unit test to verify if/how it works. We should have tests to 
 verify that this annotation works as expected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10719) Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038280#comment-14038280
 ] 

Hadoop QA commented on HADOOP-10719:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651560/HADOOP-10719.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4118//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4118//console

This message is automatically generated.

 Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider
 ---

 Key: HADOOP-10719
 URL: https://issues.apache.org/jira/browse/HADOOP-10719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10719.patch, HADOOP-10719.patch, 
 HADOOP-10719.patch, HADOOP-10719.patch


 This is a follow up on 
 [HDFS-6134|https://issues.apache.org/jira/browse/HDFS-6134?focusedCommentId=14036044page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14036044]
 KeyProvider API should have 2 new methods:
 * KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
 * KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)
 The implementation would do a known transformation on the IV (i.e., XOR the 
 original IV with 0xff).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9559) When metrics system is restarted MBean names get incorrectly flagged as dupes

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038299#comment-14038299
 ] 

Hadoop QA commented on HADOOP-9559:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651576/HADOOP-9559.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4119//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4119//console

This message is automatically generated.

 When metrics system is restarted MBean names get incorrectly flagged as dupes
 -

 Key: HADOOP-9559
 URL: https://issues.apache.org/jira/browse/HADOOP-9559
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mostafa Elhemali
 Attachments: HADOOP-9559.2.patch, HADOOP-9559.patch


 In the Metrics2 system, every source gets registered as an MBean name, which 
 gets put into a unique name pool in the singleton DefaultMetricsSystem 
 object. The problem is that when the metrics system is shut down (which 
 unregisters the MBeans), this unique name pool is left as is, so if the 
 metrics system is started again, every attempt to register the same MBean 
 names fails (the exception is eaten and a warning is logged).
 I think the fix here is to remove the name from the unique name pool if an 
 MBean is unregistered, since it's OK at this point to add it again.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10607) Create an API to Separate Credentials/Password Storage from Applications

2014-06-19 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-10607:
-

Attachment: 10607-branch-2.patch

Patch to merge to branch-2 added.

 Create an API to Separate Credentials/Password Storage from Applications
 

 Key: HADOOP-10607
 URL: https://issues.apache.org/jira/browse/HADOOP-10607
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Fix For: 3.0.0

 Attachments: 10607-10.patch, 10607-11.patch, 10607-12.patch, 
 10607-2.patch, 10607-3.patch, 10607-4.patch, 10607-5.patch, 10607-6.patch, 
 10607-7.patch, 10607-8.patch, 10607-9.patch, 10607-branch-2.patch, 10607.patch


 As with the filesystem API, we need to provide a generic mechanism to support 
 multiple credential storage mechanisms that are potentially from third 
 parties. 
 We need the ability to eliminate the storage of passwords and secrets in 
 clear text within configuration files or within code.
 Toward that end, I propose an API that is configured using a list of URLs of 
 CredentialProviders. The implementation will look for implementations using 
 the ServiceLoader interface and thus support third party libraries.
 Two providers will be included in this patch: one using the credentials cache 
 in MapReduce jobs and the other using Java KeyStores from either HDFS or the 
 local file system.
 A CredShell CLI, which provides the ability to manage the credentials within 
 the stores, will also be included in this patch.
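 A minimal sketch of how such an API could be consumed (illustrative only; the configuration key, provider URL and alias below are assumptions for illustration):
{code:java}
// Sketch only: resolve a password through a configured list of credential provider
// URLs instead of a clear-text config value.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class CredentialProviderSketch {
  public static char[] resolveKeystorePassword() throws IOException {
    Configuration conf = new Configuration();
    // Point at one or more provider stores, e.g. a Java KeyStore on the local disk.
    conf.set("hadoop.security.credential.provider.path", "jceks://file/tmp/test.jceks");
    // Consults the providers first, falling back to the config value for the alias.
    return conf.getPassword("ssl.server.keystore.password");
  }
}
{code}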



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10717) Missing JSP support in Jetty, 'NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet' when user want to start namenode.

2014-06-19 Thread Dapeng Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038306#comment-14038306
 ] 

Dapeng Sun commented on HADOOP-10717:
-

Hi [#vinayrpet] 
{quote}
Seems like, jsp dependency is required to resolve this issue.
{quote}
I agree with you, thank you.

Hi [#wheat9] 
'NO JSP Support for /' is fixed, which is great.
But the time-out exception is not. You can add {noformat}127.0.0.1 
java.sun.com{noformat} to your /etc/hosts to reproduce it. Thank you.

 Missing JSP support in Jetty, 'NO JSP Support for /, did not find 
 org.apache.jasper.servlet.JspServlet' when user want to start namenode.
 -

 Key: HADOOP-10717
 URL: https://issues.apache.org/jira/browse/HADOOP-10717
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Dapeng Sun
Assignee: Dapeng Sun
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-10717-disable-jsp.000.patch, 
 HADOOP-10717-disable-jsp.001.patch, HADOOP-10717.patch


 When a user wants to start the NameNode, they get the following exception. It is 
 caused by the missing org.mortbay.jetty:jsp-2.1-jetty:jar:6.1.26 dependency in the pom.xml:
 14/06/18 14:55:30 INFO http.HttpServer2: Added global filter 'safety' 
 (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
 14/06/18 14:55:30 INFO http.HttpServer2: Added filter static_user_filter 
 (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
 context hdfs
 14/06/18 14:55:30 INFO http.HttpServer2: Added filter static_user_filter 
 (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
 context static
 14/06/18 14:55:30 INFO http.HttpServer2: Added filter static_user_filter 
 (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
 context logs
 14/06/18 14:55:30 INFO http.HttpServer2: Added filter 
 'org.apache.hadoop.hdfs.web.AuthFilter' 
 (class=org.apache.hadoop.hdfs.web.AuthFilter)
 14/06/18 14:55:30 INFO http.HttpServer2: addJerseyResourcePackage: 
 packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
  pathSpec=/webhdfs/v1/*
 14/06/18 14:55:30 INFO http.HttpServer2: Jetty bound to port 50070
 14/06/18 14:55:30 INFO mortbay.log: jetty-6.1.26
 14/06/18 14:55:30 INFO mortbay.log: NO JSP Support for /, did not find 
 org.apache.jasper.servlet.JspServlet
 14/06/18 14:57:38 WARN mortbay.log: EXCEPTION
 java.net.ConnectException: Connection timed out
 at java.net.PlainSocketImpl.socketConnect(Native Method)
 at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
 at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
 at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
 at java.net.Socket.connect(Socket.java:529)
 at java.net.Socket.connect(Socket.java:478)
 at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:395)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
 at sun.net.www.http.HttpClient.init(HttpClient.java:234)
 at sun.net.www.http.HttpClient.New(HttpClient.java:307)
 at sun.net.www.http.HttpClient.New(HttpClient.java:324)
 at 
 sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970)
 at 
 sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911)
 at 
 sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836)
 at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:677)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(XMLEntityManager.java:1315)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(XMLEntityManager.java:1282)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(XMLDTDScannerImpl.java:283)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(XMLDocumentScannerImpl.java:1194)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(XMLDocumentScannerImpl.java:1090)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:1003)
 at 
 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
 at 

[jira] [Commented] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038326#comment-14038326
 ] 

Hadoop QA commented on HADOOP-10729:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651596/HADOOP-10729.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4121//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4121//console

This message is automatically generated.

 Add tests for PB RPC in case version mismatch of client and server
 --

 Key: HADOOP-10729
 URL: https://issues.apache.org/jira/browse/HADOOP-10729
 Project: Hadoop Common
  Issue Type: Test
  Components: ipc
Affects Versions: 2.4.0
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-10729.patch


 We have ProtocolInfo specified in the protocol interface with version info, but 
 we don't have a unit test to verify if/how it works. We should have tests to 
 verify that this annotation works as expected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HADOOP-10725) Implement listStatus and getFileInfo in the native client

2014-06-19 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10725 started by Colin Patrick McCabe.

 Implement listStatus and getFileInfo in the native client
 -

 Key: HADOOP-10725
 URL: https://issues.apache.org/jira/browse/HADOOP-10725
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10725-pnative.001.patch


 Implement listStatus and getFileInfo in the native client.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10725) Implement listStatus and getFileInfo in the native client

2014-06-19 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10725:
--

Attachment: HADOOP-10725-pnative.001.patch

 Implement listStatus and getFileInfo in the native client
 -

 Key: HADOOP-10725
 URL: https://issues.apache.org/jira/browse/HADOOP-10725
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10725-pnative.001.patch


 Implement listStatus and getFileInfo in the native client.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10279) Create multiplexer, a requirement for the fair queue

2014-06-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10279:
---

Status: Patch Available  (was: Open)

 Create multiplexer, a requirement for the fair queue
 

 Key: HADOOP-10279
 URL: https://issues.apache.org/jira/browse/HADOOP-10279
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Attachments: HADOOP-10279.patch, HADOOP-10279.patch, 
 WeightedRoundRobinMultiplexer.java, subtask2_add_mux.patch


 The Multiplexer helps the FairCallQueue decide which of its internal 
 sub-queues to read from during a poll() or take(). It controls the penalty of 
 being in a lower queue. Without the mux, the FairCallQueue would have issues 
 with starvation of low-priority requests.
 The WeightedRoundRobinMultiplexer is an implementation which uses a weighted 
 round robin approach to muxing the sub-queues. It is configured with an 
 integer list pattern.
 For example: 10, 5, 5, 2 means:
 * Read queue 0 10 times
 * Read queue 1 5 times
 * Read queue 2 5 times
 * Read queue 3 2 times
 * Repeat
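 A minimal, self-contained sketch of the weighted round-robin idea described above (illustrative only, not the attached implementation; weights are assumed positive):
{code:java}
// Sketch only: cycle through sub-queue indexes according to a weight pattern.
// With weights {10, 5, 5, 2} this yields queue 0 ten times, queue 1 five times,
// queue 2 five times, queue 3 twice, then repeats.
public class WeightedRoundRobinSketch {
  private final int[] weights;
  private int queueIndex = 0;   // sub-queue currently being drained
  private int remaining;        // reads left for the current sub-queue

  public WeightedRoundRobinSketch(int... weights) {
    this.weights = weights.clone();
    this.remaining = weights[0];
  }

  /** Returns the index of the sub-queue to read from next. */
  public synchronized int nextQueueIndex() {
    if (remaining == 0) {
      queueIndex = (queueIndex + 1) % weights.length;
      remaining = weights[queueIndex];
    }
    remaining--;
    return queueIndex;
  }
}
{code}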



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10279) Create multiplexer, a requirement for the fair queue

2014-06-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038411#comment-14038411
 ] 

Arpit Agarwal commented on HADOOP-10279:


+1 pending Jenkins.

Thanks for the updated patch Chris.

 Create multiplexer, a requirement for the fair queue
 

 Key: HADOOP-10279
 URL: https://issues.apache.org/jira/browse/HADOOP-10279
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Attachments: HADOOP-10279.patch, HADOOP-10279.patch, 
 WeightedRoundRobinMultiplexer.java, subtask2_add_mux.patch


 The Multiplexer helps the FairCallQueue decide which of its internal 
 sub-queues to read from during a poll() or take(). It controls the penalty of 
 being in a lower queue. Without the mux, the FairCallQueue would have issues 
 with starvation of low-priority requests.
 The WeightedRoundRobinMultiplexer is an implementation which uses a weighted 
 round robin approach to muxing the sub-queues. It is configured with an 
 integer list pattern.
 For example: 10, 5, 5, 2 means:
 * Read queue 0 10 times
 * Read queue 1 5 times
 * Read queue 2 5 times
 * Read queue 3 2 times
 * Repeat



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10710) hadoop.auth cookie is not properly constructed according to RFC2109

2014-06-19 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HADOOP-10710:
-

Attachment: HADOOP-10710.004.patch

Reverted the new unit test and some changes in TestHttpCookieFlag.java; trying to 
figure out why the build fails.

 hadoop.auth cookie is not properly constructed according to RFC2109
 ---

 Key: HADOOP-10710
 URL: https://issues.apache.org/jira/browse/HADOOP-10710
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur
Assignee: Juan Yu
 Attachments: HADOOP-10710.001.patch, HADOOP-10710.002.patch, 
 HADOOP-10710.003.patch, HADOOP-10710.004.patch


 It seems that HADOOP-10379 introduced a bug on how hadoop.auth cookies are 
 being constructed.
 Before HADOOP-10379, cookies were constructed using Servlet's {{Cookie}} 
 class and corresponding {{HttpServletResponse}} methods. This was taking care 
 of setting attributes like 'Version=1' and double-quoting the cookie value if 
 necessary.
 HADOOP-10379 changed the cookie creation to use a {{StringBuilder}}, setting 
 values and attributes by hand. This does not take care of setting required 
 attributes like Version, nor of escaping the cookie value.
 While this does not break HadoopAuth {{AuthenticatedURL}} access, it does break 
 access done using {{HttpClient}}. E.g., Solr uses HttpClient and its access has 
 been broken since this change.
 It seems that HADOOP-10379's main objective was to set the 'secure' attribute. 
 Note this can be done using the {{Cookie}} API.
 We should revert the cookie creation logic to use the {{Cookie}} API and take 
 care of the security flag via {{setSecure(boolean)}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10719) Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider

2014-06-19 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10719:


Attachment: HADOOP-10719.patch

Fixed extra-long lines; I've also removed the redundant new methods, leaving one 
signature for generate and one signature for decrypt.

 Add generateEncryptedKey and decryptEncryptedKey methods to KeyProvider
 ---

 Key: HADOOP-10719
 URL: https://issues.apache.org/jira/browse/HADOOP-10719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10719.patch, HADOOP-10719.patch, 
 HADOOP-10719.patch, HADOOP-10719.patch, HADOOP-10719.patch


 This is a follow up on 
 [HDFS-6134|https://issues.apache.org/jira/browse/HDFS-6134?focusedCommentId=14036044page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14036044]
 KeyProvider API should have 2 new methods:
 * KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
 * KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)
 The implementation would do a known transformation on the IV (i.e., XOR the 
 original IV with 0xff).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2014-06-19 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10720:


Attachment: HADOOP-10720.patch

New patch syncing up with HADOOP-10719.

 KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
 ---

 Key: HADOOP-10720
 URL: https://issues.apache.org/jira/browse/HADOOP-10720
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch


 KMS client/server should implement support for generating encrypted keys and 
 decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2014-06-19 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10720:


Attachment: COMBO.patch

new combo for testpatch.

 KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
 ---

 Key: HADOOP-10720
 URL: https://issues.apache.org/jira/browse/HADOOP-10720
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
 COMBO.patch, HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch, 
 HADOOP-10720.patch, HADOOP-10720.patch


 KMS client/server should implement support for generating encrypted keys and 
 decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10128) Please delete old releases from mirroring system

2014-06-19 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038447#comment-14038447
 ] 

Arun C Murthy commented on HADOOP-10128:


Btw, [~s...@apache.org], I can't remove hadoop-2.0.6-alpha; permission issues. 
Can you please help? Tx.

 Please delete old releases from mirroring system
 

 Key: HADOOP-10128
 URL: https://issues.apache.org/jira/browse/HADOOP-10128
 Project: Hadoop Common
  Issue Type: Bug
 Environment: http://www.apache.org/dist/hadoop/common/
 http://www.apache.org/dist/hadoop/core/
Reporter: Sebb

 To reduce the load on the ASF mirrors, projects are required to delete old 
 releases.
 Please can you remove all non-current releases?
 i.e. anything except
 0.23.9
 1.2.1
 2.2.0
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10128) Please delete old releases from mirroring system

2014-06-19 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038445#comment-14038445
 ] 

Arun C Murthy commented on HADOOP-10128:


[~s...@apache.org] - Thanks for the reminder. I've removed stale releases. 
Please take a look. Thanks.

 Please delete old releases from mirroring system
 

 Key: HADOOP-10128
 URL: https://issues.apache.org/jira/browse/HADOOP-10128
 Project: Hadoop Common
  Issue Type: Bug
 Environment: http://www.apache.org/dist/hadoop/common/
 http://www.apache.org/dist/hadoop/core/
Reporter: Sebb

 To reduce the load on the ASF mirrors, projects are required to delete old 
 releases.
 Please can you remove all non-current releases?
 i.e. anything except
 0.23.9
 1.2.1
 2.2.0
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10710) hadoop.auth cookie is not properly constructed according to RFC2109

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038454#comment-14038454
 ] 

Hadoop QA commented on HADOOP-10710:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12651607/HADOOP-10710.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4123//console

This message is automatically generated.

 hadoop.auth cookie is not properly constructed according to RFC2109
 ---

 Key: HADOOP-10710
 URL: https://issues.apache.org/jira/browse/HADOOP-10710
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur
Assignee: Juan Yu
 Attachments: HADOOP-10710.001.patch, HADOOP-10710.002.patch, 
 HADOOP-10710.003.patch, HADOOP-10710.004.patch


 It seems that HADOOP-10379 introduced a bug on how hadoop.auth cookies are 
 being constructed.
 Before HADOOP-10379, cookies were constructed using Servlet's {{Cookie}} 
 class and corresponding {{HttpServletResponse}} methods. This was taking care 
 of setting attributes like 'Version=1' and double-quoting the cookie value if 
 necessary.
 HADOOP-10379 changed the cookie creation to use a {{StringBuilder}}, setting 
 values and attributes by hand. This does not take care of setting required 
 attributes like Version, nor of escaping the cookie value.
 While this does not break HadoopAuth {{AuthenticatedURL}} access, it does break 
 access done using {{HttpClient}}. E.g., Solr uses HttpClient and its access has 
 been broken since this change.
 It seems that HADOOP-10379's main objective was to set the 'secure' attribute. 
 Note this can be done using the {{Cookie}} API.
 We should revert the cookie creation logic to use the {{Cookie}} API and take 
 care of the security flag via {{setSecure(boolean)}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10279) Create multiplexer, a requirement for the fair queue

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038480#comment-14038480
 ] 

Hadoop QA commented on HADOOP-10279:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651270/HADOOP-10279.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4122//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4122//console

This message is automatically generated.

 Create multiplexer, a requirement for the fair queue
 

 Key: HADOOP-10279
 URL: https://issues.apache.org/jira/browse/HADOOP-10279
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Attachments: HADOOP-10279.patch, HADOOP-10279.patch, 
 WeightedRoundRobinMultiplexer.java, subtask2_add_mux.patch


 The Multiplexer helps the FairCallQueue decide which of its internal 
 sub-queues to read from during a poll() or take(). It controls the penalty of 
 being in a lower queue. Without the mux, the FairCallQueue would have issues 
 with starvation of low-priority requests.
 The WeightedRoundRobinMultiplexer is an implementation which uses a weighted 
 round robin approach to muxing the sub-queues. It is configured with an 
 integer list pattern.
 For example: 10, 5, 5, 2 means:
 * Read queue 0 10 times
 * Read queue 1 5 times
 * Read queue 2 5 times
 * Read queue 3 2 times
 * Repeat



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10674) Rewrite the PureJavaCrc32 loop for performance improvement

2014-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038489#comment-14038489
 ] 

Hadoop QA commented on HADOOP-10674:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651584/c10674_20140619.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4120//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4120//console

This message is automatically generated.

 Rewrite the PureJavaCrc32 loop for performance improvement
 --

 Key: HADOOP-10674
 URL: https://issues.apache.org/jira/browse/HADOOP-10674
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10674_20140609.patch, c10674_20140609b.patch, 
 c10674_20140610.patch, c10674_20140612.patch, c10674_20140619.patch


 Below are some performance improvement opportunities in PureJavaCrc32.
 - eliminate off += 8; len -= 8;
 - replace T8_x_start with hard coded constants
 - eliminate c0 - c7 local variables
 On my machine, there is a 30% to 50% improvement for most of the cases.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10279) Create multiplexer, a requirement for the fair queue

2014-06-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10279:
---

   Resolution: Fixed
Fix Version/s: 2.5.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed this to trunk and branch-2. Thanks for the contribution [~chrilisf]!

 Create multiplexer, a requirement for the fair queue
 

 Key: HADOOP-10279
 URL: https://issues.apache.org/jira/browse/HADOOP-10279
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Fix For: 3.0.0, 2.5.0

 Attachments: HADOOP-10279.patch, HADOOP-10279.patch, 
 WeightedRoundRobinMultiplexer.java, subtask2_add_mux.patch


 The Multiplexer helps the FairCallQueue decide which of its internal 
 sub-queues to read from during a poll() or take(). It controls the penalty of 
 being in a lower queue. Without the mux, the FairCallQueue would have issues 
 with starvation of low-priority requests.
 The WeightedRoundRobinMultiplexer is an implementation which uses a weighted 
 round robin approach to muxing the sub-queues. It is configured with an 
 integer list pattern.
 For example: 10, 5, 5, 2 means:
 * Read queue 0 10 times
 * Read queue 1 5 times
 * Read queue 2 5 times
 * Read queue 3 2 times
 * Repeat



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10710) hadoop.auth cookie is not properly constructed according to RFC2109

2014-06-19 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14038499#comment-14038499
 ] 

Alejandro Abdelnur commented on HADOOP-10710:
-

[~j...@cloudera.com], the append(; VERSION=1) should always be done, not only when 
the token is not null. Other than that it seems fine; have you figured out why the 
compilation is failing in test-patch?
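A tiny sketch of the point above (illustrative only; names are hypothetical): the Version attribute is appended unconditionally, whether or not a token value is present:
{code:java}
// Sketch only: append "; Version=1" unconditionally when building the Set-Cookie
// value, whether or not a token value is present.
public class CookieValueSketch {
  static String buildCookie(String token, boolean secure) {
    StringBuilder sb = new StringBuilder("hadoop.auth=");
    if (token != null && !token.isEmpty()) {
      sb.append('"').append(token).append('"');  // quote the value
    }
    sb.append("; Version=1");                    // always, per the comment above
    sb.append("; Path=/");
    if (secure) {
      sb.append("; Secure");
    }
    return sb.toString();
  }
}
{code}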

 hadoop.auth cookie is not properly constructed according to RFC2109
 ---

 Key: HADOOP-10710
 URL: https://issues.apache.org/jira/browse/HADOOP-10710
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.0
Reporter: Alejandro Abdelnur
Assignee: Juan Yu
 Attachments: HADOOP-10710.001.patch, HADOOP-10710.002.patch, 
 HADOOP-10710.003.patch, HADOOP-10710.004.patch


 It seems that HADOOP-10379 introduced a bug on how hadoop.auth cookies are 
 being constructed.
 Before HADOOP-10379, cookies were constructed using Servlet's {{Cookie}} 
 class and corresponding {{HttpServletResponse}} methods. This was taking care 
 of setting attributes like 'Version=1' and double-quoting the cookie value if 
 necessary.
 HADOOP-10379 changed the cookie creation to use a {{StringBuilder}}, setting 
 values and attributes by hand. This does not take care of setting required 
 attributes like Version, nor of escaping the cookie value.
 While this does not break HadoopAuth {{AuthenticatedURL}} access, it does break 
 access done using {{HttpClient}}. E.g., Solr uses HttpClient and its access has 
 been broken since this change.
 It seems that HADOOP-10379's main objective was to set the 'secure' attribute. 
 Note this can be done using the {{Cookie}} API.
 We should revert the cookie creation logic to use the {{Cookie}} API and take 
 care of the security flag via {{setSecure(boolean)}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

