[jira] [Commented] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006864#comment-14006864
 ] 

Hudson commented on HADOOP-10562:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1781 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1781/])
HADOOP-10562. Fix CHANGES.txt entry again (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596386)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10562. Fix CHANGES.txt (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596378)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.1, 2.4.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Fix For: 3.0.0, 1.3.0, 2.4.1

 Attachments: HADOOP-10562.1.patch, HADOOP-10562.branch-1.1.patch, 
 HADOOP-10562.patch


 Not printing the stack trace makes debugging harder.
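 As a rough illustration of the distinction (a hedged sketch, not the actual 
 patch): logging only the exception's string drops the stack trace, while 
 passing the exception as the last argument preserves it.
 {code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class StackTraceLoggingDemo {
  private static final Log LOG = LogFactory.getLog(StackTraceLoggingDemo.class);

  public static void main(String[] args) {
    try {
      throw new IllegalStateException("token roll failed");
    } catch (Exception e) {
      // Message only: the cause is visible, but the stack trace is lost.
      LOG.error("ExpiredTokenRemover received " + e);
      // Passing the Throwable as the second argument logs the full stack trace.
      LOG.error("ExpiredTokenRemover received " + e, e);
    }
  }
}
 {code}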



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10618) Remove SingleNodeSetup.apt.vm

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006879#comment-14006879
 ] 

Hudson commented on HADOOP-10618:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1781 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1781/])
HADOOP-10618. Remove SingleNodeSetup.apt.vm (Contributed by Akira Ajisaka) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596964)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm


 Remove SingleNodeSetup.apt.vm
 -

 Key: HADOOP-10618
 URL: https://issues.apache.org/jira/browse/HADOOP-10618
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0, 2.5.0

 Attachments: HADOOP-10618.2.patch, HADOOP-10618.patch


 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
  is deprecated and not linked from the left-side menu.
 We should remove the document and use 
 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html
  instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006886#comment-14006886
 ] 

Hudson commented on HADOOP-10562:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1755 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1755/])
HADOOP-10562. Fix CHANGES.txt entry again (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596386)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10562. Fix CHANGES.txt (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596378)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.1, 2.4.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Fix For: 3.0.0, 1.3.0, 2.4.1

 Attachments: HADOOP-10562.1.patch, HADOOP-10562.branch-1.1.patch, 
 HADOOP-10562.patch


 Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10618) Remove SingleNodeSetup.apt.vm

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006901#comment-14006901
 ] 

Hudson commented on HADOOP-10618:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1755 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1755/])
HADOOP-10618. Remove SingleNodeSetup.apt.vm (Contributed by Akira Ajisaka) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596964)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm


 Remove SingleNodeSetup.apt.vm
 -

 Key: HADOOP-10618
 URL: https://issues.apache.org/jira/browse/HADOOP-10618
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0, 2.5.0

 Attachments: HADOOP-10618.2.patch, HADOOP-10618.patch


 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
  is deprecated and not linked from the left-side menu.
 We should remove the document and use 
 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html
  instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HADOOP-10624) Fix some minor typos and add more test cases for hadoop_err

2014-05-23 Thread Wenwu Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10624 started by Wenwu Peng.

 Fix some minor typos and add more test cases for hadoop_err
 ---

 Key: HADOOP-10624
 URL: https://issues.apache.org/jira/browse/HADOOP-10624
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Wenwu Peng
Assignee: Wenwu Peng
 Attachments: HADOOP-10624-pnative.001.patch


 Changes:
 1. Add more test cases to cover the methods hadoop_lerr_alloc and 
 hadoop_uverr_alloc
 2. Fix typos as follows:
 1) Change hadoop_uverr_alloc(int cod to hadoop_uverr_alloc(int code in 
 hadoop_err.h
 2) Change OutOfMemory to OutOfMemoryException to be consistent with other 
 exceptions in hadoop_err.c
 3) Change DBUG to DEBUG in messenger.c
 4) Change DBUG to DEBUG in reactor.c



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9534) Credential Management Framework (CMF)

2014-05-23 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-9534:


Status: Open  (was: Patch Available)

 Credential Management Framework (CMF)
 -

 Key: HADOOP-9534
 URL: https://issues.apache.org/jira/browse/HADOOP-9534
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Larry McCay
Assignee: Larry McCay
  Labels: patch
 Attachments: 
 0001-HADOOP-9534-Credential-Management-Framework-initial-.patch, 
 0002-HADOOP-9534-Credential-Management-Framework-second-iteration-.patch, 
 CMF-overview.txt, HADOOP-9534.patch, HADOOP-9534.patch, HADOOP-9534.patch, 
 HADOOP-9534.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 The credential management framework consists of a library for securing, 
 acquiring, and rolling credentials for a given Hadoop service.
 Specifically, the library will provide:
 1. Password Indirection or Aliasing
 2. Management of identity and trust keystores
 3. Rolling of key pairs and credentials
 4. Discovery of externally provisioned credentials
 5. Service-specific CMF secret protection
 6. Syntax for Aliases within configuration files
 Password Indirection or Aliasing:
 By providing alias-based access to actual secrets stored within a 
 service-specific JCEKS keystore, we are able to eliminate the need for any 
 secret to be stored in clear text on the filesystem. This is currently a red 
 flag during security reviews for many customers.
 Management of Identity and Trust Keystores:
 Service-specific identity and trust keystores will be managed by a 
 combination of the HSSO service and CMF.
 Upon registration with the HSSO service, a dependent service will be able to 
 discover externally provisioned keystores or have them created by the HSSO 
 service on its behalf. The public key of the HSSO service will be provided to 
 the service to be imported into its service-specific trust store.
 Service-specific keystores and credential stores will be protected with the 
 service-specific CMF secret.
 Rolling of Keypairs and Credentials:
 The ability to automate the rolling of PKI keypairs and credentials provides 
 the services with a common facility for discovering new HSSO public keys and 
 the means to roll their own credentials while retaining a number of previous 
 values (as needed).
 Discovery of Externally Provisioned Credentials:
 For environments that want control over certificate generation and 
 provisioning, CMF provides the ability to discover preprovisioned artifacts 
 based on the naming conventions of the artifacts and the use of the 
 service-specific CMF secret to access the credentials within the keystores.
 Service Specific CMF Secret Protection:
 By providing a common facility to prompt for and optionally persist a 
 service-specific CMF secret at service installation/startup, we make it 
 possible to protect all the service-specific security artifacts with this 
 protected secret. The secret itself is protected with a combination of 
 AES 128-bit encryption and file permissions restricted to the 
 service-specific OS user.
 Syntax for Aliases within configuration files:
 In order to facilitate the use of aliases while preserving backward 
 compatibility of config files, we will introduce a syntax for marking a value 
 in a configuration file as an alias. A getSecret(String value)-style utility 
 method will encapsulate recognizing and parsing an alias and retrieving the 
 secret from CMF, or return the provided value as the password.
 For instance, where a properties file would otherwise contain a password 
 directly:
 passwd=supersecret
 we would provide an alias instead:
 passwd=${ALIAS=supersecret}
 At runtime, the value from the properties file is provided to the 
 CMF.getSecret(value) method, which either resolves the alias (where it finds 
 the alias syntax) or returns the value (when there is no alias syntax).
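 A minimal sketch of that alias-resolution contract (the class and method 
 names below are illustrative, not the patch's actual API):
 {code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class AliasResolver {
  // Matches values of the form ${ALIAS=name}.
  private static final Pattern ALIAS = Pattern.compile("\\$\\{ALIAS=(\\w+)\\}");

  /** Resolves an alias when present; otherwise the value is the password. */
  public static String getSecret(String value) {
    Matcher m = ALIAS.matcher(value);
    if (m.matches()) {
      return lookupInKeystore(m.group(1)); // alias syntax found
    }
    return value; // no alias syntax: return the value itself
  }

  private static String lookupInKeystore(String alias) {
    // Placeholder for a JCEKS keystore lookup protected by the CMF secret.
    throw new UnsupportedOperationException("keystore lookup not shown");
  }
}
 {code}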



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-9534) Credential Management Framework (CMF)

2014-05-23 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HADOOP-9534.
-

Resolution: Duplicate

This JIRA has been superseded by HADOOP-10141 and HADOOP-10607. All related 
work will be done there.

 Credential Management Framework (CMF)
 -

 Key: HADOOP-9534
 URL: https://issues.apache.org/jira/browse/HADOOP-9534
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Larry McCay
Assignee: Larry McCay
  Labels: patch
 Attachments: 
 0001-HADOOP-9534-Credential-Management-Framework-initial-.patch, 
 0002-HADOOP-9534-Credential-Management-Framework-second-iteration-.patch, 
 CMF-overview.txt, HADOOP-9534.patch, HADOOP-9534.patch, HADOOP-9534.patch, 
 HADOOP-9534.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 The credential management framework consists of a library for securing, 
 acquiring, and rolling credentials for a given Hadoop service.
 Specifically, the library will provide:
 1. Password Indirection or Aliasing
 2. Management of identity and trust keystores
 3. Rolling of key pairs and credentials
 4. Discovery of externally provisioned credentials
 5. Service-specific CMF secret protection
 6. Syntax for Aliases within configuration files
 Password Indirection or Aliasing:
 By providing alias-based access to actual secrets stored within a 
 service-specific JCEKS keystore, we are able to eliminate the need for any 
 secret to be stored in clear text on the filesystem. This is currently a red 
 flag during security reviews for many customers.
 Management of Identity and Trust Keystores:
 Service-specific identity and trust keystores will be managed by a 
 combination of the HSSO service and CMF.
 Upon registration with the HSSO service, a dependent service will be able to 
 discover externally provisioned keystores or have them created by the HSSO 
 service on its behalf. The public key of the HSSO service will be provided to 
 the service to be imported into its service-specific trust store.
 Service-specific keystores and credential stores will be protected with the 
 service-specific CMF secret.
 Rolling of Keypairs and Credentials:
 The ability to automate the rolling of PKI keypairs and credentials provides 
 the services with a common facility for discovering new HSSO public keys and 
 the means to roll their own credentials while retaining a number of previous 
 values (as needed).
 Discovery of Externally Provisioned Credentials:
 For environments that want control over certificate generation and 
 provisioning, CMF provides the ability to discover preprovisioned artifacts 
 based on the naming conventions of the artifacts and the use of the 
 service-specific CMF secret to access the credentials within the keystores.
 Service Specific CMF Secret Protection:
 By providing a common facility to prompt for and optionally persist a 
 service-specific CMF secret at service installation/startup, we make it 
 possible to protect all the service-specific security artifacts with this 
 protected secret. The secret itself is protected with a combination of 
 AES 128-bit encryption and file permissions restricted to the 
 service-specific OS user.
 Syntax for Aliases within configuration files:
 In order to facilitate the use of aliases while preserving backward 
 compatibility of config files, we will introduce a syntax for marking a value 
 in a configuration file as an alias. A getSecret(String value)-style utility 
 method will encapsulate recognizing and parsing an alias and retrieving the 
 secret from CMF, or return the provided value as the password.
 For instance, where a properties file would otherwise contain a password 
 directly:
 passwd=supersecret
 we would provide an alias instead:
 passwd=${ALIAS=supersecret}
 At runtime, the value from the properties file is provided to the 
 CMF.getSecret(value) method, which either resolves the alias (where it finds 
 the alias syntax) or returns the value (when there is no alias syntax).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10603:


Attachment: HADOOP-10603.10.patch

Hi [~andrew.wang], the new patch includes updates for your and [~clamb]’s new 
comments and the remaining comments from last time. Thanks for your nice 
review :-). If I missed anything, please remind me.
Furthermore, I clarify the following items:

{quote}
An ASCII art diagram showing how padding and the stream offset works would also 
be nice. Javadoc for the special padding handling would be nice.
{quote}
I added more comments for the padding handling.

{quote}
We need to return -1 on EOF for zero-byte reads, see HDFS-5762.
{quote}
We have already handled this, and the new patch includes a test verifying 
that -1 is returned on EOF for zero-byte reads.
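For reference, such a check could look like the following sketch (illustrative 
only; the stream construction is a hypothetical placeholder):
{code}
import static org.junit.Assert.assertEquals;

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.junit.Test;

public class TestZeroByteReadAtEof {
  // Hypothetical factory: the real test would build a crypto stream over a
  // known plaintext file.
  private FSDataInputStream openCryptoStream() throws IOException {
    throw new UnsupportedOperationException("stream construction not shown");
  }

  @Test
  public void zeroByteReadAtEofReturnsMinusOne() throws IOException {
    final long fileLength = 1024; // assumed length of the test data
    FSDataInputStream in = openCryptoStream();
    in.seek(fileLength);                          // position the stream at EOF
    assertEquals(-1, in.read(new byte[0], 0, 0)); // EOF even for a zero-byte read
  }
}
{code}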
 


{quote}
Could also do some basic Precondition validation on the config parameters.
{quote}
Which parameter? Buffer size? If so, we added a precondition in the new patch.

{quote}
getDataLen() is never used
{quote}
{{getDataLen()}} is used by {{TestHdfsCryptoStreams}} in HDFS-6405


 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007205#comment-14007205
 ] 

Yi Liu commented on HADOOP-10603:
-

The new patch is 
[HADOOP-10603.10.patch|https://issues.apache.org/jira/secure/attachment/12646518/HADOOP-10603.10.patch]


 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-3845) equals() method in GenericWritable

2014-05-23 Thread jhanver chand sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jhanver chand sharma updated HADOOP-3845:
-

Attachment: Hadoop-3845.patch

 equals() method in GenericWritable
 --

 Key: HADOOP-3845
 URL: https://issues.apache.org/jira/browse/HADOOP-3845
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: yskhoo
 Attachments: Hadoop-3845.patch


 Missing equals() and hashCode() methods in GenericWritable
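 A hedged sketch of what the added methods might look like (illustrative, not 
 the attached patch): equality delegates to the wrapped Writable returned by 
 get().
 {code}
@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (!(obj instanceof GenericWritable)) {
    return false;
  }
  GenericWritable other = (GenericWritable) obj;
  return get() == null ? other.get() == null : get().equals(other.get());
}

@Override
public int hashCode() {
  return get() == null ? 0 : get().hashCode();
}
 {code}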



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007308#comment-14007308
 ] 

Hudson commented on HADOOP-1:
-

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #294 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/294/])
HBASE-11104 Addendum to fix compilation on hadoop-1 (tedyu: rev 
ef0ee0ff697192b6202ff62766370fbc9f3dfe38)
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestImportTsv.java


 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007457#comment-14007457
 ] 

Charles Lamb commented on HADOOP-10603:
---

Yi,

I'm sorry I didn't get these comments to you sooner.

In general, please add blank lines before all block comments.

AESCTRCryptoCodec.java
+public abstract class AESCTRCryptoCodec extends CryptoCodec {
+  /**
+   * For AES, the algorithm block is fixed size of 128 bits.
+   * @see http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
+   */
+  private static final int AES_BLOCK_SIZE = 16;

+  /**
+   * IV is produced by combining initial IV and the counter using addition.
+   * IV length should be the same as {@link #AES_BLOCK_SIZE}
+   */

The IV is produced by adding the initial IV to the counter. IV length
should be the same as {@link #AES_BLOCK_SIZE}.

+  @Override
+  public void calculateIV(byte[] initIV, long counter, byte[] IV) {
...
+ByteBuffer buf = ByteBuffer.wrap(IV);

add a final decl.

CryptoCodec.java

+  /**
+   * Get block size of a block cipher.

Get the block size of a block cipher.

+   * For different algorithms, the block size may be different.
+   * @return int block size

@return the block size

+   * Get a {@link #org.apache.hadoop.crypto.Encryptor}. 

s/Get a/Get an/

+   * @return Encryptor

@return the Encryptor

+   * @return Decryptor

@return the Decryptor

+   * This interface is only for Counter (CTR) mode. Typically calculating 
+   * IV(Initialization Vector) is up to Encryptor or Decryptor, for 
+   * example {@link #javax.crypto.Cipher} will maintain encryption context 
+   * internally when do encryption/decryption continuously using its 
+   * Cipher#update interface. 

This interface is only needed for AES-CTR mode. The IV is generally
calculated by the Encryptor or Decryptor and maintained as internal
state. For example, a {@link #javax.crypto.Cipher} will maintain its
encryption context internally using the Cipher#update interface.

+   * In Hadoop, multiple nodes may read splits of a file, so decrypting of 
+   * file is not continuous, even for encrypting may be not continuous. For 
+   * each part, we need to calculate the counter through file position.

Encryption/Decryption is not always on the entire file. For example,
in Hadoop, a node may only decrypt a portion of a file (i.e. a
split). In these situations, the counter is derived from the file
position.

+   * <p/>
+   * Typically IV for a file position is produced by combining initial IV and 
+   * the counter using any lossless operation (concatenation, addition, or 
XOR).
+   * @see 
http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_.28CTR.29

The IV can be calculated based on the file position by combining the
initial IV and the counter with a lossless operation (concatenation, addition, 
or XOR).
@see 
http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_.28CTR.29
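For concreteness, a minimal sketch of the addition-based IV derivation under 
discussion (an illustration, not the patch's code): the 16-byte IV is treated 
as a big-endian unsigned integer and the 8-byte counter is added into its 
low-order bytes with carry propagation.

{code}
// Illustrative only: IV = initIV + counter, byte-wise big-endian addition.
static void calculateIV(byte[] initIV, long counter, byte[] iv) {
  int i = iv.length; // 16 for AES
  int j = 0;         // number of counter bytes consumed (a long is 8 bytes)
  int sum = 0;
  while (i-- > 0) {
    // (sum >>> Byte.SIZE) carries the overflow from the previous position.
    sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE);
    if (j++ < 8) {
      sum += (byte) counter & 0xff; // add the next low-order counter byte
      counter >>>= 8;
    }
    iv[i] = (byte) sum;
  }
}
{code}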

CryptoInputStream.java

+public class CryptoInputStream extends FilterInputStream implements 
+Seekable, PositionedReadable, ByteBufferReadable, HasFileDescriptor, 
+CanSetDropBehind, CanSetReadahead, HasEnhancedByteBufferAccess {

Add a newline here please.

+  /**
+   * Whether underlying stream supports 

s/Whether underlying/Whether the underlying/

+  /**
+   * Padding = pos%(algorithm blocksize); Padding is put into {@link 
#inBuffer} 
+   * before any other data goes in. The purpose of padding is to put input data
+   * at proper position.

s/put input data/put the input data/

+  @Override
+  public int read(byte[] b, int off, int len) throws IOException {

+int remaining = outBuffer.remaining();

final int remaining...

+  if (usingByteBufferRead == null) {
+if (in instanceof ByteBufferReadable) {
+  try {
+n = ((ByteBufferReadable) in).read(inBuffer);
+usingByteBufferRead = Boolean.TRUE;
+  } catch (UnsupportedOperationException e) {
+usingByteBufferRead = Boolean.FALSE;
+  }
+}
+if (!usingByteBufferRead.booleanValue()) {
+  n = readFromUnderlyingStream();
+}
+  } else {
+if (usingByteBufferRead.booleanValue()) {
+  n = ((ByteBufferReadable) in).read(inBuffer);
+} else {
+  n = readFromUnderlyingStream();
+}
+  }

For the code above, I wonder if we shouldn't maintain a reference to
the actual ByteBuffer once it is known to be ByteBufferReadable. If
the caller switches BBs, then it is possible that this could throw a
UnsupportedOperationException. So the check would be to see if the BB
was the same one that was already known to be BBReadable, and if not,
then check it again.

+  // Read data from underlying stream.
+  private int readFromUnderlyingStream() throws IOException {
+int toRead = inBuffer.remaining();
+byte[] tmp = getTmpBuf();
+int n = in.read(tmp, 0, 

[jira] [Commented] (HADOOP-10618) Remove SingleNodeSetup.apt.vm

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007616#comment-14007616
 ] 

Hudson commented on HADOOP-10618:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5608 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5608/])
HADOOP-10618. Remove SingleNodeSetup.apt.vm (Contributed by Akira Ajisaka) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1596964)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/SingleNodeSetup.apt.vm


 Remove SingleNodeSetup.apt.vm
 -

 Key: HADOOP-10618
 URL: https://issues.apache.org/jira/browse/HADOOP-10618
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0, 2.5.0

 Attachments: HADOOP-10618.2.patch, HADOOP-10618.patch


 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
  is deprecated and not linked from the left-side menu.
 We should remove the document and use 
 http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html
  instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Affects Version/s: (was: 2.1.1-beta)
   Status: Patch Available  (was: Open)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: HADOOP-9902.patch

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-6356) Add a Cache for AbstractFileSystem in the new FileContext/AbstractFileSystem framework.

2014-05-23 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007671#comment-14007671
 ] 

Chris Nauroth commented on HADOOP-6356:
---

If not a cache, then I do think {{FileContext}} would benefit from having a 
{{close}} method.  Right now, {{FileContext}} doesn't provide any kind of 
reliable shutdown hook where a file system implementor can clean up scarce 
resources (e.g. background threads allocated during usage).
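A sketch of the kind of hook being suggested (purely hypothetical; 
{{FileContext}} has no such method today, and the names here are illustrative):

{code}
import java.io.Closeable;
import java.io.IOException;

// A Closeable file-system facade whose close() gives implementors a reliable
// point to release scarce resources such as background threads.
abstract class ClosableFsFacade implements Closeable {
  /** Subclasses release threads, sockets, caches, etc. here. */
  protected abstract void shutdownResources() throws IOException;

  @Override
  public void close() throws IOException {
    shutdownResources();
  }
}
{code}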

 Add a Cache for AbstractFileSystem in the new FileContext/AbstractFileSystem 
 framework.
 ---

 Key: HADOOP-6356
 URL: https://issues.apache.org/jira/browse/HADOOP-6356
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.22.0
Reporter: Sanjay Radia
Assignee: Sanjay Radia

 The new filesystem framework, FileContext and AbstractFileSystem does not 
 implement a cache for AbstractFileSystem.
 This Jira proposes to add a cache to the new framework just like with the old 
 FileSystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Status: Open  (was: Patch Available)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
 more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9704) Write metrics sink plugin for Hadoop/Graphite

2014-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9704:
-

Status: Open  (was: Patch Available)

 Write metrics sink plugin for Hadoop/Graphite
 -

 Key: HADOOP-9704
 URL: https://issues.apache.org/jira/browse/HADOOP-9704
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 2.0.3-alpha
Reporter: Chu Tong
 Attachments: 
 0001-HADOOP-9704.-Write-metrics-sink-plugin-for-Hadoop-Gr.patch, 
 HADOOP-9704.patch, HADOOP-9704.patch


 Write a metrics sink plugin for Hadoop to send metrics directly to Graphite, 
 in addition to the current Ganglia and file ones.
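 A minimal sketch of what such a sink could look like against the metrics2 
 MetricsSink interface (the config keys and the metric naming scheme are 
 assumptions for illustration, not the attached patch):
 {code}
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsException;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

public class GraphiteSinkSketch implements MetricsSink {
  private Writer writer;

  @Override
  public void init(SubsetConfiguration conf) {
    try {
      // Graphite's plaintext listener, e.g. localhost:2003.
      Socket socket = new Socket(conf.getString("server_host"),
          conf.getInt("server_port"));
      writer = new OutputStreamWriter(socket.getOutputStream());
    } catch (IOException e) {
      throw new MetricsException("Failed to connect to Graphite", e);
    }
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    long timestamp = record.timestamp() / 1000; // Graphite expects seconds
    try {
      for (AbstractMetric metric : record.metrics()) {
        // Plaintext protocol: "<path> <value> <epoch-seconds>\n"
        writer.write(record.context() + "." + record.name() + "."
            + metric.name().replace(' ', '_') + " " + metric.value()
            + " " + timestamp + "\n");
      }
    } catch (IOException e) {
      throw new MetricsException("Error writing metric to Graphite", e);
    }
  }

  @Override
  public void flush() {
    try {
      writer.flush();
    } catch (IOException e) {
      throw new MetricsException("Error flushing metrics to Graphite", e);
    }
  }
}
 {code}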



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007747#comment-14007747
 ] 

Xuan Gong commented on HADOOP-10625:


+1 LGTM

 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a key/value pair into 
 properties, but names are trimmed when loading configuration from a file:
 (In Configuration.java)
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps become problematic:
 1. The user incorrectly sets " hadoop.key"=value (with a space before 
 hadoop.key).
 2. The user tries to get "hadoop.key" but cannot retrieve the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" again and now gets the value, which 
 causes an inconsistency.
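 A small illustration of that sequence (hedged sketch; the key name is made up):
 {code}
import org.apache.hadoop.conf.Configuration;

public class TrimInconsistencyDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set(" hadoop.key", "value");            // name stored with a leading space
    System.out.println(conf.get("hadoop.key"));  // null: the in-memory lookup misses

    // After a writeXml()/addResource() round trip the XML loader trims names,
    // so the same get("hadoop.key") would then return "value" -- the
    // inconsistency described above.
  }
}
 {code}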



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: (was: HADOOP-9902.patch)

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007792#comment-14007792
 ] 

Andrew Wang commented on HADOOP-10603:
--

Hey Charles, how do you feel about committing Yi's base patch and addressing 
these additional comments in a separate JIRA? I'm +1 on the latest rev.

 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007799#comment-14007799
 ] 

Hadoop QA commented on HADOOP-9902:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12628148/HADOOP-9902.txt
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3961//console

This message is automatically generated.

 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-23 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10566:
--

Attachment: HADOOP-10566-branch-2.patch

Attaching the patch for branch-2.

 Refactor proxyservers out of ProxyUsers
 ---

 Key: HADOOP-10566
 URL: https://issues.apache.org/jira/browse/HADOOP-10566
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10566-branch-2.patch, HADOOP-10566.patch, 
 HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch


 HADOOP-10498 added the proxyservers feature to ProxyUsers. It is beneficial 
 to treat this as a separate feature since:
 1. ProxyUsers is per proxy user, whereas proxyservers is per cluster; the 
 cardinality is different.
 2. ProxyUsers.authorize() and ProxyUsers.isproxyUser() are synchronized and 
 hence share the same lock, which impacts performance.
 Since these are two separate features, it will be an improvement to keep them 
 separate. It also enables one to fine-tune each feature independently.
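 A hypothetical sketch of the separation idea (names and structure are 
 illustrative, not the attached patch): per-cluster proxy-server checks live in 
 their own holder with a volatile set swapped on refresh, so reads no longer 
 contend on ProxyUsers' synchronized methods.
 {code}
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public final class ProxyServersSketch {
  private static volatile Set<String> proxyServers = Collections.emptySet();

  private ProxyServersSketch() {}

  /** Atomically swaps in a new immutable set; no lock needed. */
  public static void refresh(Collection<String> servers) {
    proxyServers = Collections.unmodifiableSet(new HashSet<String>(servers));
  }

  /** Lock-free read path, independent of ProxyUsers' lock. */
  public static boolean isProxyServer(String remoteAddr) {
    return proxyServers.contains(remoteAddr);
  }
}
 {code}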



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-23 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007810#comment-14007810
 ] 

Benoy Antony commented on HADOOP-10566:
---

[~sureshms], I have attached the patch for branch-2.
Thanks for committing the patch for trunk.

 Refactor proxyservers out of ProxyUsers
 ---

 Key: HADOOP-10566
 URL: https://issues.apache.org/jira/browse/HADOOP-10566
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10566-branch-2.patch, HADOOP-10566.patch, 
 HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch


 HADOOP-10498 added the proxyservers feature to ProxyUsers. It is beneficial 
 to treat this as a separate feature since:
 1. ProxyUsers is per proxy user, whereas proxyservers is per cluster; the 
 cardinality is different.
 2. ProxyUsers.authorize() and ProxyUsers.isproxyUser() are synchronized and 
 hence share the same lock, which impacts performance.
 Since these are two separate features, it will be an improvement to keep them 
 separate. It also enables one to fine-tune each feature independently.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007827#comment-14007827
 ] 

Charles Lamb commented on HADOOP-10603:
---

Sounds good.

+1


 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007829#comment-14007829
 ] 

Charles Lamb commented on HADOOP-10603:
---

+1



 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10602) Documentation has broken Go Back hyperlinks.

2014-05-23 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10602:
---

Hadoop Flags: Reviewed

+1, pending Jenkins run for the latest patch.  I built the site, and it looked 
good.  Thanks, Akira.  I'm going to hold off committing until Tuesday, 5/27, 
just in case anyone objects to removal of the Go Back links.

 Documentation has broken Go Back hyperlinks.
 --

 Key: HADOOP-10602
 URL: https://issues.apache.org/jira/browse/HADOOP-10602
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.4.0
Reporter: Chris Nauroth
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10602.2.patch, HADOOP-10602.3.patch, 
 HADOOP-10602.patch


 Multiple pages of our documentation have Go Back links that are broken, 
 because they point to an incorrect relative path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007858#comment-14007858
 ] 

Wangda Tan commented on HADOOP-10625:
-

Thanks [~xgong] for the review. Could you please add me to the contributor 
list of hadoop-common?

 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a key/value pair into 
 properties, but names are trimmed when loading configuration from a file:
 (In Configuration.java)
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps become problematic:
 1. The user incorrectly sets " hadoop.key"=value (with a space before 
 hadoop.key).
 2. The user tries to get "hadoop.key" but cannot retrieve the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" again and now gets the value, which 
 causes an inconsistency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-10625:
---

Assignee: Wangda Tan

 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a key/value pair into 
 properties, but names are trimmed when loading configuration from a file:
 (In Configuration.java)
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps become problematic:
 1. The user incorrectly sets " hadoop.key"=value (with a space before 
 hadoop.key).
 2. The user tries to get "hadoop.key" but cannot retrieve the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" again and now gets the value, which 
 causes an inconsistency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-6356) Add a Cache for AbstractFileSystem in the new FileContext/AbstractFileSystem framework.

2014-05-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007875#comment-14007875
 ] 

Colin Patrick McCabe commented on HADOOP-6356:
--

Yeah, it would be nice if {{FileContext}} had a {{close}} method.

 Add a Cache for AbstractFileSystem in the new FileContext/AbstractFileSystem 
 framework.
 ---

 Key: HADOOP-6356
 URL: https://issues.apache.org/jira/browse/HADOOP-6356
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.22.0
Reporter: Sanjay Radia
Assignee: Sanjay Radia

 The new filesystem framework, FileContext and AbstractFileSystem does not 
 implement a cache for AbstractFileSystem.
 This Jira proposes to add a cache to the new framework just like with the old 
 FileSystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007893#comment-14007893
 ] 

Jing Zhao commented on HADOOP-10625:


Hi [~wangda], I've added you as a contributor and assigned the JIRA to you.

 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a key/value pair into 
 properties, but names are trimmed when loading configuration from a file:
 (In Configuration.java)
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps become problematic:
 1. The user incorrectly sets " hadoop.key"=value (with a space before 
 hadoop.key).
 2. The user tries to get "hadoop.key" but cannot retrieve the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" again and now gets the value, which 
 causes an inconsistency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007904#comment-14007904
 ] 

Wangda Tan commented on HADOOP-10625:
-

Thanks for your help, :)

 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a key/value pair into 
 properties, but names are trimmed when loading configuration from a file:
 (In Configuration.java)
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps become problematic:
 1. The user incorrectly sets " hadoop.key"=value (with a space before 
 hadoop.key).
 2. The user tries to get "hadoop.key" but cannot retrieve the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" again and now gets the value, which 
 causes an inconsistency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007914#comment-14007914
 ] 

Yi Liu commented on HADOOP-10603:
-

Thanks Andrew and Charles for the review :-). I will commit the patch later and 
open a new JIRA for Charles' comments on javadoc and code style. Let's add 
[~clamb]'s name to the contributor list when committing, since he made many 
contributions to this JIRA too.

 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10628) Javadoc and a few code style improvements for Crypto input and output streams

2014-05-23 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-10628:
---

 Summary: Javadoc and a few code style improvements for Crypto 
input and output streams
 Key: HADOOP-10628
 URL: https://issues.apache.org/jira/browse/HADOOP-10628
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)


There are some additional comments from [~clamb] related to javadoc and a few 
code style issues on HADOOP-10603; let's fix them in this follow-on JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HADOOP-10628) Javadoc and a few code style improvements for Crypto input and output streams

2014-05-23 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10628 started by Yi Liu.

 Javadoc and a few code style improvements for Crypto input and output streams
 --

 Key: HADOOP-10628
 URL: https://issues.apache.org/jira/browse/HADOOP-10628
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)


 There are some additional comments from [~clamb] related to javadoc and a few 
 code style issues on HADOOP-10603; let's fix them in this follow-on JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007960#comment-14007960
 ] 

Yi Liu commented on HADOOP-10603:
-

I have just committed this to the branch. Thanks, all.
I filed HADOOP-10628 to address [~clamb]'s comments on javadoc and code style.

 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note we cannot use the JDK Cipher Input/Output streams directly because we 
 need to support the additional interfaces that the Hadoop FileSystem streams 
 implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable, CanSetDropBehind).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10628) Javadoc and a few code style improvements for Crypto input and output streams

2014-05-23 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-10628:


Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-10150

 Javadoc and a few code style improvements for Crypto input and output streams
 --

 Key: HADOOP-10628
 URL: https://issues.apache.org/jira/browse/HADOOP-10628
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)


 There are some additional comments from [~clamb] related to javadoc and a few 
 code style issues on HADOOP-10603; let's fix them in this follow-on JIRA.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10603) Crypto input and output streams implementing Hadoop stream interfaces

2014-05-23 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu resolved HADOOP-10603.
-

  Resolution: Fixed
Target Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)
Hadoop Flags: Reviewed

 Crypto input and output streams implementing Hadoop stream interfaces
 -

 Key: HADOOP-10603
 URL: https://issues.apache.org/jira/browse/HADOOP-10603
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: CryptoInputStream.java, CryptoOutputStream.java, 
 HADOOP-10603.1.patch, HADOOP-10603.10.patch, HADOOP-10603.2.patch, 
 HADOOP-10603.3.patch, HADOOP-10603.4.patch, HADOOP-10603.5.patch, 
 HADOOP-10603.6.patch, HADOOP-10603.7.patch, HADOOP-10603.8.patch, 
 HADOOP-10603.9.patch, HADOOP-10603.patch


 A common set of Crypto Input/Output streams. They would be used by 
 CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. 
 Note that we cannot use the JDK Cipher Input/Output streams directly, because 
 we need to support the additional interfaces that the Hadoop FileSystem 
 streams implement (Seekable, PositionedReadable, ByteBufferReadable, 
 HasFileDescriptor, CanSetDropBehind, CanSetReadahead, 
 HasEnhancedByteBufferAccess, Syncable).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10617) Tests for Crypto input and output streams using fake streams implementing Hadoop stream interfaces.

2014-05-23 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu resolved HADOOP-10617.
-

  Resolution: Fixed
Hadoop Flags: Reviewed

I have merged this patch into HADOOP-10603 and committed it to the branch.

 Tests for Crypto input and output streams using fake streams implementing 
 Hadoop stream interfaces.
 

 Key: HADOOP-10617
 URL: https://issues.apache.org/jira/browse/HADOOP-10617
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10617.1.patch, HADOOP-10617.2.patch, 
 HADOOP-10617.3.patch, HADOOP-10617.patch


 Tests for Crypto input and output streams using fake input and output streams 
 that implement the Hadoop stream interfaces, to cover the functionality of 
 Hadoop streams with crypto.
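 A sketch of the kind of fake stream such tests can use: an in-memory stream 
 implementing Seekable and PositionedReadable, so the crypto wrapper's seek 
 and positioned-read paths can be exercised without a real FileSystem. The 
 class name is hypothetical; this is not the test code from the patches.
 {code}
 import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import org.apache.hadoop.fs.PositionedReadable;
 import org.apache.hadoop.fs.Seekable;

 public class FakeSeekableInputStream extends ByteArrayInputStream
     implements Seekable, PositionedReadable {

   public FakeSeekableInputStream(byte[] data) {
     super(data);
   }

   @Override
   public void seek(long position) throws IOException {
     if (position < 0 || position > count) {
       throw new IOException("Cannot seek to " + position);
     }
     pos = (int) position;  // ByteArrayInputStream exposes pos/count/buf
   }

   @Override
   public long getPos() {
     return pos;
   }

   @Override
   public boolean seekToNewSource(long targetPos) {
     return false;  // a single in-memory "replica"
   }

   @Override
   public int read(long position, byte[] buffer, int offset, int length)
       throws IOException {
     if (position >= count) {
       return -1;
     }
     int n = Math.min(length, count - (int) position);
     System.arraycopy(buf, (int) position, buffer, offset, n);
     return n;
   }

   @Override
   public void readFully(long position, byte[] buffer, int offset, int length)
       throws IOException {
     if (read(position, buffer, offset, length) < length) {
       throw new IOException("Reached EOF before reading fully");
     }
   }

   @Override
   public void readFully(long position, byte[] buffer) throws IOException {
     readFully(position, buffer, 0, buffer.length);
   }
 }
 {code}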



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10448) Support pluggable mechanism to specify proxy user settings

2014-05-23 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10448:
--

Attachment: HADOOP-10448.patch

All the dependent patches are in trunk.
Attaching the rebased patch.

[~sureshms], could you please review and commit this?


 Support pluggable mechanism to specify proxy user settings
 --

 Key: HADOOP-10448
 URL: https://issues.apache.org/jira/browse/HADOOP-10448
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch


 We have a requirement to support a large number of superusers (users who 
 impersonate another user; see 
 http://hadoop.apache.org/docs/r1.2.1/Secure_Impersonation.html). 
 Currently each superuser needs to be defined in core-site.xml via proxyuser 
 settings, which becomes cumbersome when there are 1000 entries. 
 It seems useful to have a pluggable mechanism to specify proxy user settings, 
 with the current approach as the default. 
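 A rough sketch of what such a plug-in point might look like; the interface 
 name, method signature, and configuration key below are illustrative 
 assumptions, not necessarily what the attached patch introduces.
 {code}
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;

 // Hypothetical plug-in point: implementations could back proxy-user
 // authorization with LDAP, a database, etc., instead of core-site.xml.
 public interface ProxyUserPolicy extends Configurable {
   /**
    * Authorize the real (super) user in ugi to impersonate the effective
    * user when connecting from remoteAddress; throw if denied.
    */
   void authorize(UserGroupInformation ugi, String remoteAddress)
       throws AuthorizationException;
 }
 {code}
 A deployment could then select an implementation via a (hypothetical) 
 configuration key such as hadoop.security.proxyuser.policy.class, with the 
 existing hadoop.proxyuser.* settings in core-site.xml as the default.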



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10594) Improve Concurrency in Groups

2014-05-23 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007969#comment-14007969
 ] 

Benoy Antony commented on HADOOP-10594:
---

[~daryn], could you please review whether this patch is beneficial?

 Improve Concurrency in Groups
 -

 Key: HADOOP-10594
 URL: https://issues.apache.org/jira/browse/HADOOP-10594
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10594.patch


 The static field GROUPS in Groups can currently be accessed only while 
 holding a lock. This object is effectively immutable after construction and 
 hence can be safely published using a volatile field. This enables threads to 
 access the GROUPS object without holding a lock.
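 A minimal sketch of the safe-publication idiom being described, using 
 simplified stand-ins for Groups/GROUPS rather than the actual class:
 {code}
 public class GroupsHolder {
   // Written rarely, under a lock; read on the hot path with no lock.
   private static volatile GroupsHolder GROUPS;

   public static GroupsHolder getUserToGroupsMappingService() {
     GroupsHolder g = GROUPS;            // single volatile read, no lock
     if (g == null) {
       synchronized (GroupsHolder.class) {
         if (GROUPS == null) {
           // Fully constructed before the volatile write publishes it.
           GROUPS = new GroupsHolder();
         }
         g = GROUPS;
       }
     }
     return g;
   }

   private GroupsHolder() {
     // Effectively immutable: no state is mutated after construction.
   }
 }
 {code}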



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007971#comment-14007971
 ] 

Hadoop QA commented on HADOOP-10625:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12646182/HADOOP-10625.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3964//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3964//console

This message is automatically generated.

 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a k/v pair into the 
 properties, but when loading configuration from a file, names are trimmed 
 (in Configuration.java):
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps are problematic:
 1. A user incorrectly sets " hadoop.key=value" (with a space before hadoop.key).
 2. The user tries to get "hadoop.key" and cannot get the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" and now does get the value, which 
 causes an inconsistency.
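 A sketch of the symmetric behavior this report argues for: trimming the name 
 in both set and get, matching what loading from a file already does. This is 
 a simplified stand-in, not the actual patch.
 {code}
 import java.util.Properties;

 public class TrimmedProperties {
   private final Properties props = new Properties();

   public void set(String name, String value) {
     props.setProperty(name.trim(), value);  // trim on put
   }

   public String get(String name) {
     return props.getProperty(name.trim());  // trim on get
   }

   public static void main(String[] args) {
     TrimmedProperties conf = new TrimmedProperties();
     conf.set(" hadoop.key", "value");           // leading space in the name
     System.out.println(conf.get("hadoop.key")); // prints "value", not null
   }
 }
 {code}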



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10602) Documentation has broken Go Back hyperlinks.

2014-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007979#comment-14007979
 ] 

Hadoop QA commented on HADOOP-10602:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12646287/HADOOP-10602.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-sls:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader
  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3962//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3962//console

This message is automatically generated.

 Documentation has broken Go Back hyperlinks.
 --

 Key: HADOOP-10602
 URL: https://issues.apache.org/jira/browse/HADOOP-10602
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.4.0
Reporter: Chris Nauroth
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10602.2.patch, HADOOP-10602.3.patch, 
 HADOOP-10602.patch


 Multiple pages of our documentation have Go Back links that are broken, 
 because they point to an incorrect relative path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10625) Configuration: names should be trimmed when putting/getting to properties

2014-05-23 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007991#comment-14007991
 ] 

Harsh J commented on HADOOP-10625:
--

Can we add a brief mention of this behaviour to Configuration's
top-level javadoc as well?






-- 
Harsh J


 Configuration: names should be trimmed when putting/getting to properties
 -

 Key: HADOOP-10625
 URL: https://issues.apache.org/jira/browse/HADOOP-10625
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.4.0
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: HADOOP-10625.patch


 Currently, Hadoop does not trim the name when putting a k/v pair into the 
 properties, but when loading configuration from a file, names are trimmed 
 (in Configuration.java):
 {code}
   if ("name".equals(field.getTagName()) && field.hasChildNodes())
     attr = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData().trim());
   if ("value".equals(field.getTagName()) && field.hasChildNodes())
     value = StringInterner.weakIntern(
         ((Text)field.getFirstChild()).getData());
 {code}
 With this behavior, the following steps are problematic:
 1. A user incorrectly sets " hadoop.key=value" (with a space before hadoop.key).
 2. The user tries to get "hadoop.key" and cannot get the value.
 3. The configuration is serialized/deserialized (as is done in MR).
 4. The user tries to get "hadoop.key" and now does get the value, which 
 causes an inconsistency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2014-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14007998#comment-14007998
 ] 

Hudson commented on HADOOP-1:
-

ABORTED: Integrated in HBase-0.98 #313 (See 
[https://builds.apache.org/job/HBase-0.98/313/])
HBASE-11104 Addendum to fix compilation on hadoop-1 (tedyu: rev 
ef0ee0ff697192b6202ff62766370fbc9f3dfe38)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestImportTsv.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java


 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10602) Documentation has broken Go Back hyperlinks.

2014-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008009#comment-14008009
 ] 

Hadoop QA commented on HADOOP-10602:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12646287/HADOOP-10602.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-sls:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3963//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3963//console

This message is automatically generated.

 Documentation has broken Go Back hyperlinks.
 --

 Key: HADOOP-10602
 URL: https://issues.apache.org/jira/browse/HADOOP-10602
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.4.0
Reporter: Chris Nauroth
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10602.2.patch, HADOOP-10602.3.patch, 
 HADOOP-10602.patch


 Multiple pages of our documentation have Go Back links that are broken, 
 because they point to an incorrect relative path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14008012#comment-14008012
 ] 

Hadoop QA commented on HADOOP-10566:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12646627/HADOOP-10566-branch-2.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3965//console

This message is automatically generated.

 Refactor proxyservers out of ProxyUsers
 ---

 Key: HADOOP-10566
 URL: https://issues.apache.org/jira/browse/HADOOP-10566
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10566-branch-2.patch, HADOOP-10566.patch, 
 HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch


 HADOOP-10498 added the proxyservers feature to ProxyUsers. It is beneficial 
 to treat this as a separate feature since:
 1. ProxyUsers is per proxy user, whereas proxyservers is per cluster; the 
 cardinality is different.
 2. ProxyUsers.authorize() and ProxyUsers.isproxyUser() are synchronized and 
 hence share the same lock, which impacts performance.
 Since these are two separate features, keeping them separate is an 
 improvement; it also enables each feature to be fine-tuned independently.
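 A sketch of the separation being proposed: a standalone holder for the 
 per-cluster proxy-server list, published as a volatile immutable set so that 
 membership checks do not contend on ProxyUsers' lock. The names are 
 illustrative, not the actual classes from the patch.
 {code}
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashSet;
 import java.util.Set;

 public class ProxyServersHolder {
   private static volatile Set<String> proxyServers = Collections.emptySet();

   /** Replace the whole set atomically, e.g. on a configuration refresh. */
   public static void refresh(Collection<String> servers) {
     proxyServers = Collections.unmodifiableSet(new HashSet<String>(servers));
   }

   /** Lock-free membership check on the request hot path. */
   public static boolean isProxyServer(String remoteAddr) {
     return proxyServers.contains(remoteAddr);
   }
 }
 {code}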



--
This message was sent by Atlassian JIRA
(v6.2#6252)