[jira] [Commented] (HADOOP-9706) Provide Hadoop Karaf support

2013-07-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701903#comment-13701903
 ] 

Steve Loughran commented on HADOOP-9706:


# Is there any reason why this couldn't fit under {{hadoop-tools}}?
# There are a lot of new version numbers for various libraries - I'd have 
expected these all to be picked up from the main hadoop-project/pom.xml unless 
there's some problem.
# How would we go about adding tests for this? Is there an OSGi container that 
can come up inside JUnit, host - say - a miniDFS cluster, and then let us talk 
to the filesystem? Tests inside the Hadoop codebase are a key way to stop 
regressions. (A rough sketch of the kind of test meant here follows below.)
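
For illustration only, a minimal sketch of the kind of in-JUnit miniDFS test being 
asked about, assuming the hadoop-hdfs test classes (MiniDFSCluster) are visible to 
whatever container hosts the test; the OSGi/Karaf wiring itself is not shown:

{code}
// Sketch: start a MiniDFSCluster from a plain JUnit test and talk to its filesystem.
Configuration conf = new Configuration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
try {
  FileSystem fs = cluster.getFileSystem();
  Path dir = new Path("/osgi-smoke");
  assertTrue(fs.mkdirs(dir));
  assertTrue(fs.exists(dir));
} finally {
  cluster.shutdown();
}
{code}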

 Provide Hadoop Karaf support
 

 Key: HADOOP-9706
 URL: https://issues.apache.org/jira/browse/HADOOP-9706
 Project: Hadoop Common
  Issue Type: Task
  Components: tools
Reporter: Jean-Baptiste Onofré
 Fix For: 3.0.0

 Attachments: HADOOP-9706.patch


 To follow the discussion about OSGi, and in order to move forward, I propose 
 the following hadoop-karaf bundle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9651) Filesystems to throw FileAlreadyExistsException in createFile(path, overwrite=false) when the file exists

2013-07-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701916#comment-13701916
 ] 

Steve Loughran commented on HADOOP-9651:


Core patch looks good, though you need to make sure that your IDE isn't turning 
imports into .* imports - having it leave all imports alone is the best way to 
avoid gratuitous diff/merge problems.


I'll add common contract tests for this, then merge in the exception 
modifications.
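
For illustration only, the contract being discussed is roughly the following (a 
sketch, not the actual test that will be committed); today RawLocalFileSystem 
throws a plain IOException on the second create instead:

{code}
// Sketch: fs is the FileSystem under test, path names a file created just above.
fs.create(path, false).close();               // first create succeeds
try {
  fs.create(path, false);                     // overwrite=false on an existing file
  fail("expected FileAlreadyExistsException");
} catch (FileAlreadyExistsException expected) {
  // the behaviour this JIRA asks every FileSystem to provide
}
{code}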

 Filesystems to throw FileAlreadyExistsException in createFile(path, 
 overwrite=false) when the file exists
 -

 Key: HADOOP-9651
 URL: https://issues.apache.org/jira/browse/HADOOP-9651
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9651.patch


 While HDFS and other filesystems throw a {{FileAlreadyExistsException}} if 
 you try to create a file that exists and you have set {{overwrite=false}}, 
 {{RawLocalFileSystem}} throws a plain {{IOException}}. This makes it 
 impossible to distinguish between a create operation failing because of a 
 fixable problem (the file is there) and something more fundamental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9683) Wrap IpcConnectionContext in RPC headers

2013-07-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702012#comment-13702012
 ] 

Daryn Sharp commented on HADOOP-9683:
-

Thanks Luke.  I removed the logging of bytes read because it's in an exception 
handler that wraps its assignment, so it will always be zero.

 Wrap IpcConnectionContext in RPC headers
 

 Key: HADOOP-9683
 URL: https://issues.apache.org/jira/browse/HADOOP-9683
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Luke Lu
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9683.patch


 After HADOOP-9421, all RPC exchanges (including SASL) are wrapped in RPC 
 headers except IpcConnectionContext, which is still raw protobuf, which makes 
 request pipelining (a desirable feature for things like HDFS-2856) impossible 
 to achieve in a backward compatible way. Let's finish the job and wrap 
 IpcConnectionContext with the RPC request header with the call id of 
 SET_IPC_CONNECTION_CONTEXT. Or simply make it an optional field in the RPC 
 request header that gets set for the first RPC call of a given stream.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8477) Pull in Yahoo! Hadoop Tutorial and update it accordingly.

2013-07-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702033#comment-13702033
 ] 

Robert Joseph Evans commented on HADOOP-8477:
-

Always glad to see more people helping out with the community.  If you want 
some help building the documentation feel free to send me the build error you 
are seeing, or send a mail to u...@hadoop.apache.org

Once you start posting some patches, we'll probably want to create some 
sub-tasks so we can put them in piecemeal.

 

 Pull in Yahoo! Hadoop Tutorial and update it accordingly.
 -

 Key: HADOOP-8477
 URL: https://issues.apache.org/jira/browse/HADOOP-8477
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 1.1.0, 2.0.0-alpha
Reporter: Robert Joseph Evans
 Attachments: tutorial.tgz


 I was able to get the Yahoo! Hadoop tutorial released under an Apache 2.0 
 license.  This allows us to make it an official part of the Hadoop Project.  
 This ticket is to pull the tutorial and update it as needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9562) Create REST interface for HDFS health data

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702102#comment-13702102
 ] 

Suresh Srinivas commented on HADOOP-9562:
-

Trevor, can you please update the summary and the description of the jira? I 
will review the patch shortly.

 Create REST interface for HDFS health data
 --

 Key: HADOOP-9562
 URL: https://issues.apache.org/jira/browse/HADOOP-9562
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Trevor Lorimer
Priority: Minor
 Attachments: HADOOP-9562.diff


 The HDFS health screen (dfshealth.jsp) displays basic Version, Security and 
 Health information concerning the NameNode. Currently this information is 
 accessible from classes in the org.apache.hadoop.hdfs.server.namenode package 
 and cannot be accessed outside the NameNode. This becomes a problem if the 
 data is required to be displayed using a new user interface.
 The proposal is to create a REST interface to expose all the information 
 displayed on dfshealth.jsp using GET methods. Wrapper classes will be created 
 to serve the data to the REST root resource within the hadoop-hdfs project.
 This will enable the HDFS health screen information to be accessed remotely.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9562) Create REST interface for HDFS health data

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702103#comment-13702103
 ] 

Suresh Srinivas commented on HADOOP-9562:
-

After looking at the patch briefly, what is the reason for not doing this the 
same way as all the other JMX/HTTP mechanisms? That should require very little 
change.

 Create REST interface for HDFS health data
 --

 Key: HADOOP-9562
 URL: https://issues.apache.org/jira/browse/HADOOP-9562
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Trevor Lorimer
Priority: Minor
 Attachments: HADOOP-9562.diff


 The HDFS health screen (dfshealth.jsp) displays basic Version, Security and 
 Health information concerning the NameNode. Currently this information is 
 accessible from classes in the org.apache.hadoop.hdfs.server.namenode package 
 and cannot be accessed outside the NameNode. This becomes a problem if the 
 data is required to be displayed using a new user interface.
 The proposal is to create a REST interface to expose all the information 
 displayed on dfshealth.jsp using GET methods. Wrapper classes will be created 
 to serve the data to the REST root resource within the hadoop-hdfs project.
 This will enable the HDFS health screen information to be accessed remotely.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9685) hadoop-config.cmd: builds a classpath that is too long on windows

2013-07-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9685:
--

Hadoop Flags: Reviewed

+1 for the patch.  I'll commit this.

 hadoop-config.cmd: builds a classpath that is too long on windows
 -

 Key: HADOOP-9685
 URL: https://issues.apache.org/jira/browse/HADOOP-9685
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1-win
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 1-win

 Attachments: HADOOP-9685.branch-1-win.patch


 hadoop-config.cmd sets the classpath by listing each jar file in the 
 CLASSPATH. Downstream components often use hadoop-config.cmd to set the 
 CLASSPATH for Hadoop jars, and after adding their own classpath entries the 
 total length often hits the Windows command-line length limit.
 We should try to use classpath wildcards to reduce the length of the 
 classpath.
 http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9685) hadoop-config.cmd: builds a classpath that is too long on windows

2013-07-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9685:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I committed this to branch-1-win.  Thank you for the patch, Raja.

 hadoop-config.cmd: builds a classpath that is too long on windows
 -

 Key: HADOOP-9685
 URL: https://issues.apache.org/jira/browse/HADOOP-9685
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1-win
Reporter: Raja Aluri
Assignee: Raja Aluri
 Fix For: 1-win

 Attachments: HADOOP-9685.branch-1-win.patch


 hadoop-config.cmd sets the classpath by listing each jar file in the 
 CLASSPATH. Downstream components often use hadoop-config.cmd to set the 
 CLASSPATH for Hadoop jars, and after adding their own classpath entries the 
 total length often hits the Windows command-line length limit.
 We should try to use classpath wildcards to reduce the length of the 
 classpath.
 http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9688:


Summary: Add globally unique Client ID to RPC requests  (was: Add globally 
unique request ID to RPC requests)

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9688:


Fix Version/s: 3.0.0

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702132#comment-13702132
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

I committed the patch to trunk. I will wait for a day or so before committing 
it to branch-2.1.0-beta.

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9562) Create REST interface for HDFS health data

2013-07-08 Thread Trevor Lorimer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702136#comment-13702136
 ] 

Trevor Lorimer commented on HADOOP-9562:


Hi Suresh, 
The current JMX mechanism exposes data from the FSNameSystem class, but I 
needed the Status information from the NameNode class, which is currently not 
available in JMX.
The other reason was that CorruptFiles could contain a list of hundreds of 
entries, which seemed out of place next to the amount of data currently 
returned in JMX.
So we decided on adding a REST interface to HDFS, which was not much more work 
than updating the JMX.

Thanks,
Trevor 
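
For readers following along, the shape of such a resource would be roughly the 
following (a sketch only; the class, path and wrapper bean names below are 
invented, not taken from the attached patch):

{code}
// Hypothetical JAX-RS resource; NameNodeHealthInfo is an assumed wrapper bean.
@Path("/namenode/health")
public class NameNodeHealthResource {
  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public NameNodeHealthInfo getHealth() {
    // populated from NameNode/FSNamesystem state by the proposed wrapper classes
    return NameNodeHealthInfo.fetch();
  }
}
{code}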

 Create REST interface for HDFS health data
 --

 Key: HADOOP-9562
 URL: https://issues.apache.org/jira/browse/HADOOP-9562
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Trevor Lorimer
Priority: Minor
 Attachments: HADOOP-9562.diff


 The HDFS health screen (dfshealth.jsp) displays basic Version, Security and 
 Health information concerning the NameNode. Currently this information is 
 accessible from classes in the org.apache.hadoop.hdfs.server.namenode package 
 and cannot be accessed outside the NameNode. This becomes a problem if the 
 data is required to be displayed using a new user interface.
 The proposal is to create a REST interface to expose all the information 
 displayed on dfshealth.jsp using GET methods. Wrapper classes will be created 
 to serve the data to the REST root resource within the hadoop-hdfs project.
 This will enable the HDFS health screen information to be accessed remotely.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702139#comment-13702139
 ] 

Hudson commented on HADOOP-9688:


Integrated in Hadoop-trunk-Commit #4048 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4048/])
HADOOP-9688. Add globally unique Client ID to RPC requests. Contributed by 
Suresh Srinivas. (Revision 1500843)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1500843
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestProtoUtil.java


 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9562) Create REST interface for HDFS health data

2013-07-08 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HADOOP-9562:
---

Description: 
The HDFS health screen (dfshealth.jsp) displays basic Version, Security and 
Health information concerning the NameNode. Currently this information is 
accessible from classes in the org.apache.hadoop.hdfs.server.namenode package 
and cannot be accessed outside the NameNode. This becomes a problem if the data 
is required to be displayed using a new user interface.

The proposal is to create a REST interface to expose the NameNode information 
displayed on dfshealth.jsp using GET methods. Wrapper classes will be created 
to serve the data to the REST root resource within the hadoop-hdfs project.

This will enable the HDFS health screen information to be accessed remotely.

  was:
The HDFS health screen (dfshealth.jsp) displays basic Version, Security and 
Health information concerning the NameNode, currently this information is 
accessible from classes in the org.apache.hadoop,hdfs.server.namenode package 
and cannot be accessed outside the NameNode. This becomes prevalent if the data 
is required to be displayed using a new user interface.

The proposal is to create a REST interface to expose all the information 
displayed on dfshealth.jsp using GET methods. Wrapper classes will be created 
to serve the data to the REST root resource within the hadoop-hdfs project.

This will enable the HDFS health screen information to be accessed remotely.


 Create REST interface for HDFS health data
 --

 Key: HADOOP-9562
 URL: https://issues.apache.org/jira/browse/HADOOP-9562
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Trevor Lorimer
Priority: Minor
 Attachments: HADOOP-9562.diff


 The HDFS health screen (dfshealth.jsp) displays basic Version, Security and 
 Health information concerning the NameNode. Currently this information is 
 accessible from classes in the org.apache.hadoop.hdfs.server.namenode package 
 and cannot be accessed outside the NameNode. This becomes a problem if the 
 data is required to be displayed using a new user interface.
 The proposal is to create a REST interface to expose the NameNode information 
 displayed on dfshealth.jsp using GET methods. Wrapper classes will be created 
 to serve the data to the REST root resource within the hadoop-hdfs project.
 This will enable the HDFS health screen information to be accessed remotely.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702151#comment-13702151
 ] 

Hudson commented on HADOOP-9688:


Integrated in Hadoop-trunk-Commit #4049 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4049/])
HADOOP-9688. Adding a file missed in the commit 1500843 (Revision 1500847)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1500847
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java


 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-08 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-9707:
---

 Summary: Fix register lists for crc32c inline assembly
 Key: HADOOP-9707
 URL: https://issues.apache.org/jira/browse/HADOOP-9707
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


The inline assembly used for the crc32 instructions has an incorrect clobber 
list: the computed CRC values are in-out variables and thus need to use the 
matching constraint syntax in the clobber list.

This doesn't seem to cause a problem now in Hadoop, but may break in a 
different compiler version which allocates registers differently, or may break 
when the same code is used in another context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-08 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9707:


Status: Patch Available  (was: Open)

 Fix register lists for crc32c inline assembly
 -

 Key: HADOOP-9707
 URL: https://issues.apache.org/jira/browse/HADOOP-9707
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-9707.txt


 The inline assembly used for the crc32 instructions has an incorrect clobber 
 list: the computed CRC values are in-out variables and thus need to use the 
 matching constraint syntax in the clobber list.
 This doesn't seem to cause a problem now in Hadoop, but may break in a 
 different compiler version which allocates registers differently, or may 
 break when the same code is used in another context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-08 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9707:


Attachment: hadoop-9707.txt

 Fix register lists for crc32c inline assembly
 -

 Key: HADOOP-9707
 URL: https://issues.apache.org/jira/browse/HADOOP-9707
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-9707.txt


 The inline assembly used for the crc32 instructions has an incorrect clobber 
 list: the computed CRC values are in-out variables and thus need to use the 
 matching constraint syntax in the clobber list.
 This doesn't seem to cause a problem now in Hadoop, but may break in a 
 different compiler version which allocates registers differently, or may 
 break when the same code is used in another context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9307) BufferedFSInputStream.read returns wrong results after certain seeks

2013-07-08 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9307:


   Resolution: Fixed
Fix Version/s: 1.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Hearing no objections, committed to branch-1 as well.

 BufferedFSInputStream.read returns wrong results after certain seeks
 

 Key: HADOOP-9307
 URL: https://issues.apache.org/jira/browse/HADOOP-9307
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.1.0-beta, 1.3.0

 Attachments: hadoop-9307-branch-1.txt, hadoop-9307.txt


 After certain sequences of seek/read, BufferedFSInputStream can silently 
 return data from the wrong part of the file. Further description in first 
 comment below.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702236#comment-13702236
 ] 

Hadoop QA commented on HADOOP-9707:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591237/hadoop-9707.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2749//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2749//console

This message is automatically generated.

 Fix register lists for crc32c inline assembly
 -

 Key: HADOOP-9707
 URL: https://issues.apache.org/jira/browse/HADOOP-9707
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-9707.txt


 The inline assembly used for the crc32 instructions has an incorrect clobber 
 list: the computed CRC values are in-out variables and thus need to use the 
 matching constraint syntax in the clobber list.
 This doesn't seem to cause a problem now in Hadoop, but may break in a 
 different compiler version which allocates registers differently, or may 
 break when the same code is used in another context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9683) Wrap IpcConnectionContext in RPC headers

2013-07-08 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702241#comment-13702241
 ] 

Luke Lu commented on HADOOP-9683:
-

The current count is not very useful in the catch block. I was hoping that you 
could propagate the count via your RpcReplyException. I'm fine with addressing 
it in a separate jira (it's useful as a unique signature for debugging 
alternative client impls) though. +1 for the patch. 

 Wrap IpcConnectionContext in RPC headers
 

 Key: HADOOP-9683
 URL: https://issues.apache.org/jira/browse/HADOOP-9683
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Luke Lu
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9683.patch


 After HADOOP-9421, all RPC exchanges (including SASL) are wrapped in RPC 
 headers except IpcConnectionContext, which is still raw protobuf, which makes 
 request pipelining (a desirable feature for things like HDFS-2856) impossible 
 to achieve in a backward compatible way. Let's finish the job and wrap 
 IpcConnectionContext with the RPC request header with the call id of 
 SET_IPC_CONNECTION_CONTEXT. Or simply make it an optional field in the RPC 
 request header that gets set for the first RPC call of a given stream.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9708) Potential ConcurrentModificationException at AbstractService class.

2013-07-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli moved YARN-504 to HADOOP-9708:
--

  Component/s: (was: resourcemanager)
 Target Version/s:   (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   3.0.0
  Key: HADOOP-9708  (was: YARN-504)
  Project: Hadoop Common  (was: Hadoop YARN)

 Potential ConcurrentModificationException at AbstractService class.
 ---

 Key: HADOOP-9708
 URL: https://issues.apache.org/jira/browse/HADOOP-9708
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Przemyslaw Pretki
Priority: Minor
  Labels: hadoop

 Steps to reproduce the exception:
 {code}
 BreakableService svc = new BreakableService();
 svc.register(new ServiceStateChangeListener() {
   @Override
   public void stateChanged(Service service) {
     if (service.getServiceState() == STATE.STOPPED) {
       service.unregister(this);
     }
   }
 });
 svc.init(new Configuration());
 svc.start();
 svc.stop();
 {code}
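
One conventional way to avoid the ConcurrentModificationException (a sketch only, 
assuming a {{listeners}} collection inside AbstractService; this is not necessarily 
the fix that will land) is to notify from a snapshot so a callback may unregister 
itself safely:

{code}
// Sketch: iterate over a copy so stateChanged() can call unregister(this)
// without invalidating the iterator over the live list.
private void notifyListeners() {
  List<ServiceStateChangeListener> snapshot;
  synchronized (listeners) {
    snapshot = new ArrayList<ServiceStateChangeListener>(listeners);
  }
  for (ServiceStateChangeListener listener : snapshot) {
    listener.stateChanged(this);
  }
}
{code}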

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9432) Add support for markdown .md files in site documentation

2013-07-08 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702289#comment-13702289
 ] 

Luke Lu commented on HADOOP-9432:
-

The doxia markdown module is based on [pegdown|https://github.com/sirthias/pegdown], 
which supports tables nicely.

 Add support for markdown .md files in site documentation
 

 Key: HADOOP-9432
 URL: https://issues.apache.org/jira/browse/HADOOP-9432
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9432.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The markdown syntax for marking up text is something which the {{mvn site}} 
 build can be set up to support alongside the existing APT formatted text.
 Markdown offers many advantages:
  # It's more widely understood.
  # There's tooling support in various text editors (TextMate, an IDEA plugin 
 and others).
  # It can be directly rendered in GitHub.
  # The {{.md}} files can be named {{.md.vm}} to trigger velocity 
 preprocessing, at the expense of direct viewing in GitHub.
 Feature #3 is good as it means that you can point people directly at a doc 
 via a GitHub mirror and have it rendered. 
 I propose adding the options to Maven to enable content to be written as 
 {{.md}} and {{.md.vm}} files in the directory {{src/site/markdown}}. This does 
 not require any changes to the existing {{.apt}} files, which can co-exist and 
 cross-reference each other.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702311#comment-13702311
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-9688:
-

Guess I am late, but what is the uniqueness of the Client ID useful for? And 
clearly without an external service to generate unique IDs in a cluster, this 
won't be easy. So, is that a requirement?

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702321#comment-13702321
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

[~vinodkv] Please see this comment for why the client ID needs to be unique - 
https://issues.apache.org/jira/browse/HADOOP-9688?focusedCommentId=13698436&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13698436

ClientID + callID makes each request uniquely identifiable for the retry cache.

ClientID is a UUID generated using 
http://docs.oracle.com/javase/6/docs/api/java/util/UUID.html
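
In other words (a sketch of the idea only, not the committed code), the client ID 
is simply the 16 raw bytes of a random UUID, generated once per client and sent 
with every call:

{code}
// Sketch: derive a 16-byte client ID from java.util.UUID.
UUID uuid = UUID.randomUUID();
ByteBuffer buf = ByteBuffer.allocate(16);
buf.putLong(uuid.getMostSignificantBits());
buf.putLong(uuid.getLeastSignificantBits());
byte[] clientId = buf.array();  // (clientId, callId) uniquely identifies a request for the retry cache
{code}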

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702327#comment-13702327
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

Some additional discussion about random UUID:
http://stackoverflow.com/questions/2513573/how-good-is-javas-uuid-randomuuid
http://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702418#comment-13702418
 ] 

Akira AJISAKA commented on HADOOP-8873:
---

On 1.2.0, mkdir doesn't fail if the parent directory exists.

{code}
$ hadoop dfs -mkdir parent
$ hadoop dfs -mkdir parent/child

$ hadoop dfs -ls parent
Found 1 items
drwxr-xr-x   - hadoop supergroup  0 2013-07-08 13:35 
/user/hadoop/parent/child
{code}

If the -p option is passed, both parent/child2 and a directory named {{-p}} are 
created, because -p is treated as a path rather than a flag.

{code}
$ hadoop dfs -mkdir -p parent/child2
$ hadoop dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup  0 2013-07-08 13:35 /user/hadoop/-p
drwxr-xr-x   - hadoop supergroup  0 2013-07-08 13:35 /user/hadoop/parent

$ hadoop dfs -ls parent
Found 2 items
drwxr-xr-x   - hadoop supergroup  0 2013-07-08 13:35 
/user/hadoop/parent/child
drwxr-xr-x   - hadoop supergroup  0 2013-07-08 13:35 
/user/hadoop/parent/child2
{code}

 Port HADOOP-8175 (Add mkdir -p flag) to branch-1
 

 Key: HADOOP-8873
 URL: https://issues.apache.org/jira/browse/HADOOP-8873
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Eli Collins
  Labels: newbie

 Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
 to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
 currently requires the -p option to create parent directories but a program 
 that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible

2013-07-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702457#comment-13702457
 ] 

Sergey Shelukhin commented on HADOOP-9487:
--

This warning is also output in the HBase shell.
The latest patch looks reasonable.

 Deprecation warnings in Configuration should go to their own log or otherwise 
 be suppressible
 -

 Key: HADOOP-9487
 URL: https://issues.apache.org/jira/browse/HADOOP-9487
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
 Attachments: HADOOP-9487.patch, HADOOP-9487.patch


 Running local pig jobs triggers large quantities of warnings about deprecated 
 properties - something I don't care about, as I'm not in a position to fix it 
 without delving into Pig. 
 I can suppress them by changing the log level, but that can hide other 
 warnings that may actually matter.
 If there was a special Configuration.deprecated log for all deprecation 
 messages, that log could be suppressed by people who don't want noisy logs.
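
A sketch of what the description asks for (the logger name below follows the 
"Configuration.deprecated" suggestion above and is illustrative, not taken from 
any patch): route deprecation warnings to a dedicated child logger so that only 
that category needs to be silenced:

{code}
// Hypothetical dedicated logger for deprecation warnings only.
private static final Log DEPRECATION_LOG =
    LogFactory.getLog(Configuration.class.getName() + ".deprecated");

// ...used wherever a deprecated key is resolved (oldKey/newKey assumed):
DEPRECATION_LOG.warn(oldKey + " is deprecated. Instead, use " + newKey);
{code}

Users who find the messages noisy could then raise just that logger's level in 
log4j.properties without hiding other Configuration warnings.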

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8873:
--

Affects Version/s: (was: 1.0.0)
   1.2.0

 Port HADOOP-8175 (Add mkdir -p flag) to branch-1
 

 Key: HADOOP-8873
 URL: https://issues.apache.org/jira/browse/HADOOP-8873
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Eli Collins
Assignee: Akira AJISAKA
  Labels: newbie

 Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
 to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
 currently requires the -p option to create parent directories but a program 
 that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-8873:
-

Assignee: Akira AJISAKA

 Port HADOOP-8175 (Add mkdir -p flag) to branch-1
 

 Key: HADOOP-8873
 URL: https://issues.apache.org/jira/browse/HADOOP-8873
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Eli Collins
Assignee: Akira AJISAKA
  Labels: newbie

 Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
 to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
 currently requires the -p option to create parent directories but a program 
 that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9709) Add ability in Hadoop servers (Namenode, Datanode, ResourceManager ) to support multiple QOP (Authentication , Privacy)

2013-07-08 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-9709:


 Summary: Add ability in Hadoop servers (Namenode, Datanode, 
ResourceManager )  to support multiple QOP (Authentication , Privacy) 
 Key: HADOOP-9709
 URL: https://issues.apache.org/jira/browse/HADOOP-9709
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Benoy Antony
Assignee: Benoy Antony


Hadoop servers currently support only one QOP for the whole cluster.
We want Hadoop servers to support different qualities of protection at the same 
time. This will enable different clients to use different QOPs.

A simple usecase would be to define two QOPs:
1. Authentication
2. Privacy (Privacy includes Authentication)

The Hadoop servers and internal clients do Authentication without incurring the 
cost of encryption. External clients use Privacy. 
The Hadoop servers and internal clients are inside the firewall. External 
clients are outside the firewall.

As an enhancement, it is possible to add a pluggable check (e.g. an IP 
whitelist) to identify internal and external clients.

The implementation is simple. 
Each Hadoop server listens on two ports by configuration, with a different QOP 
on each. The servers - NameNode, DataNode, ResourceManager - listen on two 
ports (much like 80 (http) and 443 (https)) for RPC and streaming.  The 
ApplicationMaster uses a range of ports for privacy and non-privacy and picks a 
port and QOP based on the client's config.
The clients specify the port which they are supposed to connect to. Clients 
specify the RPC protection as well as the encryption policy for the streaming 
layer.

This is an umbrella jira. 
I have divided this feature into multiple small tasks. I'll add testcases once 
the approach is reviewed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9710) Modify security layer to support QOP based on ports

2013-07-08 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-9710:


 Summary: Modify security layer  to support QOP based on ports
 Key: HADOOP-9710
 URL: https://issues.apache.org/jira/browse/HADOOP-9710
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony


Hadoop servers currently support only one QOP for the whole cluster.
This jira allows a server to have a different QOP on different ports. 

The QOP is set based on the port.
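
Roughly, the idea reduces to a per-port choice of SASL QOP, along these lines (a 
sketch with invented variable names and ports; the real configuration keys and 
server plumbing are in the attached patch):

{code}
// Hypothetical sketch: pick the SASL QOP from the port a connection arrived on.
Map<Integer, String> qopByPort = new HashMap<Integer, String>();
qopByPort.put(8020, "auth");        // internal clients: authentication only
qopByPort.put(8021, "auth-conf");   // external clients: authentication + privacy

Map<String, String> saslProps = new HashMap<String, String>();
// serverPort (assumed variable): the listener that accepted the connection
saslProps.put(Sasl.QOP, qopByPort.get(serverPort));
{code}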

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9710) Modify security layer to support QOP based on ports

2013-07-08 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-9710:
-

Attachment: HADOOP-9710.patch

The testcases will be added once the approach is reviewed.

 Modify security layer  to support QOP based on ports
 

 Key: HADOOP-9710
 URL: https://issues.apache.org/jira/browse/HADOOP-9710
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-9710.patch


 Hadoop servers currently support only one QOP for the whole cluster.
 This jira allows a server to have a different QOP on different ports. 
 The QOP is set based on the port.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702563#comment-13702563
 ] 

Kihwal Lee commented on HADOOP-9707:


The clobber lists are actually okay. Your change is to the templates. It 
affects register allocation, but not in this case because of the nature of the 
instruction. I don't think it has anything to do with correctness.  At the 
RTL level and in the binary, I see the two pieces of generated code are 
identical, but there is extra information in the RTL.  If code were written in 
a way that made the link between source and assembly code confusing, this could 
be helpful. But it still does not affect the actual code being executed.

Without compiler optimization turned on, the input and output variables are 
copied in and out inside the loop, which starves the pipeline. Even in this 
case, all outputs are copied out because the template specifies all of them.

The patch is okay. I am curious whether you have seen any cases in which it 
breaks.

 Fix register lists for crc32c inline assembly
 -

 Key: HADOOP-9707
 URL: https://issues.apache.org/jira/browse/HADOOP-9707
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-9707.txt


 The inline assembly used for the crc32 instructions has an incorrect clobber 
 list: the computed CRC values are in-out variables and thus need to use the 
 matching constraint syntax in the clobber list.
 This doesn't seem to cause a problem now in Hadoop, but may break in a 
 different compiler version which allocates registers differently, or may 
 break when the same code is used in another context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9709) Add ability in Hadoop servers (Namenode, Datanode, ResourceManager ) to support multiple QOP (Authentication , Privacy)

2013-07-08 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-9709:
-

Description: 
Hadoop Servers currently support only one QOP for the whole cluster.
We want Hadoop servers to support different quality of protection at the same 
time. This will enable different clients to use different QOP.

A simple usecase:

Let each Hadoop server support two QOP .
1.  Authentication
2. Privacy (Privacy includes Authentication) . 

The Hadoop servers and internal clients do Authentication without incurring the 
cost of encryption. External clients use Privacy. 
The Hadoop servers and internal clients are inside the firewall. External 
clients are outside the firewall.

As an enhancement, it is possible to add a pluggable check (e.g. an IP 
whitelist) to identify internal and external clients. 

The implementation is simple. 
Each Hadoop server listens on multiple ports by configuration with different 
QOP. 

For the usecase mentioned above, the servers - NameNode, DataNode, 
ResourceManager - listen on two ports (much like 80 (http) and 443 (https)) for 
RPC and streaming.  The ApplicationMaster uses a range of ports for privacy and 
non-privacy and picks a port and QOP based on the client's config for client 
communication.

The clients specify the port which they are supposed to connect to. Clients 
specify the RPC protection as well as the encryption policy for the streaming 
layer.

This is an umbrella jira . 
I have divided this feature into multiple small tasks. I'll add testcases once 
the approach is reviewed.

  was:
Hadoop Servers currently support only one QOP for the whole cluster.
We want Hadoop servers to support different quality of protection at the same 
time. This will enable different clients to use a different QOP.

A simple usecase will be to define two QOP .
1.  Authentication
2. Privacy (Privacy includes Authentication) . 

The Hadoop servers and internal clients does Authentication without incurring 
cost of encryption. External clients use Privacy. 
The hadoop servers and internal clients are inside the firewall. External 
clients are outside the firewall.

As an enhancement , it is possible to add  a pluggable check (eg. IP whitelist) 
to identify internal and external clients.

The implementation is simple. 
Each Hadoop server listens on two ports by configuration with different QOP. 
The servers - NameNode, DataNode, ResourceManager listen on two ports (much 
like 80(http) and 443(https)) for RPC and Streaming.  ApplicationMaster uses a 
range of ports for privacy and non-privacy and picks up a port and QOP based on 
client's config.
The clients specify  the port which they are suppose to connect to. Clients 
specify the rpc protection  as well encryption policy for streaming layer.

This is an umbrella jira . 
I have divided this feature into multiple small tasks. I'll add testcases once 
the approach is reviewed.


 Add ability in Hadoop servers (Namenode, Datanode, ResourceManager )  to 
 support multiple QOP (Authentication , Privacy) 
 -

 Key: HADOOP-9709
 URL: https://issues.apache.org/jira/browse/HADOOP-9709
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Benoy Antony
Assignee: Benoy Antony

 Hadoop Servers currently support only one QOP for the whole cluster.
 We want Hadoop servers to support different quality of protection at the same 
 time. This will enable different clients to use different QOP.
 A simple usecase:
 Let each Hadoop server support two QOP .
 1.  Authentication
 2. Privacy (Privacy includes Authentication) . 
 The Hadoop servers and internal clients do Authentication without incurring 
 the cost of encryption. External clients use Privacy. 
 The Hadoop servers and internal clients are inside the firewall. External 
 clients are outside the firewall.
 As an enhancement , it is possible to add  a pluggable check (eg. IP 
 whitelist) to identify internal and external clients. 
 The implementation is simple. 
 Each Hadoop server listens on multiple ports by configuration with different 
 QOP. 
 For the usecase mentioned above, the servers - NameNode, DataNode, 
 ResourceManager - listen on two ports (much like 80 (http) and 443 (https)) 
 for RPC and streaming.  The ApplicationMaster uses a range of ports for 
 privacy and non-privacy and picks a port and QOP based on the client's config 
 for client communication.
 The clients specify the port which they are supposed to connect to. Clients 
 specify the RPC protection as well as the encryption policy for the streaming 
 layer.
 This is an umbrella jira . 
 I have divided this feature into multiple small tasks. I'll add testcases 
 once the approach is reviewed.

--
This message is automatically generated by JIRA.

[jira] [Updated] (HADOOP-9691) RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance.

2013-07-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9691:
--

Attachment: HADOOP-9691.3.patch

Attaching version 3 of the patch to use the new {{RpcConstants}} class for 
defining {{INVALID_CALL_ID}}, now that the new class has been committed to 
trunk in HADOOP-9688.

 RPC clients can generate call ID using AtomicInteger instead of synchronizing 
 on the Client instance.
 -

 Key: HADOOP-9691
 URL: https://issues.apache.org/jira/browse/HADOOP-9691
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-9691.1.patch, HADOOP-9691.2.patch, 
 HADOOP-9691.3.patch


 As noted in discussion on HADOOP-9688, we can optimize generation of call ID 
 in the RPC client code.  Currently, it synchronizes on the {{Client}} 
 instance to coordinate access to a shared {{int}}.  We can switch this to 
 {{AtomicInteger}} to avoid lock contention.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9417) Support for symlink resolution in LocalFileSystem / RawLocalFileSystem

2013-07-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702679#comment-13702679
 ] 

Colin Patrick McCabe commented on HADOOP-9417:
--

looks good overall.

{code}
+  private String readLink(Path p) {
+    /* NB: Use readSymbolicLink in java.nio.file.Path once available. Could
+     * use getCanonicalPath in File to get the target of the symlink but that
+     * does not indicate if the given path refers to a symlink.
+     */
+    try {
+      final String path = p.toUri().getPath();
+      return Shell.execCommand(Shell.READ_LINK_COMMAND, path).trim();
+    } catch (IOException x) {
+      return "";
+    }
+  }
{code}

Silent failure seems like a bad idea here.  We should at least be doing what we 
do in the FileNotFound section of getFileLinkStatusInternal.  Currently you 
just pretend it isn't a symlink at all when an error happens... seems wrong.

from FileSystemTestWrapper.java:
{code}
  @Override
  public void mkdir(Path dir, FsPermission permission, boolean createParent)
      throws AccessControlException, FileAlreadyExistsException,
      FileNotFoundException, ParentNotDirectoryException,
      UnsupportedFileSystemException, IOException {
    // Note that there is no mkdir in FileSystem, it always does
    // mkdir -p (creating parent directories).
    fs.mkdirs(dir, permission);
  }
{code}

Sorry, I guess I missed this in a previous review, but this should throw an 
exception if createParent = false, no?
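
A rough sketch of the check being asked about (illustrative only; this is not 
the wrapper's actual code, and the class and method names are made up):

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: refuse to create missing parent directories when createParent is
// false, instead of silently behaving like "mkdir -p".
public class MkdirSketch {
  static void mkdir(FileSystem fs, Path dir, FsPermission permission,
      boolean createParent) throws IOException {
    Path parent = dir.getParent();
    if (!createParent && parent != null && !fs.exists(parent)) {
      throw new FileNotFoundException("Parent does not exist: " + parent);
    }
    fs.mkdirs(dir, permission);
  }
}
{code}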

It bothers me that symlinks on the local filesystem can't link to other 
filesystems.  It seems inconsistent that I can't link to, say, hdfs://foo/bar 
from file:///baz.  But I will open another JIRA to discuss that.

 Support for symlink resolution in LocalFileSystem / RawLocalFileSystem
 --

 Key: HADOOP-9417
 URL: https://issues.apache.org/jira/browse/HADOOP-9417
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9417-1.patch, hadoop-9417-2.patch, 
 hadoop-9417-3.patch, hadoop-9417-4.patch, hadoop-9417-5.patch


 Add symlink resolution support to LocalFileSystem/RawLocalFileSystem as well 
 as tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702699#comment-13702699
 ] 

Suresh Srinivas commented on HADOOP-8873:
-

bq. On 1.2.0, mkdir doesn't fail if parent directory exists.
But does mkdir fail if the directory you are creating exists, rather than the 
parent directory?

bq. If -p option is set, both parent/child2 and -p directories are created.
Which release are you running this on? Assuming it is 2.x, I'm not sure I 
understand how this scenario is related to the discussion in this bug.

 Port HADOOP-8175 (Add mkdir -p flag) to branch-1
 

 Key: HADOOP-8873
 URL: https://issues.apache.org/jira/browse/HADOOP-8873
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Eli Collins
Assignee: Akira AJISAKA
  Labels: newbie

 Per HADOOP-8551 let's port the mkdir -p option to branch-1 for a 1.x release 
 to help users transition to the new shell behavior. In Hadoop 2.x mkdir 
 currently requires the -p option to create parent directories but a program 
 that specifies it won't work on 1.x since it doesn't support this option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702716#comment-13702716
 ] 

Suresh Srinivas commented on HADOOP-9688:
-

Given this is an incompatible change, I propose merging this to 
branch-2.1.0-beta. If I do not hear any objections, I plan to do this by 
tomorrow.

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9688) Add globally unique Client ID to RPC requests

2013-07-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9688:


Priority: Blocker  (was: Major)

 Add globally unique Client ID to RPC requests
 -

 Key: HADOOP-9688
 URL: https://issues.apache.org/jira/browse/HADOOP-9688
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-9688.clientId.1.patch, 
 HADOOP-9688.clientId.patch, HADOOP-9688.patch


 This is a subtask in hadoop-common related to HDFS-4942 to add unique request 
 ID to RPC requests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9691) RPC clients can generate call ID using AtomicInteger instead of synchronizing on the Client instance.

2013-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702752#comment-13702752
 ] 

Hadoop QA commented on HADOOP-9691:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591323/HADOOP-9691.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2750//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2750//console

This message is automatically generated.

 RPC clients can generate call ID using AtomicInteger instead of synchronizing 
 on the Client instance.
 -

 Key: HADOOP-9691
 URL: https://issues.apache.org/jira/browse/HADOOP-9691
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-9691.1.patch, HADOOP-9691.2.patch, 
 HADOOP-9691.3.patch


 As noted in discussion on HADOOP-9688, we can optimize generation of call ID 
 in the RPC client code.  Currently, it synchronizes on the {{Client}} 
 instance to coordinate access to a shared {{int}}.  We can switch this to 
 {{AtomicInteger}} to avoid lock contention.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9699) org.apache.hadoop.fs.FileUtil#canRead and canWrite should return false on SecurityExceptions.

2013-07-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702755#comment-13702755
 ] 

Colin Patrick McCabe commented on HADOOP-9699:
--

I think Steve is right here.  If the problem is confined to the unit tests, 
maybe the solution should be too.  Unless there are people running security 
managers in production for whom this is a problem?

You can move the JIRA by looking under more actions.  Please rename it too if 
you're going that route.

 org.apache.hadoop.fs.FileUtil#canRead and canWrite should return false on 
 SecurityExceptions.
 -

 Key: HADOOP-9699
 URL: https://issues.apache.org/jira/browse/HADOOP-9699
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mark Miller
Priority: Minor
 Attachments: HADOOP-9699.patch


 Currently, if a security manager denies access on these calls, a 
 SecurityException is thrown rather than returning false.
 This causes ugly behavior in MiniDFSCluster#createPermissionsDiagnosisString 
 for example. If you are running with a security manager, that method can hide 
 root exceptions on you because when it tries to create the permissions 
 string, canRead and canWrite can throw security exceptions - the original 
 exception is lost, and the problem may not be permissions related at all (it 
 wasn't in the case that I ran into this).
 Rather than hardening createPermissionsDiagnosisString, it seems like these 
 methods should just treat SecurityExceptions as lack of access.
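
A minimal sketch of the suggested behavior (assumed class and method names, 
not the actual FileUtil patch):

{code}
import java.io.File;

// Sketch: treat a SecurityException from the security manager the same as
// "no access" instead of letting it propagate to the caller.
public class SafeAccessCheck {
  public static boolean canRead(File f) {
    try {
      return f.canRead();
    } catch (SecurityException e) {
      return false; // access denied by the security manager
    }
  }

  public static boolean canWrite(File f) {
    try {
      return f.canWrite();
    } catch (SecurityException e) {
      return false;
    }
  }
}
{code}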

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9703) org.apache.hadoop.ipc.Client leaks threads on stop.

2013-07-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702766#comment-13702766
 ] 

Colin Patrick McCabe commented on HADOOP-9703:
--

I don't think we want one of these per client.  You might consider moving the 
SEND_PARAMS_EXECUTOR into the ClientCache, and having that cache pass it into 
the constructor of {{org.apache.hadoop.ipc.Client}}.

Alternately, you could have some kind of reference-counted static object inside 
the Client class.  That would involve fewer code changes.
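
A rough sketch of the reference-counted alternative (names are illustrative, 
not the actual Client internals):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: the shared send-params executor is created when the first client
// acquires it and shut down when the last client releases it, so stop() no
// longer leaks threads.
public class SharedExecutorSketch {
  private static ExecutorService executor;
  private static int refCount = 0;

  static synchronized ExecutorService acquire() {
    if (refCount++ == 0) {
      executor = Executors.newCachedThreadPool();
    }
    return executor;
  }

  static synchronized void release() {
    if (--refCount == 0) {
      executor.shutdown();
      executor = null;
    }
  }
}
{code}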

 org.apache.hadoop.ipc.Client leaks threads on stop.
 ---

 Key: HADOOP-9703
 URL: https://issues.apache.org/jira/browse/HADOOP-9703
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Tsuyoshi OZAWA
Priority: Minor

 org.apache.hadoop.ipc.Client#stop says "Stop all threads related to this 
 client." but does not shut down the static SEND_PARAMS_EXECUTOR, so usage of 
 this class always leaks threads rather than cleanly closing or shutting down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9418) Add symlink resolution support to DistributedFileSystem

2013-07-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702808#comment-13702808
 ] 

Colin Patrick McCabe commented on HADOOP-9418:
--

{code}
// Don't to a recursive resolve since it requires a full-fledged DFS
{code}

Grammar.  Also, can you be clearer in the comment about why the regular 
delete() won't work here?

{code}
  protected RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path p,
      final PathFilter filter)
      throws IOException {
    ...
    // Fully resolve symlinks in path first
    src = getPathName(resolvePath(p));
{code}

It's not clear to me why the source path needs to be resolved here.  Certainly, 
POSIX doesn't do this in stat().  Doing this here adds an extra getFileStatus 
call to the NN (including the round-trip).  I guess the argument is that since 
symlink resolution is done client-side, the NN can't handle a path with 
symlinks in it?  Unfortunately, that raises the question of what happens when 
someone changes our fully resolved path to include a symlink in between 
resolvePath() and the call to {{namenode.getListing}}.

 Add symlink resolution support to DistributedFileSystem
 ---

 Key: HADOOP-9418
 URL: https://issues.apache.org/jira/browse/HADOOP-9418
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9418-1.patch, hadoop-9418-2.patch, 
 hadoop-9418-3.patch, hadoop-9418-4.patch, hadoop-9418-5.patch


 Add symlink resolution support to DistributedFileSystem as well as tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9660) [WINDOWS] Powershell / cmd parses -Dkey=value from command line as [-Dkey, value] which breaks GenericsOptionParser

2013-07-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702868#comment-13702868
 ] 

Enis Soztutar commented on HADOOP-9660:
---

I've noticed that in HBase, we still have a lot of classes which accept their 
arguments in --key=value or --key=val1,val2,val3 or similar formats. I had 
opened HBASE-8901 for tracking this, in which I created a small .ps1 script to 
pre-parse the arguments. Can you please take a look at that to see whether it 
makes sense or not? 

 [WINDOWS] Powershell / cmd parses -Dkey=value from command line as [-Dkey, 
 value] which breaks GenericsOptionParser
 ---

 Key: HADOOP-9660
 URL: https://issues.apache.org/jira/browse/HADOOP-9660
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts, util
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 3.0.0, 1-win, 2.3.0

 Attachments: hadoop-9660-branch1_v1.patch, 
 hadoop-9660-branch1_v2.patch, hadoop-9660-branch1_v3-addendum.patch, 
 hadoop-9660-branch1_v3.patch, hadoop-9660-branch2_v1.patch, 
 hadoop-9660-branch2_v2.patch, hadoop-9660-branch2_v3.patch


 When parsing parameters to a class implementing Tool, and using ToolRunner, 
 we can pass 
 {code}
 bin/hadoop tool_class -Dkey=value 
 {code}
 However, PowerShell parses the '=' sign itself, and sends it to java as 
 [-Dkey, value], which breaks GenericOptionsParser. 
 Using "-Dkey=value" or '-Dkey=value' does not fix the problem. The only 
 workaround seems to be to trick PS by using: 
 '"-Dkey=value"' (single + double quote)
 In cmd, "-Dkey=value" works, but not '-Dkey=value'. 
 http://stackoverflow.com/questions/4940375/how-do-i-pass-an-equal-sign-when-calling-a-batch-script-in-powershell

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9707) Fix register lists for crc32c inline assembly

2013-07-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702872#comment-13702872
 ] 

Todd Lipcon commented on HADOOP-9707:
-

Hey Kihwal. We did see a case where this code was put into another project and 
it caused problems. It was a few months ago, and now I can't remember whether 
the issue was incorrect results or some kind of crash. [~bockelman] also 
mentioned this to me, that he'd lifted this code into another program and ran 
into an issue where the results for 'crc1' ended up writing over 'crc2'.

 Fix register lists for crc32c inline assembly
 -

 Key: HADOOP-9707
 URL: https://issues.apache.org/jira/browse/HADOOP-9707
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-9707.txt


 The inline assembly used for the crc32 instructions has an incorrect clobber 
 list: the computed CRC values are in-out variables and thus need to use the 
 matching constraint syntax in the clobber list.
 This doesn't seem to cause a problem now in Hadoop, but may break in a 
 different compiler version which allocates registers differently, or may 
 break when the same code is used in another context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9703) org.apache.hadoop.ipc.Client leaks threads on stop.

2013-07-08 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702885#comment-13702885
 ] 

Tsuyoshi OZAWA commented on HADOOP-9703:


[~cmccabe], thanks for comments, and I agree with you. I'll implement it.

 org.apache.hadoop.ipc.Client leaks threads on stop.
 ---

 Key: HADOOP-9703
 URL: https://issues.apache.org/jira/browse/HADOOP-9703
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Tsuyoshi OZAWA
Priority: Minor

 org.apache.hadoop.ipc.Client#stop says "Stop all threads related to this 
 client." but does not shut down the static SEND_PARAMS_EXECUTOR, so usage of 
 this class always leaks threads rather than cleanly closing or shutting down.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9417) Support for symlink resolution in LocalFileSystem / RawLocalFileSystem

2013-07-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9417:


Attachment: hadoop-9417-6.patch

1) Not sure I get your drift here. readLink is only ever called by 
getFileLinkStatusInternal, so the check is done there.

2) I switched this out to use primitiveMkdir, which is perhaps the better fix. 
This method is deprecated in DistributedFileSystem, but is highly unlikely to 
be removed.

 Support for symlink resolution in LocalFileSystem / RawLocalFileSystem
 --

 Key: HADOOP-9417
 URL: https://issues.apache.org/jira/browse/HADOOP-9417
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9417-1.patch, hadoop-9417-2.patch, 
 hadoop-9417-3.patch, hadoop-9417-4.patch, hadoop-9417-5.patch, 
 hadoop-9417-6.patch


 Add symlink resolution support to LocalFileSystem/RawLocalFileSystem as well 
 as tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9418) Add symlink resolution support to DistributedFileSystem

2013-07-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9418:


Attachment: hadoop-9418-6.patch

Thanks Colin.

1) Fixed, with a bigger comment.

2) I think this is for performance reasons; otherwise we'd have to resolve the 
symlink (meaning extra RTTs) each time we fetch a new batch of listings. It's 
safe though: if it changes in between, {{dfs.listPaths}} will still resolve it.

 Add symlink resolution support to DistributedFileSystem
 ---

 Key: HADOOP-9418
 URL: https://issues.apache.org/jira/browse/HADOOP-9418
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9418-1.patch, hadoop-9418-2.patch, 
 hadoop-9418-3.patch, hadoop-9418-4.patch, hadoop-9418-5.patch, 
 hadoop-9418-6.patch


 Add symlink resolution support to DistributedFileSystem as well as tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9417) Support for symlink resolution in LocalFileSystem / RawLocalFileSystem

2013-07-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702951#comment-13702951
 ] 

Hadoop QA commented on HADOOP-9417:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12591379/hadoop-9417-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2752//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2752//console

This message is automatically generated.

 Support for symlink resolution in LocalFileSystem / RawLocalFileSystem
 --

 Key: HADOOP-9417
 URL: https://issues.apache.org/jira/browse/HADOOP-9417
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hadoop-9417-1.patch, hadoop-9417-2.patch, 
 hadoop-9417-3.patch, hadoop-9417-4.patch, hadoop-9417-5.patch, 
 hadoop-9417-6.patch


 Add symlink resolution support to LocalFileSystem/RawLocalFileSystem as well 
 as tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira