[jira] [Updated] (HADOOP-6311) Add support for unix domain sockets to JNI libs

2012-08-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-6311:
-

Attachment: HADOOP-6311.014.patch

This patch abstracts out the file descriptor passing code into two classes, 
{{FdServer}} and {{FdClient}}.  {{FdServer}} can publish file descriptors which 
the {{FdClient}} (possibly in another process) can retrieve.

Unlike UNIX domain sockets, this is not platform-specific.  As a side effect, 
there is no Android-derived code in this patch.

The {{FdClient}} uses a 64-bit random number called a cookie to identify the 
file descriptor it wants to fetch.  The idea is that the HDFS client receives 
the cookie from the {{DataNode}} via the usual TCP communication, and can then 
use it to fetch the {{FileDescriptor}} from the {{DataNode}}.

There are two defenses against a malicious process trying to grab 
{{FileDescriptors}} from the {{DataNode}} without authorization: the randomly 
generated socket path, and the randomly generated 64-bit cookie, which serves 
as a kind of shared secret.  The latter is the stronger defense.
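The cookie scheme described above can be sketched in plain Java. This is a hypothetical illustration, not the actual {{FdServer}}/{{FdClient}} code from the patch; the class and method names ({{FdRegistry}}, {{publish}}, {{retrieve}}) are invented for the example.

```java
import java.io.FileDescriptor;
import java.security.SecureRandom;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the cookie-as-shared-secret idea: the server
// publishes a descriptor under a random 64-bit cookie; only a client that
// learned the cookie (over the normal TCP channel) can redeem it.
class FdRegistry {
    private static final SecureRandom RANDOM = new SecureRandom();
    private final ConcurrentHashMap<Long, FileDescriptor> published =
        new ConcurrentHashMap<>();

    // Publish a descriptor and return the cookie that names it.
    long publish(FileDescriptor fd) {
        long cookie;
        do {
            cookie = RANDOM.nextLong();
        } while (published.putIfAbsent(cookie, fd) != null); // avoid collisions
        return cookie;
    }

    // Redeem a cookie; a wrong guess (1 in 2^64) yields nothing.
    FileDescriptor retrieve(long cookie) {
        return published.remove(cookie);
    }
}
```

Because the cookie is consumed on retrieval here, replaying it fails; whether the real patch behaves this way is not stated in the comment.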

I also factored the exception-generating code out into two separate files, 
{{exception.c}} and {{native_io_exception.c}}.  This code was formerly part of 
{{NativeIO.c}}, and I did not want to duplicate it.  It should prove generally 
useful for native code that needs to raise a Java {{RuntimeException}} or 
{{IOException}}.

tree.h is included to implement a red-black tree (similar to a {{TreeMap}} in 
Java).  This code is already in the hadoop-hdfs project.
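For readers unfamiliar with the analogy: Java's {{TreeMap}} is itself backed by a red-black tree, which is what gives both structures sorted keys and O(log n) insert/lookup. A minimal demonstration (illustrative only, unrelated to the patch's C code):

```java
import java.util.TreeMap;

// TreeMap is a red-black tree under the hood: keys stay sorted regardless
// of insertion order, just as with the tree.h macros in C.
class RbTreeDemo {
    static long smallestKey() {
        TreeMap<Long, String> m = new TreeMap<>();
        m.put(42L, "fd-a");
        m.put(7L, "fd-b");
        return m.firstKey(); // ordered iteration starts at the smallest key
    }
}
```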

Finally, there are some pretty extensive unit tests, including a multi-threaded 
one which really puts the server through its paces.

 Add support for unix domain sockets to JNI libs
 ---

 Key: HADOOP-6311
 URL: https://issues.apache.org/jira/browse/HADOOP-6311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 0.20.0
Reporter: Todd Lipcon
Assignee: Colin Patrick McCabe
 Attachments: 6311-trunk-inprogress.txt, HADOOP-6311.014.patch, 
 HADOOP-6311-0.patch, HADOOP-6311-1.patch, hadoop-6311.txt


 For HDFS-347 we need to use unix domain sockets. This JIRA is to include a 
 library in common which adds a o.a.h.net.unix package based on the code from 
 Android (apache 2 license)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442312#comment-13442312
 ] 

Hadoop QA commented on HADOOP-6311:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542580/HADOOP-6311.014.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 12 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.io.nativeio.TestNativeIO
  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1363//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1363//console

This message is automatically generated.

 Add support for unix domain sockets to JNI libs
 ---

 Key: HADOOP-6311
 URL: https://issues.apache.org/jira/browse/HADOOP-6311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 0.20.0
Reporter: Todd Lipcon
Assignee: Colin Patrick McCabe
 Attachments: 6311-trunk-inprogress.txt, HADOOP-6311.014.patch, 
 HADOOP-6311-0.patch, HADOOP-6311-1.patch, hadoop-6311.txt


 For HDFS-347 we need to use unix domain sockets. This JIRA is to include a 
 library in common which adds a o.a.h.net.unix package based on the code from 
 Android (apache 2 license)



[jira] [Commented] (HADOOP-8724) Add improved APIs for globbing

2012-08-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442315#comment-13442315
 ] 

Chris Douglas commented on HADOOP-8724:
---

bq. Simply instantiating a Glob object when the path might not be a glob may be 
problematic. Perhaps a better way to handle might be to have a static 
Path.create(String) which returns either a Path or GlobPath.

Distinguishing intent where the {{Path}} is created (as above) could solve part 
of the problem with HADOOP-8709 (caller can resolve which API to use?), but I 
don't think a subtype of {{Path}} will solve the other issues. Dispatch is 
still on the static type, so nothing is solved for the callee.

bq. The first was to simply have a base path and a string pattern as the 
parameters to globStatus. I thought it would be better to encapsulate the two 
into a single Glob object so it is obvious when an API takes a glob and when it 
does not.

Having an API advertise its globiness is useful; I like it. Though users 
specifying a single resource will need to create {{Glob}} objects on top of 
{{Path}} objects which are really {{URI}}s... which seems unnecessarily 
confusing. Still, methods for {{Configuration}} are also straightforward; 
{{setGlob()}} would need to escape everything on the {{URI}} side first (so 
users could continue to specify globs on the commandline as {{Paths}} with 
special characters). Correcting it everywhere in the code may be prohibitive, 
though...

Since many of these are user-facing, do you think we need a more specific type 
than {{String}} for the glob part? {{ls /users/hadoop/\*.foo}} translated into:
{code}fs.globStatus(new Glob(new Path("hdfs://nn:8020/users/hadoop"), 
Pattern.compile("*.foo"))){code}
seems like it's strayed from sanity...

 Add improved APIs for globbing
 --

 Key: HADOOP-8724
 URL: https://issues.apache.org/jira/browse/HADOOP-8724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans

 After the discussion on HADOOP-8709 it was decided that we need better APIs 
 for globbing to remove some of the inconsistencies with other APIs.  In order 
 to maintain backwards compatibility we should deprecate the existing APIs and 
 add new ones.
 See HADOOP-8709 for more information about exactly how those APIs should look 
 and behave.



[jira] [Comment Edited] (HADOOP-8724) Add improved APIs for globbing

2012-08-27 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442315#comment-13442315
 ] 

Chris Douglas edited comment on HADOOP-8724 at 8/27/12 6:25 PM:


bq. Simply instantiating a Glob object when the path might not be a glob may be 
problematic. Perhaps a better way to handle might be to have a static 
Path.create(String) which returns either a Path or GlobPath.

Distinguishing intent where the {{Path}} is created (as above) could solve part 
of the problem with HADOOP-8709 (caller can resolve which API to use?), but I 
don't think a subtype of {{Path}} will solve the other issues. Dispatch is 
still on the static type, so nothing is solved for the callee.

bq. The first was to simply have a base path and a string pattern as the 
parameters to globStatus. I thought it would be better to encapsulate the two 
into a single Glob object so it is obvious when an API takes a glob and when it 
does not.

Having an API advertise its globiness is useful; I like it. Though users 
specifying a single resource will need to create {{Glob}} objects on top of 
{{Path}} objects which are really URIs... which seems unnecessarily confusing. 
Still, methods for {{Configuration}} are also straightforward; {{setGlob()}} 
would need to escape everything on the URI side first (so users could continue 
to specify globs on the commandline as {{Paths}} with special characters). 
Correcting it everywhere in the code may be prohibitive, though...

Since many of these are user-facing, do you think we need a more specific type 
than {{String}} for the glob part? {{ls /users/hadoop/\*.foo}} translated into:
{code}fs.globStatus(new Glob(new Path("hdfs://nn:8020/users/hadoop"), 
Pattern.compile("*.foo"))){code}
seems like it's strayed from sanity...

  was (Author: chris.douglas):
bq. Simply instantiating a Glob object when the path might not be a glob 
may be problematic. Perhaps a better way to handle might be to have a static 
Path.create(String) which returns either a Path or GlobPath.

Distinguishing intent where the {{Path}} is created (as above) could solve part 
of the problem with HADOOP-8709 (caller can resolve which API to use?), but I 
don't think a subtype of {{Path}} will solve the other issues. Dispatch is 
still on the static type, so nothing is solved for the callee.

bq. The first was to simply have a base path and a string pattern as the 
parameters to globStatus. I thought it would be better to encapsulate the two 
into a single Glob object so it is obvious when an API takes a glob and when it 
does not.

Having an API advertise its globiness is useful, I like it. Though users 
specifying a single resource will need to create {{Glob}} objects on top of 
{{Path}} objects which are really {{URI}}s... which seems unnecessarily 
confusing. Still, methods for {{Configuration}} are also straightforward; 
{{setGlob()}} would need to escape everything in the {{URI}} side first (so 
users could continue to specify globs on the commandline as {{Paths}} with 
special characters), but aside from that it seems straightforward. Correcting 
it everywhere in the code may be prohibitive, though...

Since many of these are user-facing, do you think we need a more specific type 
than {{String}} for the glob part? {{ls /users/hadoop/\*.foo}} translated into:
{code}fs.globStatus(new Glob(new Path("hdfs://nn:8020/users/hadoop"), 
Pattern.compile("*.foo"))){code}
seems like it's strayed from sanity...
  
 Add improved APIs for globbing
 --

 Key: HADOOP-8724
 URL: https://issues.apache.org/jira/browse/HADOOP-8724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans

 After the discussion on HADOOP-8709 it was decided that we need better APIs 
 for globbing to remove some of the inconsistencies with other APIs.  In order 
 to maintain backwards compatibility we should deprecate the existing APIs and 
 add new ones.
 See HADOOP-8709 for more information about exactly how those APIs should look 
 and behave.



[jira] [Updated] (HADOOP-8487) Many HDFS tests use a test path intended for local file system tests

2012-08-27 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8487:
---

Attachment: HADOOP-8487-branch-1-win(3).update.patch

Apologies for the late response, Daryn. To follow up on your comment #2 from 
above: I verified that the super call is not needed. Given that this is already 
committed and the actual change is small, it may not be worth fixing. I have 
attached a patch that reverts this part anyway; your call on whether you want 
to commit it :)

Thanks for catching this!


 Many HDFS tests use a test path intended for local file system tests
 

 Key: HADOOP-8487
 URL: https://issues.apache.org/jira/browse/HADOOP-8487
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8487-branch-1-win(2).patch, 
 HADOOP-8487-branch-1-win(3).patch, HADOOP-8487-branch-1-win(3).update.patch, 
 HADOOP-8487-branch-1-win.alternate.patch, HADOOP-8487-branch-1-win.patch


 Many tests use a test path intended for local tests set up by the build 
 environment. In some cases the tests fail on platforms such as Windows 
 because the path contains a c:



[jira] [Commented] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-08-27 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442441#comment-13442441
 ] 

Ivan Mitic commented on HADOOP-8457:


Sanjay, apologies for the late response.

Correct, both 1 and 2 have been explored in this Jira. The problem with #1 is 
that there are many different contexts in which files can be created (and have 
incorrect default ownership): 
 - RawLocalFileSystem
 - Output redirection ({{1>file1 2>file2}})
 - Direct use of File objects
We would have to address all such places appropriately. This is why I like 
approach #2 better. Let me know what you think.

Thanks!

 Address file ownership issue for users in Administrators group on Windows.
 --

 Key: HADOOP-8457
 URL: https://issues.apache.org/jira/browse/HADOOP-8457
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.0, 0.24.0
Reporter: Chuan Liu
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
 HADOOP-8457-branch-1-win_Admins.patch


 On Linux, the initial owner of a file is its creator. (I think this is true 
 in general; if there are exceptions, please let me know.) On Windows, a file 
 created by a user in the Administrators group has the initial owner 
 ‘Administrators’, i.e. the Administrators group is the initial owner of the 
 file. This leads to an exception when we check file ownership in the 
 SecureIOUtils.checkStat() method, which is why the method is disabled right 
 now. We need to address this problem and enable the method on Windows.



[jira] [Commented] (HADOOP-8487) Many HDFS tests use a test path intended for local file system tests

2012-08-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442467#comment-13442467
 ] 

Daryn Sharp commented on HADOOP-8487:
-

Thanks for the followup, but it's not a big deal since it's a test.

 Many HDFS tests use a test path intended for local file system tests
 

 Key: HADOOP-8487
 URL: https://issues.apache.org/jira/browse/HADOOP-8487
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8487-branch-1-win(2).patch, 
 HADOOP-8487-branch-1-win(3).patch, HADOOP-8487-branch-1-win(3).update.patch, 
 HADOOP-8487-branch-1-win.alternate.patch, HADOOP-8487-branch-1-win.patch


 Many tests use a test path intended for local tests set up by the build 
 environment. In some cases the tests fail on platforms such as Windows 
 because the path contains a c:



[jira] [Commented] (HADOOP-8487) Many HDFS tests use a test path intended for local file system tests

2012-08-27 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442473#comment-13442473
 ] 

Ivan Mitic commented on HADOOP-8487:


Great, thanks!

 Many HDFS tests use a test path intended for local file system tests
 

 Key: HADOOP-8487
 URL: https://issues.apache.org/jira/browse/HADOOP-8487
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8487-branch-1-win(2).patch, 
 HADOOP-8487-branch-1-win(3).patch, HADOOP-8487-branch-1-win(3).update.patch, 
 HADOOP-8487-branch-1-win.alternate.patch, HADOOP-8487-branch-1-win.patch


 Many tests use a test path intended for local tests set up by the build 
 environment. In some cases the tests fail on platforms such as Windows 
 because the path contains a c:



[jira] [Commented] (HADOOP-8724) Add improved APIs for globbing

2012-08-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442474#comment-13442474
 ] 

Daryn Sharp commented on HADOOP-8724:
-

The issue is that user code doesn't know whether the path is a glob, so 
{{listStatus}} is used when the path is considered a literal, whereas 
{{globStatus}} handles both literals and globs.  So code simply calls 
{{globStatus}} whenever a glob is allowed.

A new {{Glob}} with a ctor that takes a regexp might be useful, but it has a 
few problems.  How would it handle a path with multiple globs in it?  Variadic?  
I might not understand your full intent, but how would a user-provided input 
path be decomposed?
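The point that a glob-capable API subsumes the literal case can be illustrated with standard-library glob matching (this uses {{java.nio.file.PathMatcher}} purely as an analogy; it is not Hadoop's glob implementation, and {{GlobDemo}} is an invented name):

```java
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;

// A glob matcher accepts wildcard patterns, but a pattern with no wildcards
// degenerates to a literal match - which is why callers that might receive
// either just use the glob-capable entry point.
class GlobDemo {
    static boolean matches(String pattern, String path) {
        FileSystem fs = FileSystems.getDefault();
        PathMatcher m = fs.getPathMatcher("glob:" + pattern);
        return m.matches(fs.getPath(path));
    }
}
```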

 Add improved APIs for globbing
 --

 Key: HADOOP-8724
 URL: https://issues.apache.org/jira/browse/HADOOP-8724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans

 After the discussion on HADOOP-8709 it was decided that we need better APIs 
 for globbing to remove some of the inconsistencies with other APIs.  Inorder 
 to maintain backwards compatibility we should deprecate the existing APIs and 
 add in new ones.
 See HADOOP-8709 for more information about exactly how those APIs should look 
 and behave.



[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2012-08-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442482#comment-13442482
 ] 

Harsh J commented on HADOOP-8712:
-

+1 overall

Just some description nits:

* s/resovle/resolve

* The description doesn't mention that the groups fallback is used only when 
JNI is unavailable. It is implicit currently; let's make it explicit.
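Making the fallback explicit could look roughly like the following core-site.xml/core-default.xml entry. The class name is the one proposed in this issue; the description wording is only a sketch, not the committed text.

```xml
<!-- Illustrative sketch, not the actual committed entry. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback</value>
  <description>
    Class for user-to-group mapping. The JNI-based implementation is used
    when the native library is available; when JNI is unavailable, it falls
    back to the shell-based implementation.
  </description>
</property>
```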

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor
 Attachments: HADOOP-8712-v1.patch


 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback



[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2012-08-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442483#comment-13442483
 ] 

Harsh J commented on HADOOP-8712:
-

One more nit: let's add the full description to core-default.xml as well?

 Change default hadoop.security.group.mapping
 

 Key: HADOOP-8712
 URL: https://issues.apache.org/jira/browse/HADOOP-8712
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0-alpha
Reporter: Robert Parker
Assignee: Robert Parker
Priority: Minor
 Attachments: HADOOP-8712-v1.patch


 Change the hadoop.security.group.mapping in core-site to 
 JniBasedUnixGroupsNetgroupMappingWithFallback



[jira] [Commented] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-27 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442486#comment-13442486
 ] 

Daryn Sharp commented on HADOOP-8726:
-

This doesn't work.  Tokens can be added directly to the UGI, and this patch 
won't return them.  A new {{UGI}} wrapping the {{Subject}} also won't see its 
tokens.  The {{Credentials}} need to be constructed from the {{Subject}}.

I think the correct approach is for {{UGI#getCredentials}} to also extract the 
secrets from the {{Subject}}.  While working on HADOOP-8225 I noticed the 
secrets aren't propagated, but I didn't address the issue since I wasn't sure 
what impact it might have.

 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Benoy Antony
 Attachments: HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue  exists with security on/off. 
 This is related to the change in HADOOP-8225



[jira] [Updated] (HADOOP-8724) Add improved APIs for globbing

2012-08-27 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8724:


Attachment: HADOOP-8724.txt

I like the points that have been made so far, but I would prefer to discuss a 
real API over a theoretical one.  The implementation still needs a lot of work, 
but the existing tests pass so far.

I am mostly interested in feedback about the APIs themselves.

In general there is a Glob class.  It is made up of a path and a pattern.  The 
Path is optional; if a Path is supplied and does not exist, an exception will 
be thrown.  The pattern is required.  A Glob can also be created from just a 
Path, but that is mostly for backwards compatibility.  In most cases I would 
expect a Glob to be created with just a pattern.

The result of running a glob is a GlobResult, which is a 
{{RemoteIterator<FileStatus>}} but also provides some extra APIs so that a 
caller can easily tell whether nothing matched the glob, or (and this impl is 
a real hack right now) whether the glob looked more like a path than a glob.  
This is only really needed by PathData so the shell can adjust the error 
message appropriately.
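The shape of the Glob class described above might be sketched as follows. This is a guess at the API from the prose, not the contents of the attached HADOOP-8724.txt patch; the constructor signatures and {{hasBasePath}} are invented for illustration, and the existence check against the filesystem is elided.

```java
import java.util.regex.Pattern;

// Hedged sketch of the described API: optional base path, required pattern.
class Glob {
    private final String path;     // optional; validated against the FS later
    private final Pattern pattern; // required

    Glob(String path, Pattern pattern) {
        if (pattern == null) {
            throw new IllegalArgumentException("pattern is required");
        }
        this.path = path;
        this.pattern = pattern;
    }

    // The common case the comment describes: a pattern with no base path.
    Glob(Pattern pattern) {
        this(null, pattern);
    }

    boolean hasBasePath() {
        return path != null;
    }
}
```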

 Add improved APIs for globbing
 --

 Key: HADOOP-8724
 URL: https://issues.apache.org/jira/browse/HADOOP-8724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8724.txt


 After the discussion on HADOOP-8709 it was decided that we need better APIs 
 for globbing to remove some of the inconsistencies with other APIs.  In order 
 to maintain backwards compatibility we should deprecate the existing APIs and 
 add new ones.
 See HADOOP-8709 for more information about exactly how those APIs should look 
 and behave.



[jira] [Updated] (HADOOP-8724) Add improved APIs for globbing

2012-08-27 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8724:


Status: Patch Available  (was: Open)

Submitting patch so that others are more likely to look at the patch and 
provide feedback.

 Add improved APIs for globbing
 --

 Key: HADOOP-8724
 URL: https://issues.apache.org/jira/browse/HADOOP-8724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8724.txt


 After the discussion on HADOOP-8709 it was decided that we need better APIs 
 for globbing to remove some of the inconsistencies with other APIs.  In order 
 to maintain backwards compatibility we should deprecate the existing APIs and 
 add new ones.
 See HADOOP-8709 for more information about exactly how those APIs should look 
 and behave.



[jira] [Commented] (HADOOP-8060) Add a capability to use of consistent checksums for append and copy

2012-08-27 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442514#comment-13442514
 ] 

Kihwal Lee commented on HADOOP-8060:


In HDFS-3177, Sanjay suggested that the one-checksum-type-per-file rule be 
enforced architecturally, rather than by DFSClient using the existing facility. 
The changes in HDFS-3177 still allow DistCp, etc. to discover and set checksum 
parameters so that the results of getFileChecksum() on copies can match. I will 
resolve this jira with a modified summary.  I expect Sanjay to file a new Jira 
when he has a proposal.
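The underlying problem this issue addresses is that CRC32 and CRC32C are different polynomials, so the same bytes produce different checksums; a copy verified with the wrong type always fails the integrity check. This can be demonstrated with the JDK's own implementations (requires Java 9+ for {{CRC32C}}; {{ChecksumDemo}} is an invented name, unrelated to Hadoop's checksum classes):

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

// Same data, two checksum types: the values differ, which is why DistCp's
// final integrity check fails when source and destination clusters default
// to different checksum types.
class ChecksumDemo {
    static long crc32(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data);
        return c.getValue();
    }

    static long crc32c(byte[] data) {
        CRC32C c = new CRC32C();
        c.update(data);
        return c.getValue();
    }
}
```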

 Add a capability to use of consistent checksums for append and copy
 ---

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee

 After the improved CRC32C checksum feature became the default, some use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy a file stored with the CRC32 checksum to a new cluster where 
 CRC32C is the default checksum, the final data integrity check fails 
 because of a checksum mismatch.



[jira] [Updated] (HADOOP-8060) Add a capability to discover and set checksum types per file.

2012-08-27 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8060:
---

Summary: Add a capability to discover and set checksum types per file.  
(was: Add a capability to use of consistent checksums for append and copy)

 Add a capability to discover and set checksum types per file.
 -

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee

 After the improved CRC32C checksum feature became the default, some use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy a file stored with the CRC32 checksum to a new cluster where 
 CRC32C is the default checksum, the final data integrity check fails 
 because of a checksum mismatch.



[jira] [Updated] (HADOOP-8060) Add a capability to discover and set checksum types per file.

2012-08-27 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-8060:
---

Fix Version/s: 2.2.0-alpha
   3.0.0
   0.23.3

 Add a capability to discover and set checksum types per file.
 -

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 3.0.0, 2.2.0-alpha


 After the improved CRC32C checksum feature became the default, some use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy a file stored with the CRC32 checksum to a new cluster where 
 CRC32C is the default checksum, the final data integrity check fails 
 because of a checksum mismatch.



[jira] [Resolved] (HADOOP-8060) Add a capability to discover and set checksum types per file.

2012-08-27 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HADOOP-8060.


Resolution: Fixed

 Add a capability to discover and set checksum types per file.
 -

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 3.0.0, 2.2.0-alpha


 After the improved CRC32C checksum feature became the default, some use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy a file stored with the CRC32 checksum to a new cluster where 
 CRC32C is the default checksum, the final data integrity check fails 
 because of a checksum mismatch.



[jira] [Created] (HADOOP-8732) Address intermittent test failures on Windows

2012-08-27 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8732:
--

 Summary: Address intermittent test failures on Windows
 Key: HADOOP-8732
 URL: https://issues.apache.org/jira/browse/HADOOP-8732
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ivan Mitic
Assignee: Ivan Mitic


There are a few tests that fail intermittently on Windows with a timeout error. 
This means that the test was actually killed from the outside, and it would 
continue to run otherwise. 

The following are examples of such tests (there might be others):
 - TestJobInProgress (this issue repros pretty consistently in Eclipse on this one)
 - TestControlledMapReduceJob
 - TestServiceLevelAuthorization
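The failure mode described above (the test is killed from outside by a watchdog, not by its own assertion) can be modeled with a plain JDK timeout; this is purely illustrative and is not Hadoop's actual test harness:

```java
import java.util.concurrent.*;

// Models a watchdog killing a slow test: the task would eventually finish if
// left alone, but the harness gives up after the timeout and cancels it.
public class TimeoutKill {
    public static String runWithTimeout(long workMillis, long timeoutMillis)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = pool.submit(() -> {
            Thread.sleep(workMillis);   // the "slow test"
            return "passed";
        });
        try {
            return f.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);             // killed from the outside
            return "timed out";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithTimeout(10, 2000));   // fast enough
        System.out.println(runWithTimeout(2000, 50));   // killed by the watchdog
    }
}
```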




[jira] [Updated] (HADOOP-8732) Address intermittent test failures on Windows

2012-08-27 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8732:
---

Attachment: HADOOP-8732-IntermittentFailures.patch

Attaching the patch with the fix.

 Address intermittent test failures on Windows
 -

 Key: HADOOP-8732
 URL: https://issues.apache.org/jira/browse/HADOOP-8732
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8732-IntermittentFailures.patch


 There are a few tests that fail intermittently on Windows with a timeout 
 error. This means that the test was actually killed from the outside, and it 
 would continue to run otherwise. 
 The following are examples of such tests (there might be others):
  - TestJobInProgress (this issue repros pretty consistently in Eclipse on this one)
  - TestControlledMapReduceJob
  - TestServiceLevelAuthorization



[jira] [Created] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-08-27 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8733:
--

 Summary: TestStreamingTaskLog, TestJvmManager, 
TestLinuxTaskControllerLaunchArgs fail on Windows
 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Jira tracking test failures related to test .sh script dependencies. 



[jira] [Updated] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-08-27 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8733:
---

Attachment: HADOOP-8733-scripts.patch

Attaching the patch.

 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 



[jira] [Updated] (HADOOP-8731) Public distributed cache support for Windows

2012-08-27 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8731:
---

Attachment: HADOOP-8731-PublicCache.patch

Attaching the patch.

 Public distributed cache support for Windows
 

 Key: HADOOP-8731
 URL: https://issues.apache.org/jira/browse/HADOOP-8731
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8731-PublicCache.patch


 A distributed cache file is considered public (sharable between MR jobs) if 
 OTHER has read permissions on the file and +x permissions all the way up in 
 the folder hierarchy. By default, Windows permissions are mapped to 700 all 
 the way up to the drive letter, and it is unreasonable to ask users to change 
 the permission on the whole drive to make the file public. In other words, it 
 is hardly possible to have a public distributed cache on Windows. 
 To enable the scenario and make it more Windows friendly, the criteria on 
 when a file is considered public should be relaxed. One proposal is to check 
 whether the user has given EVERYONE group permission on the file only (and 
 discard the +x check on parent folders).
 Security considerations for the proposal: Default permissions on Unix 
 platforms are usually 775 or 755 meaning that OTHER users can read and 
 list folders by default. What this also means is that Hadoop users have to 
 explicitly make the files private in order to make them private in the 
 cluster (please correct me if this is not the case in real life!). On 
 Windows, default permissions are 700. This means that by default all files 
 are private. In the new model, if users want to make them public, they have 
 to explicitly add EVERYONE group permissions on the file. 
 TestTrackerDistributedCacheManager fails because of this issue.
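The current "public file" criterion described above can be modeled with plain POSIX-style permission bits. This is an illustrative sketch, not Hadoop's TrackerDistributedCacheManager code; the helper names are hypothetical:

```java
// Illustrative model of the "public distributed cache file" check described
// above: OTHER must have read (r) on the file and execute (x) on every
// ancestor directory. These names are hypothetical, not Hadoop's own code.
public class PublicCacheCheck {
    // Octal permission bits, e.g. 0755; the OTHER bits are the lowest three.
    static boolean otherCanRead(int mode)    { return (mode & 04) != 0; }
    static boolean otherCanExecute(int mode) { return (mode & 01) != 0; }

    /** filePerm plus the permissions of each ancestor directory. */
    static boolean isPublic(int filePerm, int... ancestorPerms) {
        for (int dirPerm : ancestorPerms) {
            if (!otherCanExecute(dirPerm)) return false;  // cannot traverse
        }
        return otherCanRead(filePerm);
    }

    public static void main(String[] args) {
        // Typical Unix defaults: 755 directories, 644 file -> public.
        System.out.println(isPublic(0644, 0755, 0755));   // true
        // Windows permissions map to 700 everywhere -> never public.
        System.out.println(isPublic(0700, 0700, 0700));   // false
    }
}
```

The proposal in this JIRA would drop the ancestor +x check and look only at the file's own EVERYONE (OTHER) permission.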



[jira] [Commented] (HADOOP-8724) Add improved APIs for globbing

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442557#comment-13442557
 ] 

Hadoop QA commented on HADOOP-8724:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542617/HADOOP-8724.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

-1 javac.  The applied patch generated 2091 javac compiler warnings (more 
than the trunk's current 2059 warnings).

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestHftpDelegationToken

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1364//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1364//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1364//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1364//console

This message is automatically generated.

 Add improved APIs for globbing
 --

 Key: HADOOP-8724
 URL: https://issues.apache.org/jira/browse/HADOOP-8724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8724.txt


 After the discussion on HADOOP-8709 it was decided that we need better APIs 
 for globbing to remove some of the inconsistencies with other APIs.  In order 
 to maintain backwards compatibility we should deprecate the existing APIs and 
 add new ones.
 See HADOOP-8709 for more information about exactly how those APIs should look 
 and behave.
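The deprecate-then-replace approach described above looks roughly like this in Java terms; the method names here are hypothetical illustrations, not the actual API designed in HADOOP-8709:

```java
import java.util.List;

// Hypothetical sketch of the deprecate-and-replace pattern proposed above.
// Neither method name is the real HADOOP-8709 API.
public class GlobApi {
    /** Old API, kept so existing callers keep compiling. */
    @Deprecated
    public static String[] globPaths(String pattern) {
        return globPathsV2(pattern).toArray(new String[0]);
    }

    /** New API with the corrected, consistent semantics. */
    public static List<String> globPathsV2(String pattern) {
        // A real implementation would expand the pattern against the
        // FileSystem; here the literal pattern stands in as a placeholder.
        return List.of(pattern);
    }

    public static void main(String[] args) {
        System.out.println(globPathsV2("/logs/*/part-*"));
    }
}
```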



[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Attachment: HADOOP-8719.patch

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling Unable to load realm 
 mapping info from SCDynamicStore returns 1770 hits, each with many 
 discussions.
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily reach (wasted) thousands of hours.



[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Status: Open  (was: Patch Available)

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling Unable to load realm 
 mapping info from SCDynamicStore returns 1770 hits, each with many 
 discussions.
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily reach (wasted) thousands of hours.



[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Status: Patch Available  (was: Open)

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling Unable to load realm 
 mapping info from SCDynamicStore returns 1770 hits, each with many 
 discussions.
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily reach (wasted) thousands of hours.



[jira] [Created] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-08-27 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8734:
--

 Summary: LocalJobRunner does not support private distributed cache
 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic


It seems that LocalJobRunner does not support private distributed cache. The 
issue is more visible on Windows as all DC files are private by default (see 
HADOOP-8731).



[jira] [Updated] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-08-27 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8734:
---

Attachment: HADOOP-8734-LocalJobRunner.patch

Attaching the fix proposal.

 LocalJobRunner does not support private distributed cache
 -

 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8734-LocalJobRunner.patch


 It seems that LocalJobRunner does not support private distributed cache. The 
 issue is more visible on Windows as all DC files are private by default (see 
 HADOOP-8731).



[jira] [Commented] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442599#comment-13442599
 ] 

Hadoop QA commented on HADOOP-8719:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542635/HADOOP-8719.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1365//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1365//console

This message is automatically generated.

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling Unable to load realm 
 mapping info from SCDynamicStore returns 1770 hits, each with many 
 discussions.
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily reach (wasted) thousands of hours.



[jira] [Commented] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442604#comment-13442604
 ] 

Jianbin Wei commented on HADOOP-8719:
-

This changes the options used by hadoop-env.sh and yarn-env.sh.  After applying 
the patch, I did a manual test and verified that sbin/[start|stop]-[dfs|yarn].sh 
no longer show the annoying messages.
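The patch contents are not quoted in this thread; the widely cited workaround for this OS X Lion message (an assumption that it matches the spirit of the attached patch) is to hand the JVM empty Kerberos realm/KDC settings from hadoop-env.sh:

```shell
# Commonly cited OS X 10.7 workaround (assumed to match the attached patch,
# which is not shown here): empty Kerberos realm/KDC properties stop the JVM
# from probing SCDynamicStore on startup.
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
echo "${HADOOP_OPTS}"
```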

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google Unable to load realm mapping info from SCDynamicStore returns 1770 
 hits.  Each one has many discussions.  
 Assume each discussion take only 5 minute, a 10-minute fix can save ~150 
 hours.  This does not count much search of this issue and its 
 solution/workaround, which can easily hit (wasted) thousands of hours!!!



[jira] [Updated] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8717:


Attachment: HADOOP-8717.patch

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 3.0.0
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch


 After setting up a single-node Hadoop on a Mac, copy a text file to it and run
 $ hadoop jar 
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
 wordcount /file.txt output
 It reports
 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
 job_1345588312126_0001
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
 job_1345588312126_0001 running in uber mode : false
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job 
 job_1345588312126_0001 failed with state FAILED due to: Application 
 application_1345588312126_0001 failed 1 times due to AM Container for 
 appattempt_1345588312126_0001_01 exited with  exitCode: 127 due to: 
 .Failing this attempt.. Failing the application.
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0
 $ cat 
 /tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr
 /bin/bash: /bin/java: No such file or directory
 The detected JAVA_HOME is not used somehow.



[jira] [Updated] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8717:


Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

Used a non-array assignment to set JAVA_HOME on Mac OS X.
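A sketch of what a non-array JAVA_HOME assignment in hadoop-config.sh might look like on OS X; this is an assumption about the patch's intent (the patch body is not quoted here), using Apple's /usr/libexec/java_home helper:

```shell
# Sketch only (the actual patch is not shown in this thread): detect
# JAVA_HOME on OS X with a plain scalar assignment, not a bash array,
# so the value survives later string expansion in the scripts.
if [ -z "${JAVA_HOME}" ] && [ -x /usr/libexec/java_home ]; then
  # /usr/libexec/java_home prints the path of the preferred JDK on OS X.
  JAVA_HOME=$(/usr/libexec/java_home)
  export JAVA_HOME
fi
echo "JAVA_HOME=${JAVA_HOME:-unset}"
```

On non-Mac systems the guard is a no-op, so any JAVA_HOME already in the environment is left alone.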

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 3.0.0
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch


 After setting up a single-node Hadoop on a Mac, copy a text file to it and run
 $ hadoop jar 
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
 wordcount /file.txt output
 It reports
 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
 job_1345588312126_0001
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
 job_1345588312126_0001 running in uber mode : false
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job 
 job_1345588312126_0001 failed with state FAILED due to: Application 
 application_1345588312126_0001 failed 1 times due to AM Container for 
 appattempt_1345588312126_0001_01 exited with  exitCode: 127 due to: 
 .Failing this attempt.. Failing the application.
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0
 $ cat 
 /tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr
 /bin/bash: /bin/java: No such file or directory
 The detected JAVA_HOME is not used somehow.



[jira] [Commented] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-27 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442609#comment-13442609
 ] 

Benoy Antony commented on HADOOP-8726:
--

I agree, Daryn.
I can handle secrets similarly to tokens using the NamedToken, but I need to 
differentiate between a token and a secret.
I can add an enum in the NamedToken to indicate whether it is a token or a 
secret. Is that fine?
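The idea in the comment above can be sketched as follows. Every name here is hypothetical, chosen only to illustrate tagging entries with an enum so tokens and secrets can share one container; none of it is Hadoop's actual Credentials/NamedToken API:

```java
// Hypothetical sketch: tag each serialized credentials entry with an enum so
// a reader can tell tokens from secrets. Not Hadoop's actual API.
public class NamedEntryDemo {
    enum Kind { TOKEN, SECRET }

    static final class NamedEntry {
        final Kind kind;
        final String alias;
        final byte[] bytes;
        NamedEntry(Kind kind, String alias, byte[] bytes) {
            this.kind = kind; this.alias = alias; this.bytes = bytes;
        }
    }

    public static void main(String[] args) {
        NamedEntry t = new NamedEntry(Kind.TOKEN, "job-token", new byte[]{1});
        NamedEntry s = new NamedEntry(Kind.SECRET, "db-password", new byte[]{2});
        // A reader dispatches on the kind instead of guessing from the alias.
        for (NamedEntry e : new NamedEntry[]{t, s}) {
            System.out.println(e.kind + ":" + e.alias);
        }
    }
}
```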


 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Benoy Antony
 Attachments: HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue  exists with security on/off. 
 This is related to the change in HADOOP-8225



[jira] [Commented] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442610#comment-13442610
 ] 

Harsh J commented on HADOOP-8719:
-

Looks good. One nit: can macos be renamed to MAC_OSX, just for 
readability/consistency's sake?

+1 otherwise, thanks Jianbin.

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google Unable to load realm mapping info from SCDynamicStore returns 1770 
 hits.  Each one has many discussions.  
 Assume each discussion take only 5 minute, a 10-minute fix can save ~150 
 hours.  This does not count much search of this issue and its 
 solution/workaround, which can easily hit (wasted) thousands of hours!!!



[jira] [Updated] (HADOOP-6311) Add support for unix domain sockets to JNI libs

2012-08-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-6311:
-

Attachment: HADOOP-6311.016.patch

* catch the correct exception in TestNativeIO

* test a few more edge cases like supplying null to native functions when we're 
not supposed to, etc.

 Add support for unix domain sockets to JNI libs
 ---

 Key: HADOOP-6311
 URL: https://issues.apache.org/jira/browse/HADOOP-6311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 0.20.0
Reporter: Todd Lipcon
Assignee: Colin Patrick McCabe
 Attachments: 6311-trunk-inprogress.txt, HADOOP-6311.014.patch, 
 HADOOP-6311.016.patch, HADOOP-6311-0.patch, HADOOP-6311-1.patch, 
 hadoop-6311.txt


 For HDFS-347 we need to use unix domain sockets. This JIRA is to include a 
 library in common which adds a o.a.h.net.unix package based on the code from 
 Android (apache 2 license)



[jira] [Commented] (HADOOP-8705) Add JSR 107 Caching support

2012-08-27 Thread Dhruv Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442627#comment-13442627
 ] 

Dhruv Kumar commented on HADOOP-8705:
-

Ahmed, definitely, another advantage of having a larger, pluggable 
MapOutputBuffer is the potential reduction of speculative execution on other 
nodes, which should improve network performance in the case of unbalanced 
clusters.

Kapil, the Haloop paper which I linked in this JIRA describes the storing of 
intermediate map results for consumption by reducers. You can find their Apache 
Licensed code on Google Code, if you want to dive down into the specifics. 

Here's another related use case of using Memcached (or any other caching layer) 
with Hadoop, although this is a slightly different plugging point: 
http://www.slideserve.com/layne/mapreduce-and-databases.

 Add JSR 107 Caching support 
 

 Key: HADOOP-8705
 URL: https://issues.apache.org/jira/browse/HADOOP-8705
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dhruv Kumar

 Having a cache on mappers and reducers could be very useful for some use 
 cases, including but not limited to:
 1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
 need access to invariant data (see Mahout) over each iteration of MapReduce 
 until convergence. A cache on such nodes could allow easy access to the 
 hotset of data without going all the way to the distributed cache.
 2. Storing of intermediate map and reduce outputs in memory to reduce 
 shuffling time. This optimization has been discussed at length in Haloop 
 (http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).
 There are some other scenarios as well where having a cache could come in 
 handy. 
 It would be nice to have some sort of pluggable support for JSR 107-compliant 
 caches. 
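JSR 107 centers on a Cache interface with get/put semantics. The sketch below is a minimal stand-in for that idea (the real javax.cache API requires an external jar and offers much more, e.g. expiry and listeners); it only illustrates use case 1 above, keeping an invariant hotset in memory across iterations:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal stand-in for a JSR 107-style cache (the real javax.cache.Cache has
// similar get/put semantics). Illustrates how a mapper could keep invariant
// data in memory instead of re-reading it from the distributed cache on
// every iteration.
public class HotsetCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;  // e.g. a read from the distributed cache

    public HotsetCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        // Load once, then serve every later request from memory.
        return store.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        int[] loads = {0};
        HotsetCache<String, String> cache = new HotsetCache<>(k -> {
            loads[0]++;
            return "value-of-" + k;
        });
        cache.get("centroids");
        cache.get("centroids");  // served from memory; loader not called again
        System.out.println("loads=" + loads[0]);
    }
}
```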



[jira] [Commented] (HADOOP-8689) Make trash a server side configuration option

2012-08-27 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442648#comment-13442648
 ] 

Kihwal Lee commented on HADOOP-8689:


In HDFS-3856, the namenode does System.exit(1) because getServerDefaults() is 
not allowed on backup nodes.

 Make trash a server side configuration option
 -

 Key: HADOOP-8689
 URL: https://issues.apache.org/jira/browse/HADOOP-8689
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.2.0-alpha

 Attachments: hadoop-8689.txt, hadoop-8689.txt


 Per ATM's suggestion in HADOOP-8598 for v2 let's make {{fs.trash.interval}} 
 configured server side. If it is not configured server side then the client 
 side configuration is used. The {{fs.trash.checkpoint.interval}} option is 
 already server side as the emptier runs in the NameNode. Clients may manually 
 run an emptier via hadoop org.apache.hadoop.fs.Trash but it's OK if it uses a 
 separate interval. 
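Both option names above are existing Hadoop keys; under this proposal, setting them in the NameNode's core-site.xml would take precedence over any client-side value (server-side precedence is the proposed behavior, not the behavior at the time of this comment):

```xml
<!-- core-site.xml on the NameNode -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>  <!-- minutes a trash checkpoint is retained; 0 disables trash -->
</property>
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>60</value>    <!-- minutes between emptier checkpoint runs -->
</property>
```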



[jira] [Commented] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442649#comment-13442649
 ] 

Hadoop QA commented on HADOOP-8717:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542647/HADOOP-8717.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1366//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1366//console

This message is automatically generated.

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 3.0.0
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch


 After setting up a single-node hadoop on a mac, copy some text file to it and 
 run
 $ hadoop jar 
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
 wordcount /file.txt output
 It reports
 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
 job_1345588312126_0001
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
 job_1345588312126_0001 running in uber mode : false
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job 
 job_1345588312126_0001 failed with state FAILED due to: Application 
 application_1345588312126_0001 failed 1 times due to AM Container for 
 appattempt_1345588312126_0001_01 exited with  exitCode: 127 due to: 
 .Failing this attempt.. Failing the application.
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0
 $ cat 
 /tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr
 /bin/bash: /bin/java: No such file or directory
 The detected JAVA_HOME is not used somehow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8705) Add JSR 107 Caching support

2012-08-27 Thread Dhruv Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442654#comment-13442654
 ] 

Dhruv Kumar commented on HADOOP-8705:
-

For the impatient, I have uploaded a presentation about Haloop which I gave 
some time back in graduate school: 
http://www.slideserve.com/dkumar/optimizing-iterative-mapreduce-jobs

 Add JSR 107 Caching support 
 

 Key: HADOOP-8705
 URL: https://issues.apache.org/jira/browse/HADOOP-8705
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Dhruv Kumar

 Having a cache on mappers and reducers could be very useful for some use 
 cases, including but not limited to:
 1. Iterative Map Reduce Programs: Some machine learning algorithms frequently 
 need access to invariant data (see Mahout) over each iteration of MapReduce 
 until convergence. A cache on such nodes could allow easy access to the 
 hotset of data without going all the way to the distributed cache.
 2. Storing of intermediate map and reduce outputs in memory to reduce 
 shuffling time. This optimization has been discussed at length in Haloop 
 (http://www.ics.uci.edu/~yingyib/papers/HaLoop_camera_ready.pdf).
 There are some other scenarios as well where having a cache could come in 
 handy. 
 It would be nice to have some sort of pluggable support for JSR 107-compliant 
 caches. 
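The pluggable shape the issue asks for resembles the JSR 107 `javax.cache.Cache` interface. Below is a minimal self-contained sketch of how a task-side cache for invariant data might be abstracted; it is not the real javax.cache API (which requires a caching provider on the classpath), and the type names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-in for a JSR 107-style cache: a mapper or reducer
// would look up invariant data here before falling back to the
// distributed cache. The interface loosely mirrors
// javax.cache.Cache's get/put/containsKey, simplified for illustration.
interface TaskCache<K, V> {
    V get(K key);
    void put(K key, V value);
    boolean containsKey(K key);
}

class InMemoryTaskCache<K, V> implements TaskCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    public V get(K key) { return store.get(key); }
    public void put(K key, V value) { store.put(key, value); }
    public boolean containsKey(K key) { return store.containsKey(key); }
}
```

A pluggable design would let any JSR 107 implementation back this interface, so the hot set (e.g. cluster centroids in an iterative Mahout job) stays in memory across iterations.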

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8735) Missing support for dfs.umaskmode

2012-08-27 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8735:
--

 Summary: Missing support for dfs.umaskmode
 Key: HADOOP-8735
 URL: https://issues.apache.org/jira/browse/HADOOP-8735
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Jason Lowe
Priority: Critical


dfs.umaskmode was a supported property in Hadoop 0.20/1.x, but it appears to be 
completely ignored in 0.23/2.x.  We should at least have deprecated support for 
this property.
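Hadoop's usual mechanism for keeping an old key working is a deprecation mapping (in the real code base this lives in Configuration's deprecated-key handling). A self-contained sketch of the idea, with hypothetical class and method names; the new key name `fs.permissions.umask-mode` is the assumed modern equivalent:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of deprecated-key resolution: reads and writes through the
// old key ("dfs.umaskmode") are redirected to the new key
// ("fs.permissions.umask-mode"). Names are illustrative only.
class DeprecationMap {
    private final Map<String, String> oldToNew = new HashMap<>();
    private final Map<String, String> values = new HashMap<>();

    void addDeprecation(String oldKey, String newKey) {
        oldToNew.put(oldKey, newKey);
    }

    void set(String key, String value) {
        // Writes through a deprecated key land on the new key.
        values.put(oldToNew.getOrDefault(key, key), value);
    }

    String get(String key) {
        // Reads through a deprecated key resolve to the new key.
        return values.get(oldToNew.getOrDefault(key, key));
    }
}
```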

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442668#comment-13442668
 ] 

Colin Patrick McCabe commented on HADOOP-8648:
--

The bug in the inline assembly is that there is an extra parameter.  You can 
see this clearly in the patch:
{code}
@@ -433,7 +456,7 @@ static void pipelined_crc32c(uint32_t *crc1, uint32_t 
*crc2, uint32_t *crc3, con
 "crc32b (%5), %0;\n\t"
 "crc32b (%5,%4,1), %1;\n\t"
  : "=r"(c1), "=r"(c2) 
- : "r"(c1), "r"(c2), "r"(c3), "r"(block_size), "r"(bdata)
+ : "r"(c1), "r"(c2), "r"(block_size), "r"(bdata)
 );
 bdata++;
 remainder--;
{code}

You can see that it doesn't make sense for the assembly to have 7 parameters, 
because only 6 are actually used.  And indeed, the fact that 'c3' is inserted 
in the parameter list is the bug.  Another thing to keep in mind is that c3 is 
actually uninitialized at this point, which is another clue that we should not 
be using it.

Incidentally, this patch unconditionally initializes c3 to 0x just to 
avoid heisenbugs in the future.

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when blocksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442669#comment-13442669
 ] 

Colin Patrick McCabe commented on HADOOP-8648:
--

I should say c3 is *possibly* uninitialized at this point.

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when blocksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442673#comment-13442673
 ] 

Jianbin Wei commented on HADOOP-8719:
-

I used the style of the cygwin check from 
hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh.  The check 
used there should also be changed for consistency, I think.

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8735) Missing support for dfs.umaskmode

2012-08-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442674#comment-13442674
 ] 

Harsh J commented on HADOOP-8735:
-

Hi,

Please see https://issues.apache.org/jira/browse/HADOOP-8727 for how this came 
to be, and for a patch that resolves it. If you agree it's the same issue and 
the 0.23 backport can be done there, can we close this as a dupe?

 Missing support for dfs.umaskmode
 -

 Key: HADOOP-8735
 URL: https://issues.apache.org/jira/browse/HADOOP-8735
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Priority: Critical

 dfs.umaskmode was a supported property in Hadoop 0.20/1.x, but it appears to 
 be completely ignored in 0.23/2.x.  We should at least have deprecated 
 support for this property.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Status: Open  (was: Patch Available)

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8719:


Status: Patch Available  (was: Open)

Resubmitting the patch with 'macos' renamed to 'MAC_OSX'.  Thanks, Harsh.

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8719:


 Target Version/s: 3.0.0
Affects Version/s: 2.0.0-alpha
Fix Version/s: (was: 3.0.0)
 Hadoop Flags: Reviewed

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Minor
 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8719) workaround Hadoop logs errors upon startup on OS X 10.7

2012-08-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8719:


  Priority: Trivial  (was: Minor)
Issue Type: Improvement  (was: Bug)

 workaround Hadoop logs errors upon startup on OS X 10.7
 ---

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2012-08-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8719:


Summary: Workaround for kerberos-related log errors upon running any hadoop 
command on OSX  (was: workaround Hadoop logs errors upon startup on OS X 10.7)

 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2012-08-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8719:


  Resolution: Fixed
   Fix Version/s: 3.0.0
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

Committed revision 1377821 to trunk.

Thanks very much for your contribution Jianbin! :)

 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 google "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits, each with many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily waste thousands of hours.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-27 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442686#comment-13442686
 ] 

Benoy Antony commented on HADOOP-8726:
--

Another approach would be to delegate token and secret management to 
Credentials itself and keep the entire Credentials object in the Subject as 
private credentials.

This would keep the UGI class from handling the internals of Credentials.  Any 
drawbacks?
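Keeping the whole Credentials object as a private credential of the Subject would go through the standard javax.security.auth.Subject API. A minimal sketch, using a stand-in Credentials class (the real one is Hadoop's; only the Subject calls below are standard JDK API):

```java
import java.util.Set;
import javax.security.auth.Subject;

// Stand-in for Hadoop's Credentials, only so the sketch is
// self-contained; token and secret management would live here.
class Credentials { }

class SubjectCredentialsSketch {
    // Place the entire Credentials object in the Subject's private
    // credential set, then retrieve it later by type, instead of
    // having UGI manage individual tokens and secrets.
    static boolean demo() {
        Subject subject = new Subject();
        Credentials creds = new Credentials();
        subject.getPrivateCredentials().add(creds);

        Set<Credentials> found =
            subject.getPrivateCredentials(Credentials.class);
        return found.contains(creds);
    }
}
```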


 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Benoy Antony
 Attachments: HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue  exists with security on/off. 
 This is related to the change in HADOOP-8225

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8736:
---

Attachment: HADOOP-8736.patch

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch


 There are quite a few variants of the getServer() method for creating an RPC 
 server. Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.
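The builder proposed here would replace the getServer() overloads with chained setters and a single build() step. A hypothetical sketch of the pattern; the class, field, and setter names are illustrative, not the actual patch:

```java
// Hypothetical builder replacing many getServer() overloads: each
// optional parameter becomes a chained setter with a default, and
// build() performs the one construction step.
class RpcServer {
    final String bindAddress;
    final int port;
    final int numHandlers;

    private RpcServer(String bindAddress, int port, int numHandlers) {
        this.bindAddress = bindAddress;
        this.port = port;
        this.numHandlers = numHandlers;
    }

    static class Builder {
        private String bindAddress = "0.0.0.0";
        private int port = 0;          // 0 = pick an ephemeral port
        private int numHandlers = 1;

        Builder setBindAddress(String addr) { this.bindAddress = addr; return this; }
        Builder setPort(int port) { this.port = port; return this; }
        Builder setNumHandlers(int n) { this.numHandlers = n; return this; }

        RpcServer build() { return new RpcServer(bindAddress, port, numHandlers); }
    }
}
```

Adding a new option then means adding one setter, rather than yet another getServer() overload with a longer parameter list.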

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8736:
---

Status: Patch Available  (was: Open)

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch


 There are quite a few variants of the getServer() method for creating an RPC 
 server. Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8717:


Attachment: HADOOP-8717.patch

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch, HADOOP-8717.patch


 After setting up a single-node hadoop on a mac, copy some text file to it and 
 run
 $ hadoop jar 
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
 wordcount /file.txt output
 It reports
 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
 job_1345588312126_0001
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
 job_1345588312126_0001 running in uber mode : false
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job 
 job_1345588312126_0001 failed with state FAILED due to: Application 
 application_1345588312126_0001 failed 1 times due to AM Container for 
 appattempt_1345588312126_0001_01 exited with  exitCode: 127 due to: 
 .Failing this attempt.. Failing the application.
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0
 $ cat 
 /tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr
 /bin/bash: /bin/java: No such file or directory
 The detected JAVA_HOME is not used somehow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8717:


Status: Patch Available  (was: Open)

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch, HADOOP-8717.patch


 After setting up a single-node hadoop on a mac, copy some text file to it and 
 run
 $ hadoop jar 
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar  
 wordcount /file.txt output
 It reports
 12/08/21 15:32:18 INFO Job.java:mapreduce.Job:1265: Running job: 
 job_1345588312126_0001
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1286: Job 
 job_1345588312126_0001 running in uber mode : false
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1293:  map 0% reduce 0%
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1306: Job 
 job_1345588312126_0001 failed with state FAILED due to: Application 
 application_1345588312126_0001 failed 1 times due to AM Container for 
 appattempt_1345588312126_0001_01 exited with  exitCode: 127 due to: 
 .Failing this attempt.. Failing the application.
 12/08/21 15:32:22 INFO Job.java:mapreduce.Job:1311: Counters: 0
 $ cat 
 /tmp/logs/application_1345588312126_0001/container_1345588312126_0001_01_01/stderr
 /bin/bash: /bin/java: No such file or directory
 The detected JAVA_HOME is not used somehow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2012-08-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442721#comment-13442721
 ] 

Hudson commented on HADOOP-8719:


Integrated in Hadoop-Common-trunk-Commit #2644 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2644/])
HADOOP-8719. Workaround for kerberos-related log errors upon running any 
hadoop command on OSX. Contributed by Jianbin Wei. (harsh) (Revision 1377821)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1377821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling "Unable to load realm 
 mapping info from SCDynamicStore" returns about 1,770 hits, each with many 
 comments.
 Assuming each discussion takes only 5 minutes to read, a 10-minute fix can 
 save ~150 hours.  This does not count the time spent searching for this issue 
 and its solution/workaround, which can easily reach thousands of wasted hours.



[jira] [Commented] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2012-08-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442725#comment-13442725
 ] 

Hudson commented on HADOOP-8719:


Integrated in Hadoop-Hdfs-trunk-Commit #2708 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2708/])
HADOOP-8719. Workaround for kerberos-related log errors upon running any 
hadoop command on OSX. Contributed by Jianbin Wei. (harsh) (Revision 1377821)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1377821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch




[jira] [Commented] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2012-08-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442730#comment-13442730
 ] 

Hudson commented on HADOOP-8719:


Integrated in Hadoop-Mapreduce-trunk-Commit #2673 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2673/])
HADOOP-8719. Workaround for kerberos-related log errors upon running any 
hadoop command on OSX. Contributed by Jianbin Wei. (harsh) (Revision 1377821)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1377821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch




[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442737#comment-13442737
 ] 

Hadoop QA commented on HADOOP-8736:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542664/HADOOP-8736.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1369//console

This message is automatically generated.

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.



[jira] [Commented] (HADOOP-6311) Add support for unix domain sockets to JNI libs

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442740#comment-13442740
 ] 

Hadoop QA commented on HADOOP-6311:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542648/HADOOP-6311.016.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 12 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1367//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1367//console

This message is automatically generated.

 Add support for unix domain sockets to JNI libs
 ---

 Key: HADOOP-6311
 URL: https://issues.apache.org/jira/browse/HADOOP-6311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: native
Affects Versions: 0.20.0
Reporter: Todd Lipcon
Assignee: Colin Patrick McCabe
 Attachments: 6311-trunk-inprogress.txt, HADOOP-6311.014.patch, 
 HADOOP-6311.016.patch, HADOOP-6311-0.patch, HADOOP-6311-1.patch, 
 hadoop-6311.txt


 For HDFS-347 we need to use Unix domain sockets. This JIRA is to include a 
 library in common which adds an o.a.h.net.unix package based on the code from 
 Android (Apache 2 license).



[jira] [Commented] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442742#comment-13442742
 ] 

Hadoop QA commented on HADOOP-8717:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542666/HADOOP-8717.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1368//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1368//console

This message is automatically generated.

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch, HADOOP-8717.patch




[jira] [Commented] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442750#comment-13442750
 ] 

Jianbin Wei commented on HADOOP-8717:
-

After the patch, I start dfs and yarn and rerun the job successfully.


⚡ bin/hadoop jar 
./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar wordcount 
/file.txt output3
12/08/27 14:52:34 INFO input.FileInputFormat: Total input paths to process : 1
12/08/27 14:52:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
12/08/27 14:52:34 WARN snappy.LoadSnappy: Snappy native library not loaded
12/08/27 14:52:34 INFO mapreduce.JobSubmitter: number of splits:1
12/08/27 14:52:34 WARN conf.Configuration: mapred.jar is deprecated. Instead, 
use mapreduce.job.jar
12/08/27 14:52:34 WARN conf.Configuration: mapred.output.value.class is 
deprecated. Instead, use mapreduce.job.output.value.class
12/08/27 14:52:34 WARN conf.Configuration: mapreduce.combine.class is 
deprecated. Instead, use mapreduce.job.combine.class
12/08/27 14:52:34 WARN conf.Configuration: mapreduce.map.class is deprecated. 
Instead, use mapreduce.job.map.class
12/08/27 14:52:34 WARN conf.Configuration: mapred.job.name is deprecated. 
Instead, use mapreduce.job.name
12/08/27 14:52:34 WARN conf.Configuration: mapreduce.reduce.class is 
deprecated. Instead, use mapreduce.job.reduce.class
12/08/27 14:52:34 WARN conf.Configuration: mapred.input.dir is deprecated. 
Instead, use mapreduce.input.fileinputformat.inputdir
12/08/27 14:52:34 WARN conf.Configuration: mapred.output.dir is deprecated. 
Instead, use mapreduce.output.fileoutputformat.outputdir
12/08/27 14:52:34 WARN conf.Configuration: mapred.map.tasks is deprecated. 
Instead, use mapreduce.job.maps
12/08/27 14:52:34 WARN conf.Configuration: mapred.output.key.class is 
deprecated. Instead, use mapreduce.job.output.key.class
12/08/27 14:52:34 WARN conf.Configuration: mapred.working.dir is deprecated. 
Instead, use mapreduce.job.working.dir
12/08/27 14:52:35 INFO mapred.ResourceMgrDelegate: Submitted application 
application_1346104327473_0002 to ResourceManager at /0.0.0.0:8032
12/08/27 14:52:35 INFO mapreduce.Job: The url to track the job: 
http://LM-SJN-00714134:8088/proxy/application_1346104327473_0002/
12/08/27 14:52:35 INFO mapreduce.Job: Running job: job_1346104327473_0002
12/08/27 14:52:42 INFO mapreduce.Job: Job job_1346104327473_0002 running in 
uber mode : false
12/08/27 14:52:42 INFO mapreduce.Job:  map 0% reduce 0%
12/08/27 14:52:46 INFO mapreduce.Job:  map 100% reduce 0%
12/08/27 14:52:50 INFO mapreduce.Job:  map 100% reduce 100%
12/08/27 14:52:50 INFO mapreduce.Job: Job job_1346104327473_0002 completed 
successfully
12/08/27 14:52:50 INFO mapreduce.Job: Counters: 43
File System Counters
FILE: Number of bytes read=3267
FILE: Number of bytes written=131819
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2767
HDFS: Number of bytes written=2279
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters 
Launched map tasks=1
Launched reduce tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=15752
Total time spent by all reduces in occupied slots (ms)=18328
Map-Reduce Framework
Map input records=87
Map output records=317
Map output bytes=3881
Map output materialized bytes=3027
Input split bytes=95
Combine input records=317
Combine output records=186
Reduce input groups=186
Reduce shuffle bytes=3027
Reduce input records=186
Reduce output records=186
Spilled Records=372
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=15
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=282660864
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters 
Bytes Read=2672
File Output Format Counters 
Bytes Written=2279


 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 

[jira] [Resolved] (HADOOP-8735) Missing support for dfs.umaskmode

2012-08-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-8735.


Resolution: Duplicate

 Missing support for dfs.umaskmode
 -

 Key: HADOOP-8735
 URL: https://issues.apache.org/jira/browse/HADOOP-8735
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha
Reporter: Jason Lowe
Priority: Critical

 dfs.umaskmode was a supported property in Hadoop 0.20/1.x, but it appears to 
 be completely ignored in 0.23/2.x.  We should at least have deprecated 
 support for this property.



[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-27 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442756#comment-13442756
 ] 

Andy Isaacson commented on HADOOP-8648:
---

I didn't understand why this code was wrong before, so I looked into it in more 
depth and I agree with Colin's analysis and patch.  In the interest of making 
this easier for others to understand, here are a few references.

http://www.ibiblio.org/gferg/ldp/GCC-Inline-Assembly-HOWTO.html explains the 
GCC inline assembly syntax, and in particular how the {{asm("some assembly" : 
output constraints : input constraints : clobbers)}} syntax is parsed, and how 
the constraints map to the {{%n}} operands in the assembly string.

http://asm.sourceforge.net/articles/rmiyagi-inline-asm.txt describes the x86 
indexed addressing modes, in particular explaining how {{(%5,%4,1)}} is 
interpreted as the word of memory at {{%5 + 1 * %4}}.

http://softwarecommunity.intel.com/userfiles/en-us/d9156103.pdf describes the 
details of the SSE4 CRC32 instruction in mind-numbing detail, but that's not 
especially relevant to this bug.  All we need to know is that {{crc32}}_size_ 
operates on 8, 32, or 64 bits depending on _size_, and its first argument is 
read-only while its second argument is used as an accumulator (read, modify, 
write).

Finally, the comments in bulk_crc32.c are very helpful.  Critically, the 
{{pipelined_crc32c}} routine optimizes by computing the CRC of up to 3 blocks 
in parallel.  The block size is passed in to {{pipelined_crc32c}} as 
{{block_size}}.  As we can see by looking at one of the other asm blocks in 
pipelined_crc32c, the core idea is that we maintain {{bdata}} as a pointer to 
the word being CRCed in the first block, and then use indexed addressing to 
compute the appropriate address for the word being CRCed in the second (and 
possibly third) blocks.

With all that under our belt, the bug in this code becomes clear:
{code}
"crc32b (%5), %0;\n\t"
"crc32b (%5,%4,1), %1;\n\t"
 : "=r"(c1), "=r"(c2)
 : "r"(c1), "r"(c2), "r"(c3), "r"(block_size), "r"(bdata)
{code}
The first crc32b instruction dereferences %5 which is {{block_size}}, but 
comparing to any other example of the similar asm block such as:
{code}
"crc32q (%7), %0;\n\t"
"crc32q (%7,%6,1), %1;\n\t"
"crc32q (%7,%6,2), %2;\n\t"
 : "=r"(c1), "=r"(c2), "=r"(c3)
 : "r"(c1), "r"(c2), "r"(c3), "r"(block_size), "r"(data)
{code}
it should be dereferencing {{bdata}}.  This is caused by the input constraints 
list including {{c3}} even though the output constraints list does not, again 
different from all other examples of the asm block.

Therefore, Colin's fix to remove {{c3}} from the input list causes the %4 and 
%5 references to refer to their intended operands {{block_size}} and {{bdata}} 
respectively.

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when blocksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.



[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-08-27 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442761#comment-13442761
 ] 

Andy Isaacson commented on HADOOP-8648:
---

I've reviewed the patch closely and agree that it's right.  The only tiny 
improvement I'd make is to fix this misleading comment in the 64-bit version of 
{{bulk_crc32.c}}:
{code}
374   int remainder = block_size % sizeof(uint64_t);
...
401   /* Take care of the remainder. They are only up to three bytes,
402* so performing byte-level crc32 won't take much time.
403*/
404   bdata = (uint8_t*)data;
405   while (likely(remainder)) {
{code}
The comment says "up to three bytes", but since this version consumes a 
uint64_t at a time, it should say "up to seven bytes".  This came from a 
copy-and-paste of the 32-bit version.

Ideally we could refactor the 32-bit and 64-bit versions to one using a 
{{#define WORD_T uint32_t}} or similar, but let's do that in a followup jira.

I've also reviewed the original patch in HADOOP-7446 and confirmed that there 
weren't any other similar bugs added.

Note that the erroneous asm is only hit if the HDFS blocksize is not a multiple 
of the wordsize, which AFAICS can only happen when {{blocksize & 7}} is 
nonzero.  And the bug lurked because the remainder codepath didn't have any 
tests, so thanks for adding those.

Overall, LGTM.  Fix the comment if you choose, else ship it.

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch




[jira] [Updated] (HADOOP-8717) JAVA_HOME detected in hadoop-config.sh under OS X does not work

2012-08-27 Thread Jianbin Wei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianbin Wei updated HADOOP-8717:


Component/s: (was: bin)
 conf

 JAVA_HOME detected in hadoop-config.sh under OS X does not work
 ---

 Key: HADOOP-8717
 URL: https://issues.apache.org/jira/browse/HADOOP-8717
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
 Environment: OS: Darwin 11.4.0 Darwin Kernel Version 11.4.0: Mon Apr  
 9 19:32:15 PDT 2012; root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64
 java version 1.6.0_33
 Java(TM) SE Runtime Environment (build 1.6.0_33-b03-424-11M3720)
 Java HotSpot(TM) 64-Bit Server VM (build 20.8-b03-424, mixed mode)
Reporter: Jianbin Wei
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8717.patch, HADOOP-8717.patch




[jira] [Created] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-27 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8737:


 Summary: cmake: always use JAVA_HOME to find libjvm.so, jni.h, 
jni_md.h
 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
{{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all times. 
 There is no point digging in system paths, and it can lead to host 
contamination if the user has multiple JVMs installed.



[jira] [Commented] (HADOOP-8727) Gracefully deprecate dfs.umaskmode in 2.x onwards

2012-08-27 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13442789#comment-13442789
 ] 

Eli Collins commented on HADOOP-8727:
-


I'd word "Configure the umask value to apply when creating new files using the 
FileSystem classes." instead as "The umask used when creating files and 
directories." since this applies to both files and directories, and e.g. to 
other APIs like FileContext or FsShell.

Otherwise looks great.

 Gracefully deprecate dfs.umaskmode in 2.x onwards
 -

 Key: HADOOP-8727
 URL: https://issues.apache.org/jira/browse/HADOOP-8727
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HADOOP-8727.patch, HADOOP-8727.patch


 While HADOOP-6234 added dfs.umaskmode in 0.21.0, the subsequent HADOOP-6233 
 simply renamed it, again in 0.21.0, without any deprecation mechanism 
 (understandable).
 However, 1.x now carries dfs.umaskmode but there isn't a graceful deprecation 
 when one upgrades to 2.x. We should recreate this prop and add it to the 
 deprecated list.



[jira] [Updated] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8737:
-

Attachment: HADOOP-8737.001.patch

 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8737.001.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.
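
A CMake fragment pinning JNI lookups to JAVA_HOME might look like the sketch below. This is an assumption about what the patch does, not its actual contents, and the paths assume a Linux amd64 JDK 6/7 layout.

```cmake
# Hedged sketch: locate jni.h, jni_md.h, and libjvm.so strictly under
# JAVA_HOME rather than letting CMake search system paths.
set(JAVA_HOME "$ENV{JAVA_HOME}")
if(NOT JAVA_HOME)
    message(FATAL_ERROR "JAVA_HOME must be set")
endif()

# jni.h lives in include/, jni_md.h in the platform subdirectory.
include_directories("${JAVA_HOME}/include" "${JAVA_HOME}/include/linux")

# NO_DEFAULT_PATH prevents falling back to system library directories.
find_library(JVM_LIBRARY jvm
    PATHS "${JAVA_HOME}/jre/lib"
    PATH_SUFFIXES amd64/server
    NO_DEFAULT_PATH)
```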

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8737:
-

Status: Patch Available  (was: Open)

 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8737.001.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-8736:
---

Attachment: HADOOP-8736.patch

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.
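
The shape of such a builder might be as follows. This is a hypothetical sketch of the pattern, not the actual patch; the class name, setters, and defaults are invented, and a real build() would construct an RPC.Server rather than a string.

```java
// Hypothetical sketch of a builder replacing the many getServer() variants;
// names are illustrative, not taken from the patch.
public class RpcServerBuilder {
    private String bindAddress = "0.0.0.0";
    private int port;
    private int numHandlers = 1;

    public RpcServerBuilder setBindAddress(String addr) { this.bindAddress = addr; return this; }
    public RpcServerBuilder setPort(int port) { this.port = port; return this; }
    public RpcServerBuilder setNumHandlers(int n) { this.numHandlers = n; return this; }

    public String build() {
        // A real build() would validate the settings and construct the
        // server; here we just summarize the accumulated configuration.
        return bindAddress + ":" + port + " handlers=" + numHandlers;
    }

    public static void main(String[] args) {
        String server = new RpcServerBuilder()
                .setPort(8020)
                .setNumHandlers(10)
                .build();
        System.out.println(server);  // prints 0.0.0.0:8020 handlers=10
    }
}
```

The win is that new optional parameters become new setters instead of new getServer() overloads, so call sites only name the settings they care about.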

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442805#comment-13442805
 ] 

Brandon Li commented on HADOOP-8736:


Uploaded a new patch, rebased against the HEAD of trunk.

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-27 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-8738:
--

 Summary: junit JAR is showing up in the distro
 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha


It seems that with the move of the YARN module to the trunk/ level, the test 
scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-27 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8738:
---

Attachment: HADOOP-8738.patch

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-27 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8738:
---

Status: Patch Available  (was: Open)

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8736) Create a Builder to make an RPC server

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442819#comment-13442819
 ] 

Hadoop QA commented on HADOOP-8736:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542693/HADOOP-8736.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 8 new or modified test 
files.

-1 javac.  The applied patch generated 2069 javac compiler warnings (more 
than the trunk's current 2059 warnings).

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1370//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1370//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1370//console

This message is automatically generated.

 Create a Builder to make an RPC server
 --

 Key: HADOOP-8736
 URL: https://issues.apache.org/jira/browse/HADOOP-8736
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-8736.patch, HADOOP-8736.patch


 There are quite a few variants of getServer() method to create an RPC server. 
 Create a builder class to abstract the building steps and avoid more 
 getServer() variants in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8657:
---

Attachment: HADOOP-8657.branch-1-win.1.patch

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8657:
---

Status: Patch Available  (was: Open)

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442823#comment-13442823
 ] 

Bikas Saha commented on HADOOP-8657:


The test was failing because it was checking for file sizes on disk and the 
sizes were hardcoded in the test. Text file sizes can differ across platforms 
based on character encodings, line endings, etc. The fix is to read the actual 
file size from disk and then check values based on that instead of a hardcoded 
value. The test files are actually checked into the source as resources. 
Ideally, the test would generate these files on the fly instead of checking 
them in, but I am leaving that re-organization of the code tree for later, when 
the branch is merged back, so as to simplify the merge.
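
The fix described above boils down to asking the filesystem for the size rather than asserting a constant. A minimal sketch, with an invented class name (the actual TestCLI change will differ):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: derive the expected length from the file on disk
// rather than hardcoding a byte count that varies with encodings and
// line endings across platforms.
public class FileLengthCheck {
    public static long expectedLength(Path testFile) throws IOException {
        return Files.size(testFile);  // actual on-disk size in bytes
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("testcli", ".txt");
        Files.write(p, "hello\n".getBytes("UTF-8"));
        System.out.println(expectedLength(p));  // prints 6
        Files.delete(p);
    }
}
```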

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8739) Cmd scripts for Windows have issues in argument parsing

2012-08-27 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8739:
--

 Summary: Cmd scripts for Windows have issues in argument parsing
 Key: HADOOP-8739
 URL: https://issues.apache.org/jira/browse/HADOOP-8739
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha


The parsing of the arguments has a bug in the way they are broken down, and 
this breaks things such as handling globbing (the * character).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442834#comment-13442834
 ] 

Hadoop QA commented on HADOOP-8737:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542691/HADOOP-8737.001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1371//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1371//console

This message is automatically generated.

 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8737.001.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8664) hadoop streaming job need the full path to commands even when they are in the path

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8664:
---

Status: Patch Available  (was: Open)

 hadoop streaming job need the full path to commands even when they are in the 
 path
 --

 Key: HADOOP-8664
 URL: https://issues.apache.org/jira/browse/HADOOP-8664
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8664.branch-1-win.1.patch


 Run a hadoop streaming job as
 bin/hadoop jar path_to_streaming_jar -input path_on_hdfs -mapper cat -output 
 path_on_hdfs -reducer cat
 and it will fail, saying the program cat was not found. cat is in the path and 
 works from the cmd prompt. If I give the full path to cmd.exe, the exception 
 is not seen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8664) hadoop streaming job need the full path to commands even when they are in the path

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8664:
---

Attachment: HADOOP-8664.branch-1-win.1.patch

The path was being overridden incorrectly. Fixed.

 hadoop streaming job need the full path to commands even when they are in the 
 path
 --

 Key: HADOOP-8664
 URL: https://issues.apache.org/jira/browse/HADOOP-8664
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8664.branch-1-win.1.patch


 Run a hadoop streaming job as
 bin/hadoop jar path_to_streaming_jar -input path_on_hdfs -mapper cat -output 
 path_on_hdfs -reducer cat
 and it will fail, saying the program cat was not found. cat is in the path and 
 works from the cmd prompt. If I give the full path to cmd.exe, the exception 
 is not seen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8739) Cmd scripts for Windows have issues in argument parsing

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8739:
---

Status: Patch Available  (was: Open)

 Cmd scripts for Windows have issues in argument parsing
 ---

 Key: HADOOP-8739
 URL: https://issues.apache.org/jira/browse/HADOOP-8739
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8739.branch-1-win.1.patch


 The parsing of the arguments has a bug in the way they are broken down, and 
 this breaks things such as handling globbing (the * character).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442867#comment-13442867
 ] 

Hadoop QA commented on HADOOP-8657:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12542699/HADOOP-8657.branch-1-win.1.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1372//console

This message is automatically generated.

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8739) Cmd scripts for Windows have issues in argument parsing

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442874#comment-13442874
 ] 

Hadoop QA commented on HADOOP-8739:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12542705/HADOOP-8739.branch-1-win.1.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1374//console

This message is automatically generated.

 Cmd scripts for Windows have issues in argument parsing
 ---

 Key: HADOOP-8739
 URL: https://issues.apache.org/jira/browse/HADOOP-8739
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8739.branch-1-win.1.patch


 The parsing of the arguments has a bug in the way they are broken down, and 
 this breaks things such as handling globbing (the * character).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8664) hadoop streaming job need the full path to commands even when they are in the path

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442875#comment-13442875
 ] 

Hadoop QA commented on HADOOP-8664:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12542706/HADOOP-8664.branch-1-win.1.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1375//console

This message is automatically generated.

 hadoop streaming job need the full path to commands even when they are in the 
 path
 --

 Key: HADOOP-8664
 URL: https://issues.apache.org/jira/browse/HADOOP-8664
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8664.branch-1-win.1.patch


 Run a hadoop streaming job as
 bin/hadoop jar path_to_streaming_jar -input path_on_hdfs -mapper cat -output 
 path_on_hdfs -reducer cat
 and it will fail, saying the program cat was not found. cat is in the path and 
 works from the cmd prompt. If I give the full path to cmd.exe, the exception 
 is not seen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442879#comment-13442879
 ] 

Hadoop QA commented on HADOOP-8738:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12542696/HADOOP-8738.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1373//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1373//console

This message is automatically generated.

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8740) Build target to generate findbugs html output

2012-08-27 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8740:
---

 Summary: Build target to generate findbugs html output
 Key: HADOOP-8740
 URL: https://issues.apache.org/jira/browse/HADOOP-8740
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Eli Collins


It would be useful if there was a build target or flag to generate findbugs 
output. It would depend on {{mvn compile findbugs:findbugs}} and run 
{{$FINDBUGS_HOME/bin/convertXmlToText -html ../path/to/findbugsXml.xml 
findbugs.html}} to generate findbugs.html in the target directory.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8740) Build target to generate findbugs html output

2012-08-27 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442932#comment-13442932
 ] 

Alejandro Abdelnur commented on HADOOP-8740:


Also, to make it easier to pick up a new exclusion to add to the exclude file, 
we could use this custom filter helper: 
http://kabir-khan.blogspot.com/2009/10/findbugs-filter-creation.html


 Build target to generate findbugs html output
 -

 Key: HADOOP-8740
 URL: https://issues.apache.org/jira/browse/HADOOP-8740
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Eli Collins

 It would be useful if there was a build target or flag to generate findbugs 
 output. It would depend on {{mvn compile findbugs:findbugs}} and run 
 {{$FINDBUGS_HOME/bin/convertXmlToText -html ../path/to/findbugsXml.xml 
 findbugs.html}} to generate findbugs.html in the target directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6851) Fix '$bin' path duplication in setup scripts

2012-08-27 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442960#comment-13442960
 ] 

Jianbin Wei commented on HADOOP-6851:
-

Hi Lane, you may want to reattach a new patch and click Submit Patch to kick 
off the automatic Hadoop QA process.  On my machine, the patch failed to 
apply.


⚡ dev-support/test-patch.sh ~/Downloads/HADOOP-6851-0.22.patch 
Running in developer mode


==
==
Testing patch for HADOOP-6851-0.22.patch.
==
==



Patch file /Users/jianbwei/Downloads/HADOOP-6851-0.22.patch copied to /tmp
The patch does not appear to apply with p0 to p2
PATCH APPLICATION FAILED




-1 overall.  

-1 patch.  The patch command could not apply the patch.




==
==
Finished build.
==
==


 Fix '$bin' path duplication in setup scripts
 

 Key: HADOOP-6851
 URL: https://issues.apache.org/jira/browse/HADOOP-6851
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Nicolas Spiegelberg
Priority: Trivial
 Attachments: HADOOP-6851-0.22.patch


 I have my bash environment setup to echo absolute pathnames when a relative 
 one is specified in 'cd'. This caused problems with all the Hadoop bash 
 scripts because the script accidentally sets the $bin variable twice in this 
 setup. (e.g. would set $bin=/path/bin/hadoop\n/path/bin/hadoop).
 This jira is for common scripts.  I filed a separate jira for HDFS scripts, 
 which share the same pattern.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-27 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony reassigned HADOOP-8726:


Assignee: Benoy Antony

 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue  exists with security on/off. 
 This is related to the change in HADOOP-8225

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8726) The Secrets in Credentials are not available to MR tasks

2012-08-27 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-8726:
-

Attachment: HADOOP-8726.patch

Attaching a new patch.

Tokens and Secrets are maintained by Credentials.
The Credentials object is stored in Subject.privateCredentials.

Benefits of this approach include:

1) The UGI class is independent of the internals of the Credentials class.

2) Tokens and Secrets are handled uniformly, with the handling delegated to 
the Credentials class.

3) Relatively better performance, as the UGI#getCredentials() and 
UGI#addToken functions do not iterate over the full collection.

 The Secrets in Credentials are not available to MR tasks
 

 Key: HADOOP-8726
 URL: https://issues.apache.org/jira/browse/HADOOP-8726
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-8726.patch, HADOOP-8726.patch


 Though secrets are passed in Credentials, the secrets are not available to 
 the MR tasks.
 This issue  exists with security on/off. 
 This is related to the change in HADOOP-8225

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8738) junit JAR is showing up in the distro

2012-08-27 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442973#comment-13442973
 ] 

Eli Collins commented on HADOOP-8738:
-

+1 lgtm

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to the trunk/ level, the test 
 scope on junit got lost. This makes the junit JAR show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira