[jira] [Updated] (HADOOP-8967) Reported source for config property can be misleading

2016-01-04 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8967:

Assignee: (was: Robert Joseph Evans)

> Reported source for config property can be misleading
> -
>
> Key: HADOOP-8967
> URL: https://issues.apache.org/jira/browse/HADOOP-8967
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8967.patch, HADOOP-8967.txt
>
>
> Configuration.set tries to track the source of a property being set, but it 
> mistakenly reports properties as being deprecated when they are not.  This is 
> misleading and confusing for users examining a job's configuration.
> For example, run a sleep job and check the job configuration on the job UI.  
> The source for the "mapreduce.job.maps" property will be reported as "job.xml 
> ⬅ because mapreduce.job.maps is deprecated".  This leads users to think 
> mapreduce.job.maps is now a deprecated property and wonder what other 
> property they should use instead.
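
For readers who want to check where a value came from, a minimal sketch using 
Configuration.getPropertySources (a 2.x-era API; note that mapreduce.job.maps 
only resolves when the MR default resources are on the classpath, and the 
output shown is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class ShowPropertySource {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // getPropertySources reports where each value was set; per this
    // issue, the trailing "because ... is deprecated" annotation can
    // appear even when the property is not actually deprecated
    String[] sources = conf.getPropertySources("mapreduce.job.maps");
    if (sources != null) {
      for (String source : sources) {
        System.out.println("mapreduce.job.maps set from: " + source);
      }
    }
  }
}
{code}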



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8661) RemoteException's Stack Trace would be better returned by getStackTrace

2016-01-04 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8661:

Assignee: (was: Robert Joseph Evans)

> RemoteException's Stack Trace would be better returned by getStackTrace
> ---
>
> Key: HADOOP-8661
> URL: https://issues.apache.org/jira/browse/HADOOP-8661
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
>Reporter: Robert Joseph Evans
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8661.txt, HADOOP-8661.txt, HADOOP-8661.txt
>
>
> It looks like all exceptions produced by RemoteException include the full 
> stack trace of the original exception in the message.  This differs from the 
> 1.0 behavior and aids in debugging, but it would be nice to actually parse the 
> stack trace and return it through getStackTrace instead of through getMessage.
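
As a rough illustration of the direction proposed here (a sketch, not the 
attached patch), the remote frames could be parsed out of the message and 
installed on the exception with Throwable.setStackTrace; the frame layout 
assumed below is the standard "at Class.method(File.java:NN)" format:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RemoteTraceParser {
  // matches frames like "  at org.example.Foo.bar(Foo.java:42)"
  private static final Pattern FRAME = Pattern.compile(
      "\\s*at\\s+([\\w.$]+)\\.([\\w$<>]+)\\(([^:)]+):?(\\d*)\\)");

  static StackTraceElement[] parse(String remoteMessage) {
    List<StackTraceElement> frames = new ArrayList<StackTraceElement>();
    for (String line : remoteMessage.split("\n")) {
      Matcher m = FRAME.matcher(line);
      if (m.matches()) {
        int lineNo = m.group(4).isEmpty() ? -1 : Integer.parseInt(m.group(4));
        frames.add(new StackTraceElement(
            m.group(1), m.group(2), m.group(3), lineNo));
      }
    }
    // the caller would then do: exception.setStackTrace(frames)
    return frames.toArray(new StackTraceElement[0]);
  }
}
{code}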



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8724) Add improved APIs for globbing

2016-01-04 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8724:

Assignee: (was: Robert Joseph Evans)

> Add improved APIs for globbing
> --
>
> Key: HADOOP-8724
> URL: https://issues.apache.org/jira/browse/HADOOP-8724
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8724.patch, HADOOP-8724.txt
>
>
> After the discussion on HADOOP-8709 it was decided that we need better APIs 
> for globbing to remove some of the inconsistencies with other APIs.  In order 
> to maintain backwards compatibility we should deprecate the existing APIs and 
> add in new ones.
> See HADOOP-8709 for more information about exactly how those APIs should look 
> and behave.
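
For context, a minimal sketch of the existing API that would be deprecated 
(the path pattern is made up); the inconsistent return contract discussed in 
HADOOP-8709 is visible here, since a non-matching pattern yields an empty 
array or null rather than an exception:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GlobExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // expand a glob against the default file system
    FileStatus[] matches = fs.globStatus(new Path("/data/logs/*/part-*"));
    if (matches != null) {
      for (FileStatus st : matches) {
        System.out.println(st.getPath());
      }
    }
  }
}
{code}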



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6842) hadoop fs -text does not give a useful text representation of MapWritable objects

2015-05-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-6842:

   Resolution: Fixed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks [~ajisakaa],

I merged this into trunk and branch-2.

 hadoop fs -text does not give a useful text representation of MapWritable 
 objects
 ---

 Key: HADOOP-6842
 URL: https://issues.apache.org/jira/browse/HADOOP-6842
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 0.20.0
Reporter: Steven Wong
Assignee: Akira AJISAKA
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-6842.002.patch, HADOOP-6842.patch


 If a sequence file contains MapWritable objects, running hadoop fs -text on 
 the file prints the following for each MapWritable:
 org.apache.hadoop.io.MapWritable@4f8235ed
 To be more useful, it should print out the contents of the map instead. This 
 can be done by adding a toString method to MapWritable, i.e. something like:
 public String toString() {
 return (new TreeMap<Writable, Writable>(instance)).toString();
 }
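
A runnable sketch of that suggestion; note that copying into a TreeMap assumes 
the keys have a natural ordering (e.g. Text, which is WritableComparable):

{code}
import java.util.TreeMap;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class MapWritableToString {
  public static void main(String[] args) {
    MapWritable m = new MapWritable();
    m.put(new Text("a.key"), new Text("value"));
    m.put(new Text("mapreduce.job.maps"), new IntWritable(2));
    // TreeMap sorts the entries by key and renders them readably
    System.out.println(new TreeMap<Writable, Writable>(m));
    // prints: {a.key=value, mapreduce.job.maps=2}
  }
}
{code}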



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6842) hadoop fs -text does not give a useful text representation of MapWritable objects

2015-05-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534876#comment-14534876
 ] 

Robert Joseph Evans commented on HADOOP-6842:
-

+1 looks good to me.  I'll check this in.

 hadoop fs -text does not give a useful text representation of MapWritable 
 objects
 ---

 Key: HADOOP-6842
 URL: https://issues.apache.org/jira/browse/HADOOP-6842
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 0.20.0
Reporter: Steven Wong
Assignee: Akira AJISAKA
 Attachments: HADOOP-6842.002.patch, HADOOP-6842.patch


 If a sequence file contains MapWritable objects, running hadoop fs -text on 
 the file prints the following for each MapWritable:
 org.apache.hadoop.io.MapWritable@4f8235ed
 To be more useful, it should print out the contents of the map instead. This 
 can be done by adding a toString method to MapWritable, i.e. something like:
 public String toString() {
 return (new TreeMap<Writable, Writable>(instance)).toString();
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6842) hadoop fs -text does not give a useful text representation of MapWritable objects

2015-05-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-6842:

Labels:   (was: BB2015-05-RFC)

 hadoop fs -text does not give a useful text representation of MapWritable 
 objects
 ---

 Key: HADOOP-6842
 URL: https://issues.apache.org/jira/browse/HADOOP-6842
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 0.20.0
Reporter: Steven Wong
Assignee: Akira AJISAKA
 Attachments: HADOOP-6842.002.patch, HADOOP-6842.patch


 If a sequence file contains MapWritable objects, running hadoop fs -text on 
 the file prints the following for each MapWritable:
 org.apache.hadoop.io.MapWritable@4f8235ed
 To be more useful, it should print out the contents of the map instead. This 
 can be done by adding a toString method to MapWritable, i.e. something like:
 public String toString() {
 return (new TreeMap<Writable, Writable>(instance)).toString();
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11206) TestCryptoCodec.testOpensslAesCtrCryptoCodec fails on master without native code compiled

2014-10-15 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-11206:


 Summary: TestCryptoCodec.testOpensslAesCtrCryptoCodec fails on 
master without native code compiled
 Key: HADOOP-11206
 URL: https://issues.apache.org/jira/browse/HADOOP-11206
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans


I tried to run the unit tests recently for another issue, and didn't turn on 
native code.  I got the following error.

{code}
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.71 sec  
FAILURE! - in org.apache.hadoop.crypto.TestCryptoCodec
testOpensslAesCtrCryptoCodec(org.apache.hadoop.crypto.TestCryptoCodec)  Time 
elapsed: 0.064 sec   ERROR!
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native 
Method)
at 
org.apache.hadoop.crypto.TestCryptoCodec.testOpensslAesCtrCryptoCodec(TestCryptoCodec.java:66)
{code}

Looks like that test needs an assume that native code is loaded/compiled.
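
A minimal sketch of such a guard (the timeout and test skeleton are 
illustrative); the isNativeCodeLoaded() check must come first, since 
buildSupportsOpenssl() is itself a native method:

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Test;

public class TestOpensslGuard {
  @Test(timeout = 120000)
  public void testOpensslAesCtrCryptoCodec() {
    // skip (rather than error) when native code is not compiled in
    assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
    assumeTrue(NativeCodeLoader.buildSupportsOpenssl());
    // ... the actual codec test would go here ...
  }
}
{code}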



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2014-10-15 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172433#comment-14172433
 ] 

Robert Joseph Evans commented on HADOOP-9422:
-

We have run into this issue trying to use the latest HBase through Storm, and 
also when trying to use Slider on a custom app.  It really would be nice to get 
this into a 2.X release at some point soon, so I don't have to hack around it 
by setting HADOOP_HOME=/tmp/, which really just feels wrong to me.

The code compiles and the unit tests (related to this) pass for me despite the 
age of the patch, so I am +1, but as I have not been actively involved in 
Hadoop development for quite a while I would like others to take a look as 
well. [~jlowe] do you mind taking a look?

I filed HADOOP-11206 for the one test error I found.

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Hitesh Shah
Assignee: Arpit Gupta
 Attachments: HADOOP-9422.patch


 Not sure why this is an enforced requirement especially in cases where a 
 deployment is done using multiple tar-balls ( one each for 
 common/hdfs/mapreduce/yarn ). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-10776) Open up Delegation token fetching and renewal to STORM (Possibly others)

2014-07-02 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-10776:


 Summary: Open up Delegation token fetching and renewal to STORM 
(Possibly others)
 Key: HADOOP-10776
 URL: https://issues.apache.org/jira/browse/HADOOP-10776
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans


Storm would like to be able to fetch delegation tokens and forward them on to 
running topologies so that they can access HDFS (STORM-346).  But to do so we 
need to open up access to some of these APIs. 

Most notably FileSystem.addDelegationTokens(), Token.renew, 
Credentials.getAllTokens, and UserGroupInformation, but there may be others.

At a minimum we should add Storm to the list of allowed API users, but ideally 
we should make these APIs public. Restricting access to such important 
functionality to just MR really makes secure HDFS inaccessible to anything 
except MR, or tools that reuse MR input formats.
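
A rough sketch of the fetch side using those APIs (the renewer name is made 
up, and on an insecure cluster no tokens come back):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class FetchTokens {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Credentials creds = new Credentials();
    // fetch HDFS delegation tokens on the gateway; the Credentials can
    // then be serialized and shipped to the running topology workers
    FileSystem fs = FileSystem.get(conf);
    fs.addDelegationTokens("storm-renewer", creds);
    for (Token<?> t : creds.getAllTokens()) {
      System.out.println("fetched " + t.getKind() + " for " + t.getService());
    }
  }
}
{code}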



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-18 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852147#comment-13852147
 ] 

Robert Joseph Evans commented on HADOOP-10164:
--

Great I'll merge it in.

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For storm I would love to let Hadoop initialize based off of credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based off a pre-existing subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-18 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-10164:
-

  Resolution: Fixed
   Fix Version/s: 0.23.11
  2.4.0
  3.0.0
Target Version/s: 2.2.0, 0.23.10, 3.0.0  (was: 3.0.0, 0.23.10, 2.2.0)
  Status: Resolved  (was: Patch Available)

I checked this into trunk, branch-2, and branch-0.23

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0, 2.4.0, 0.23.11

 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For storm I would love to let Hadoop initialize based off of credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based off a pre-existing subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-13 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-10164:
-

Attachment: login-from-subject.txt
login-from-subject-branch-0.23.txt

login-from-subject-branch-0.23 is only for branch-0.23.  login-from-subject 
applies to trunk, branch-2, and branch-2.2

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For storm I would love to let Hadoop initialize based off of credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based off a pre-existing subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Created] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-13 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-10164:


 Summary: Allow UGI to login with a known Subject
 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, login-from-subject.txt

For storm I would love to let Hadoop initialize based off of credentials that 
were already populated in a Subject.  This is not currently possible because 
logging in a user always creates a new blank Subject.  This is to allow a user 
to be logged in based off a pre-existing subject through a new method.
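
A minimal sketch of the intended usage, assuming the entry point that was 
ultimately committed for this issue (UserGroupInformation.loginUserFromSubject):

{code}
import javax.security.auth.Subject;
import org.apache.hadoop.security.UserGroupInformation;

public class LoginFromSubject {
  public static void main(String[] args) throws Exception {
    // a Subject that the host application (e.g. Storm) has already
    // populated through its own JAAS login, rather than Hadoop's
    Subject subject = new Subject();
    UserGroupInformation.loginUserFromSubject(subject);
    System.out.println("logged in as "
        + UserGroupInformation.getLoginUser().getUserName());
  }
}
{code}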



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-13 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-10164:
-

Target Version/s: 2.2.0, 0.23.10, 3.0.0  (was: 3.0.0, 0.23.10, 2.2.0)
  Status: Patch Available  (was: Open)

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For storm I would love to let Hadoop initialize based off of credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based off a pre-existing subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-13 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847930#comment-13847930
 ] 

Robert Joseph Evans commented on HADOOP-10164:
--

The test failure appears to be spurious.  The test failed for me in 1 out of 4 
runs, and it is a multi-threaded test case as well. I will look into it a bit 
more and file a JIRA about the test failure.

I did not include any new tests, because this is a really small refactor and 
the existing tests should all pass.



 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For storm I would love to let Hadoop initialize based off of credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based off a pre-existing subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10164) Allow UGI to login with a known Subject

2013-12-13 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13847932#comment-13847932
 ] 

Robert Joseph Evans commented on HADOOP-10164:
--

Ahh, it looks like someone else already found this: HADOOP-10062.

 Allow UGI to login with a known Subject
 ---

 Key: HADOOP-10164
 URL: https://issues.apache.org/jira/browse/HADOOP-10164
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: login-from-subject-branch-0.23.txt, 
 login-from-subject.txt


 For storm I would love to let Hadoop initialize based off of credentials that 
 were already populated in a Subject.  This is not currently possible because 
 logging in a user always creates a new blank Subject.  This is to allow a 
 user to be logged in based off a pre-existing subject through a new method.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-8477) Pull in Yahoo! Hadoop Tutorial and update it accordingly.

2013-08-12 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13736873#comment-13736873
 ] 

Robert Joseph Evans commented on HADOOP-8477:
-

I do like the question you have.
{quote}What are other Apache partners doing?{quote}

We definitely want to make the tutorial Apache centric, not Yahoo! centric.  I 
think it is fine to include some of the cool things that different groups are 
doing with Hadoop too, but me being at Yahoo! makes it kind of hard to really 
talk about what others are doing.

In the section on YARN I think it is good to indicate that it is in 2.X but not 
1.X, just to get around the old numbering that we are still recovering from.

What does the future hold? I see a lot more application types running on YARN.

Overall I like what you have done.  Thanks.

 Pull in Yahoo! Hadoop Tutorial and update it accordingly.
 -

 Key: HADOOP-8477
 URL: https://issues.apache.org/jira/browse/HADOOP-8477
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 1.1.0, 2.0.0-alpha
Reporter: Robert Joseph Evans
 Attachments: Module 1 Hadoop Tutorial Introduction.docx, tutorial.tgz


 I was able to get the Yahoo! Hadoop tutorial released under an Apache 2.0 
 license.  This allows us to make it an official part of the Hadoop Project.  
 This ticket is to pull the tutorial and update it as needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8477) Pull in Yahoo! Hadoop Tutorial and update it accordingly.

2013-07-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702033#comment-13702033
 ] 

Robert Joseph Evans commented on HADOOP-8477:
-

Always glad to see more people helping out with the community.  If you want 
some help building the documentation feel free to send me the build error you 
are seeing, or send a mail to u...@hadoop.apache.org

Once you start posting some patches we probably want to start creating some 
sub-tasks to put them in piecemeal.

 

 Pull in Yahoo! Hadoop Tutorial and update it accordingly.
 -

 Key: HADOOP-8477
 URL: https://issues.apache.org/jira/browse/HADOOP-8477
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 1.1.0, 2.0.0-alpha
Reporter: Robert Joseph Evans
 Attachments: tutorial.tgz


 I was able to get the Yahoo! Hadoop tutorial released under an Apache 2.0 
 license.  This allows us to make it an official part of the Hadoop Project.  
 This ticket is to pull the tutorial and update it as needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9421:


Comment: was deleted

(was: I think it's close. It needs to be rebased against trunk for atm's 
security fix. I'm also adding two unit tests to make sure fallback prevention 
actually works.)

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9421:


Comment: was deleted

(was: Test failures were due to atm's r1494787 checkin. New patch to make the 
tests work again.)

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing

2013-06-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9421:


Comment: was deleted

(was: Looks like I need to merge with atm's fall-back-to-simple option commit 
(without a JIRA). )

 Convert SASL to use ProtoBuf and add lengths for non-blocking processing
 

 Key: HADOOP-9421
 URL: https://issues.apache.org/jira/browse/HADOOP-9421
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.3-alpha
Reporter: Sanjay Radia
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
 HADOOP-9421.patch, HADOOP-9421-v2-demo.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-21 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13663199#comment-13663199
 ] 

Robert Joseph Evans commented on HADOOP-9438:
-

I think the patch looks fine and I am +1 for this, but I really would like 
someone who is much more on the HDFS side to also take a look before checking 
it in.  Especially because this is technically an incompatible change.

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction<T>

2013-05-14 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657095#comment-13657095
 ] 

Robert Joseph Evans commented on HADOOP-9046:
-

Ivan, I am sorry I dropped the ball on this and did not respond sooner.  Sadly, 
though, the only time we want to put something into just 0.23 without going 
into trunk/branch-2 first is if there is a bug that only exists in 
branch-0.23.  This is a very rare occurrence.  If you think the coverage is 
good enough on branch-2 and trunk, would it be acceptable to resolve this JIRA 
instead?

 provide unit-test coverage of class 
 org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction<T>
 --

 Key: HADOOP-9046
 URL: https://issues.apache.org/jira/browse/HADOOP-9046
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9046-branch-0.23--c.patch, 
 HADOOP-9046-branch-0.23--d.patch, HADOOP-9046-branch-0.23-over-9049.patch, 
 HADOOP-9046-branch-0.23--over-HDFS-4567.patch, HADOOP-9046-branch-0.23.patch, 
 HADOOP-9046-branch-2--over-HDFS-4567.patch, HADOOP-9046--c.patch, 
 HADOOP-9046--d.patch, HADOOP-9046--e.patch, HADOOP-9046-over-9049.patch, 
 HADOOP-9046.patch, HADOOP-9046-trunk--over-HDFS-4567.patch


 The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction<T> has zero 
 coverage in the entire cumulative test run. Provide test(s) to cover this 
 class.
 Note: the request was submitted to the HDFS project because the class is 
 likely to be tested by tests in that project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-03 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13648443#comment-13648443
 ] 

Robert Joseph Evans commented on HADOOP-9438:
-

Like I said initially, I am fine with having the javadocs and interface for 
FileContext.mkdir changed.  The problem isn't that FileContext does not throw 
the exception; the problem is that it is inconsistent with the documented 
interface. This resulted in incorrect code being written in YARN. The above 
code is not going to solve the problem because there is a race: if fs.exists 
returns false and the directory is then created by another process, the mkdir 
becomes a no-op and the interface still is not followed.

Changing the FileSystem definition to throw an exception on mkdir is not 
acceptable either.  This will not just break tests, it will break lots of 
downstream customers.  

The trick with changing the FileContext definition is that we have to be sure 
that it is in line with the other implementations as well.  If all of the 
FileContext implementations are wrappers around FileSystem implementations then 
it should not be a problem to change this.
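
For illustration, the usage pattern the documented contract is supposed to 
enable (and which this issue reports the local implementation does not honor); 
the decision is atomic, with no exists()/mkdir race window:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class AtomicMkdir {
  public static void main(String[] args) throws Exception {
    FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
    Path p = new Path("/tmp/bobby.12345");
    try {
      // per the javadocs this either creates the directory or throws;
      // a separate exists() pre-check could race with another process
      lfc.mkdir(p, new FsPermission((short) 0755), false);
      System.out.println("created " + p);
    } catch (FileAlreadyExistsException e) {
      System.out.println(p + " already existed");
    }
  }
}
{code}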

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, HADOOP-9438.patch, 
 HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9426) Hadoop should expose Jar location utilities on its public API

2013-04-04 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1362#comment-1362
 ] 

Robert Joseph Evans commented on HADOOP-9426:
-

I would prefer not to see jar creation on the fly, or if we do include it, that 
there is a separate option that does not do it.  It adds a lot of complexity 
that I would prefer not to deal with, and I doubt that most people need it.  
Even Oozie, I would argue, does not need it any more: the distributed cache no 
longer fully unzips jars, so it is probably picking up the proper jar already.

 Hadoop should expose Jar location utilities on its public API
 -

 Key: HADOOP-9426
 URL: https://issues.apache.org/jira/browse/HADOOP-9426
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0, 2.0.0-alpha
Reporter: Nick Dimiduk
 Attachments: 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch, 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch


 The facilities behind JobConf#setJarByClass and the JarFinder utility in test 
 are both generally useful. As the core platform, these should be published as 
 part of the public API. In addition to HBase, they are probably useful for 
 Pig and Hive as well. See also HBASE-2588, HBASE-5317, HBASE-8140.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9426) Hadoop should expose Jar location utilities on its public API

2013-04-02 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619880#comment-13619880
 ] 

Robert Joseph Evans commented on HADOOP-9426:
-

[~tucu00],  The reason for putting this functionality in is because now that 
YARN is gaining traction there are more and more tools that run on top of YARN 
that want this ability, similar to how Map/Reduce uses it.  Yes -libjars is a 
great way to ship dependencies for Map/Reduce, but MR also exposes this through 
other interfaces and for non-mapreduce tools it would be nice to have this 
functionality in a common place.  I am not sure that hadoop-common is the best 
place for this.  yarn might be a better place, but I think common is an OK 
place.

 Hadoop should expose Jar location utilities on its public API
 -

 Key: HADOOP-9426
 URL: https://issues.apache.org/jira/browse/HADOOP-9426
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0, 2.0.0-alpha
Reporter: Nick Dimiduk
 Attachments: 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch, 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch


 The facilities behind JobConf#setJarByClass and the JarFinder utility in test 
 are both generally useful. As the core platform, these should be published as 
 part of the public API. In addition to HBase, they are probably useful for 
 Pig and Hive as well. See also HBASE-2588, HBASE-5317, HBASE-8140.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9426) Hadoop should expose Jar location utilities on its public API

2013-04-02 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619905#comment-13619905
 ] 

Robert Joseph Evans commented on HADOOP-9426:
-

That being said, I don't see much reason to have most of the other methods in 
there.  You have grabbed a class that is designed for testing and re-purposed 
it.  I see no reason to automatically create a jar if the .class file is not in 
a jar already, especially placing it in a directory defined by test.build.dir 
and probably pointing to target/test-dir.  I also see no reason to turn an 
IOException into a RuntimeException.  This is fine for testing, but makes it 
more difficult to deal with exceptions correctly in other places.

I would much rather see ClassUtil.java be made public and stable.  It is what 
MR is using anyway, although it still turns an IOException into a 
RuntimeException, but I guess you cannot have everything.
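
For reference, a minimal sketch of the ClassUtil usage in question (assuming 
org.apache.hadoop.util.ClassUtil is promoted to public/stable as suggested):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ClassUtil;

public class WhereIsMyJar {
  public static void main(String[] args) {
    // locate the jar that shipped a class, the same trick behind
    // JobConf#setJarByClass; returns null when the class was loaded
    // from a bare directory rather than from a jar
    String jar = ClassUtil.findContainingJar(Configuration.class);
    System.out.println("Configuration lives in: " + jar);
  }
}
{code}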

 Hadoop should expose Jar location utilities on its public API
 -

 Key: HADOOP-9426
 URL: https://issues.apache.org/jira/browse/HADOOP-9426
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0, 2.0.0-alpha
Reporter: Nick Dimiduk
 Attachments: 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch, 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch


 The facilities behind JobConf#setJarByClass and the JarFinder utility in test 
 are both generally useful. As the core platform, these should be published as 
 part of the public API. In addition to HBase, they are probably useful for 
 Pig and Hive as well. See also HBASE-2588, HBASE-5317, HBASE-8140.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9426) Hadoop should expose Jar location utilities on its public API

2013-04-02 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619972#comment-13619972
 ] 

Robert Joseph Evans commented on HADOOP-9426:
-

Projects are going to do this, like you said Oozie does, and they will either 
copy and paste the content of ClassUtil.java (which I did for my Storm-on-YARN 
work) or implement their own version, or we can provide them approved access 
to ClassUtil.
there is less duplication.  I can see how it is error prone but MR has been 
exposing this functionality for a long time.  If we are already on the hook for 
providing this functionality through MR, why not support it more broadly for 
YARN too?

Alejandro, Would moving this JIRA to YARN make it any more palatable to you?

 Hadoop should expose Jar location utilities on its public API
 -

 Key: HADOOP-9426
 URL: https://issues.apache.org/jira/browse/HADOOP-9426
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0, 2.0.0-alpha
Reporter: Nick Dimiduk
 Attachments: 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch, 
 0001-HADOOP-9426-Promote-JarFinder-out-of-test-jar.patch


 The facilities behind JobConf#setJarByClass and the JarFinder utility in test 
 are both generally useful. As the core platform, these should be published as 
 part of the public API. In addition to HBase, they are probably useful for 
 Pig and Hive as well. See also HBASE-2588, HBASE-5317, HBASE-8140.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-03-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616330#comment-13616330
 ] 

Robert Joseph Evans commented on HADOOP-9438:
-

Thanks Omkar,

This and HDFS-4619 look like they may be dupes.

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Robert Joseph Evans
Priority: Critical

 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-03-26 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-9438:
---

 Summary: LocalFileContext does not throw an exception on mkdir for 
already existing directory
 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Robert Joseph Evans
Priority: Critical


according to 
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29

mkdir should throw a FileAlreadyExistsException if the directory already exists.

I tested this and 
{code}
FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
Path p = new Path("/tmp/bobby.12345");
FsPermission cachePerms = new FsPermission((short) 0755);
lfc.mkdir(p, cachePerms, false);
lfc.mkdir(p, cachePerms, false);
{code}

never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9389) test-patch marks -1 due to a context @Test by mistake

2013-03-25 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612844#comment-13612844
 ] 

Robert Joseph Evans commented on HADOOP-9389:
-

Ultimately what we really need is HADOOP-9330, and then we can remove all of 
the code that is checking for timeouts.

 test-patch marks -1 due to a context @Test by mistake
 -

 Key: HADOOP-9389
 URL: https://issues.apache.org/jira/browse/HADOOP-9389
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: HADOOP-9389_1.patch


 HADOOP-9112 enables the function of marking -1 when the newly added tests 
 don't have timeout. However, test-patch will mark -1 due to a context @Test 
 by mistake. Bellow is the problematic part of the YARN-378_3.patch that I've 
 created.
 {code}
 +}
 +  }
 +
@Test
public void testRMAppSubmitWithQueueAndName() throws Exception {
  long now = System.currentTimeMillis();
 {code}
 There's a @Test without timeout (most existing tests don't have timeout) in 
 the context. In test-patch, $AWK '\{ printf "%s ", $0 \}' collapses these 
 lines into one line, i.e.,
 {code}
 +} +  } + @Test public void testRMAppSubmitWithQueueAndName() 
 throws Exception {  long now = System.currentTimeMillis();
 {code}
 Then, @Test in the context follows a +, and is regarded as a newly added 
 test by mistake. Consequently, the following regex will accept the context 
 @Test. 
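
A small self-contained reconstruction of that collapse (in Java rather than 
awk, purely for illustration):

{code}
public class CollapseDemo {
  public static void main(String[] args) {
    // the hunk from the description: three added lines of closing
    // braces followed by an untouched context line
    String[] hunk = { "+}", "+  }", "+", "   @Test" };
    StringBuilder joined = new StringBuilder();
    for (String line : hunk) {
      joined.append(line).append(' '); // mimics: $AWK '{ printf "%s ", $0 }'
    }
    System.out.println(joined);
    // prints: +} +  } +    @Test
    // "@Test" now sits to the right of a '+', so a regex looking for
    // newly added tests without a timeout matches it by mistake
  }
}
{code}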

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9330) Add custom JUnit4 test runner with configurable timeout

2013-03-25 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13612886#comment-13612886
 ] 

Robert Joseph Evans commented on HADOOP-9330:
-

The concept seems fine, but the Timeout rule and @Test(timeout=XXX) are not 
aware of each other.  This means that the effective timeout of any test is 
whichever is smaller.  I don't think that this is a real problem, just that 
the comments and the name of the member variable defaultTimout are slightly 
misleading. I also don't know if we have any tests that are intended to run 
for more than 100s.  If so they will always time out after 100s unless they do 
not extend the HadoopBase, or we set the default to be higher.

Also, I don't know if there is anything we can do about this or not, but when 
we use both timeouts, the Timeout rule's backtrace when it fails is close to 
useless.

{code}
testSleep(org.apache.hadoop.test.TestSomething)  Time elapsed: 1091 sec   
ERROR!
java.lang.Exception: test timed out after 1000 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1194)
at 
org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:36)
at 
org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
{code}

It simply says that the code that timed out was a thread waiting for the 
actual test to finish running :) This is because there are actually two threads 
monitoring the test, instead of just one.

I realize that a lot of my complaints are perhaps things that need to just be 
addressed by the JUnit, I just want us to be fully aware of them as we go into 
this and document things appropriately, so we know what is happening when 
issues arise.
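
For reference, a minimal sketch of the run-time-configurable base class idea 
under discussion (the system property name and the 100s default are 
illustrative, not taken from the patch):

{code}
import org.junit.Rule;
import org.junit.rules.Timeout;

public abstract class HadoopTestBase {
  // base timeout in milliseconds, overridable at run time with e.g.
  // -Dtest.default.timeout=600000 when debugging on a slow machine
  public static final int DEFAULT_TIMEOUT_MS =
      Integer.getInteger("test.default.timeout", 100000);

  // applies to every test method in subclasses; a per-method
  // @Test(timeout=...) runs as a second, independent watcher, and
  // whichever value is smaller wins
  @Rule
  public Timeout globalTimeout = new Timeout(DEFAULT_TIMEOUT_MS);
}
{code}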

 Add custom JUnit4 test runner with configurable timeout
 ---

 Key: HADOOP-9330
 URL: https://issues.apache.org/jira/browse/HADOOP-9330
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
 Attachments: HADOOP-9330-timeouts-1.patch


 HADOOP-9112 has added a requirement for all new test methods to declare a 
 timeout, so that jenkins/maven builds will have better information on a 
 timeout.
 Hard coding timeouts into tests is dangerous as it will generate spurious 
 failures on slower machines/networks and when debugging a test.
 I propose providing a custom JUnit4 test runner that test cases can declare 
 as their test runner; this can provide timeouts specified at run-time, rather 
 than in-source.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9330) Add custom JUnit4 test runner with configurable timeout

2013-03-25 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13613012#comment-13613012
 ] 

Robert Joseph Evans commented on HADOOP-9330:
-

I agree that there should be something about writing and running tests, but I 
am not aware of it either.  I was thinking of just the javadocs for 
HadoopTestBase, but a dedicated wiki page or a subsection of HowToContribute 
would probably be better.  I agree that having timeout= in the code is brittle, 
and we probably want to start removing it once this goes in (along with the 
changes to test-patch.sh).

But in a follow-on JIRA I was thinking we could probably support something 
similar to what [~vicaya] proposed.  It should not be that hard to add in our 
own timeout test runner that can look for an @Test annotation with a timeout, 
output a warning about the timeout, and then allow JUnit to run with that 
timeout.  We could also provide an @Timeout annotation that would let us 
specify a timeout multiplier that is X times the configured base timeout.  That 
way we can keep a 100s timeout and adjust it for tests that do take longer.

I am not tied to the idea though, and if it feels like too much work compared 
to simply upping the default to something like 600s, we could do that 
instead. 
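
A hypothetical sketch of that multiplier annotation (the name and shape are 
made up; a custom runner would read it and scale the configured base timeout):

{code}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// effective timeout = value() * the configured base timeout (e.g. 100s),
// so slow tests declare "how much longer than normal" they may run
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Timeout {
  int value() default 1;
}
{code}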

 Add custom JUnit4 test runner with configurable timeout
 ---

 Key: HADOOP-9330
 URL: https://issues.apache.org/jira/browse/HADOOP-9330
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
 Attachments: HADOOP-9330-timeouts-1.patch


 HADOOP-9112 has added a requirement for all new test methods to declare a 
 timeout, so that jenkins/maven builds will have better information on a 
 timeout.
 Hard coding timeouts into tests is dangerous as it will generate spurious 
 failures on slower machines/networks and when debugging a test.
 I propose providing a custom JUnit4 test runner that test cases can declare 
 as their test runner; this can provide timeouts specified at run-time, rather 
 than in-source.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9419) CodecPool should avoid OOMs with buggy codecs

2013-03-19 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans moved MAPREDUCE-5082 to HADOOP-9419:


Issue Type: Improvement  (was: Bug)
   Key: HADOOP-9419  (was: MAPREDUCE-5082)
   Project: Hadoop Common  (was: Hadoop Map/Reduce)

 CodecPool should avoid OOMs with buggy codecs
 -

 Key: HADOOP-9419
 URL: https://issues.apache.org/jira/browse/HADOOP-9419
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans

 I recently found a bug in the gpl compression libraries that was causing map 
 tasks for a particular job to OOM.
 https://github.com/omalley/hadoop-gpl-compression/issues/3
 Now granted it does not make a lot of sense for a job to use the LzopCodec 
 for map output compression over the LzoCodec, but arguably other codecs could 
 be doing similar things and causing the same sort of memory leaks.  I propose 
 that we do a sanity check when creating a new decompressor/compressor.  If 
 the newly created codec object does not match the value from getType... it 
 should turn off caching for that Codec.
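
A rough sketch of the proposed sanity check (not CodecPool's actual code; the 
class and method names around the check are illustrative):

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;

public class CodecSanity {
  // codecs whose compressors did not match their advertised type;
  // caching is disabled for these from then on
  private static final Set<Class<?>> DO_NOT_CACHE =
      Collections.newSetFromMap(new ConcurrentHashMap<Class<?>, Boolean>());

  static Compressor create(CompressionCodec codec) {
    Compressor c = codec.createCompressor();
    // the sanity check: the new object should be an instance of the
    // type the codec advertises through getCompressorType()
    if (c != null && !codec.getCompressorType().isInstance(c)) {
      DO_NOT_CACHE.add(codec.getClass());
    }
    return c;
  }

  static boolean cacheable(CompressionCodec codec) {
    return !DO_NOT_CACHE.contains(codec.getClass());
  }
}
{code}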

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9419) CodecPool should avoid OOMs with buggy codecs

2013-03-19 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans resolved HADOOP-9419.
-

Resolution: Won't Fix

Never mind.  I created a patch, and it is completely useless in fixing this 
problem.  The tasks still OOM because the codec itself is so small and the 
MergeManager creates new codecs so quickly that on a job with lots of reduces 
it literally uses up all of the address space with direct byte buffers.  Some 
of the processes get killed by the NM for going over the virtual address space 
before they OOM. We could try to have the CodecPool detect that the codec is 
doing the wrong thing and correct it, but that is too heavy-handed in my 
opinion.

 CodecPool should avoid OOMs with buggy codecs
 -

 Key: HADOOP-9419
 URL: https://issues.apache.org/jira/browse/HADOOP-9419
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Robert Joseph Evans

 I recently found a bug in the gpl compression libraries that was causing map 
 tasks for a particular job to OOM.
 https://github.com/omalley/hadoop-gpl-compression/issues/3
 Now granted it does not make a lot of sense for a job to use the LzopCodec 
 for map output compression over the LzoCodec, but arguably other codecs could 
 be doing similar things and causing the same sort of memory leaks.  I propose 
 that we do a sanity check when creating a new decompressor/compressor.  If 
 the newly created codec object does not match the value from getType... it 
 should turn off caching for that Codec.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583379#comment-13583379
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

Sorry I should have caught the return code being wrong. I just checked in the 
fixed return codes in version 7 of the patch.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check
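
test-patch itself implements this in shell; as a language-neutral 
illustration, a hedged sketch of the same check (the regex is approximate, and 
as noted later in this thread, corner cases can still produce false results):

{code}
import java.util.regex.Pattern;

public class TimeoutCheck {
  // flags added @Test annotations that do not declare a timeout
  private static final Pattern ADDED_TEST_NO_TIMEOUT = Pattern.compile(
      "^\\+.*@Test(?!\\s*\\(\\s*timeout)", Pattern.MULTILINE);

  public static void main(String[] args) {
    String good = "+  @Test(timeout=5000)\n+  public void testA() {}\n";
    String bad = "+  @Test\n+  public void testB() {}\n";
    System.out.println(ADDED_TEST_NO_TIMEOUT.matcher(good).find()); // false
    System.out.println(ADDED_TEST_NO_TIMEOUT.matcher(bad).find());  // true: -1
  }
}
{code}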

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582240#comment-13582240
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

Looks good. There is no parameter to replace TR from the command line like 
there is for GREP or the others.  This is fairly minor, especially because tr 
should be on the path for just about everyone, and tr has not really changed in 
a long time.

I am fine with checking it in as is, but it would probably be best to just add 
it in.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13582308#comment-13582308
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

Looks good.  Thanks for your patience on this.  +1.  I'll check it in.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9112:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Surenkumar,

I checked this into trunk.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-19 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13581329#comment-13581329
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

A quick look seems reasonable, but I have not had time to dig deeply into the 
REGEXP yet. Could you replace the grep with $GREP and add in similar code to 
support $TR instead of calling tr directly?

Also what operating systems/distros have you run this on?  We want to be fairly 
conservative in adding in new dependencies, and want to be sure that it at 
least works out of the box on stock Ubuntu, RedHat, and MacOS X.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-19 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13581348#comment-13581348
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

The regexp seems reasonable. There are still some corner cases where it may 
produce a false -1, but all that I can think of are invalid Java and would 
probably require a lot more complex code to get right.  Once you fix the 
$GREP and $TR issue I am +1 for this.
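
For illustration only, the shape of such a check might be the following; this is
a sketch that assumes the patch file is available as $PATCH_FILE, not the exact
regexp from the attached patch:

{code}
# Flag added lines that contain @Test with no timeout attribute.  This is
# deliberately simple; multi-line annotations and commented-out code are
# among the corner cases that can yield a false -1.
added=$($GREP -c '^+.*@Test' $PATCH_FILE)
with_timeout=$($GREP '^+.*@Test' $PATCH_FILE | $GREP -c 'timeout')
if [ "$added" -ne "$with_timeout" ]; then
  echo "-1: one or more new @Test annotations lack a timeout"
fi
{code}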

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8967) Reported source for config property can be misleading

2013-02-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8967:


Assignee: Robert Joseph Evans

 Reported source for config property can be misleading
 -

 Key: HADOOP-8967
 URL: https://issues.apache.org/jira/browse/HADOOP-8967
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Robert Joseph Evans
Priority: Minor

 Configuration.set tries to track the source of a property being set, but it 
 mistakenly reports properties as being deprecated when they are not.  This is 
 misleading and confusing for users examining a job's configuration.
 For example, run a sleep job and check the job configuration on the job UI.  
 The source for the "mapreduce.job.maps" property will be reported as "job.xml 
 ⬅ because mapreduce.job.maps is deprecated".  This leads users to think 
 mapreduce.job.maps is now a deprecated property and wonder what other 
 property they should use instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8967) Reported source for config property can be misleading

2013-02-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8967:


Attachment: HADOOP-8967.txt

This patch cleans it up by making it more transparent as to what Configuration 
itself is doing.  The real problem is that it now exposes how convoluted some 
of Configuration's behavior is.

For example, mapreduce.output.fileoutputformat.outputdir is set 
programmatically, but because it is set programmatically mapred.output.dir is 
also set.  When job.xml is written out, the HashMap backing Configuration 
will put mapred.output.dir after mapreduce.output.fileoutputformat.outputdir, 
so when job.xml is read back in, the source information for 
mapreduce.output.fileoutputformat.outputdir indicates that it was set because 
mapred.output.dir was in job.xml.

I am not sure there is a good way to avoid confusing users unless we clean up 
Configuration itself or stop reporting source information to end users. 
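
As an illustration of what gets reported, a minimal sketch; it assumes the 
getPropertySources accessor on Configuration and reuses the property names 
from the example above:

{code}
// Prints the tracked source chain for a property.  The job.xml round trip
// described above is what makes this output misleading: after a rewrite the
// chain can blame the deprecated mapred.output.dir key instead.
import org.apache.hadoop.conf.Configuration;

public class ShowSource {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("mapreduce.output.fileoutputformat.outputdir", "/out");
    String[] sources =
        conf.getPropertySources("mapreduce.output.fileoutputformat.outputdir");
    if (sources != null) {
      for (String source : sources) {
        System.out.println(source); // a resource name, or a programmatic marker
      }
    }
  }
}
{code}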

 Reported source for config property can be misleading
 -

 Key: HADOOP-8967
 URL: https://issues.apache.org/jira/browse/HADOOP-8967
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Robert Joseph Evans
Priority: Minor
 Attachments: HADOOP-8967.txt


 Configuration.set tries to track the source of a property being set, but it 
 mistakenly reports properties as being deprecated when they are not.  This is 
 misleading and confusing for users examining a job's configuration.
 For example, run a sleep job and check the job configuration on the job UI.  
 The source for the "mapreduce.job.maps" property will be reported as "job.xml 
 ⬅ because mapreduce.job.maps is deprecated".  This leads users to think 
 mapreduce.job.maps is now a deprecated property and wonder what other 
 property they should use instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8967) Reported source for config property can be misleading

2013-02-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8967:


Status: Patch Available  (was: Open)

 Reported source for config property can be misleading
 -

 Key: HADOOP-8967
 URL: https://issues.apache.org/jira/browse/HADOOP-8967
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Robert Joseph Evans
Priority: Minor
 Attachments: HADOOP-8967.txt


 Configuration.set tries to track the source of a property being set, but it 
 mistakenly reports properties as being deprecated when they are not.  This is 
 misleading and confusing for users examining a job's configuration.
 For example, run a sleep job and check the job configuration on the job UI.  
 The source for the "mapreduce.job.maps" property will be reported as "job.xml 
 ⬅ because mapreduce.job.maps is deprecated".  This leads users to think 
 mapreduce.job.maps is now a deprecated property and wonder what other 
 property they should use instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13573660#comment-13573660
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

There are a lot of people still using JDK 6.  All of our build boxes are still 
using JDK 6.  I would start with APT. Either that or have the build boxes move 
to JDK 7 and have us officially drop support for JDK 6 in Hadoop.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon

 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9052) fix 6 failing tests in hadoop-streaming

2013-02-04 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans resolved HADOOP-9052.
-

Resolution: Duplicate

 fix 6 failing tests in hadoop-streaming
 ---

 Key: HADOOP-9052
 URL: https://issues.apache.org/jira/browse/HADOOP-9052
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9052.patch


 The following 6 tests in hadoop-tools/hadoop-streaming are failing because of 
 the absence of the 2 yarn.scheduler.capacity... properties:
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9052) fix 6 failing tests in hadoop-streaming

2013-02-04 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13570341#comment-13570341
 ] 

Robert Joseph Evans commented on HADOOP-9052:
-

OK I will dupe this to MAPREDUCE-4884, and then merge it into branch-2.

Thanks Ivan and Andrey for looking into this and finding the solution.

 fix 6 failing tests in hadoop-streaming
 ---

 Key: HADOOP-9052
 URL: https://issues.apache.org/jira/browse/HADOOP-9052
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9052.patch


 The following 6 tests in hadoop-tools/hadoop-streaming are failing because of 
 the absence of the 2 yarn.scheduler.capacity... properties:
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9052) fix 6 failing tests in hadoop-streaming

2013-02-04 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans reopened HADOOP-9052:
-


Reopening to resolve as a duplicate.

 fix 6 failing tests in hadoop-streaming
 ---

 Key: HADOOP-9052
 URL: https://issues.apache.org/jira/browse/HADOOP-9052
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9052.patch


 The following 6 tests in hadoop-tools/hadoop-streaming are failing because of 
 the absence of the 2 yarn.scheduler.capacity... properties:
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9052) fix 6 failing tests in hadoop-streaming

2013-02-01 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13568995#comment-13568995
 ] 

Robert Joseph Evans commented on HADOOP-9052:
-

Sorry this has taken me so long to respond.  I don't think this is needed any 
longer.  The tests all pass on trunk without the patch. If you disagree we can 
reopen the JIRA.

 fix 6 failing tests in hadoop-streaming
 ---

 Key: HADOOP-9052
 URL: https://issues.apache.org/jira/browse/HADOOP-9052
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9052.patch


 The following 6 tests in hadoop-tools/hadoop-streaming are failing because of 
 the absence of the 2 yarn.scheduler.capacity... properties:
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9052) fix 6 failing tests in hadoop-streaming

2013-02-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9052:


Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

Looks like it was already fixed, but I don't know by which JIRA exactly.

 fix 6 failing tests in hadoop-streaming
 ---

 Key: HADOOP-9052
 URL: https://issues.apache.org/jira/browse/HADOOP-9052
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9052.patch


 The following 6 tests in hadoop-tools/hadoop-streaming are failing because of 
 the absence of the 2 yarn.scheduler.capacity... properties:
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestFileArgs.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleArchiveFiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestMultipleCachefiles.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingBadRecords.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestStreamingTaskLog.java
 #   modified:   
 hadoop-tools/hadoop-streaming/src/test/java/org/apache/hadoop/streaming/TestSymLink.java

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9067) provide test for method org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long)

2013-02-01 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13569037#comment-13569037
 ] 

Robert Joseph Evans commented on HADOOP-9067:
-

The patch is a little old (sorry about taking so long to review it), but it 
still applies and runs just fine on trunk and branch-0.23.  I kicked Jenkins 
again to be sure everything else comes out OK.  If it does, I am +1 on this 
patch.  I'll check it in once Jenkins finishes.

 provide test for method 
 org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, 
 FSDataInputStream, long, FSDataInputStream, long)
 --

 Key: HADOOP-9067
 URL: https://issues.apache.org/jira/browse/HADOOP-9067
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9067--b.patch, HADOOP-9067.patch


 this method is not covered by the existing unit tests. Provide a test to 
 cover it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9067) provide test for method org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long)

2013-02-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9067:


   Resolution: Fixed
Fix Version/s: 0.23.7
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Ivan, I put this into trunk, branch-2, and branch-0.23.

 provide test for method 
 org.apache.hadoop.fs.LocalFileSystem.reportChecksumFailure(Path, 
 FSDataInputStream, long, FSDataInputStream, long)
 --

 Key: HADOOP-9067
 URL: https://issues.apache.org/jira/browse/HADOOP-9067
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-9067--b.patch, HADOOP-9067.patch


 this method is not covered by the existing unit tests. Provide a test to 
 cover it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9255) relnotes.py missing last jira

2013-01-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564554#comment-13564554
 ] 

Robert Joseph Evans commented on HADOOP-9255:
-

I ran it and verified that it is not dropping the jira.  Sorry about my 
off-by-one error; glad that you caught it.  +1.  Feel free to check it in.

 relnotes.py missing last jira
 -

 Key: HADOOP-9255
 URL: https://issues.apache.org/jira/browse/HADOOP-9255
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-9255.patch


 generating the release notes for 0.23.6 via  python 
 ./dev-support/relnotes.py -v 0.23.6  misses the last jira that was 
 committed.  In this case it was YARN-354.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-17 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13556491#comment-13556491
 ] 

Robert Joseph Evans commented on HADOOP-8849:
-

I just kicked the build again because trunk was broken the first time.  The new 
patch looks fine to me. +1. I'll check it in.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch


 2 improvements are suggested for implementation of methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying to 
 delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents subsequent tests from being executed properly because these 
 directories cannot be deleted with FileUtil#fullyDelete(), so many later 
 tests fail. 
 So, it's recommended to grant read, write, and execute permissions to the 
 directories whose content is to be deleted (a sketch follows this description).
 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
 return value, use File#exists() instead. 
 FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of the #delete() method invocation. E.g. in the following 
 code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f was deleted by another thread or process between calls 1 and 
 2, this fragment will return false, while the file f does not exist upon 
 the method return.
 So, better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }
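
A minimal sketch of both points, assuming the java.io.File permission setters 
available since Java 6; this is illustrative, not the attached patch:

{code}
import java.io.File;

class DeleteHelper {
  // Point 1: best-effort grant of rwx before attempting the delete.
  static void grantPermissions(final File f) {
    f.setReadable(true);
    f.setWritable(true);
    f.setExecutable(true);
  }

  // Point 2: verify deletion by existence instead of trusting the boolean
  // returned by File#delete(), which is false when another thread or
  // process deleted the file first.
  static boolean deleteVerified(final File f) {
    if (f.exists()) {
      f.delete();
    }
    return !f.exists();
  }
}
{code}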

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-17 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8849:


   Resolution: Fixed
Fix Version/s: 0.23.7
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Ivan,

I put this into trunk, branch-2, and branch-0.23

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch


 2 improvements are suggested for implementation of methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying to 
 delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents subsequent tests from being executed properly because these 
 directories cannot be deleted with FileUtil#fullyDelete(), so many later 
 tests fail. 
 So, it's recommended to grant read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
 return value, use File#exists() instead. 
 FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of the #delete() method invocation. E.g. in the following 
 code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f was deleted by another thread or process between calls 1 and 
 2, this fragment will return false, while the file f does not exist upon 
 the method return.
 So, better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-14 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13552772#comment-13552772
 ] 

Robert Joseph Evans commented on HADOOP-9202:
-

The change looks fine to me. +1

I'll check it in.  Thanks for finding this.

 test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
 the build
 --

 Key: HADOOP-9202
 URL: https://issues.apache.org/jira/browse/HADOOP-9202
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9202.1.patch


 test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
 runs this before running mvn install.  The mvn eclipse:eclipse command 
 doesn't actually build the code, so if the patch in question is adding a 
 whole new module, then any other modules dependent on finding it in the 
 reactor will fail.
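
For illustration, the intent of the fix amounts to ordering the standard Maven 
goals like this; treat it as a sketch of the idea rather than the patch itself:

{code}
# Install the patched modules into the local repository first, so that a
# newly added module can be resolved, then generate the Eclipse metadata.
mvn install -DskipTests
mvn eclipse:eclipse
{code}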

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-14 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9202:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I put this into trunk

 test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
 the build
 --

 Key: HADOOP-9202
 URL: https://issues.apache.org/jira/browse/HADOOP-9202
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9202.1.patch


 test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
 runs this before running mvn install.  The mvn eclipse:eclipse command 
 doesn't actually build the code, so if the patch in question is adding a 
 whole new module, then any other modules dependent on finding it in the 
 reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9139) improve script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

2013-01-11 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551256#comment-13551256
 ] 

Robert Joseph Evans commented on HADOOP-9139:
-

The change looks fine to me. +1. I'll check it in.

 improve script 
 hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
 

 Key: HADOOP-9139
 URL: https://issues.apache.org/jira/browse/HADOOP-9139
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9139--b.patch, HADOOP-9139.patch


 Script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh 
 is used in internal Kerberos tests to kill started apacheds server.
 There are 2 problems in the script:
 1) it invokes kill even if there are no running apacheds servers;
 2) it does not work correctly on all Linux platforms, since the cut -f4 -d ' ' 
 command relies upon the exact number of spaces in the ps output, and this 
 number can vary (see the sketch below).
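
For illustration, a more robust shape for such a script, using awk field 
splitting instead of a fixed cut column; this is a sketch, not the attached 
patch:

{code}
# awk tolerates variable spacing in the ps output, and kill is only
# invoked when at least one apacheds PID was actually found.
pids=$(ps -ef | grep apacheds | grep -v grep | awk '{print $2}')
if [ -n "$pids" ]; then
  kill $pids
fi
{code}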

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9139) improve script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh

2013-01-11 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9139:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Ivan for the patch.  I put this into trunk.

 improve script 
 hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh
 

 Key: HADOOP-9139
 URL: https://issues.apache.org/jira/browse/HADOOP-9139
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-9139--b.patch, HADOOP-9139.patch


 Script hadoop-common-project/hadoop-common/src/test/resources/kdc/killKdc.sh 
 is used in internal Kerberos tests to kill started apacheds server.
 There are 2 problems in the script:
 1) it invokes kill even if there are no running apacheds servers;
 2) it does not work correctly on all Linux platforms, since the cut -f4 -d ' ' 
 command relies upon the exact number of spaces in the ps output, and this 
 number can vary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-11 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13551385#comment-13551385
 ] 

Robert Joseph Evans commented on HADOOP-8849:
-

I am not sure that we actually want to do this change.  FileUtil.fullyDelete is 
called by RawLocalFileSystem.delete for recursive deletes.  With this change 
RawLocalFileSystem will ignore permissions on recursive deletes if the user 
has the permission to change those permissions.  In all other places that I 
have seen the API used I think it is OK, but this is also a publicly visible 
API, so I don't know who else this may cause problems for.  I would rather see 
a new API created separate from the original one, and the javadocs updated to 
explain the difference between the two APIs.  Perhaps something like

{code}
public static boolean fullyDelete(final File dir) {
  return fullyDelete(dir, false);
}

/**
 * Delete a directory and all its contents.  If
 * we return false, the directory may be partially-deleted.
 * (1) If dir is symlink to a file, the symlink is deleted. The file pointed
 * to by the symlink is not deleted.
 * (2) If dir is symlink to a directory, symlink is deleted. The directory
 * pointed to by symlink is not deleted.
 * (3) If dir is a normal file, it is deleted.
 * (4) If dir is a normal directory, then dir and all its contents recursively
 * are deleted.
 * @param dir the file or directory to be deleted
 * @param tryUpdatePerms true if permissions should be modified to delete a file.
 * @return true on success, false on failure.
 */
public static boolean fullyDelete(final File dir, boolean tryUpdatePerms) {
 ...
{code}

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-4.patch


 2 improvements are suggested for implementation of methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying to 
 delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents subsequent tests from being executed properly because these 
 directories cannot be deleted with FileUtil#fullyDelete(), so many later 
 tests fail. 
 So, it's recommended to grant read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
 return value, use File#exists() instead. 
 FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of the #delete() method invocation. E.g. in the following 
 code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f was deleted by another thread or process between calls 1 and 
 2, this fragment will return false, while the file f does not exist upon 
 the method return.
 So, better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9201) Trash can get Namespace collision

2013-01-11 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-9201:
---

 Summary: Trash can get Namespace collision
 Key: HADOOP-9201
 URL: https://issues.apache.org/jira/browse/HADOOP-9201
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 0.23.5, 2.0.2-alpha, 1.0.2
Reporter: Robert Joseph Evans


{noformat}
$ hadoop fs -touchz test
$ hadoop fs -rm test
Moved: 'hdfs://nn:8020/user/ME/test' to trash at: 
hdfs://nn:8020/user/ME/.Trash/Current
$ hadoop fs -mkdir test
$ hadoop fs -touchz test/1
$ hadoop fs -rm test/1
WARN fs.TrashPolicyDefault: Can't create trash directory: 
hdfs://nn:8020/user/ME/.Trash/Current/user/ME/test
rm: Failed to move to trash: hdfs://nn:8020/user/ME/test/1. Consider using 
-skipTrash option
{noformat}

On 1.0.2 it looks more like
{noformat}
 WARN fs.Trash: Can't create trash directory: 
hdfs://nn:8020/user/ME/.Trash/Current/user/ME/test
Problem with Trash.java.io.FileNotFoundException: Parent path is not a 
directory: /user/ME/.Trash/Current/user/ME/test
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.mkdirs(FSDirectory.java:949)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2069)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2030)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:817)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
. Consider using -skipTrash option
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9169) Bring branch-0.23 ExitUtil up to same level as branch-2

2012-12-26 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-9169:
---

 Summary: Bring branch-0.23 ExitUtil up to same level as branch-2
 Key: HADOOP-9169
 URL: https://issues.apache.org/jira/browse/HADOOP-9169
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans


ExitUtil in 0.23 is behind branch-2, because a number of changes went in that 
were part of HDFS JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9169) Bring branch-0.23 ExitUtil up to same level as branch-2

2012-12-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9169:


Attachment: HADOOP-9196-branch-0.23.txt

 Bring branch-0.23 ExitUtil up to same level as branch-2
 ---

 Key: HADOOP-9169
 URL: https://issues.apache.org/jira/browse/HADOOP-9169
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-9196-branch-0.23.txt


 ExitUtil in 0.23 is behind branch-2, because a number of changes went in that 
 were part of HDFS JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9169) Bring branch-0.23 ExitUtil up to same level as branch-2

2012-12-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9169:


Release Note: 
This patch is intended to only be for branch-0.23.  It makes ExitUtils 
identical to what is on trunk, except it adds in clearTerminateCalled for 
backwards compatibility.

All of the common tests still pass.

{noformat}
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Annotations . SUCCESS [2.628s]
[INFO] Apache Hadoop Auth  SUCCESS [3.831s]
[INFO] Apache Hadoop Auth Examples ... SUCCESS [0.141s]
[INFO] Apache Hadoop Common .. SUCCESS [6:44.731s]
[INFO] Apache Hadoop Common Project .. SUCCESS [0.063s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:51.961s
[INFO] Finished at: Wed Dec 26 14:12:55 CST 2012
[INFO] Final Memory: 16M/115M
[INFO] 
{noformat}
  Status: Patch Available  (was: Open)

 Bring branch-0.23 ExitUtil up to same level as branch-2
 ---

 Key: HADOOP-9169
 URL: https://issues.apache.org/jira/browse/HADOOP-9169
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-9196-branch-0.23.txt


 ExitUtil in 0.23 is behind branch-2, because a number of changes went in that 
 were part of HDFS JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9169) Bring branch-0.23 ExitUtil up to same level as branch-2

2012-12-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9169:


Release Note:   (was: This patch is intended to only be for branch-0.23.  
It makes ExitUtils identical to what is on trunk, except it adds in 
clearTerminateCalled for backwards compatibility.

All of the common tests still pass.

{noformat}
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Annotations . SUCCESS [2.628s]
[INFO] Apache Hadoop Auth  SUCCESS [3.831s]
[INFO] Apache Hadoop Auth Examples ... SUCCESS [0.141s]
[INFO] Apache Hadoop Common .. SUCCESS [6:44.731s]
[INFO] Apache Hadoop Common Project .. SUCCESS [0.063s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:51.961s
[INFO] Finished at: Wed Dec 26 14:12:55 CST 2012
[INFO] Final Memory: 16M/115M
[INFO] 
{noformat})

This patch is intended to only be for branch-0.23.  It makes ExitUtils 
identical to what is on trunk, except it adds in clearTerminateCalled for 
backwards compatibility.

All of the common tests still pass.

{noformat}
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Annotations . SUCCESS [2.628s]
[INFO] Apache Hadoop Auth  SUCCESS [3.831s]
[INFO] Apache Hadoop Auth Examples ... SUCCESS [0.141s]
[INFO] Apache Hadoop Common .. SUCCESS [6:44.731s]
[INFO] Apache Hadoop Common Project .. SUCCESS [0.063s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:51.961s
[INFO] Finished at: Wed Dec 26 14:12:55 CST 2012
[INFO] Final Memory: 16M/115M
[INFO] 
{noformat}

 Bring branch-0.23 ExitUtil up to same level as branch-2
 ---

 Key: HADOOP-9169
 URL: https://issues.apache.org/jira/browse/HADOOP-9169
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-9196-branch-0.23.txt


 ExitUtil in 0.23 is behind branch-2, because a number of changes went in that 
 were part of HDFS JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9169) Bring branch-0.23 ExitUtil up to same level as branch-2

2012-12-26 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13539673#comment-13539673
 ] 

Robert Joseph Evans commented on HADOOP-9169:
-

The failure is expected; the patch only applies to branch-0.23

 Bring branch-0.23 ExitUtil up to same level as branch-2
 ---

 Key: HADOOP-9169
 URL: https://issues.apache.org/jira/browse/HADOOP-9169
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-9196-branch-0.23.txt


 ExitUtil in 0.23 is behind branch-2, because a number of changes went in that 
 were part of HDFS JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13537093#comment-13537093
 ] 

Robert Joseph Evans commented on HADOOP-9105:
-

The changes look fine to me.  I just don't totally understand why it was 
failing before.  Is it because there is a bug in FileSystem.moveFromLocalFile? 
If so, do we need to fix it too?

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports error upon trying to delete 
 the local source directory even though it succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13537120#comment-13537120
 ] 

Robert Joseph Evans commented on HADOOP-9105:
-

OK I am +1.  Even if HADOOP-9161 were in, it would not give the same detailed 
error messages on failure that this patch does, and it would not be as 
consistent with other FsShell commands.  I'll check it in.

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports error upon trying to delete 
 the local source directory even though it succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9105) FsShell -moveFromLocal erroneously fails

2012-12-20 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9105:


   Resolution: Fixed
Fix Version/s: 0.23.6
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this in trunk, branch-2, and branch-0.23

 FsShell -moveFromLocal erroneously fails
 

 Key: HADOOP-9105
 URL: https://issues.apache.org/jira/browse/HADOOP-9105
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9105.branch-0.23.patch, HADOOP-9105.patch


 The move successfully completes, but then reports error upon trying to delete 
 the local source directory even though it succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT

2012-11-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13505661#comment-13505661
 ] 

Robert Joseph Evans commented on HADOOP-9046:
-

It is looking good, and I like how much faster the tests run.

I have a few more comments.

# {code}TODO: should we reject the addition if an action for this 'fs' is 
already present in the queue?{code} If this is an issue then we should file a 
JIRA to fix it; otherwise please remove the TODO. I am leaning more towards 
filing a separate JIRA.
# I don't really like the names lock0 and available0.  I would rather have them 
be named something more descriptive.  This is very minor.
# In removeRenewAction we are no longer canceling the token when it is removed. 
I am not sure if that is intentional or not, but it definitely changes the 
functionality of the method. 
# I am not sure that renewerThreadStarted.compareAndSet is that much better 
than isAlive().  isAlive has the problem that if the thread stopped, calling 
add again would result in an IllegalThreadStateException, but compareAndSet has 
the problem that if a new thread cannot be created (which I have seen happen), 
then only the first call to add will get an exception; all other calls to add 
will look like they succeeded but no renewal will ever happen. I would rather 
see errors happen on other calls to add than have them silently fail (see the 
sketch after this list).
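
To make the trade-off in point 4 concrete, a sketch of the idiom under 
discussion; the names are illustrative, not the patch's:

{code}
import java.util.concurrent.atomic.AtomicBoolean;

class RenewerStarter {
  private final AtomicBoolean started = new AtomicBoolean(false);

  // Only the first caller can observe a failure: if Thread#start() throws,
  // the flag is already true, so every later call skips the start and
  // silently assumes a renewer thread is running.
  void ensureStarted(Thread renewer) {
    if (started.compareAndSet(false, true)) {
      renewer.start();
    }
  }
  // The isAlive() alternative re-checks the live thread each time, but a
  // second start() on the same Thread throws IllegalThreadStateException.
}
{code}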


 provide unit-test coverage of class 
 org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
 --

 Key: HADOOP-9046
 URL: https://issues.apache.org/jira/browse/HADOOP-9046
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9046-branch-0.23--c.patch, 
 HADOOP-9046-branch-0.23-over-9049.patch, HADOOP-9046-branch-0.23.patch, 
 HADOOP-9046--c.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch


 The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT has zero 
 coverage in the entire cumulative test run. Provide test(s) to cover this class.
 Note: the request is submitted to the HDFS project because the class is likely 
 to be tested by tests in that project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9092) Coverage fixing for org.apache.hadoop.mapreduce.jobhistory

2012-11-27 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13504694#comment-13504694
 ] 

Robert Joseph Evans commented on HADOOP-9092:
-

The change looks good.  I have one minor comment still.

in TestJobHistoryEventHandler there is a line where you are using ...+ to 
convert something to a string.  This works, but I would prefer to see something 
more like String.valueOf(...) instead.  Other than that I am +1 for the patch.
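
The suggestion amounts to the following; value is a stand-in for whatever is 
being converted:

{code}
long value = 42L;                  // stand-in for the converted value
String s = String.valueOf(value);  // preferred: states the intent directly
String t = "" + value;             // works, but reads like an accident
{code}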

 Coverage fixing for org.apache.hadoop.mapreduce.jobhistory 
 ---

 Key: HADOOP-9092
 URL: https://issues.apache.org/jira/browse/HADOOP-9092
 Project: Hadoop Common
  Issue Type: Test
  Components: tools
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9092-branch-0.23.patch, 
 HADOOP-9092-branch-2.patch, HADOOP-9092-trunk.patch


 Coverage fixing for package org.apache.hadoop.mapreduce.jobhistory 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT

2012-11-27 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13504760#comment-13504760
 ] 

Robert Joseph Evans commented on HADOOP-9046:
-

I am glad that you caught it.  Sorry I did not respond sooner.

 provide unit-test coverage of class 
 org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT
 --

 Key: HADOOP-9046
 URL: https://issues.apache.org/jira/browse/HADOOP-9046
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9046-branch-0.23-over-9049.patch, 
 HADOOP-9046-branch-0.23.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch


 The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewActionT has zero 
 coverage in the entire cumulative test run. Provide test(s) to cover this class.
 Note: the request is submitted to the HDFS project because the class is likely 
 to be tested by tests in that project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8992) Enhance unit-test coverage of class HarFileSystem

2012-11-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8992:


   Resolution: Fixed
Fix Version/s: 0.23.6
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Ivan,

I put this into branch-2, trunk, and branch-0.23

 Enhance unit-test coverage of class HarFileSystem
 -

 Key: HADOOP-8992
 URL: https://issues.apache.org/jira/browse/HADOOP-8992
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-8992-branch-0.23--a.patch, 
 HADOOP-8992-branch-0.23--b.patch, HADOOP-8992-branch-0.23--c.patch, 
 HADOOP-8992-branch-2--a.patch, HADOOP-8992-branch-2--b.patch, 
 HADOOP-8992-branch-2--c.patch


 A new unit test, TestHarFileSystem2, is provided in order to enhance coverage 
 of the class HarFileSystem.
 Also, some unused methods were deleted from HarFileSystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9016) Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream

2012-11-26 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13503972#comment-13503972
 ] 

Robert Joseph Evans commented on HADOOP-9016:
-

Could you please update the patch? It does not apply cleanly now that 
HADOOP-8992 is in.

 Provide unit tests for class 
 org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream 
 -

 Key: HADOOP-9016
 URL: https://issues.apache.org/jira/browse/HADOOP-9016
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9016--b.patch, HADOOP-9016--c.patch, 
 HADOOP-9016.patch


 The unit-test coverage of the classes 
 org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and
 org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is 
 zero.
 It is suggested to provide unit tests covering these classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9038) provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator

2012-11-26 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13503975#comment-13503975
 ] 

Robert Joseph Evans commented on HADOOP-9038:
-

The changes look fine to me.  I like that we are now conforming correctly to 
the Iterable interface. +1
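
As a generic illustration of that contract (a sketch only, not the actual 
patch; the class name is invented), a well-behaved java.util.Iterator looks 
like this:

    import java.util.Iterator;
    import java.util.NoSuchElementException;

    // Minimal read-only iterator over a fixed set of strings (illustrative).
    class PathLikeIterator implements Iterator<String> {
        private final String[] items;
        private int pos = 0;

        PathLikeIterator(String... items) { this.items = items; }

        @Override
        public boolean hasNext() { return pos < items.length; }

        @Override
        public String next() {
            // The contract requires an exception here, not a null return.
            if (!hasNext()) throw new NoSuchElementException();
            return items[pos++];
        }

        @Override
        public void remove() {
            // Read-only: removal is explicitly unsupported.
            throw new UnsupportedOperationException();
        }
    }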

 provide unit-test coverage of class 
 org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
 ---

 Key: HADOOP-9038
 URL: https://issues.apache.org/jira/browse/HADOOP-9038
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9038.patch


 The class 
 org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator 
 currently has zero unit-test coverage. Add or enhance tests to provide coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9038) provide unit-test coverage of class org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator

2012-11-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9038:


   Resolution: Fixed
Fix Version/s: 0.23.6
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Ivan,

I put this into trunk, branch-2, and branch-0.23

 provide unit-test coverage of class 
 org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator
 ---

 Key: HADOOP-9038
 URL: https://issues.apache.org/jira/browse/HADOOP-9038
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: HADOOP-9038.patch


 The class 
 org.apache.hadoop.fs.LocalDirAllocator.AllocatorPerContext.PathIterator 
 currently has zero unit-test coverage. Add or enhance tests to provide coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9046) provide unit-test coverage of class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction<T>

2012-11-26 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13504017#comment-13504017
 ] 

Robert Joseph Evans commented on HADOOP-9046:
-

The updated test takes almost 7 minutes to run, while the original test took 
about 1 minute (at least on my slow desktop, measured with a single run, so 
hardly scientific). I don't want to be a stickler about this, but it feels a 
little odd to have a unit test run that long. Even 1 minute feels too long for 
testing code that really just appears to be a wrapper around a delay queue, 
with a little extra logic to handle a weak reference to a file system.
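
To make that shape concrete, here is a rough sketch under assumed names (this 
is not the real DelegationTokenRenewer code; entries like this would sit in a 
java.util.concurrent.DelayQueue):

    import java.lang.ref.WeakReference;
    import java.util.concurrent.Delayed;
    import java.util.concurrent.TimeUnit;

    // A delayed action holding only a weak reference to its target, so the
    // target can be garbage-collected while the action is still queued.
    class RenewActionSketch implements Delayed {
        private final WeakReference<Object> target;
        private final long renewAtMillis;

        RenewActionSketch(Object fileSystem, long renewAtMillis) {
            this.target = new WeakReference<Object>(fileSystem);
            this.renewAtMillis = renewAtMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(renewAtMillis - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }

        // Returns false if the filesystem was collected and renewal is moot.
        boolean renewIfStillAlive() {
            return target.get() != null;
        }
    }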

 provide unit-test coverage of class 
 org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction<T>
 --

 Key: HADOOP-9046
 URL: https://issues.apache.org/jira/browse/HADOOP-9046
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-9046-branch-0.23-over-9049.patch, 
 HADOOP-9046-branch-0.23.patch, HADOOP-9046-over-9049.patch, HADOOP-9046.patch


 The class org.apache.hadoop.fs.DelegationTokenRenewer.RenewAction<T> has zero 
 coverage in the entire cumulative test run. Provide test(s) to cover this class.
 Note: the request is submitted to the HDFS project because the class is likely 
 to be tested by tests in that project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2012-11-21 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13502088#comment-13502088
 ] 

Robert Joseph Evans commented on HADOOP-9070:
-

The new patch looks OK to me.  Have you tested this on a secure cluster to be 
sure there isn't anything else lurking behind the scenes?  Have you checked for 
backwards compatibility?

 Kerberos SASL server cannot find kerberos key
 -

 Key: HADOOP-9070
 URL: https://issues.apache.org/jira/browse/HADOOP-9070
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9070.patch, HADOOP-9070.patch


 HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
 the SASL server, which renders a server incapable of accepting kerberized 
 connections.
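
For context, the pattern a {{doAs}} block restores looks roughly like this (a 
sketch with assumed names and placeholder values, not the actual patch):

    import java.security.PrivilegedExceptionAction;
    import javax.security.auth.Subject;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslServer;

    class SaslServerSetupSketch {
        // Create the SASL server while running as the server's Kerberos
        // subject, so the GSSAPI mechanism can locate the service's key.
        static SaslServer createServer(Subject serverSubject,
                                       final CallbackHandler cb) throws Exception {
            return Subject.doAs(serverSubject,
                new PrivilegedExceptionAction<SaslServer>() {
                    @Override
                    public SaslServer run() throws Exception {
                        // Mechanism/protocol/host values are placeholders.
                        return Sasl.createSaslServer("GSSAPI", "hdfs",
                            "host.example.com", null, cb);
                    }
                });
        }
    }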

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2012-11-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13501319#comment-13501319
 ] 

Robert Joseph Evans commented on HADOOP-9070:
-

The change looks good to me, assuming Jenkins is OK with it. +1

 Kerberos SASL server cannot find kerberos key
 -

 Key: HADOOP-9070
 URL: https://issues.apache.org/jira/browse/HADOOP-9070
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9070.patch


 HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
 the SASL server, which renders a server incapable of accepting kerberized 
 connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9035) Generalize setup of LoginContext

2012-11-15 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498342#comment-13498342
 ] 

Robert Joseph Evans commented on HADOOP-9035:
-

The changes look good to me.  All of the tests pass. +1

I'll check it in.

 Generalize setup of LoginContext
 

 Key: HADOOP-9035
 URL: https://issues.apache.org/jira/browse/HADOOP-9035
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9035.patch, HADOOP-9035.patch, HADOOP-9035.patch, 
 HADOOP-9035.patch, HADOOP-9035.patch


 The creation of the {{LoginContext}} in {{UserGroupInformation}} has specific 
 cases for specific authentication types.  This is inflexible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9035) Generalize setup of LoginContext

2012-11-15 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9035:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this in trunk and branch-2

 Generalize setup of LoginContext
 

 Key: HADOOP-9035
 URL: https://issues.apache.org/jira/browse/HADOOP-9035
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9035.patch, HADOOP-9035.patch, HADOOP-9035.patch, 
 HADOOP-9035.patch, HADOOP-9035.patch


 The creation of the {{LoginContext}} in {{UserGroupInformation}} has specific 
 cases for specific authentication types.  This is inflexible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9035) Generalize setup of LoginContext

2012-11-14 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497192#comment-13497192
 ] 

Robert Joseph Evans commented on HADOOP-9035:
-

The code looks good to me. My only comment is that the new 
isSecurityEnabled(AuthenticationMethod) is poorly named: it is not really 
checking whether security is enabled, it is checking whether a given 
AuthenticationMethod is enabled. Could you rename it? I am +1 
otherwise.
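
Concretely, the suggestion amounts to something like this rename (the 
signature and the backing set are illustrative only):

    import java.util.EnumSet;

    class AuthMethodCheckSketch {
        enum AuthenticationMethod { SIMPLE, KERBEROS, TOKEN }

        private final EnumSet<AuthenticationMethod> enabledMethods =
            EnumSet.of(AuthenticationMethod.KERBEROS, AuthenticationMethod.TOKEN);

        // The name now says exactly what is checked: whether this particular
        // authentication method is enabled, not whether security is on at all.
        boolean isAuthenticationMethodEnabled(AuthenticationMethod method) {
            return enabledMethods.contains(method);
        }
    }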

 Generalize setup of LoginContext
 

 Key: HADOOP-9035
 URL: https://issues.apache.org/jira/browse/HADOOP-9035
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9035.patch, HADOOP-9035.patch


 The creation of the {{LoginContext}} in {{UserGroupInformation}} has specific 
 cases for specific authentication types.  This is inflexible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8999) SASL negotiation is flawed

2012-11-13 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496277#comment-13496277
 ] 

Robert Joseph Evans commented on HADOOP-8999:
-

The change looks OK to me.  So the problem is that the wrapper protocol around 
SASL that we have been using requires that the client not finish (i.e., have 
isComplete() return true) after a single challenge, and if it does, we need to 
unconditionally read the server's response to possibly get the switch-to-SIMPLE 
message. It also requires that the server reply at least once, again so that 
all clients, both old and new, can still get the switch-to-SIMPLE message.
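
Sketched as a client-side loop, those ordering rules read roughly as follows 
(the Transport and Reply types are stubs standing in for Hadoop's RPC layer; 
only the SaslClient calls are real API):

    import javax.security.sasl.SaslClient;
    import javax.security.sasl.SaslException;

    class SaslNegotiationSketch {
        interface Transport { void send(byte[] token); Reply read(); }
        interface Reply { boolean isSwitchToSimple(); byte[] token(); }

        // Returns false when the server instructs a fallback to SIMPLE auth.
        static boolean negotiate(SaslClient client, Transport t)
                throws SaslException {
            byte[] response = client.hasInitialResponse()
                ? client.evaluateChallenge(new byte[0]) : null;
            while (!client.isComplete()) {
                t.send(response);
                Reply reply = t.read();   // the server must reply at least once
                if (reply.isSwitchToSimple()) {
                    return false;
                }
                response = client.evaluateChallenge(reply.token());
            }
            if (response != null) {
                t.send(response);         // flush a final token, if any
            }
            // Even if the client finished after a single challenge, read once
            // more unconditionally: the reply may still say "switch to SIMPLE".
            return !t.read().isSwitchToSimple();
        }
    }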

I don't like the special case you put into the server for PLAIN, but I don't 
see any other way around it without also changing the protocol version, as you 
said previously.

Daryn, could you please file a separate JIRA to fix our SASL wrapper protocol 
so that we can send the success/failure/switch-to-SIMPLE message and then plug 
in any Java SASL client/server pair without needing to worry about special 
cases for them? I know that it would require a protocol version change, 
but I think it is worth it.  Perhaps not for 2.0, but definitely for 3.0.

+1 feel free to check it in.

 SASL negotiation is flawed
 --

 Key: HADOOP-8999
 URL: https://issues.apache.org/jira/browse/HADOOP-8999
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-8999.patch


 The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
 response contains the next SASL challenge token, but a SASL server can return 
 null (meaning it is done) or an N-byte challenge.  The server currently will 
 not send an RPC success response to the client if the SASL server returns null, 
 which causes the client to hang until it times out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9022) Hadoop distcp tool fails to copy file if -m 0 specified

2012-11-12 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495673#comment-13495673
 ] 

Robert Joseph Evans commented on HADOOP-9022:
-

The patch looks fine to me.  I am +1 pending Jenkins.

 Hadoop distcp tool fails to copy file if -m 0 specified
 ---

 Key: HADOOP-9022
 URL: https://issues.apache.org/jira/browse/HADOOP-9022
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.1, 0.23.3, 0.23.4
Reporter: Haiyang Jiang
Assignee: Jonathan Eagles
 Attachments: HADOOP-9022.patch


 When trying to copy a file using distcp on H23, if -m 0 is specified, distcp 
 will just spawn 0 map tasks and the file will not be copied.
 But this used to work before H23: even when -m 0 was specified, distcp would 
 always copy the files.
 Checking the code of DistCp.java: before the rewrite, it set the number of 
 maps to at least 1:
 job.setNumMapTasks(Math.max(numMaps, 1));
 But in the newest code, it just takes the input from the user:
 job.getConfiguration().set(JobContext.NUM_MAPS,
   String.valueOf(inputOptions.getMaxMaps()));
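 A minimal sketch of the obvious fix, reusing the names from the snippet above 
 and assuming the patch simply restores the old clamping behavior:
 // Never allow fewer than one map task, as the pre-rewrite code did.
 int numMaps = Math.max(inputOptions.getMaxMaps(), 1);
 job.getConfiguration().set(JobContext.NUM_MAPS, String.valueOf(numMaps));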

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-12 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495680#comment-13495680
 ] 

Robert Joseph Evans commented on HADOOP-9021:
-

Looks good +1. I'll check it in.

 Enforce configured SASL method on the server
 

 Key: HADOOP-9021
 URL: https://issues.apache.org/jira/browse/HADOOP-9021
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9021.patch, HADOOP-9021.patch


 The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-12 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9021:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this in trunk, and branch-2

 Enforce configured SASL method on the server
 

 Key: HADOOP-9021
 URL: https://issues.apache.org/jira/browse/HADOOP-9021
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9021.patch, HADOOP-9021.patch


 The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9025) org.apache.hadoop.tools.TestCopyListing failing

2012-11-12 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-9025:
---

 Summary: org.apache.hadoop.tools.TestCopyListing failing
 Key: HADOOP-9025
 URL: https://issues.apache.org/jira/browse/HADOOP-9025
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans


https://builds.apache.org/job/PreCommit-HADOOP-Build/1732//testReport/org.apache.hadoop.tools/TestCopyListing/testDuplicates/



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9022) Hadoop distcp tool fails to copy file if -m 0 specified

2012-11-12 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13495710#comment-13495710
 ] 

Robert Joseph Evans commented on HADOOP-9022:
-

The test failure seems to be happening with or without the patch. I filed 
HADOOP-9025 for it.

 Hadoop distcp tool fails to copy file if -m 0 specified
 ---

 Key: HADOOP-9022
 URL: https://issues.apache.org/jira/browse/HADOOP-9022
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.1, 0.23.3, 0.23.4
Reporter: Haiyang Jiang
Assignee: Jonathan Eagles
 Attachments: HADOOP-9022.patch


 When trying to copy a file using distcp on H23, if -m 0 is specified, distcp 
 will just spawn 0 map tasks and the file will not be copied.
 But this used to work before H23: even when -m 0 was specified, distcp would 
 always copy the files.
 Checking the code of DistCp.java: before the rewrite, it set the number of 
 maps to at least 1:
 job.setNumMapTasks(Math.max(numMaps, 1));
 But in the newest code, it just takes the input from the user:
 job.getConfiguration().set(JobContext.NUM_MAPS,
   String.valueOf(inputOptions.getMaxMaps()));

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9022) Hadoop distcp tool fails to copy file if -m 0 specified

2012-11-12 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9022:


   Resolution: Fixed
Fix Version/s: 0.23.5
   2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Jon,

I pulled this into trunk, branch-2, and branch-0.23

 Hadoop distcp tool fails to copy file if -m 0 specified
 ---

 Key: HADOOP-9022
 URL: https://issues.apache.org/jira/browse/HADOOP-9022
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.1, 0.23.3, 0.23.4
Reporter: Haiyang Jiang
Assignee: Jonathan Eagles
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.5

 Attachments: HADOOP-9022.patch


 When trying to copy a file using distcp on H23, if -m 0 is specified, distcp 
 will just spawn 0 map tasks and the file will not be copied.
 But this used to work before H23: even when -m 0 was specified, distcp would 
 always copy the files.
 Checking the code of DistCp.java: before the rewrite, it set the number of 
 maps to at least 1:
 job.setNumMapTasks(Math.max(numMaps, 1));
 But in the newest code, it just takes the input from the user:
 job.getConfiguration().set(JobContext.NUM_MAPS,
   String.valueOf(inputOptions.getMaxMaps()));

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9020) Add a SASL PLAIN server

2012-11-09 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494293#comment-13494293
 ] 

Robert Joseph Evans commented on HADOOP-9020:
-

Looks good to me. I am not a SASL expert, but as far as I can tell it meets 
the standard and complies with the SaslServer API.

I wonder a bit about having PLAIN installed programmatically instead of 
through the Java security configuration, but I think it is OK because it is 
just for Hadoop.
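
For reference, programmatic installation of a SASL mechanism generally looks 
like this sketch (the class and factory names are invented, not Hadoop's 
actual ones):

    import java.security.Provider;
    import java.security.Security;

    // A Provider that maps the PLAIN mechanism to a SaslServerFactory class.
    class PlainSaslProviderSketch extends Provider {
        PlainSaslProviderSketch() {
            super("PlainSaslSketch", 1.0, "SASL PLAIN server (illustrative)");
            // Map the mechanism name to a SaslServerFactory implementation.
            put("SaslServerFactory.PLAIN", "com.example.PlainSaslServerFactory");
        }

        public static void main(String[] args) {
            // Registered once at startup, instead of editing java.security:
            Security.addProvider(new PlainSaslProviderSketch());
        }
    }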

+1

 Add a SASL PLAIN server
 ---

 Key: HADOOP-9020
 URL: https://issues.apache.org/jira/browse/HADOOP-9020
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9020.patch


 Java includes a SASL PLAIN client but not a server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9020) Add a SASL PLAIN server

2012-11-09 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9020:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2

 Add a SASL PLAIN server
 ---

 Key: HADOOP-9020
 URL: https://issues.apache.org/jira/browse/HADOOP-9020
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9020.patch


 Java includes a SASL PLAIN client but not a server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9021) Enforce configured SASL method on the server

2012-11-09 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13494434#comment-13494434
 ] 

Robert Joseph Evans commented on HADOOP-9021:
-

My only real issue with this is that if only TOKEN is configured and no secret 
manager is passed in, you get the error "Server is not configured to accept 
any authentication". At a minimum I would prefer to see, before that, a 
warning that TOKEN was configured but no secret manager was passed in; 
preferably, a better error saying that a server configured with TOKEN must 
have a secret manager to go with it.
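
Something along these lines would capture that check (all names here are 
assumed, not taken from the patch):

    // Fail fast with an actionable message instead of the generic error above.
    if (enabledAuthMethods.contains(AuthMethod.TOKEN) && secretManager == null) {
        throw new IllegalArgumentException(
            "A server configured with TOKEN authentication must also be given "
            + "a SecretManager");
    }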

 Enforce configured SASL method on the server
 

 Key: HADOOP-9021
 URL: https://issues.apache.org/jira/browse/HADOOP-9021
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9021.patch


 The RPC needs to restrict itself to only using the configured SASL method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9013) UGI should not hardcode loginUser's authenticationType

2012-11-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492443#comment-13492443
 ] 

Robert Joseph Evans commented on HADOOP-9013:
-

The change looks good to me too. +1

 UGI should not hardcode loginUser's authenticationType
 --

 Key: HADOOP-9013
 URL: https://issues.apache.org/jira/browse/HADOOP-9013
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9013.patch


 {{UGI.loginUser}} assumes that the user's auth type is kerberos when security 
 is on and simple when security is off.  It should instead use the configured 
 auth type.
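 Sketched, the fix is to consult the configuration (this assumes a helper 
 along the lines of SecurityUtil.getAuthenticationMethod, reading 
 hadoop.security.authentication; conf is a Configuration in scope):
 // Look up the configured method instead of assuming
 // "security on = kerberos, security off = simple".
 UserGroupInformation.AuthenticationMethod authMethod =
     SecurityUtil.getAuthenticationMethod(conf);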

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9013) UGI should not hardcode loginUser's authenticationType

2012-11-07 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9013:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2

 UGI should not hardcode loginUser's authenticationType
 --

 Key: HADOOP-9013
 URL: https://issues.apache.org/jira/browse/HADOOP-9013
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9013.patch


 {{UGI.loginUser}} assumes that the user's auth type is kerberos when security 
 is on and simple when security is off.  It should instead use the configured 
 auth type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9014) Standardize creation of SaslRpcClients

2012-11-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492452#comment-13492452
 ] 

Robert Joseph Evans commented on HADOOP-9014:
-

This change looks good +1.

 Standardize creation of SaslRpcClients
 --

 Key: HADOOP-9014
 URL: https://issues.apache.org/jira/browse/HADOOP-9014
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9014.patch


 To ease adding additional SASL support, we need to change the chained 
 conditionals into a switch and make one standard call to createSaslClient.
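 In sketch form, that dispatch might look like this (the helper names around 
 the switch are hypothetical; the mechanism names are the standard SASL ones):
 String mechanism;
 switch (authMethod) {
   case TOKEN:    mechanism = "DIGEST-MD5"; break;
   case KERBEROS: mechanism = "GSSAPI";     break;
   default: throw new IOException("Unsupported auth method: " + authMethod);
 }
 SaslClient saslClient = createSaslClient(mechanism, protocol, serverId);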

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9014) Standardize creation of SaslRpcClients

2012-11-07 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9014:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this in trunk and branch-2

 Standardize creation of SaslRpcClients
 --

 Key: HADOOP-9014
 URL: https://issues.apache.org/jira/browse/HADOOP-9014
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9014.patch


 To ease adding additional SASL support, we need to change the chained 
 conditionals into a switch and make one standard call to createSaslClient.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492783#comment-13492783
 ] 

Robert Joseph Evans commented on HADOOP-9015:
-

The changes look OK to me.  I am +1. I'll check them in.

 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, we need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9015:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2

 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, we need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9012) IPC Client sends wrong connection context

2012-11-06 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13491531#comment-13491531
 ] 

Robert Joseph Evans commented on HADOOP-9012:
-

The patch looks good to me. +1. It looks mostly like a simple refactor so that 
the connection context is generated later. This allows the UserInfo to be 
generated once the correct auth method is known.
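
In sketch form, with assumed names:

    // Negotiate first; only then build and send the connection context, so
    // the UserInfo reflects the auth method actually in use.
    AuthMethod negotiated = setupSaslConnection();   // may fall back to SIMPLE
    ConnectionContext context =
        buildConnectionContext(protocolName, ticket, negotiated); // hypothetical
    writeConnectionContext(context);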

 IPC Client sends wrong connection context
 -

 Key: HADOOP-9012
 URL: https://issues.apache.org/jira/browse/HADOOP-9012
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9012.patch


 The IPC client will send the wrong connection context when asked to switch to 
 simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9012) IPC Client sends wrong connection context

2012-11-06 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9012:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2.

 IPC Client sends wrong connection context
 -

 Key: HADOOP-9012
 URL: https://issues.apache.org/jira/browse/HADOOP-9012
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9012.patch


 The IPC client will send the wrong connection context when asked to switch to 
 simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

