[jira] [Updated] (HADOOP-9321) fix coverage org.apache.hadoop.net

2013-02-21 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated HADOOP-9321:
-

Status: Patch Available  (was: Open)

 fix coverage  org.apache.hadoop.net
 ---

 Key: HADOOP-9321
 URL: https://issues.apache.org/jira/browse/HADOOP-9321
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 0.23.5, 2.0.3-alpha, 3.0.0
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9321-trunk.patch


 fix coverage  org.apache.hadoop.net
 HADOOP-9321-trunk.patch patch for trunk, branch-2, branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9314) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-21 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev resolved HADOOP-9314.


Resolution: Duplicate

 Cover package org.apache.hadoop.hdfs.server.common with tests
 -

 Key: HADOOP-9314
 URL: https://issues.apache.org/jira/browse/HADOOP-9314
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev





[jira] [Resolved] (HADOOP-9268) Cover package org.apache.hadoop.hdfs.server.common with tests

2013-02-21 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev resolved HADOOP-9268.


Resolution: Duplicate

 Cover package org.apache.hadoop.hdfs.server.common  with tests
 --

 Key: HADOOP-9268
 URL: https://issues.apache.org/jira/browse/HADOOP-9268
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9268-branch-0.23-a.patch, 
 HADOOP-9268-branch-2-a.patch, HADOOP-9268-trunk-a.patch






[jira] [Resolved] (HADOOP-9298) Cover with unit test package org.apache.hadoop.hdfs.tools

2013-02-21 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev resolved HADOOP-9298.


Resolution: Duplicate

 Cover with unit test package org.apache.hadoop.hdfs.tools
 -

 Key: HADOOP-9298
 URL: https://issues.apache.org/jira/browse/HADOOP-9298
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9298-branch-0.23-a.patch, 
 HADOOP-9298-branch-2-a.patch, HADOOP-9298-trunk-a.patch






[jira] [Commented] (HADOOP-9321) fix coverage org.apache.hadoop.net

2013-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583038#comment-13583038
 ] 

Hadoop QA commented on HADOOP-9321:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12570273/HADOOP-9321-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 one of tests included doesn't have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2217//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2217//console

This message is automatically generated.

 fix coverage  org.apache.hadoop.net
 ---

 Key: HADOOP-9321
 URL: https://issues.apache.org/jira/browse/HADOOP-9321
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
Reporter: Aleksey Gorshkov
 Attachments: HADOOP-9321-trunk.patch


 fix coverage  org.apache.hadoop.net
 HADOOP-9321-trunk.patch patch for trunk, branch-2, branch-0.23



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583079#comment-13583079
 ] 

Steve Loughran commented on HADOOP-9112:


sorry, I've only just seen this. As the only person on *-dev who's ever written 
>1 JUnit test runner, I do encourage people to point me at these kinds of JIRAs.

Timing out tests without explicit {{timeout}} attributes is probably dealt 
with by having Maven kill the JUnit runner process. Due to the (historical, 
flawed) fact that the XML results stick the summary data up as attributes on 
the root XML node, XML report generators have to buffer up the entire file 
before writing. Killed process = no output. Text reporters shouldn't have this 
problem; a past XHTML reporter I wrote would stream HTML out and provide 
something useful, along with colour-coding log4j output based on severity 
levels: 
[OneHostHtmlListener|http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/components/xunit/src/org/smartfrog/services/xunit/listeners/html/OneHostHtmlListener.java?revision=8886&view=markup].
Fixing the Ant-originated XML and tooling would be the ideal outcome here, 
just hard, as you have to delve not just into Ant and Maven code, but into 
downstream tools like Jenkins. 

One problem with timeout= attributes is that they can be very brittle: my test-
running machine may be slower than yours. We need a standard recommended (long) 
test run time, which should be minutes, just to ensure that the Maven JUnit4 
runner runs it in timeout mode. It doesn't matter what the timeout is as long 
as it is > 0, greater than the time it takes to complete on everyone's 
boxes/VMs, and less than the absolute Maven test run timeout.

Once the {{@Test(timeout)}} property is set, tests themselves can raise 
{{TimeoutException}}, which is translated into a timeout, so permitting tests 
to implement their own (property-configurable) timeout logic. 
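The timeout behaviour Steve describes can be sketched in plain Java. This is an illustrative stand-in for what a JUnit-style runner does with {{@Test(timeout)}}, not the surefire implementation; {{runWithTimeout}} is a hypothetical helper:

```java
import java.util.concurrent.*;

public class TimeoutRunnerSketch {
    // Hypothetical helper: run a "test body" with an enforced timeout,
    // roughly the way a JUnit-style runner handles @Test(timeout = ...).
    static String runWithTimeout(Callable<String> testBody, long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(testBody);
        try {
            return result.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true);   // interrupt the hung test thread
            return "TIMEOUT";
        } catch (Exception e) {
            return "ERROR: " + e;
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A fast test completes normally...
        System.out.println(runWithTimeout(() -> "PASS", 1000));
        // ...while a hung test is reported as a timeout instead of
        // stalling the whole run.
        System.out.println(runWithTimeout(() -> {
            Thread.sleep(60_000);
            return "PASS";
        }, 200));
    }
}
```

Because the runner watches from outside the test thread, the report can still be written even though the test never returned, which is exactly what the buffered XML reporters above cannot do when the whole process is killed.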


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583094#comment-13583094
 ] 

Hudson commented on HADOOP-9112:


Integrated in Hadoop-Yarn-trunk #134 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/134/])
HADOOP-9112. test-patch should -1 for @Tests without a timeout (Surenkumar 
Nihalani via bobby) (Revision 1448285)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1448285
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583115#comment-13583115
 ] 

nkeywal commented on HADOOP-9112:
-

An issue I have with timeouts is that we have to change them during debugging 
(maybe there is an option I don't know of).

Anyway, a test process can fail in the afterSuite phase (basically, when you're 
shutting down the cluster). Surefire may not kill it, you won't know, 
and you will find out at the next build. 

In HBase, we do that before running the tests:
  ### kill any process remaining from another test, maybe even another project
  jps | grep surefirebooter | cut -d ' ' -f 1 | xargs kill -9 2>/dev/null

And this after:
  ZOMBIE_TESTS_COUNT=`jps | grep surefirebooter | wc -l`
  if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
    # It seems sometimes the tests are not dying immediately. Let's give them 30s.
    echo "Suspicious java process found - waiting 30s to see if they are just slow to stop"
    sleep 30
    ZOMBIE_TESTS_COUNT=`jps | grep surefirebooter | wc -l`
    if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
      echo "There are $ZOMBIE_TESTS_COUNT zombie tests, they should have been killed by surefire but survived"
      echo "=== BEGIN zombies jstack extract"
      ZB_STACK=`jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack | grep ".test" | grep "\.java"`
      jps | grep surefirebooter | cut -d ' ' -f 1 | xargs -n 1 jstack
      echo "=== END zombies jstack extract"
      JIRA_COMMENT="$JIRA_COMMENT

    {color:red}-1 core zombie tests{color}.  There are ${ZOMBIE_TESTS_COUNT} zombie test(s): ${ZB_STACK}"
      BAD=1
      jps | grep surefirebooter | cut -d ' ' -f 1 | xargs kill -9
    else
      echo "We're ok: there is no zombie test, but some tests took some time to stop"
    fi
  else
    echo "We're ok: there is no zombie test"
  fi

See http://www.mail-archive.com/issues@hbase.apache.org/msg73169.html for the 
outcome (it's actually a hdfs zombie, this was before we started killing the 
zombies at the beginning of our tests). The whole stack is in the build logs.

It has improved the precommit success ratio.



It was my two cents :-)



 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583123#comment-13583123
 ] 

Surenkumar Nihalani commented on HADOOP-9112:
-

[~ste...@apache.org], having a recommended value works. I was thinking of 
having intermediate value substitution in test-patch: if the value is x and 
the coefficient of my machine is c (>= 1), we could have it configurable so 
that it substitutes {{(long)(coefficient * valueOfTimeoutForThat@Test)}}. 
This way, if anyone faces timeout exceptions, we can keep increasing the 
configurable coefficient until all of the tests pass as part of initial setup.

Would that be too much overhead for configuration?
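The substitution being proposed is simple arithmetic; a minimal sketch (the method and class names are hypothetical, not anything in test-patch):

```java
public class TimeoutCoefficientSketch {
    // Per-machine coefficient (>= 1) scales the base @Test timeout
    // declared in the patch, so slow machines get proportionally more time.
    static long scaledTimeout(long baseTimeoutMillis, double coefficient) {
        return (long) (coefficient * baseTimeoutMillis);
    }

    public static void main(String[] args) {
        System.out.println(scaledTimeout(30_000, 1.0)); // fast machine: 30000
        System.out.println(scaledTimeout(30_000, 2.5)); // slow machine: 75000
    }
}
```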

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583158#comment-13583158
 ] 

Hudson commented on HADOOP-9112:


Integrated in Hadoop-Hdfs-trunk #1323 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1323/])
HADOOP-9112. test-patch should -1 for @Tests without a timeout (Surenkumar 
Nihalani via bobby) (Revision 1448285)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1448285
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check



[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583173#comment-13583173
 ] 

Hudson commented on HADOOP-9112:


Integrated in Hadoop-Mapreduce-trunk #1351 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1351/])
HADOOP-9112. test-patch should -1 for @Tests without a timeout (Surenkumar 
Nihalani via bobby) (Revision 1448285)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1448285
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check



[jira] [Commented] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2013-02-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583223#comment-13583223
 ] 

Daryn Sharp commented on HADOOP-9317:
-

As background for the motivation: in some production environments we have 
hundreds of job launches every few minutes. The launches may perform dozens of 
hadoop commands before actually submitting the job. We are seeing a huge 
failure rate, forcing otherwise unnecessary retry loops, because of this kinit 
issue, whether the kinit is issued explicitly by the user or implicitly by 
hadoop's background renewal. As the job load increases, we are seeing more and 
more failures that break through the retry loop.

@Aaron:
I have not tested with IBM's java. If you have convenient access, would you be 
able to test it for me? On the bright side, even if it's broken, it won't be a 
problem unless the user sets the KRB5KEYTAB env var to activate the new code. 
If it is broken, could I file another jira to make it work for IBM's java?

@Allen:
Yes, kinit will, regardless of -R, unlink the file, open/write the principal, 
and open/write the TGT. So your suggestion won't work: concurrent launches 
issuing the kinit will still hit the race condition where one process may be 
issuing the kinit while another is trying to run hadoop commands. Obtaining 
a new TGT for every launch would place tremendously more pressure on the KDC, 
which is why this change tries the ticket cache, falls back to the keytab, and 
updates the ticket cache if it had to fall back.
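The cache-then-keytab order described above can be sketched abstractly. None of these names are the real {{UserGroupInformation}} API; the suppliers merely stand in for the ticket-cache and keytab login paths:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class LoginFallbackSketch {
    // Hypothetical login flow: prefer the ticket cache (no KDC round trip),
    // fall back to the keytab, and refresh the cache after a fallback so
    // later hadoop commands can use the cache again.
    static String login(Supplier<Optional<String>> ticketCache,
                        Supplier<String> keytabLogin,
                        StringBuilder cacheLog) {
        Optional<String> cached = ticketCache.get();
        if (cached.isPresent()) {
            return cached.get();            // 1. cache hit: cheap path
        }
        String tgt = keytabLogin.get();     // 2. fallback: hits the KDC
        cacheLog.append("updated:").append(tgt); // 3. write cache back
        return tgt;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // Cache miss: keytab is used and the cache gets updated.
        System.out.println(login(() -> Optional.empty(), () -> "tgt-from-keytab", log));
        System.out.println(log);
    }
}
```

The point of step 3 is that only processes that actually fell back touch the cache file, so the steady-state load avoids both the KDC pressure and the kinit unlink/rewrite race.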


 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programatically.  This forces keytab-based users 
 running hadoop commands to periodically issue a kinit from the keytab.  A 
 race condition exists during the kinit when the ticket cache is deleted and 
 re-created.  Hadoop commands will fail when the ticket cache does not 
 momentarily exist.



[jira] [Commented] (HADOOP-9298) Cover with unit test package org.apache.hadoop.hdfs.tools

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583315#comment-13583315
 ] 

Suresh Srinivas commented on HADOOP-9298:
-

[~vbondarev] There is no need to close the issue. You can move the issue from 
one project to the other.

 Cover with unit test package org.apache.hadoop.hdfs.tools
 -

 Key: HADOOP-9298
 URL: https://issues.apache.org/jira/browse/HADOOP-9298
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9298-branch-0.23-a.patch, 
 HADOOP-9298-branch-2-a.patch, HADOOP-9298-trunk-a.patch






[jira] [Resolved] (HADOOP-9313) Remove spurious mkdir from hadoop-config.cmd

2013-02-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9313.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

+1 for the patch. I committed it to branch-trunk-win.

Thank you Ivan!

 Remove spurious mkdir from hadoop-config.cmd
 

 Key: HADOOP-9313
 URL: https://issues.apache.org/jira/browse/HADOOP-9313
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: trunk-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: trunk-win

 Attachments: HADOOP-9313.branch-trunk-win.cmd.patch


 The following mkdir seems to have been accidentally added to Windows cmd 
 script and should be removed:
 {code}
 mkdir c:\tmp\dir1
 {code}



[jira] [Commented] (HADOOP-9043) winutils can create unusable symlinks

2013-02-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583336#comment-13583336
 ] 

Chris Nauroth commented on HADOOP-9043:
---

{quote}
To fix this specific problem, we would want to change FileUtil#symlink to 
normalize the slashes. If you take a look at the branch-1-win code, it 
already does this so you can just forward port the patch.
{quote}

This code was already ported to branch-trunk-win several months ago:

{code}
  public static int symLink(String target, String linkname) throws IOException{
// Run the input paths through Java's File so that they are converted to the
// native OS form
File targetFile = new File(target);
File linkFile = new File(linkname);
{code}

I believe this jira is no longer valid, at least under its current description. 
 When I filed it, I didn't realize that Windows requires slightly different API 
calls for creating a symlink that targets a directory vs. a file.  Therefore, 
winutils really does need to call {{DirectoryCheck}} to determine the type of 
target.  As a consequence, winutils differs from Unix ln in that it cannot 
create a dangling symlink.  (It has no way of knowing whether the caller is 
trying to create a dangling file symlink or a dangling directory symlink.)  
I believe that both the Java code and the C code are doing the right thing for 
us now, without further changes.

The remaining issue is the failure of {{TestLocalFSFileContextSymlink}} on 
Windows, which is what prompted me to file this jira initially.  We now know 
that this is an inevitable platform difference, so let's use 
{{Assert.assumeTrue(!Shell.WINDOWS)}} to skip the tests that can't possibly 
pass on Windows.  If needed, we could also add more tests to cover Windows 
behavior, guarded with {{Assert.assumeTrue(Shell.WINDOWS)}}.  AFAIK, Hadoop 
product code does not actually require the ability to create a dangling 
symlink, and the test suite is just trying to cover ln functionality 
exhaustively.

I propose that we close this jira as invalid and create a new one to fix the 
tests.


 winutils can create unusable symlinks
 -

 Key: HADOOP-9043
 URL: https://issues.apache.org/jira/browse/HADOOP-9043
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1-win, trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.patch


 In general, the winutils symlink command rejects attempts to create symlinks 
 targeting a destination file that does not exist.  However, if given a 
 symlink destination with forward slashes pointing at a file that does exist, 
 then it creates the symlink with the forward slashes, and then attempts to 
 open the file through the symlink will fail.



[jira] [Commented] (HADOOP-9315) CLONE of HADOOP-9249 for branch-2 - hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583342#comment-13583342
 ] 

Suresh Srinivas commented on HADOOP-9315:
-

This is not required for trunk?

 CLONE of HADOOP-9249 for branch-2 - hadoop-maven-plugins version-info goal 
 causes build failure when running with Clover
 

 Key: HADOOP-9315
 URL: https://issues.apache.org/jira/browse/HADOOP-9315
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Dennis Y
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 clone of https://issues.apache.org/jira/browse/HADOOP-9249 for branch-2
 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.



[jira] [Commented] (HADOOP-9315) CLONE of HADOOP-9249 for branch-2 - hadoop-maven-plugins version-info goal causes build failure when running with Clover

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583343#comment-13583343
 ] 

Suresh Srinivas commented on HADOOP-9315:
-

Never mind, I read the title of the jira :)

 CLONE of HADOOP-9249 for branch-2 - hadoop-maven-plugins version-info goal 
 causes build failure when running with Clover
 

 Key: HADOOP-9315
 URL: https://issues.apache.org/jira/browse/HADOOP-9315
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Dennis Y
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 clone of https://issues.apache.org/jira/browse/HADOOP-9249 for branch-2
 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.



[jira] [Created] (HADOOP-9322) LdapGroupsMapping doesn't seem to set a timeout for its directory search

2013-02-21 Thread Harsh J (JIRA)
Harsh J created HADOOP-9322:
---

 Summary: LdapGroupsMapping doesn't seem to set a timeout for its 
directory search
 Key: HADOOP-9322
 URL: https://issues.apache.org/jira/browse/HADOOP-9322
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Priority: Minor


We don't appear to be setting a timeout via 
http://docs.oracle.com/javase/6/docs/api/javax/naming/directory/SearchControls.html#setTimeLimit(int)
 before we search with 
http://docs.oracle.com/javase/6/docs/api/javax/naming/directory/DirContext.html#search(javax.naming.Name,%20java.lang.String,%20javax.naming.directory.SearchControls).

This may occasionally lead to some unwanted NN pauses due to lock-holding in 
the operations that do group lookups. It is better to define a timeout than to 
rely on 0 (infinite wait).
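For reference, bounding the search is a one-line change on the {{SearchControls}} object passed to {{DirContext.search}} (the 10-second value below is an arbitrary example, not a recommended default):

```java
import javax.naming.directory.SearchControls;

public class LdapTimeoutSketch {
    public static void main(String[] args) {
        SearchControls controls = new SearchControls();
        // The default time limit is 0, i.e. wait indefinitely (the current
        // behavior this issue describes); a positive value bounds the
        // directory search in milliseconds.
        controls.setTimeLimit(10_000); // hypothetical 10s limit
        System.out.println(controls.getTimeLimit()); // 10000
    }
}
```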



[jira] [Commented] (HADOOP-9322) LdapGroupsMapping doesn't seem to set a timeout for its directory search

2013-02-21 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583353#comment-13583353
 ] 

Harsh J commented on HADOOP-9322:
-

However, http://osdir.com/ml/java.sun.jndi/2005-09/msg5.html does note that 
some systems may disobey this. FWIW, let's make it work at least for systems 
that do respect it.

 LdapGroupsMapping doesn't seem to set a timeout for its directory search
 

 Key: HADOOP-9322
 URL: https://issues.apache.org/jira/browse/HADOOP-9322
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Priority: Minor
  Labels: performance

 We don't appear to be setting a timeout via 
 http://docs.oracle.com/javase/6/docs/api/javax/naming/directory/SearchControls.html#setTimeLimit(int)
  before we search with 
 http://docs.oracle.com/javase/6/docs/api/javax/naming/directory/DirContext.html#search(javax.naming.Name,%20java.lang.String,%20javax.naming.directory.SearchControls).
 This may occasionally lead to some unwanted NN pauses due to lock-holding on 
 the operations that do group lookups. It is better to define a timeout than to 
 rely on the default of 0 (infinite wait).
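A minimal sketch of the proposed change, using only the standard JDK JNDI classes; the 10-second limit is an illustrative value, not something taken from a patch:

```java
import javax.naming.directory.SearchControls;

public class LdapTimeoutExample {
    // Build SearchControls with a finite time limit so a directory search
    // cannot block indefinitely (the default of 0 means wait forever).
    static SearchControls boundedControls(int timeLimitMs) {
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        controls.setTimeLimit(timeLimitMs); // > 0 bounds the search
        return controls;
    }

    public static void main(String[] args) {
        SearchControls c = boundedControls(10_000);
        System.out.println(c.getTimeLimit()); // prints 10000
        // the real code would then pass c to ctx.search(name, filter, c)
    }
}
```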

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9315) Port HADOOP-9249 to branch-2 to fix build failures

2013-02-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9315:


Summary: Port HADOOP-9249 to branch-2 to fix build failures  (was: CLONE of 
HADOOP-9249 for branch-2 - hadoop-maven-plugins version-info goal causes build 
failure when running with Clover)

 Port HADOOP-9249 to branch-2 to fix build failures
 --

 Key: HADOOP-9315
 URL: https://issues.apache.org/jira/browse/HADOOP-9315
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Dennis Y
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 clone of https://issues.apache.org/jira/browse/HADOOP-9249 for branch-2
 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9315) Port HADOOP-9249 to branch-2 to fix build failures

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583356#comment-13583356
 ] 

Suresh Srinivas commented on HADOOP-9315:
-

[~dennisyv] Can you please sign the ICLA - 
http://www.apache.org/licenses/icla.txt. Once that is done, I will add you as a 
contributor, assign this jira to you, and commit the patch.

 Port HADOOP-9249 to branch-2 to fix build failures
 --

 Key: HADOOP-9315
 URL: https://issues.apache.org/jira/browse/HADOOP-9315
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Dennis Y
Assignee: Chris Nauroth
 Attachments: HADOOP-9249.1.patch


 clone of https://issues.apache.org/jira/browse/HADOOP-9249 for branch-2
 Running Maven with the -Pclover option for code coverage causes the build to 
 fail because of not finding a Clover class while running hadoop-maven-plugins 
 version-info.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583379#comment-13583379
 ] 

Robert Joseph Evans commented on HADOOP-9112:
-

Sorry I should have caught the return code being wrong. I just checked in the 
fixed return codes in version 7 of the patch.

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check
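The "simple check" could look something like this regex-based sketch; it is hypothetical (the real enforcement lives in dev-support/test-patch.sh as a shell script), but it captures the idea of flagging @Test annotations without a timeout attribute:

```java
import java.util.regex.Pattern;

public class TimeoutCheck {
    // Returns true only when the @Test annotation declares a timeout attribute.
    static boolean hasTimeout(String annotation) {
        return Pattern.compile("@Test\\s*\\(.*timeout\\s*=").matcher(annotation).find();
    }

    public static void main(String[] args) {
        System.out.println(hasTimeout("@Test"));                // false -> would earn a -1
        System.out.println(hasTimeout("@Test(timeout=30000)")); // true
    }
}
```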

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9043) winutils can create unusable symlinks

2013-02-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583384#comment-13583384
 ] 

Arpit Agarwal commented on HADOOP-9043:
---

Hi Ivan, Chris, 

I think the behavior of winutils is orthogonal to what the Java code invoking 
it does. If we are shipping winutils with the distribution it should do the 
right thing, which is to either fail the creation of unusable symlinks or 
handle the path conversion.

 winutils can create unusable symlinks
 -

 Key: HADOOP-9043
 URL: https://issues.apache.org/jira/browse/HADOOP-9043
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1-win, trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.patch


 In general, the winutils symlink command rejects attempts to create symlinks 
 targeting a destination file that does not exist.  However, if given a 
 symlink destination with forward slashes pointing at a file that does exist, 
 then it creates the symlink with the forward slashes, and then attempts to 
 open the file through the symlink will fail.
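One possible shape of the path conversion discussed in this issue, as an illustrative sketch only (the helper name is invented; winutils itself is native code, and the committed fix may differ):

```java
public class WinPathNormalize {
    // Convert forward slashes to the backslashes Windows symlink targets
    // require, so a target like "dir/sub/file.txt" becomes usable.
    static String toWindowsPath(String path) {
        return path.replace('/', '\\');
    }

    public static void main(String[] args) {
        System.out.println(toWindowsPath("dir/sub/file.txt")); // dir\sub\file.txt
    }
}
```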

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583389#comment-13583389
 ] 

Hudson commented on HADOOP-9112:


Integrated in Hadoop-trunk-Commit #3374 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3374/])
amendment to HADOOP-9112 fix return codes (Surenkumar Nihalani via bobby) 
(Revision 1448745)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1448745
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh


 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9309) test failures on Windows due to UnsatisfiedLinkError in NativeCodeLoader#buildSupportsSnappy

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583417#comment-13583417
 ] 

Suresh Srinivas commented on HADOOP-9309:
-

+1. I will commit this patch shortly.

 test failures on Windows due to UnsatisfiedLinkError in 
 NativeCodeLoader#buildSupportsSnappy
 

 Key: HADOOP-9309
 URL: https://issues.apache.org/jira/browse/HADOOP-9309
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-9309.1.patch, HADOOP-9309.patch


 Checking for Snappy support calls native method 
 {{NativeCodeLoader#buildSupportsSnappy}}.  This method has not been 
 implemented for Windows in hadoop.dll, so it throws {{UnsatisfiedLinkError}}.
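The failure mode described here can be contained with a guard like the following sketch; the method name mirrors the report, but the catch-and-fallback logic is an assumption for illustration, not the committed fix:

```java
public class NativeGuard {
    // Treat a missing native symbol as "feature unsupported" rather than
    // letting UnsatisfiedLinkError escape to callers.
    static boolean snappySupported() {
        try {
            return buildSupportsSnappy(); // native in the real code; stubbed below
        } catch (UnsatisfiedLinkError e) {
            return false; // library loaded, but this symbol is not implemented
        }
    }

    // Stub standing in for the native method, simulating a hadoop.dll
    // that lacks the buildSupportsSnappy symbol.
    static boolean buildSupportsSnappy() {
        throw new UnsatisfiedLinkError("buildSupportsSnappy");
    }

    public static void main(String[] args) {
        System.out.println(snappySupported()); // false
    }
}
```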

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9309) test failures on Windows due to UnsatisfiedLinkError in NativeCodeLoader#buildSupportsSnappy

2013-02-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9309.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

I committed the patch to branch-trunk-win. Thank you Arpit!

Thank you Chris for the review.

 test failures on Windows due to UnsatisfiedLinkError in 
 NativeCodeLoader#buildSupportsSnappy
 

 Key: HADOOP-9309
 URL: https://issues.apache.org/jira/browse/HADOOP-9309
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Fix For: trunk-win

 Attachments: HADOOP-9309.1.patch, HADOOP-9309.patch


 Checking for Snappy support calls native method 
 {{NativeCodeLoader#buildSupportsSnappy}}.  This method has not been 
 implemented for Windows in hadoop.dll, so it throws {{UnsatisfiedLinkError}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9309) test failures on Windows due to UnsatisfiedLinkError in NativeCodeLoader#buildSupportsSnappy

2013-02-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583437#comment-13583437
 ] 

Arpit Agarwal commented on HADOOP-9309:
---

Thanks Suresh.

 test failures on Windows due to UnsatisfiedLinkError in 
 NativeCodeLoader#buildSupportsSnappy
 

 Key: HADOOP-9309
 URL: https://issues.apache.org/jira/browse/HADOOP-9309
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Fix For: trunk-win

 Attachments: HADOOP-9309.1.patch, HADOOP-9309.patch


 Checking for Snappy support calls native method 
 {{NativeCodeLoader#buildSupportsSnappy}}.  This method has not been 
 implemented for Windows in hadoop.dll, so it throws {{UnsatisfiedLinkError}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-02-21 Thread Surenkumar Nihalani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surenkumar Nihalani resolved HADOOP-9112.
-

Resolution: Fixed

 test-patch should -1 for @Tests without a timeout
 -

 Key: HADOOP-9112
 URL: https://issues.apache.org/jira/browse/HADOOP-9112
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Surenkumar Nihalani
 Fix For: 3.0.0

 Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
 HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
 HADOOP-9112-6.patch, HADOOP-9112-7.patch


 With our current test running infrastructure, if a test with no timeout set 
 runs too long, it triggers a surefire-wide timeout, which for some reason 
 doesn't show up as a failed test in the test-patch output. Given that, we 
 should require that all tests have a timeout set, and have test-patch enforce 
 this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583509#comment-13583509
 ] 

Hudson commented on HADOOP-1:
-

Integrated in hive-trunk-hadoop1 #96 (See 
[https://builds.apache.org/job/hive-trunk-hadoop1/96/])
HIVE-3788 : testCliDriver_repair fails on hadoop-1 (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1448699)

 Result = ABORTED

 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9318) when exiting on a signal, print the signal name first

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583562#comment-13583562
 ] 

Suresh Srinivas commented on HADOOP-9318:
-

This is a useful functionality.

Comments:
# Please add javadoc to the methods
# Minor nits - in the register() method, create the StringBuilder after the 
first check that throws an exception. Optionally, would it be better to throw 
IllegalStateException instead of a plain RTE?
# Given the way the Handler code is, you need only a single instance of 
Handler. It can be registered in the register method itself using the 
{{signal.handle()}} call, right?
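The double-registration guard being discussed can be sketched without touching sun.misc.Signal at all; the class and method names here are illustrative, not from the patch:

```java
public class SignalLogger {
    private static boolean registered = false;

    // A second register() call fails fast with IllegalStateException,
    // as suggested in the review, rather than a generic RuntimeException.
    static synchronized void register() {
        if (registered) {
            throw new IllegalStateException("SignalLogger already registered");
        }
        registered = true;
        // the real code would install a single shared handler here,
        // e.g. via Signal.handle(signal, handler)
    }

    public static void main(String[] args) {
        register();
        try {
            register();
        } catch (IllegalStateException e) {
            System.out.println("second register rejected");
        }
    }
}
```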


 when exiting on a signal, print the signal name first
 -

 Key: HADOOP-9318
 URL: https://issues.apache.org/jira/browse/HADOOP-9318
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.4-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9318.001.patch


 On UNIX, it would be nice to know when a Hadoop daemon had exited on a 
 signal.  For example, if a daemon exited because the system administrator 
 sent SIGTERM (i.e. {{killall java}}), it would be nice to know that.  
 Although some of this can be deduced from context and {{SHUTDOWN_MSG}}, it 
 would be nice to have it be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9318) when exiting on a signal, print the signal name first

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583563#comment-13583563
 ] 

Suresh Srinivas commented on HADOOP-9318:
-

Also it may be a good idea to write a simple unit test to see if multiple 
register calls indeed results in exception.

 when exiting on a signal, print the signal name first
 -

 Key: HADOOP-9318
 URL: https://issues.apache.org/jira/browse/HADOOP-9318
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.4-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-9318.001.patch


 On UNIX, it would be nice to know when a Hadoop daemon had exited on a 
 signal.  For example, if a daemon exited because the system administrator 
 sent SIGTERM (i.e. {{killall java}}), it would be nice to know that.  
 Although some of this can be deduced from context and {{SHUTDOWN_MSG}}, it 
 would be nice to have it be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9293) For S3 use credentials file

2013-02-21 Thread Andy Sautins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583569#comment-13583569
 ] 

Andy Sautins commented on HADOOP-9293:
--


  I would be interested to get the perspective of someone who uses EMR/AWS.  In 
my opinion it is a very EMR/AWS specific use case that I'm trying to address, 
but I agree that my initial stab probably isn't the best approach.

  I still am uncomfortable with your suggestions for scripts to extract 
credentials.  At the end of the day I want the ( probably ) already existing 
credentials file to be the system of record for client side credentials.  To me 
it just doesn't make sense to have to place the credentials in the hadoop 
configuration files ( either directly or through some script manipulation ) if 
they are already available in another location. 

  I find the SOCKS proxy implementation to be very interesting for this 
situation.  It is not only very similar to what I'm trying to achieve, but 
would most likely be used in conjunction with the S3Credentials mechanism I am 
proposing.  If you look at how one might use SOCKS you would do the following:

  On the client machine in core-site.xml

  
<property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.SocksSocketFactory</value></property>

  Then on the server nodes you would set the following:

  
<property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value><final>true</final></property>

  That uses the SOCKS proxy factory on the client machine only.  I uploaded 
another patch that takes an approach very similar to the SOCKS proxy 
configuration.  With this approach I would set the following

  On the client machine in core-site.xml

  
<property><name>fs.s3.credentials.class</name><value>org.apache.hadoop.fs.s3.S3CredentialsFromFile</value></property>
  
<property><name>fs.s3.credentials.file</name><value>/path/to/credentials.json</value></property>

  On the server

  
<property><name>fs.s3.credentials.class</name><value>org.apache.hadoop.fs.s3.S3Credentials</value><final>true</final></property>

 That mimics what is done with the SOCKS proxy reasonably nicely I think and 
allows for specialized S3Credentials behavior.  

 Note if you still don't like it I'm happy to look to add this to contrib or 
just close out the JIRA.  This is functionality we are using and I believe 
others may find value in it as well.

 If this seems like a reasonable approach I'll address your above concerns 
around documentation and tests next.
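A rough sketch of how an S3CredentialsFromFile-style class might read such a file. The field names follow the EMR credentials.json layout as I understand it, and the regex-based extraction is purely illustrative; treat all names here as assumptions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CredentialsFileSketch {
    // Pull a single string-valued field out of a small JSON document.
    // A real implementation would use a proper JSON parser.
    static String extract(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"")
                           .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // "access-id" / "private-key" mirror the elastic-mapreduce
        // credentials.json fields (assumed, not verified against EMR docs).
        String json = "{ \"access-id\": \"AKIAEXAMPLE\", \"private-key\": \"secret\" }";
        System.out.println(extract(json, "access-id")); // AKIAEXAMPLE
    }
}
```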




 For S3 use credentials file
 ---

 Key: HADOOP-9293
 URL: https://issues.apache.org/jira/browse/HADOOP-9293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 1.0.2
 Environment: Linux
Reporter: Andy Sautins
Priority: Minor
  Labels: features, newbie
 Attachments: HADOOP-9293.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 The following document describes the current way that S3 credentials can be 
 specified ( http://wiki.apache.org/hadoop/AmazonS3 ).  In summary they are:
   * in the S3 URI.
   * in the hadoop-site.xml file as 
   ** fs.s3.awsAccessKeyId
   ** fs.s3.awsSecretAccessKey 
   ** fs.s3n.awsAccessKeyId
   ** fs.s3n.awsSecretAccessKey
 The Amazon EMR tool elastic-mapreduce already provides the ability to use a 
 credentials file ( see 
 http://s3.amazonaws.com/awsdocs/ElasticMapReduce/latest/emr-qrc.pdf ).  
 I would propose that we allow roughly the same access to credentials through 
 a credentials file that is currently provided by elastic-mapreduce.  This 
 should allow for centralized administration of credentials which should be 
 positive for security.
 I propose the following properties:
 {quote}

 <property><name>fs.s3.awsCredentialsFile</name><value>/path/to/file</value></property>

 <property><name>fs.s3n.awsCredentialsFile</name><value>/path/to/file</value></property>
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9293) For S3 use credentials file

2013-02-21 Thread Andy Sautins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Sautins updated HADOOP-9293:
-

Attachment: HADOOP-9293_1.patch

 For S3 use credentials file
 ---

 Key: HADOOP-9293
 URL: https://issues.apache.org/jira/browse/HADOOP-9293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 1.0.2
 Environment: Linux
Reporter: Andy Sautins
Priority: Minor
  Labels: features, newbie
 Attachments: HADOOP-9293_1.patch, HADOOP-9293.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 The following document describes the current way that S3 credentials can be 
 specified ( http://wiki.apache.org/hadoop/AmazonS3 ).  In summary they are:
   * in the S3 URI.
   * in the hadoop-site.xml file as 
   ** fs.s3.awsAccessKeyId
   ** fs.s3.awsSecretAccessKey 
   ** fs.s3n.awsAccessKeyId
   ** fs.s3n.awsSecretAccessKey
 The Amazon EMR tool elastic-mapreduce already provides the ability to use a 
 credentials file ( see 
 http://s3.amazonaws.com/awsdocs/ElasticMapReduce/latest/emr-qrc.pdf ).  
 I would propose that we allow roughly the same access to credentials through 
 a credentials file that is currently provided by elastic-mapreduce.  This 
 should allow for centralized administration of credentials which should be 
 positive for security.
 I propose the following properties:
 {quote}

 <property><name>fs.s3.awsCredentialsFile</name><value>/path/to/file</value></property>

 <property><name>fs.s3n.awsCredentialsFile</name><value>/path/to/file</value></property>
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-21 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-9320:


Labels: build-failure  (was: )

 Hadoop native build failure on ARM hard-float
 -

 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
 Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
 armv7l armv7l GNU/Linux
 $ java -version
 java version 1.8.0-ea
 Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
 Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: build-failure
 Attachments: HADOOP-9320.patch


 ARM JVM float ABI detection is failing in JNIFlags.cmake because 
 JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
 to assume a soft-float JVM. This causes the build to fail with hard-float 
 OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
 ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
 Java 7 will support hard-float as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-21 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583686#comment-13583686
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-9117:


Hi Alejandro,

Some maven plugin errors showed up recently.
{noformat}
Plugin execution not covered by lifecycle configuration: 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc 
(execution: compile-protoc, phase: generate-sources)pom.xml /hadoop-common  
line 296
{noformat}
It seems that they are related to this.  Do you know how to fix them?

 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using ant plugin exec. There is a 
 bug in the ant plugin exec task which does not consume the STDOUT or STDERR 
 appropriately making the build to stop sometimes (you need to press enter to 
 continue).
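The underlying problem, a child process stalling because nobody consumes its output, can be illustrated with this small sketch using plain java.lang.Process (not the Maven plugin itself):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class ExecDrain {
    // Run a command and actively drain its combined stdout/stderr so the
    // pipe buffers never fill up and block the child (the bug described above).
    static String runAndDrain(String... cmd) {
        try {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (InputStream out = p.getInputStream()) {
                out.transferTo(buf); // consume output instead of ignoring it
            }
            p.waitFor();
            return buf.toString().trim();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(runAndDrain("echo", "hello"));
    }
}
```

This assumes a POSIX `echo` binary is on the PATH; the point is only the draining pattern, not the particular command.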

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9323) Typos in API documentation

2013-02-21 Thread Hao Zhong (JIRA)
Hao Zhong created HADOOP-9323:
-

 Summary: Typos in API documentation
 Key: HADOOP-9323
 URL: https://issues.apache.org/jira/browse/HADOOP-9323
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Hao Zhong
Priority: Critical


Some typos are as follows:
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/ChecksumFileSystem.html
basice-basic

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html
sytem-system

http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/RawLocalFileSystem.html
http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/FilterFileSystem.html
inital-initial

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/TrashPolicy.html
paramater-parameter

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/PositionedReadable.html
equalt-equal

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/BytesWritable.html
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/Buffer.html
seqeunce-sequence

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Text.html
instatiation-instantiation

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/RecordOutput.html
alll-all

Please revise the documentation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9043) winutils can create unusable symlinks

2013-02-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583766#comment-13583766
 ] 

Ivan Mitic commented on HADOOP-9043:


Thanks Arpit. I am fine with failing early in winutils#symlink in case we 
detect a forward slash. Seems useful for this scenario given that the symlink 
creation succeeds, but the link is not working. However, I would not recommend 
trying to support both forward and backward slashes in winutils. Java is meant 
to solve this problem for us, so let's just build on top of it. 

 winutils can create unusable symlinks
 -

 Key: HADOOP-9043
 URL: https://issues.apache.org/jira/browse/HADOOP-9043
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1-win, trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-9043.branch-1-win.patch, HADOOP-9043.trunk.patch


 In general, the winutils symlink command rejects attempts to create symlinks 
 targeting a destination file that does not exist.  However, if given a 
 symlink destination with forward slashes pointing at a file that does exist, 
 then it creates the symlink with the forward slashes, and then attempts to 
 open the file through the symlink will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-02-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583774#comment-13583774
 ] 

Ivan Mitic commented on HADOOP-9232:


{quote}However, the JniBasedUnixGroupsMappingWin name seems a bit weird to me.
I think a better approach may be to create a separate 
JniBasedWinGroupsMapping.java class and add some Java code to choose between 
JniBasedWinGroupsMapping and JniBasedUnixGroupsMapping based on the platform. 
This way we can also separate the native implementation more easily in the 
future.
{quote}
Thanks Chuan for the review! Actually, I will not agree on this one. We should 
try to make Java side interfaces platform independent (where we can) and only 
have platform dependent implementations. In this specific case, the interface 
is quite simple and works well cross platforms so I think this is fine. Let me 
know what you think.

 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fallback to {{ShellBasedUnixGroupsMapping}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9324) Out of date API document

2013-02-21 Thread Hao Zhong (JIRA)
Hao Zhong created HADOOP-9324:
-

 Summary: Out of date API document
 Key: HADOOP-9324
 URL: https://issues.apache.org/jira/browse/HADOOP-9324
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Hao Zhong


The documentation is out of date. Some code references are broken:
1. 
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html
All Implemented Interfaces:
Closeable, DataInput, *org.apache.hadoop.fs.ByteBufferReadable*, 
*org.apache.hadoop.fs.HasFileDescriptor*, PositionedReadable, Seekable 

2.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Cluster.html
renewDelegationToken(*org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier>* token)
  Deprecated. Use Token.renew(*org.apache.hadoop.conf.Configuration*) 
instead

3.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/JobConf.html
Use MRAsyncDiskService.moveAndDeleteAllVolumes instead. 
I cannot find the MRAsyncDiskService class in the documentation of 2.0.3. 

4.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/join/CompositeRecordReader.html
 protected *org.apache.hadoop.mapred.join.CompositeRecordReader.JoinCollector* 
jc
Please globally search JoinCollector. It is deleted, but mentioned many times 
in the current documentation.

5.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/OutputCommitter.html
abortJob(JobContext context, *org.apache.hadoop.mapreduce.JobStatus.State 
runState*)  
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Job.html
public *org.apache.hadoop.mapreduce.JobStatus.State* getJobState()

6.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/SequenceFileOutputFormat.html
 static *org.apache.hadoop.io.SequenceFile.CompressionType* getOutputCompressionType
 static *org.apache.hadoop.io.SequenceFile.Reader[]* getReaders

7.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskCompletionEvent.html
Returns enum Status.SUCESS or Status.FAILURE. -> Status.SUCCEEDED?

8.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Job.html
 static *org.apache.hadoop.mapreduce.Job.TaskStatusFilter* getTaskOutputFilter
 org.apache.hadoop.mapreduce.TaskReport[] getTaskReports(TaskType type)

9.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/Reducer.html
cleanup(*org.apache.hadoop.mapreduce.Reducer.Context* context)

10.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/SequenceFileOutputFormat.html
 static *org.apache.hadoop.io.SequenceFile.CompressionType* getOutputCompressionType(JobConf conf)
  Get the *SequenceFile.CompressionType* for the output SequenceFile.
 static *org.apache.hadoop.io.SequenceFile.Reader[]* getReaders

11.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/partition/InputSampler.html
writePartitionFile(Job job, *org.apache.hadoop.mapreduce.lib.partition.InputSampler.Sampler<K,V>* sampler)

12.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/partition/TotalOrderPartitioner.html
contain JobContextImpl.getNumReduceTasks() - 1 keys.
The JobContextImpl class is already deleted.

13.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/OutputCommitter.html
Note that this is invoked for jobs with final runstate as JobStatus.State.FAILED 
or JobStatus.State.KILLED. -> JobStatus.FAILED / JobStatus.KILLED?

14.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/TaskAttemptContext.html
All Superinterfaces:
JobContext, *org.apache.hadoop.mapreduce.MRJobConfig*, Progressable, TaskAttemptContext

15.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/file/FileContext.html
All Implemented Interfaces:
*org.apache.hadoop.metrics.MetricsContext*

16.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics/spi/AbstractMetricsContext.html
*org.apache.hadoop.metrics.MetricsRecord* createRecord(String recordName)

17.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/net/DNSToSwitchMapping.html
If a name cannot be resolved to a rack, the implementation should return 
NetworkTopology.DEFAULT_RACK.
NetworkTopology is deleted.

18.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html
myprefix.sink.file.class=org.hadoop.metrics2.sink.FileSink -> org.apache.hadoop.metrics2.sink.FileSink?
org.apache.hadoop.metrics2.impl - The package is not found.

19.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/ha/HAServiceTarget.html
 abstract *org.apache.hadoop.ha.NodeFencer* getFencer()

20.http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/MarkableIterator.html
MarkableIterator is a wrapper iterator

[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583851#comment-13583851
 ] 

Hudson commented on HADOOP-1:
-

Integrated in Hive-trunk-h0.21 #1981 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1981/])
HIVE-3788 : testCliDriver_repair fails on hadoop-1 (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1448699)

 Result = FAILURE

 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.



[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2013-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583892#comment-13583892
 ] 

Hudson commented on HADOOP-1:
-

Integrated in Hive-trunk-hadoop2 #133 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/133/])
HIVE-3788 : testCliDriver_repair fails on hadoop-1 (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1448699)

 Result = FAILURE

 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.



[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583912#comment-13583912
 ] 

Chris Nauroth commented on HADOOP-9117:
---

One additional note: Nicholas is seeing this problem in Eclipse, not on the 
command line.  mvn command-line builds are working fine.  I don't use Eclipse, 
so I didn't catch this during our code review.

Some really quick research turned up this documentation:

http://wiki.eclipse.org/M2E_plugin_execution_not_covered

This seems to indicate that we need additional configuration in the pom.xml for 
Eclipse compatibility.  Alejandro, we're wondering if you have any other 
thoughts on a fix.  If not, then I suggest we file a follow-up jira to address 
Eclipse compatibility.


 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using the ant plugin exec task. There 
 is a bug in the ant plugin exec task which does not consume STDOUT or STDERR 
 appropriately, sometimes causing the build to stall (you need to press Enter 
 to continue).
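The stalling behaviour described above is a generic pitfall when executing a child process from Java: if the child fills the OS pipe buffer and nobody reads it, the child blocks, and the parent waiting on it blocks too. A minimal sketch of draining both streams to avoid the deadlock follows; the command and class name are illustrative, not taken from the Hadoop build:

```java
// Sketch: drain a child process's stdout and stderr on separate threads
// so the child can never block on a full pipe buffer while the parent
// waits on it. The "echo" command here is just an illustrative child.
import java.io.*;

public class DrainProcessOutput {
    // Copy everything from `in` to `out`, line by line, on its own thread.
    static Thread drain(InputStream in, PrintStream out) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) out.println(line);
            } catch (IOException ignored) { }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("echo", "hello").start();
        Thread out = drain(p.getInputStream(), System.out);
        Thread err = drain(p.getErrorStream(), System.err);
        int code = p.waitFor();   // safe: both pipes are being drained
        out.join();
        err.join();
        System.out.println("exit=" + code);
    }
}
```

A dedicated Maven plugin (as this issue proposes) sidesteps the problem by managing the protoc process's streams itself rather than relying on the ant exec task.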



[jira] [Commented] (HADOOP-9323) Typos in API documentation

2013-02-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583927#comment-13583927
 ] 

Suresh Srinivas commented on HADOOP-9323:
-

[~drzhonghao] do you want to take a stab at fixing these typos and posting a patch? 

 Typos in API documentation
 --

 Key: HADOOP-9323
 URL: https://issues.apache.org/jira/browse/HADOOP-9323
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Hao Zhong
Priority: Critical

 Some typos are as follows:
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/ChecksumFileSystem.html
 basice-basic
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html
 sytem-system
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/RawLocalFileSystem.html
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/FilterFileSystem.html
 inital-initial
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/TrashPolicy.html
 paramater-parameter
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/PositionedReadable.html
 equalt-equal
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/BytesWritable.html
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/Buffer.html
 seqeunce-sequence
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Text.html
 instatiation-instantiation
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/RecordOutput.html
 alll-all
 Please revise the documentation. 



[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-02-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583959#comment-13583959
 ] 

Chris Nauroth commented on HADOOP-9232:
---

{quote}
We should try to make Java side interfaces platform independent (where we can) 
and only have platform dependent implementations.
{quote}

Agreed with this.  The established pattern is to code platform-agnostic 
interfaces on the Java side and build platform-specific implementations of the 
JNI functions using conditional compilation.  Introducing a 
JniBasedWinGroupsMapping.java would be a divergence from this pattern.


 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fall back to {{ShellBasedUnixGroupsMapping}}.



[jira] [Updated] (HADOOP-7435) Make pre-commit checks run against the correct branch

2013-02-21 Thread Dennis Y (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Y updated HADOOP-7435:
-

Attachment: HADOOP-7435-branch-2--N7.patch
HADOOP-7435-branch-0.23--N7.patch

added HADOOP-9112 (test-patch should -1 for @Tests without a timeout) for 
branch-2 and branch-0.23

 Make pre-commit checks run against the correct branch
 -

 Key: HADOOP-7435
 URL: https://issues.apache.org/jira/browse/HADOOP-7435
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 0.23.0
Reporter: Aaron T. Myers
Assignee: Matt Foley
 Attachments: HADOOP-7435-branch-0.23--N3.patch, 
 HADOOP-7435-branch-0.23--N5.patch, HADOOP-7435-branch-0.23--N6.patch, 
 HADOOP-7435-branch-0.23--N7.patch, 
 HADOOP-7435-branch-0.23-patch-from-[branch-0.23-gd]-to-[fb-HADOOP-7435-branch-0.23-gd]-N2-1.patch,
  HADOOP-7435-branch-2--N2.patch, HADOOP-7435-branch-2--N5.patch, 
 HADOOP-7435-branch-2--N7.patch, HADOOP-7435-for-branch-0.23.patch, 
 HADOOP-7435-for-branch-2.patch, 
 HADOOP-7435-for-trunk-do-not-apply-this.patch, HADOOP-7435-trunk--N5.patch


 The Hudson pre-commit tests are presently only capable of testing a patch 
 against trunk. It'd be nice if this could be extended to automatically run 
 against the correct branch.



[jira] [Commented] (HADOOP-7435) Make pre-commit checks run against the correct branch

2013-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13584070#comment-13584070
 ] 

Hadoop QA commented on HADOOP-7435:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12570442/HADOOP-7435-branch-2--N7.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2218//console

This message is automatically generated.

 Make pre-commit checks run against the correct branch
 -

 Key: HADOOP-7435
 URL: https://issues.apache.org/jira/browse/HADOOP-7435
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 0.23.0
Reporter: Aaron T. Myers
Assignee: Matt Foley
 Attachments: HADOOP-7435-branch-0.23--N3.patch, 
 HADOOP-7435-branch-0.23--N5.patch, HADOOP-7435-branch-0.23--N6.patch, 
 HADOOP-7435-branch-0.23--N7.patch, 
 HADOOP-7435-branch-0.23-patch-from-[branch-0.23-gd]-to-[fb-HADOOP-7435-branch-0.23-gd]-N2-1.patch,
  HADOOP-7435-branch-2--N2.patch, HADOOP-7435-branch-2--N5.patch, 
 HADOOP-7435-branch-2--N7.patch, HADOOP-7435-for-branch-0.23.patch, 
 HADOOP-7435-for-branch-2.patch, 
 HADOOP-7435-for-trunk-do-not-apply-this.patch, HADOOP-7435-trunk--N5.patch


 The Hudson pre-commit tests are presently only capable of testing a patch 
 against trunk. It'd be nice if this could be extended to automatically run 
 against the correct branch.
