[jira] [Commented] (HADOOP-11379) Fix new findbugs warnings in hadoop-auth*

2014-12-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240124#comment-14240124
 ] 

Suresh Srinivas commented on HADOOP-11379:
--

[~gtCarrera9], feel free to fix findbugs in a single jira. It is not necessary 
to open individual bugs per module.

 Fix new findbugs warnings in hadoop-auth*
 -

 Key: HADOOP-11379
 URL: https://issues.apache.org/jira/browse/HADOOP-11379
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu
  Labels: findbugs
 Fix For: 2.7.0

 Attachments: HADOOP-11379-120914.patch


 When findbugs 3.0 is run locally, new warnings are generated. This jira aims 
 to address the new warnings in hadoop-auth and hadoop-auth-examples. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11280) TestWinUtils#testChmod fails after removal of NO_PROPAGATE_INHERIT_ACE.

2014-11-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201760#comment-14201760
 ] 

Suresh Srinivas commented on HADOOP-11280:
--

+1 for the patch

 TestWinUtils#testChmod fails after removal of NO_PROPAGATE_INHERIT_ACE.
 ---

 Key: HADOOP-11280
 URL: https://issues.apache.org/jira/browse/HADOOP-11280
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial
 Attachments: HADOOP-11280.1.patch


 As part of the Windows YARN secure container executor changes in YARN-2198, 
 {{chmod}} calls no longer use the {{NO_PROPAGATE_INHERIT_ACE}} flag.  This 
 change in behavior violates one of the assertions in 
 {{TestWinUtils#testChmod}}, so we need to update the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101212#comment-14101212
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

I reviewed the code the best I can. I only reviewed the core hadoop and hdfs 
changes. It is really hard, given that code formatting changes are mixed with 
real improvements, etc. This is a change that could have been done in a feature 
branch. [~aw], reviews certainly could have been made easier that way. That 
said, thank you for cleaning up the scripts. It looks much better now!

Comments:
# bin/hadoop no longer checks for the hdfs commands portmap and nfs3. Is this 
intentional?
# hadoop-daemon.sh usage no longer prints the --hosts optional parameter; this 
is intentional, right? Also, do all daemons now support the status option 
along with start and stop?
# Locating HADOOP_PREFIX is repeated in bin/hadoop and hadoop-daemon.sh (this 
can be optimized in a future patch).
# start-all.sh and stop-all.sh exit with a warning. Why retain the code after 
that? Do you expect users to delete the exit at the beginning?
# hadoop_error is not used in some cases; echo is still used instead. 
# hadoop-env.sh - we should document the GC configuration for the max, min, and 
young generation starting and maximum sizes. I also think the secondary 
namenode should just be set to the primary namenode settings. This can be done 
in another jira. BTW, nice job explicitly specifying the overridable functions 
in hadoop-env.sh!
# cowsay is cute, but it can get annoying :). Hopefully hadoop_usage is in every 
script (I checked, it is).


 Shell script rewrite
 

 Key: HADOOP-9902
 URL: https://issues.apache.org/jira/browse/HADOOP-9902
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: releasenotes
 Fix For: 3.0.0

 Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
 HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
 HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
 HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
 HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
 HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt


 Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101217#comment-14101217
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

BTW, I forgot to include the main part of my comment: +1 for the patch with the 
comments addressed (comments which explicitly state that things can be done in 
another jira can be addressed separately).

Thanks [~aw] for the rewrite!



[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101386#comment-14101386
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

bq. Yes. Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one? In any case, I didn't 
see a reason to have an explicit check for something that never existed as a 
result, especially considering how much other, actually deprecated stuff is 
there.
When you say hooked into the hadoop command, do you mean usage? If so, that 
might be a bug. [~brandonli], can bin/hadoop be used to start the nfs gateway 
and portmap? In that case, bin/hadoop may need to include them in the case 
statement to trigger those commands via the hdfs script.

bq. Shame on you for ruining my easter egg... but your check wasn't very 
thorough
Sorry. I know one now. A script named with three letters? Did I miss more?




[jira] [Comment Edited] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101386#comment-14101386
 ] 

Suresh Srinivas edited comment on HADOOP-9902 at 8/18/14 9:57 PM:
--

bq. Yes. Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one? In any case, I didn't 
see a reason to have an explicit check for something that never existed as a 
result, especially considering how much other, actually deprecated stuff is 
there.
When you say hooked into the hadoop command, do you mean usage? If so, that 
might be a bug. [~brandonli], can bin/hadoop be used to start the nfs gateway 
and portmap? In that case, bin/hadoop may need to include them in the case 
statement to trigger those commands via the hdfs script.

bq. Shame on you for ruining my easter egg... 
Sorry

bq. but your check wasn't very thorough
I know one now. A script named with three letters? Did I miss more?



was (Author: sureshms):
bq. Yes. Those commands were never hooked into the hadoop command in the Apache 
source that I saw... but I guess I could have missed one? In any case, I didn't 
see a reason to have an explicit check for something that never existed as a 
result, especially considering how much other, actually deprecated stuff is 
there.
When you say hooked into hadoop command, do you mean usage? If so, that might 
be a bug. [~brandonli], can bin/hadoop be used to start the nfs gateway and 
portmap? In that case in bin/hadoop may need to include it in the case to 
trigger those commands using hdfs script.

bq. Shame on you for ruining my easter egg... but your check wasn't very 
thorough
Sorry. I know one now. A script named with three letters? Did I miss more?




[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101406#comment-14101406
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

bq. Nope. I specifically mean 'hadoop portmap' and 'hadoop nfs3' never worked. 
The code always declared them as a deprecated command and to run hdfs instead.

Doesn't the following from the old script print a warning and delegate nfs3 and 
portmap to the hdfs script?
{noformat}
namenode|secondarynamenode|datanode|dfs|dfsadmin|fsck|balancer|fetchdt|oiv|dfsgroups|portmap|nfs3)
  echo "DEPRECATED: Use of this script to execute hdfs command is deprecated." 1>&2
  echo "Instead use the hdfs command for it." 1>&2
  echo "" 1>&2
  #try to locate hdfs and if present, delegate to it.
  shift
  if [ -f "${HADOOP_HDFS_HOME}"/bin/hdfs ]; then
    exec "${HADOOP_HDFS_HOME}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
  elif [ -f "${HADOOP_PREFIX}"/bin/hdfs ]; then
    exec "${HADOOP_PREFIX}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
  else
    echo "HADOOP_HDFS_HOME not found!"
    exit 1
  fi
  ;;
{noformat}



[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096204#comment-14096204
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

[~aw], I would like to review these scripts as well. Please give me until next 
Wednesday (earlier if I can find time). In general, this is a major rewrite. It 
could have been done in multiple increments in separate jiras to make review 
easier.

Some high-level comments - Are there any concerns with the existing 
environments in mandating bash v3? Also, can you please move the new 
functionality (jnipath, distch) to a separate jira instead of mixing it with 
the rewrite?




[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096244#comment-14096244
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

[~chris.douglas], this is very good work done by [~aw] to do the much needed 
cleanup. However, other than Roman, I have not seen any committer review this 
change thoroughly and be ready to +1 it. Even Roman has a bunch of caveats. I 
am not sure reviews can be effective where a rewrite and the addition of 
functionality have all happened together. If the only concern is this patch 
becoming stale, I will help in rebasing it.



[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096249#comment-14096249
 ] 

Suresh Srinivas commented on HADOOP-9902:
-

bq. While thorough review could take awhile, validating the general direction 
should be quick.
I am happy with the general direction. My concern is about possible 
incompatibilities and breaking the existing set of tools. Also bugs (which we 
can always fix and stabilize in trunk).



[jira] [Comment Edited] (HADOOP-9902) Shell script rewrite

2014-08-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096244#comment-14096244
 ] 

Suresh Srinivas edited comment on HADOOP-9902 at 8/14/14 1:03 AM:
--

[~chris.douglas], this is very good work by [~aw] to do the much needed 
cleanup. However other than Roman, I have not seen any committer review this 
change thoroughly and ready to +1 it. Even Roman has a bunch of caveats. Not 
sure if reviews can be effective where rewrite and addition of functionality 
all has happened together. If the only concern is this patch becoming stale, I 
will help in rebasing it.


was (Author: sureshms):
[~chris.douglas], this is very good work done by [~aw] to do the much needed 
cleanup. However other than Roman, I have not seen any committer review this 
change thoroughly and ready to +1 it. Even Roman has bunch of caveats. Not sure 
if reviews can be effective where rewrite and addition of functionality all has 
happened together. It the only concern is this patch becoming stale, I will 
help in rebasing it.



[jira] [Commented] (HADOOP-8069) Enable TCP_NODELAY by default for IPC

2014-07-24 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073789#comment-14073789
 ] 

Suresh Srinivas commented on HADOOP-8069:
-

+1 for enabling TCP_NODELAY by default.

 Enable TCP_NODELAY by default for IPC
 -

 Key: HADOOP-8069
 URL: https://issues.apache.org/jira/browse/HADOOP-8069
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8069.txt


 I think we should switch the default for the IPC client and server NODELAY 
 options to true. As wikipedia says:
 {quote}
 In general, since Nagle's algorithm is only a defense against careless 
 applications, it will not benefit a carefully written application that takes 
 proper care of buffering; the algorithm has either no effect, or negative 
 effect on the application.
 {quote}
 Since our IPC layer is well contained and does its own buffering, we 
 shouldn't be careless.
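At the socket level, the switch being discussed is a one-line option; the sketch below uses plain {{java.net.Socket}} for illustration (in Hadoop the setting is wired through the IPC client and server configuration rather than set directly like this):

```java
import java.net.Socket;
import java.net.SocketException;

public class TcpNoDelayDemo {
    // Returns a socket with Nagle's algorithm disabled, so small writes
    // are sent immediately instead of being coalesced into larger packets.
    static Socket noDelaySocket() throws SocketException {
        Socket s = new Socket();
        s.setTcpNoDelay(true);
        return s;
    }

    public static void main(String[] args) throws Exception {
        Socket s = noDelaySocket();
        System.out.println(s.getTcpNoDelay()); // true once NODELAY is enabled
        s.close();
    }
}
```

Since the IPC layer already batches its writes into full request buffers, disabling Nagle's algorithm here trades nothing away and removes the added latency.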



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10782) Typo in DataChecksum classs

2014-07-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10782:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
   Status: Resolved  (was: Patch Available)

I committed the change. Thank you [~jingguo] for the patch.

 Typo in DataChecksum classs
 ---

 Key: HADOOP-10782
 URL: https://issues.apache.org/jira/browse/HADOOP-10782
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jingguo Yao
Assignee: Jingguo Yao
Priority: Trivial
 Fix For: 2.5.0

 Attachments: HADOOP-10782.patch

   Original Estimate: 5m
  Remaining Estimate: 5m





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-06-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028201#comment-14028201
 ] 

Suresh Srinivas commented on HADOOP-10389:
--

Can you please tone down the rudeness? The following comment is unnecessary for 
a productive discussion:
bq. I'm very familiar with C++ and I don't need a lecture on its advantages, 
having been a user for a decade.

You have already made your point on your C++ credentials. Let's continue the 
discussion in the right tone.

 Native RPCv9 client
 ---

 Key: HADOOP-10389
 URL: https://issues.apache.org/jira/browse/HADOOP-10389
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Binglin Chang
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
 HADOOP-10389.004.patch, HADOOP-10389.005.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10389) Native RPCv9 client

2014-06-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028221#comment-14028221
 ] 

Suresh Srinivas commented on HADOOP-10389:
--

[~wheat9], can you please provide information on how many lines of code are 
unnecessary and could be replaced by C++ standard libraries? Of course, that is 
one of the factors and not the only reason for choosing the direction of this 
solution.



[jira] [Commented] (HADOOP-10630) Possible race condition in RetryInvocationHandler

2014-05-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14011387#comment-14011387
 ] 

Suresh Srinivas commented on HADOOP-10630:
--

+1 for the patch, once the failover tests pass.

 Possible race condition in RetryInvocationHandler
 -

 Key: HADOOP-10630
 URL: https://issues.apache.org/jira/browse/HADOOP-10630
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HADOOP-10630.000.patch


 In one of our system tests with a NameNode HA setup, we ran 300 threads in 
 LoadGenerator. While one of the NameNodes was already in the active state and 
 had started to serve, we still saw one of the client threads fail all its 
 retries in a 20-second window. In the meanwhile, we saw a lot of the following 
 warning msg in the log:
 {noformat}
 WARN retry.RetryInvocationHandler: A failover has occurred since the start of 
 this method invocation attempt.
 {noformat}
 After checking the code, we see the following code in RetryInvocationHandler:
 {code}
 while (true) {
   // The number of times this invocation handler has ever been failed over,
   // before this method invocation attempt. Used to prevent concurrent
   // failed method invocations from triggering multiple failover attempts.
   long invocationAttemptFailoverCount;
   synchronized (proxyProvider) {
     invocationAttemptFailoverCount = proxyProviderFailoverCount;
   }
   ..
   if (action.action == RetryAction.RetryDecision.FAILOVER_AND_RETRY) {
     // Make sure that concurrent failed method invocations only cause a
     // single actual fail over.
     synchronized (proxyProvider) {
       if (invocationAttemptFailoverCount == proxyProviderFailoverCount) {
         proxyProvider.performFailover(currentProxy.proxy);
         proxyProviderFailoverCount++;
         currentProxy = proxyProvider.getProxy();
       } else {
         LOG.warn("A failover has occurred since the start of this method"
             + " invocation attempt.");
       }
     }
     invocationFailoverCount++;
   }
   ..
 {code}
 We can see that we refresh the value of currentProxy only when the thread 
 performs the failover (while holding the monitor of the proxyProvider). 
 Because currentProxy is not volatile, a thread that does not perform the 
 failover (in which case it will log the warning msg) may fail to see the new 
 value of currentProxy.
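The visibility hazard described above can be sketched in isolation (field and method names here are illustrative, not the actual RetryInvocationHandler members): a reference that is written under a lock but read without one needs to be declared volatile for other threads to reliably see the update.

```java
public class FailoverState {
    // Without volatile, a thread that skips the synchronized failover path
    // could keep reading a stale proxy reference indefinitely.
    private volatile Object currentProxy;
    private long failoverCount;

    public FailoverState(Object initialProxy) {
        this.currentProxy = initialProxy;
    }

    // Write path: performed under the lock so only one failover happens.
    public synchronized void performFailover(Object newProxy) {
        currentProxy = newProxy;
        failoverCount++;
    }

    // Read path: a volatile read sees the latest completed failover
    // even without taking the lock.
    public Object getProxy() {
        return currentProxy;
    }

    public synchronized long getFailoverCount() {
        return failoverCount;
    }

    public static void main(String[] args) {
        FailoverState state = new FailoverState("proxy-0");
        state.performFailover("proxy-1");
        System.out.println(state.getProxy()); // proxy-1
    }
}
```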



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996638#comment-13996638
 ] 

Suresh Srinivas commented on HADOOP-10566:
--

[~benoyantony] I have committed this to trunk. On branch-2 the code does not 
compile. Can you post a branch-2 patch?
{noformat}
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile 
(default-testCompile) on project hadoop-hdfs: Compilation failure: Compilation 
failure:
[ERROR] 
/Users/suresh/Documents/workspace/committer/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java:[413,8]
 cannot find symbol
[ERROR] symbol: variable jspWriterOutput
[ERROR] jspWriterOutput += (String) args[0];
[ERROR] 
/Users/suresh/Documents/workspace/committer/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java:[418,4]
 cannot find symbol
[ERROR] symbol  : variable jspWriterOutput
[ERROR] location: class org.apache.hadoop.hdfs.server.common.TestJspHelper
[ERROR] 
/Users/suresh/Documents/workspace/committer/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java:[426,43]
 cannot find symbol
[ERROR] symbol  : variable jspWriterOutput
[ERROR] location: class org.apache.hadoop.hdfs.server.common.TestJspHelper
{noformat}

 Refactor proxyservers out of ProxyUsers
 ---

 Key: HADOOP-10566
 URL: https://issues.apache.org/jira/browse/HADOOP-10566
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10566.patch, HADOOP-10566.patch, 
 HADOOP-10566.patch, HADOOP-10566.patch, HADOOP-10566.patch


 HADOOP-10498 added the proxyservers feature in ProxyUsers. It is beneficial to 
 treat this as a separate feature since:
 # ProxyUsers is per proxyuser, whereas proxyservers is per cluster. The 
 cardinality is different. 
 # ProxyUsers.authorize() and ProxyUsers.isproxyUser() are synchronized and 
 hence share the same lock, which impacts performance.
 Since these are two separate features, it will be an improvement to keep them 
 separate. It also enables one to fine-tune each feature independently.
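The performance point can be sketched as follows: when the per-cluster server list lives in its own class backed by a concurrent set, membership checks no longer contend on the lock that the synchronized proxy-user methods share (class and method names below are illustrative, not Hadoop's actual API):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ProxyServersSketch {
    // Concurrent set: lookups are lock-free and do not serialize against
    // any synchronized proxy-user authorization code.
    private final Set<String> servers = ConcurrentHashMap.newKeySet();

    // Replace the configured server list (e.g. on a config refresh).
    public void refresh(Set<String> newServers) {
        servers.clear();
        servers.addAll(newServers);
    }

    public boolean isProxyServer(String host) {
        return servers.contains(host);
    }

    public static void main(String[] args) {
        ProxyServersSketch p = new ProxyServersSketch();
        p.refresh(Set.of("nn1.example.com", "nn2.example.com"));
        System.out.println(p.isProxyServer("nn1.example.com")); // true
    }
}
```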



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10566) Refactor proxyservers out of ProxyUsers

2014-05-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996618#comment-13996618
 ] 

Suresh Srinivas commented on HADOOP-10566:
--

I will commit this patch shortly.



[jira] [Commented] (HADOOP-10448) Support pluggable mechanism to specify proxy user settings

2014-05-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13996643#comment-13996643
 ] 

Suresh Srinivas commented on HADOOP-10448:
--

[~daryn], do you have any further comments?

 Support pluggable mechanism to specify proxy user settings
 --

 Key: HADOOP-10448
 URL: https://issues.apache.org/jira/browse/HADOOP-10448
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch, 
 HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch, HADOOP-10448.patch


 We have a requirement to support a large number of superusers (users who 
 impersonate another user) 
 (http://hadoop.apache.org/docs/r1.2.1/Secure_Impersonation.html). 
 Currently each superuser needs to be defined in core-site.xml via proxyuser 
 settings. This will be cumbersome when there are 1000 entries.
 It seems useful to have a pluggable mechanism to specify proxy user settings, 
 with the current approach as the default. 
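One shape such a pluggable mechanism could take — purely an illustrative sketch, not Hadoop's actual interface — is a provider that answers impersonation checks, with a statically configured map (as with today's core-site.xml settings) as the default implementation:

```java
import java.util.Map;
import java.util.Set;

public class PluggableProxyDemo {
    // Hypothetical extension point; name and signature are assumptions.
    interface ImpersonationProvider {
        boolean authorize(String superUser, String proxiedUser);
    }

    // Default behavior: a static allow-list, mirroring configured
    // proxyuser settings. An LDAP- or database-backed provider could
    // implement the same interface for thousands of superusers.
    static class ConfiguredProvider implements ImpersonationProvider {
        private final Map<String, Set<String>> allowed;

        ConfiguredProvider(Map<String, Set<String>> allowed) {
            this.allowed = allowed;
        }

        @Override
        public boolean authorize(String superUser, String proxiedUser) {
            Set<String> users = allowed.get(superUser);
            return users != null && users.contains(proxiedUser);
        }
    }

    public static void main(String[] args) {
        ImpersonationProvider p =
            new ConfiguredProvider(Map.of("oozie", Set.of("alice")));
        System.out.println(p.authorize("oozie", "alice")); // true
        System.out.println(p.authorize("oozie", "bob"));   // false
    }
}
```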



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990036#comment-13990036
 ] 

Suresh Srinivas commented on HADOOP-10571:
--

I agree with [~arpitagarwal]. Let's decouple the current set of improvements 
from SLF4J.

 Use Log.*(Object, Throwable) overload to log exceptions
 ---

 Key: HADOOP-10571
 URL: https://issues.apache.org/jira/browse/HADOOP-10571
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HADOOP-10571.01.patch


 When logging an exception, we often convert the exception to string or call 
 {{.getMessage}}. Instead we can use the log method overloads which take 
 {{Throwable}} as a parameter.
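The difference is small but matters for diagnosis; the sketch below uses {{java.util.logging}} so it runs standalone (the commons-logging {{Log}} used in Hadoop offers the same two-argument overloads such as {{warn(Object, Throwable)}}):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogThrowableDemo {
    private static final Logger LOG =
        Logger.getLogger(LogThrowableDemo.class.getName());

    public static void main(String[] args) {
        Exception e = new IllegalStateException("disk full");

        // Discards the stack trace: only the message text is recorded.
        LOG.log(Level.WARNING, "Write failed: " + e.getMessage());

        // Passes the Throwable itself, so log handlers can print the
        // full stack trace and any chained causes.
        LOG.log(Level.WARNING, "Write failed", e);
    }
}
```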



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988257#comment-13988257
 ] 

Suresh Srinivas commented on HADOOP-10571:
--

There are some places where we specifically avoid printing the stack trace. When 
making this change, we need to be careful to keep the exception printing terse 
where necessary.

 Use Log.*(Object, Throwable) overload to log exceptions
 ---

 Key: HADOOP-10571
 URL: https://issues.apache.org/jira/browse/HADOOP-10571
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Arpit Agarwal

 When logging an exception, we often convert the exception to string or call 
 {{.getMessage}}. Instead we can use the log method overloads which take 
 {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988280#comment-13988280
 ] 

Suresh Srinivas commented on HADOOP-10571:
--

This might be an opportunity to add a comment saying that the exception is kept 
terse in the log message by design, to avoid someone changing it to print the 
verbose stack trace.

 Use Log.*(Object, Throwable) overload to log exceptions
 ---

 Key: HADOOP-10571
 URL: https://issues.apache.org/jira/browse/HADOOP-10571
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HADOOP-10571.01.patch


 When logging an exception, we often convert the exception to string or call 
 {{.getMessage}}. Instead we can use the log method overloads which take 
 {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HDFS-6320 to HADOOP-10562:


  Component/s: (was: namenode)
Affects Version/s: (was: 1.2.1)
   (was: 2.0.0-alpha)
   2.0.0-alpha
   1.2.1
  Key: HADOOP-10562  (was: HDFS-6320)
  Project: Hadoop Common  (was: Hadoop HDFS)

 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.1, 2.0.0-alpha
Reporter: Suresh Srinivas

 Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Priority: Critical  (was: Major)

 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 1.2.1
Reporter: Suresh Srinivas
Priority: Critical

 Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Attachment: HADOOP-10562.patch

Patch to print the exception stack trace

 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 1.2.1
Reporter: Suresh Srinivas
Priority: Critical
 Attachments: HADOOP-10562.patch


 Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Status: Patch Available  (was: Open)

 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.1, 2.0.0-alpha
Reporter: Suresh Srinivas
Priority: Critical
 Attachments: HADOOP-10562.patch


 Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Attachment: HADOOP-10562.1.patch

Slightly updated patch with two additions:
# Print the current number of tokens (we saw the namenode going out of memory 
while creating an array in this part of the code; this will help debug it)
# Some code cleanup

 Namenode exits on exception without printing stack trace in 
 AbstractDelegationTokenSecretManager
 

 Key: HADOOP-10562
 URL: https://issues.apache.org/jira/browse/HADOOP-10562
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 1.2.1
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Critical
 Attachments: HADOOP-10562.1.patch, HADOOP-10562.branch-1.patch, 
 HADOOP-10562.patch


 Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2014-04-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13975775#comment-13975775
 ] 

Suresh Srinivas commented on HADOOP-9919:
-

+1 for this change. [~ajisakaa], I will commit this shortly.

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.
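 A replacement stanza for the YARN daemons might look like the following (the 
 filenames are illustrative; metrics2 property prefixes conventionally match 
 the daemon name):

```properties
# Hypothetical YARN-era replacements for the removed JobTracker/TaskTracker lines.
#nodemanager.sink.file.filename=nodemanager-metrics.out
#resourcemanager.sink.file.filename=resourcemanager-metrics.out
```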



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


Summary: Update hadoop-metrics2.properties to Yarn  (was: Rewrite 
hadoop-metrics2.properties)

 Update hadoop-metrics2.properties to Yarn
 -

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


Summary: Update hadoop-metrics2.properties examples to Yarn  (was: Update 
hadoop-metrics2.properties to Yarn)

 Update hadoop-metrics2.properties examples to Yarn
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.

 Update hadoop-metrics2.properties examples to Yarn
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes

2014-04-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13965610#comment-13965610
 ] 

Suresh Srinivas commented on HADOOP-10350:
--

bq. Will commit this soon.
[~vinayrpet], you need a +1 from a committer to commit the patch. That said, I 
am +1 for this change.

 BUILDING.txt should mention openssl dependency required for hadoop-pipes
 

 Key: HADOOP-10350
 URL: https://issues.apache.org/jira/browse/HADOOP-10350
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HADOOP-10350.patch, HADOOP-10350.patch


 BUILDING.txt should mention openssl dependency required for hadoop-pipes



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10441) Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be correctly processed by Ganglia

2014-03-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948215#comment-13948215
 ] 

Suresh Srinivas commented on HADOOP-10441:
--

This should be a blocker for 2.4.0. Marking it as such.

 Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be 
 correctly processed by Ganglia
 

 Key: HADOOP-10441
 URL: https://issues.apache.org/jira/browse/HADOOP-10441
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor

 The issue is reported by [~dsen]:
 Recently added Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit 
 can't be correctly processed by Ganglia because its name contains a "/".
 Proposal: Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit should 
 be renamed to rpc.RetryCache.NameNodeRetryCache.CacheHit
 Here - org.apache.hadoop.ipc.metrics.RetryCacheMetrics#RetryCacheMetrics
 {code}
   RetryCacheMetrics(RetryCache retryCache) {
     name = "RetryCache/" + retryCache.getCacheName();
     registry = new MetricsRegistry(name);
     if (LOG.isDebugEnabled()) {
       LOG.debug("Initialized " + registry);
     }
   }
 {code}
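The proposed rename amounts to changing the separator used when the registry name is built. A minimal sketch of the idea (not the committed patch):

```java
// Sketch: build the metric name with "." instead of "/", since Ganglia
// cannot process metric names that contain "/".
public class RetryCacheNameDemo {
    static String metricsName(String cacheName) {
        return "RetryCache." + cacheName; // was: "RetryCache/" + cacheName
    }

    public static void main(String[] args) {
        String name = metricsName("NameNodeRetryCache");
        System.out.println(name); // RetryCache.NameNodeRetryCache
        assert !name.contains("/");
    }
}
```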



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10441) Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be correctly processed by Ganglia

2014-03-26 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10441:
-

Priority: Blocker  (was: Minor)

 Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be 
 correctly processed by Ganglia
 

 Key: HADOOP-10441
 URL: https://issues.apache.org/jira/browse/HADOOP-10441
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Blocker

 The issue is reported by [~dsen]:
 Recently added Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit 
 can't be correctly processed by Ganglia because its name contains a "/".
 Proposal: Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit should 
 be renamed to rpc.RetryCache.NameNodeRetryCache.CacheHit
 Here - org.apache.hadoop.ipc.metrics.RetryCacheMetrics#RetryCacheMetrics
 {code}
   RetryCacheMetrics(RetryCache retryCache) {
     name = "RetryCache/" + retryCache.getCacheName();
     registry = new MetricsRegistry(name);
     if (LOG.isDebugEnabled()) {
       LOG.debug("Initialized " + registry);
     }
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10441) Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be correctly processed by Ganglia

2014-03-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948228#comment-13948228
 ] 

Suresh Srinivas commented on HADOOP-10441:
--

+1 for the patch. Are there other metrics introduced that have "/" in them? We 
should do a quick review.

 Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit can't be 
 correctly processed by Ganglia
 

 Key: HADOOP-10441
 URL: https://issues.apache.org/jira/browse/HADOOP-10441
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Blocker
 Attachments: HADOOP-10441.000.patch


 The issue is reported by [~dsen]:
 Recently added Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit 
 can't be correctly processed by Ganglia because its name contains a "/".
 Proposal: Namenode metric rpc.RetryCache/NameNodeRetryCache.CacheHit should 
 be renamed to rpc.RetryCache.NameNodeRetryCache.CacheHit
 Here - org.apache.hadoop.ipc.metrics.RetryCacheMetrics#RetryCacheMetrics
 {code}
   RetryCacheMetrics(RetryCache retryCache) {
     name = "RetryCache/" + retryCache.getCacheName();
     registry = new MetricsRegistry(name);
     if (LOG.isDebugEnabled()) {
       LOG.debug("Initialized " + registry);
     }
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10434) Is it possible to use df to calculating the dfs usage instead of du

2014-03-25 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10434:
-

Summary: Is it possible to use df to calculating the dfs usage instead of 
du  (was: Is it possible to use df to calculating the dfs usage indtead of 
du)

 Is it possible to use df to calculating the dfs usage instead of du
 ---

 Key: HADOOP-10434
 URL: https://issues.apache.org/jira/browse/HADOOP-10434
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.3.0
Reporter: MaoYuan Xian
Priority: Minor

 When we run a datanode on a machine with a big disk volume, we found that the 
 du operations from org.apache.hadoop.fs.DU's DURefreshThread cost a lot of 
 disk performance.
 As we use the whole disk for hdfs storage, it is possible to calculate volume 
 usage via the df command. Is it necessary to add a df option for usage 
 calculation in hdfs 
 (org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice)?
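The df-style calculation being proposed can be approximated in plain Java: a single filesystem-level query per volume, instead of recursively walking every file the way du (and DURefreshThread) does. This is a sketch of the idea, not the BlockPoolSlice code:

```java
import java.io.File;

public class DfStyleUsage {
    public static void main(String[] args) {
        File volume = new File(args.length > 0 ? args[0] : ".");
        // df-style: one filesystem query, O(1) in the number of files.
        long used = volume.getTotalSpace() - volume.getFreeSpace();
        // du-style (what DURefreshThread does) would instead stat every file
        // under the volume, which is what hurts disk performance.
        System.out.println("used bytes (df-style): " + used);
    }
}
```

The trade-off is that df-style accounting attributes everything on the volume to HDFS, which is only correct when the disk is dedicated to HDFS storage, as the reporter assumes.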



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10015) UserGroupInformation prints out excessive ERROR warnings

2014-03-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13943221#comment-13943221
 ] 

Suresh Srinivas commented on HADOOP-10015:
--

Comment:
An isDebugEnabled check is needed around the section that prints the debug log.

 UserGroupInformation prints out excessive ERROR warnings
 

 Key: HADOOP-10015
 URL: https://issues.apache.org/jira/browse/HADOOP-10015
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Haohui Mai
Assignee: Nicolas Liochon
 Fix For: 3.0.0, 2.3.0, 2.4.0

 Attachments: 10015.v3.patch, 10015.v4.patch, 10015.v5.patch, 
 HADOOP-10015.000.patch, HADOOP-10015.001.patch, HADOOP-10015.002.patch


 In UserGroupInformation::doAs(), it prints out a log at ERROR level whenever 
 it catches an exception.
 However, it prints benign warnings in the following paradigm:
 {noformat}
   try {
     ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
       @Override
       public FileStatus run() throws Exception {
         return fs.getFileStatus(nonExist);
       }
     });
   } catch (FileNotFoundException e) {
   }
 {noformat}
 For example, FileSystem#exists() follows this paradigm, and Distcp uses it 
 too. The exception is expected, so there should be no ERROR logs printed in 
 the namenode logs.
 Currently, the user quickly finds that the namenode log fills up with 
 _benign_ ERROR logs when he or she runs distcp in a secure setup. This 
 behavior confuses operators.
 This jira proposes to move the log to DEBUG level.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13940188#comment-13940188
 ] 

Suresh Srinivas commented on HADOOP-10398:
--

If this is going to take time to resolve, is reverting HADOOP-10078 an option?

 KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
 HADOOP-10078
 ---

 Key: HADOOP-10398
 URL: https://issues.apache.org/jira/browse/HADOOP-10398
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: a.txt, c10398_20140310.patch


 {code}
 //KerberosAuthenticator.java
   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
     LOG.debug("JDK performed authentication on our behalf.");
     // If the JDK already did the SPNEGO back-and-forth for
     // us, just pull out the token.
     AuthenticatedURL.extractToken(conn, token);
     return;
   } else ...
 {code}
 The problem with the code above is that HTTP_OK does not imply that 
 authentication completed.  We should check whether the token can be extracted 
 successfully.
 This problem was reported by [~bowenzhangusa] in [this 
 comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
  earlier.
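The suggested hardening (do not trust HTTP_OK alone; also verify that a token was actually extracted) could be sketched as follows. The helper below is hypothetical, not the attached patch:

```java
// Hypothetical check: HTTP_OK by itself does not prove the JDK completed
// SPNEGO; only treat the response as authenticated if a token also came back.
public class TokenCheckDemo {
    static boolean jdkDidAuthenticate(int responseCode, String extractedToken) {
        return responseCode == 200
                && extractedToken != null
                && !extractedToken.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(jdkDidAuthenticate(200, "u=alice&t=kerberos")); // true
        System.out.println(jdkDidAuthenticate(200, null));                 // false
    }
}
```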



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8691) FsShell can print Found xxx items unnecessarily often

2014-03-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924216#comment-13924216
 ] 

Suresh Srinivas commented on HADOOP-8691:
-

[~wheat9], this is an incompatible change, given that the output of the command 
changes. Can you please mark this as incompatible? Also please add Release 
Notes to cover the details of the incompatibility.

 FsShell can print Found xxx items unnecessarily often
 ---

 Key: HADOOP-8691
 URL: https://issues.apache.org/jira/browse/HADOOP-8691
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Daryn Sharp
Priority: Minor
 Fix For: 2.4.0

 Attachments: HADOOP-8691.txt


 The "Found xxx items" header that is printed with a file listing will often 
 appear multiple times in not-so-helpful ways in light of globbing.  For 
 example:
 {noformat}
 $ hadoop fs -ls 'teradata/*'  
 Found 1 items
 -rw-r--r--   1 someuser somegroup  0 2012-08-06 16:55 
 teradata/_SUCCESS
 Found 1 items
 -rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
 teradata/part-m-0
 Found 1 items
 -rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
 teradata/part-m-1
 {noformat}
 Seems like it should just print "Found 3 items" once at the top, or maybe not 
 even print a header at all.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8691) FsShell can print Found xxx items unnecessarily often

2014-03-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13924218#comment-13924218
 ] 

Suresh Srinivas commented on HADOOP-8691:
-

Additionally, this change should move to the Incompatible section in CHANGES.txt 
(that change could be done without filing a jira).

 FsShell can print Found xxx items unnecessarily often
 ---

 Key: HADOOP-8691
 URL: https://issues.apache.org/jira/browse/HADOOP-8691
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Daryn Sharp
Priority: Minor
 Fix For: 2.4.0

 Attachments: HADOOP-8691.txt


 The "Found xxx items" header that is printed with a file listing will often 
 appear multiple times in not-so-helpful ways in light of globbing.  For 
 example:
 {noformat}
 $ hadoop fs -ls 'teradata/*'  
 Found 1 items
 -rw-r--r--   1 someuser somegroup  0 2012-08-06 16:55 
 teradata/_SUCCESS
 Found 1 items
 -rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
 teradata/part-m-0
 Found 1 items
 -rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
 teradata/part-m-1
 {noformat}
 Seems like it should just print "Found 3 items" once at the top, or maybe not 
 even print a header at all.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10078) KerberosAuthenticator always does SPNEGO

2014-03-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13921342#comment-13921342
 ] 

Suresh Srinivas commented on HADOOP-10078:
--

[~tucu00], can you please take a look?

 KerberosAuthenticator always does SPNEGO
 

 Key: HADOOP-10078
 URL: https://issues.apache.org/jira/browse/HADOOP-10078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Minor
 Fix For: 2.3.0

 Attachments: HADOOP-10078.patch


 HADOOP-8883 made this change to {{KerberosAuthenticator}}
 {code:java}
 @@ -158,7 +158,7 @@ public class KerberosAuthenticator implements Authenticator {
        conn.setRequestMethod(AUTH_HTTP_METHOD);
        conn.connect();
 
 -      if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
 +      if (conn.getRequestProperty(AUTHORIZATION) != null &&
            conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
          LOG.debug("JDK performed authentication on our behalf.");
          // If the JDK already did the SPNEGO back-and-forth for
          // us, just pull out the token.
 {code}
 to fix OOZIE-1010.  However, as [~aklochkov] pointed out recently, this 
 inadvertently made the if statement always false because it turns out that 
 the JDK excludes some headers, including the Authorization one that we're 
 checking (see discussion 
 [here|https://issues.apache.org/jira/browse/HADOOP-8883?focusedCommentId=13807596page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13807596]).
   This means that it was always either calling {{doSpnegoSequence(token);}} 
 or {{getFallBackAuthenticator().authenticate(url, token);}}, which is 
 actually the old behavior that existed before HADOOP-8855 changed it in the 
 first place.
 In any case, I tried removing the Authorization check and Oozie still works 
 with and without Kerberos; the NPE reported in OOZIE-1010 has since been 
 properly fixed as a side effect of a similar issue in OOZIE-1368.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10378) Typo in Help

2014-03-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13918619#comment-13918619
 ] 

Suresh Srinivas commented on HADOOP-10378:
--

+1 for the patch.

 Typo in Help
 

 Key: HADOOP-10378
 URL: https://issues.apache.org/jira/browse/HADOOP-10378
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.9, 2.4.0
Reporter: Mit Desai
Assignee: Mit Desai
 Attachments: HADOOP-10378-v2.patch, HADOOP-10378.patch


 There is a typo in the description of the following command
 hdfs dfs -help
 {noformat}
 -count [-q] <path> ...:   Count the number of directories, files and 
 bytes under the paths
   that match the specified file pattern.  The output columns are:
   DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
   QUOTA REMAINING_QUATA SPACE_QUOTA REMAINING_SPACE_QUOTA 
 DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
 {noformat}
 REMAINING_QUATA should be REMAINING_QUOTA



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10378) Typo in hdfs dfs -help

2014-03-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10378:
-

Summary: Typo in hdfs dfs -help  (was: Typo in Help)

 Typo in hdfs dfs -help
 --

 Key: HADOOP-10378
 URL: https://issues.apache.org/jira/browse/HADOOP-10378
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.9, 2.4.0
Reporter: Mit Desai
Assignee: Mit Desai
 Attachments: HADOOP-10378-v2.patch, HADOOP-10378.patch


 There is a typo in the description of the following command
 hdfs dfs -help
 {noformat}
 -count [-q] <path> ...:   Count the number of directories, files and 
 bytes under the paths
   that match the specified file pattern.  The output columns are:
   DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
   QUOTA REMAINING_QUATA SPACE_QUOTA REMAINING_SPACE_QUOTA 
 DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
 {noformat}
 REMAINING_QUATA should be REMAINING_QUOTA



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10378) Typo in help printed by hdfs dfs -help

2014-03-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10378:
-

Summary: Typo in help printed by hdfs dfs -help  (was: Typo in hdfs dfs 
-help)

 Typo in help printed by hdfs dfs -help
 --

 Key: HADOOP-10378
 URL: https://issues.apache.org/jira/browse/HADOOP-10378
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.9, 2.4.0
Reporter: Mit Desai
Assignee: Mit Desai
 Attachments: HADOOP-10378-v2.patch, HADOOP-10378.patch


 There is a typo in the description of the following command
 hdfs dfs -help
 {noformat}
 -count [-q] <path> ...:   Count the number of directories, files and 
 bytes under the paths
   that match the specified file pattern.  The output columns are:
   DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
   QUOTA REMAINING_QUATA SPACE_QUOTA REMAINING_SPACE_QUOTA 
 DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
 {noformat}
 REMAINING_QUATA should be REMAINING_QUOTA



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10378) Typo in help printed by hdfs dfs -help

2014-03-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10378:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk and branch-2. Thank you [~mitdesai]!

 Typo in help printed by hdfs dfs -help
 --

 Key: HADOOP-10378
 URL: https://issues.apache.org/jira/browse/HADOOP-10378
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.9, 2.4.0
Reporter: Mit Desai
Assignee: Mit Desai
 Fix For: 2.5.0

 Attachments: HADOOP-10378-v2.patch, HADOOP-10378.patch


 There is a typo in the description of the following command
 hdfs dfs -help
 {noformat}
 -count [-q] <path> ...:   Count the number of directories, files and 
 bytes under the paths
   that match the specified file pattern.  The output columns are:
   DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
   QUOTA REMAINING_QUATA SPACE_QUOTA REMAINING_SPACE_QUOTA 
 DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
 {noformat}
 REMAINING_QUATA should be REMAINING_QUOTA



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10078) KerberosAuthenticator always does SPNEGO

2014-02-24 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13910724#comment-13910724
 ] 

Suresh Srinivas commented on HADOOP-10078:
--

[~tucu00], can you please respond to [~bowenzhangusa]?

 KerberosAuthenticator always does SPNEGO
 

 Key: HADOOP-10078
 URL: https://issues.apache.org/jira/browse/HADOOP-10078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Minor
 Fix For: 2.3.0

 Attachments: HADOOP-10078.patch


 HADOOP-8883 made this change to {{KerberosAuthenticator}}
 {code:java}
 @@ -158,7 +158,7 @@ public class KerberosAuthenticator implements 
 Authenticator {
conn.setRequestMethod(AUTH_HTTP_METHOD);
conn.connect();

 -  if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
 +  if (conn.getRequestProperty(AUTHORIZATION) != null && 
 conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
  LOG.debug("JDK performed authentication on our behalf.");
  // If the JDK already did the SPNEGO back-and-forth for
  // us, just pull out the token.
 {code}
 to fix OOZIE-1010.  However, as [~aklochkov] pointed out recently, this 
 inadvertently made the if statement always false because it turns out that 
 the JDK excludes some headers, including the Authorization one that we're 
 checking (see discussion 
 [here|https://issues.apache.org/jira/browse/HADOOP-8883?focusedCommentId=13807596page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13807596]).
   This means that it was always either calling {{doSpnegoSequence(token);}} 
 or {{getFallBackAuthenticator().authenticate(url, token);}}, which is 
 actually the old behavior that existed before HADOOP-8855 changed it in the 
 first place.
 In any case, I tried removing the Authorization check and Oozie still works 
 with and without Kerberos; the NPE reported in OOZIE-1010 has since been 
 properly fixed as a side effect of a similar issue in OOZIE-1368.
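The JDK behavior described above can be observed directly: on stock OpenJDK/Oracle JDKs, {{HttpURLConnection#getRequestProperty}} filters out security-sensitive headers such as Authorization. A small sketch (the URL is a placeholder; {{openConnection()}} performs no network I/O before {{connect()}}):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class RestrictedHeaderDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; openConnection() does not open a socket yet.
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://example.org/").openConnection();
        conn.setRequestProperty("Authorization", "Negotiate dummy-token");
        // Stock JDKs exclude security-sensitive headers (Authorization,
        // Proxy-Authorization) from getRequestProperty(), so this typically
        // prints null even though the header was just set. That is why the
        // patched check above always evaluated to false.
        System.out.println(conn.getRequestProperty("Authorization"));
    }
}
```

This explains why the null-check on the Authorization header never fired in practice.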



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10311) Cleanup vendor names in the code base

2014-01-29 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-10311:


 Summary: Cleanup vendor names in the code base
 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-29 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10311:
-

Summary: Cleanup vendor names from the code base  (was: Cleanup vendor 
names in the code base)

 Cleanup vendor names from the code base
 ---

 Key: HADOOP-10311
 URL: https://issues.apache.org/jira/browse/HADOOP-10311
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Suresh Srinivas
Priority: Blocker





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10311) Cleanup vendor names from the code base

2014-01-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13886242#comment-13886242
 ] 

Suresh Srinivas commented on HADOOP-10311:
--

Following should be changed:
{noformat}
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2118.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2118.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2118.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2118.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2118.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2118.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2118.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2118.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2118.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2118.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2118.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2118.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2115.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2115.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2116.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2116.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2116.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2116.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2116.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2116.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2116.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2116.halxg.cloudera.com,
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:
layers : [ default-rack, a2116.halxg.cloudera.com ]
./hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json:  
hostName : /default-rack/a2116.halxg.cloudera.com,
{noformat}

[jira] [Commented] (HADOOP-10300) Allowed deferred sending of call responses

2014-01-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13884525#comment-13884525
 ] 

Suresh Srinivas commented on HADOOP-10300:
--

Big +1 for this feature. This should reduce the number of handlers we currently 
need. The only thing we need to protect against is accepting so many requests 
that responding to them becomes a bottleneck. That can be addressed as we 
continue to work on this issue.

 Allowed deferred sending of call responses
 --

 Key: HADOOP-10300
 URL: https://issues.apache.org/jira/browse/HADOOP-10300
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp

 RPC handlers currently do not return until the RPC call completes and 
 response is sent, or a partially sent response has been queued for the 
 responder.  It would be useful for a proxy method to notify the handler to 
 not yet send the call's response.
 A potential use case is that a namespace handler in the NN might want to return 
 before the edit log is synced so it can service more requests and allow 
 increased batching of edits per sync.  Background syncing could later trigger 
 the sending of the call response to the client.
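The deferral idea can be sketched outside Hadoop's RPC classes (the names here are illustrative, not the actual {{Server}} API): the handler queues a completion action instead of replying inline, and a background responder flushes it later.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DeferredResponseSketch {
    // Completed calls whose responses have not been sent yet.
    static final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>();

    // Handler path: finish the work but defer sending the reply,
    // freeing the handler thread for the next request.
    static void handle(String callId) {
        pending.add(() -> System.out.println("response sent for " + callId));
    }

    public static void main(String[] args) {
        handle("call-1");
        handle("call-2");
        // Background responder (e.g. triggered after an edit-log sync)
        // drains the queue and sends the deferred responses.
        Runnable send;
        while ((send = pending.poll()) != null) {
            send.run();
        }
    }
}
```

In the NN scenario described above, the drain step would run after the edit log syncs, batching many edits per sync.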



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13883326#comment-13883326
 ] 

Suresh Srinivas commented on HADOOP-10255:
--

bq. Are we going to end up having 2 copies of HttpServer in 2.2 now? If so, I 
don't think it is a good idea from a maintenance perspective.
As agreed upon in HADOOP-10253, we will maintain two copies of HttpServer: one 
deprecated and retained for backward-compatibility reasons, the other a 
cleaned-up version to be used only within Hadoop. Undoing the cleanup is a lot 
of work and a step backward.

 Copy the HttpServer in 2.2 back to branch-2
 ---

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch, 
 HADOOP-10255.002.patch, HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10292) Bring HttpServer in branch-2.2 into branch-2

2014-01-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13883635#comment-13883635
 ] 

Suresh Srinivas commented on HADOOP-10292:
--

Thanks [~stack] for the help. Once you confirm that it is working, I will 
commit the change. 

 Bring HttpServer in branch-2.2 into branch-2
 

 Key: HADOOP-10292
 URL: https://issues.apache.org/jira/browse/HADOOP-10292
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10292.000.patch


 This jira is a follow-up jira of HADOOP-10255. It brings in the HttpServer in 
 branch-2.2 directly into branch-2 to restore the compatibility of HBase.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10292) Bring HttpServer in branch-2.2 into branch-2

2014-01-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13883844#comment-13883844
 ] 

Suresh Srinivas commented on HADOOP-10292:
--

+1 for the patch.

I am going to commit these changes soon. [~stack], if you do any more tests or 
find issues, please comment on this jira. We can have a separate follow up.

 Bring HttpServer in branch-2.2 into branch-2
 

 Key: HADOOP-10292
 URL: https://issues.apache.org/jira/browse/HADOOP-10292
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10292.000.patch


 This jira is a follow-up jira of HADOOP-10255. It brings in the HttpServer in 
 branch-2.2 directly into branch-2 to restore the compatibility of HBase.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Rename HttpServer into HttpServer2

2014-01-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13883849#comment-13883849
 ] 

Suresh Srinivas commented on HADOOP-10255:
--

[~wheat9], please file a jira if you have further cleanup for HttpServer2. One 
thing we could do in that jira is remove deprecated methods. I see one such 
method: #getPort().

+1 for the patch.

[~stack], once the copy of HttpServer is made in HBase, we can delete it from a 
future 2.x release. This is an incompatible change. However, given that this 
removal breaks compatibility only for HBase, and HBase will no longer use this 
class, such an incompatible change should be fine. Do you agree? We need to 
agree upon a release to align this change with.

 Rename HttpServer into HttpServer2
 --

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
 HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
 HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10255:
-

Summary: Rename HttpServer to HttpServer2 to retain older HttpServer in 
branch-2 for compatibility  (was: Rename HttpServer into HttpServer2)

 Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for 
 compatibility
 -

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
 HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
 HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10292) Bring HttpServer in branch-2.2 into branch-2

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10292:
-

Fix Version/s: 2.3.0

 Bring HttpServer in branch-2.2 into branch-2
 

 Key: HADOOP-10292
 URL: https://issues.apache.org/jira/browse/HADOOP-10292
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HADOOP-10292.000.patch


 This jira is a follow-up jira of HADOOP-10255. It brings in the HttpServer in 
 branch-2.2 directly into branch-2 to restore the compatibility of HBase.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10292) Restore HttpServer from branch-2.2 in branch-2

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10292:
-

Summary: Restore HttpServer from branch-2.2 in branch-2  (was: Bring 
HttpServer in branch-2.2 into branch-2)

 Restore HttpServer from branch-2.2 in branch-2
 --

 Key: HADOOP-10292
 URL: https://issues.apache.org/jira/browse/HADOOP-10292
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HADOOP-10292.000.patch


 This jira is a follow-up jira of HADOOP-10255. It brings in the HttpServer in 
 branch-2.2 directly into branch-2 to restore the compatibility of HBase.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10255:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I have committed this to branch-2 and trunk. Thank you [~wheat9]. Thank you 
[~stack] for review and testing.

 Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for 
 compatibility
 -

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
 HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
 HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10255:
-

Affects Version/s: 2.3.0

 Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for 
 compatibility
 -

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
 HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
 HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for compatibility

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10255:
-

Target Version/s: 2.3.0

 Rename HttpServer to HttpServer2 to retain older HttpServer in branch-2 for 
 compatibility
 -

 Key: HADOOP-10255
 URL: https://issues.apache.org/jira/browse/HADOOP-10255
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10255-branch2.000.patch, HADOOP-10255.000.patch, 
 HADOOP-10255.001.patch, HADOOP-10255.002.patch, HADOOP-10255.003.patch, 
 HADOOP-10255.003.patch


 As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
 from branch-2.2 to make sure it works across multiple 2.x releases.
 This patch renames the current {{HttpServer}} into {{HttpServer2}}, and brings 
  the {{HttpServer}} in branch-2.2 into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-10292) Restore HttpServer from branch-2.2 in branch-2

2014-01-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-10292.
--

  Resolution: Fixed
Hadoop Flags: Reviewed

I have committed this change to branch-2. Thank you [~wheat9]. Thank you 
[~stack] for testing and review.

HttpServer needs to be removed in branch-2 once HBase stops using it from 
Hadoop Common.

 Restore HttpServer from branch-2.2 in branch-2
 --

 Key: HADOOP-10292
 URL: https://issues.apache.org/jira/browse/HADOOP-10292
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HADOOP-10292.000.patch


 This jira is a follow-up jira of HADOOP-10255. It brings in the HttpServer in 
 branch-2.2 directly into branch-2 to restore the compatibility of HBase.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10253) Remove deprecated methods in HttpServer

2014-01-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879248#comment-13879248
 ] 

Suresh Srinivas commented on HADOOP-10253:
--

bq. HttpServer is a private API and it should not support any downstream uses.
[~wheat9], given that this class is marked as LimitedPrivate, it cannot be 
removed or made private. The interface is marked as evolving, so incompatible 
changes should be allowed. However, I suggest just making a copy of this 
HttpServer (HttpServer2?) for internal use in HDFS and MapReduce, with cleaner 
code, and leaving the HttpServer class alone. At some point, when HBase folks 
are ready, they can copy this to their project and we can delete it from 
Hadoop, possibly in 3.0.

 Remove deprecated methods in HttpServer
 ---

 Key: HADOOP-10253
 URL: https://issues.apache.org/jira/browse/HADOOP-10253
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai

 There are a lot of deprecated methods in {{HttpServer}}. They are not used 
 anymore. They should be removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10249) LdapGroupsMapping should trim ldap password read from file

2014-01-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879491#comment-13879491
 ] 

Suresh Srinivas commented on HADOOP-10249:
--

[~darumugam], please post the above diff as a patch. I am +1 on the change. 
Once Jenkins +1s the patch, I will commit it.

 LdapGroupsMapping should trim ldap password read from file
 --

 Key: HADOOP-10249
 URL: https://issues.apache.org/jira/browse/HADOOP-10249
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam

  org.apache.hadoop.security.LdapGroupsMapping allows specifying ldap 
 connection password in a file using property key
 hadoop.security.group.mapping.ldap.bind.password.file
 The code in LdapGroupsMapping  that reads the content of the password file 
 does not trim the password value. This causes ldap connection failure as the 
 password in the password file ends up having a trailing newline.
 Most text editors and echo add a newline at the end of a file.
 So, LdapGroupsMapping should trim the password read from the file.
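The proposed fix amounts to trimming the file contents. A standalone sketch (class and method names are hypothetical, not the actual {{LdapGroupsMapping}} code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PasswordFileReader {
    // Read the bind password file and strip surrounding whitespace,
    // including the trailing newline most editors and `echo` append.
    static String readPassword(Path file) throws IOException {
        return new String(Files.readAllBytes(file), StandardCharsets.UTF_8).trim();
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("ldap-pass", ".txt");
        Files.write(f, "s3cret\n".getBytes(StandardCharsets.UTF_8));
        // Brackets make the absence of the trailing newline visible.
        System.out.println("[" + readPassword(f) + "]");  // prints [s3cret]
        Files.deleteIfExists(f);
    }
}
```

Note that {{trim()}} also removes leading whitespace, so a password that intentionally begins or ends with spaces would be altered; the jira accepts that trade-off.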



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10249) LdapGroupsMapping should trim ldap password read from file

2014-01-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10249:
-

Assignee: Dilli Arumugam

 LdapGroupsMapping should trim ldap password read from file
 --

 Key: HADOOP-10249
 URL: https://issues.apache.org/jira/browse/HADOOP-10249
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam

  org.apache.hadoop.security.LdapGroupsMapping allows specifying ldap 
 connection password in a file using property key
 hadoop.security.group.mapping.ldap.bind.password.file
 The code in LdapGroupsMapping  that reads the content of the password file 
 does not trim the password value. This causes ldap connection failure as the 
 password in the password file ends up having a trailing newline.
 Most text editors and echo add a newline at the end of a file.
 So, LdapGroupsMapping should trim the password read from the file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10125) no need to process RPC request if the client connection has been dropped

2014-01-16 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13874067#comment-13874067
 ] 

Suresh Srinivas commented on HADOOP-10125:
--

[~brandonli], can you please commit this to branch-2 as well?

 no need to process RPC request if the client connection has been dropped
 

 Key: HADOOP-10125
 URL: https://issues.apache.org/jira/browse/HADOOP-10125
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 3.0.0
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 3.0.0

 Attachments: hadoop_10125_trunk.patch


 If the client has dropped the connection before the RPC is processed, RPC 
 server doesn't need to process the RPC call. We have encountered issues where 
 bad applications can bring down the NN. 
 https://issues.apache.org/jira/i#browse/Hadoop-9640 tries to address that. 
 When this occurs, NN's RPC queues are filled up with client requests and DN 
 requests, sometimes we want to stop the flooding by stopping the bad 
 applications and/or DNs. Some RPC processing like 
 DatanodeProtocol::blockReport could take a couple hundred milliseconds. So it 
 is worthwhile to have NN skip the RPC calls if DNs have been stopped.
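The proposed guard can be approximated as checking the liveness of the client's channel before doing expensive work. A hypothetical sketch, not the actual {{Server.Call}} structure:

```java
import java.nio.channels.SocketChannel;

public class SkipDroppedCallDemo {
    // Skip expensive RPC processing when the client's channel is no longer open.
    static boolean shouldProcess(SocketChannel ch) {
        return ch != null && ch.isOpen();
    }

    public static void main(String[] args) throws Exception {
        SocketChannel ch = SocketChannel.open();
        System.out.println(shouldProcess(ch));  // true: client still connected
        ch.close();
        System.out.println(shouldProcess(ch));  // false: connection dropped, skip the call
    }
}
```

A real implementation would perform this check at dequeue time in the handler, just before invoking the RPC method.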



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872363#comment-13872363
 ] 

Suresh Srinivas commented on HADOOP-10236:
--

+1 for the patch.

 Fix typo in o.a.h.ipc.Client#checkResponse
 --

 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10236.patch


 There's a typo in o.a.h.ipc.Client.java. 
 {code}
   throw new IOException("Client IDs not matched: local ID="
   + StringUtils.byteToHexString(clientId) + ", ID in reponse="
   + 
 StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}
 It should be fixed as follows:
 {code}
   throw new IOException("Client IDs not matched: local ID="
   + StringUtils.byteToHexString(clientId) + ", ID in response="
   + 
 StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-15 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10236:
-

   Resolution: Fixed
Fix Version/s: 2.4.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed the patch to trunk and branch-2. Thank you Akira Ajisaka!

 Fix typo in o.a.h.ipc.Client#checkResponse
 --

 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Fix For: 2.4.0

 Attachments: HADOOP-10236.patch


 There's a typo in o.a.h.ipc.Client.java. 
 {code}
   throw new IOException("Client IDs not matched: local ID="
   + StringUtils.byteToHexString(clientId) + ", ID in reponse="
   + 
 StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}
 It should be fixed as follows:
 {code}
   throw new IOException("Client IDs not matched: local ID="
   + StringUtils.byteToHexString(clientId) + ", ID in response="
   + 
 StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10234) hadoop.cmd jar does not propagate exit code.

2014-01-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871405#comment-13871405
 ] 

Suresh Srinivas commented on HADOOP-10234:
--

+1 for the patch.

 hadoop.cmd jar does not propagate exit code.
 --

 Key: HADOOP-10234
 URL: https://issues.apache.org/jira/browse/HADOOP-10234
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.3.0

 Attachments: HADOOP-10234.1.patch


 Running hadoop.cmd jar does not always propagate the exit code to the 
 caller.  In interactive use, it works fine.  However, in some usages (notably 
 Hive), it gets called through {{Shell#getRunScriptCommand}}, which needs to 
 do an intermediate cmd /c to execute the script.  In that case, the last 
 exit code is getting dropped, so Hive can't detect job failures.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10106) Incorrect thread name in RPC log messages

2014-01-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10106:
-

Assignee: Ming Ma

 Incorrect thread name in RPC log messages
 -

 Key: HADOOP-10106
 URL: https://issues.apache.org/jira/browse/HADOOP-10106
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
Priority: Minor
 Fix For: 2.4.0

 Attachments: hadoop_10106_trunk.patch, hadoop_10106_trunk_2.patch


 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: 
 readAndProcess from client 10.115.201.46 threw exception 
 org.apache.hadoop.ipc.RpcServerException: Unknown out of band call 
 #-2147483647
 This is thrown by a reader thread, so the message should be like
 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: 
 readAndProcess from client 10.115.201.46 threw exception 
 org.apache.hadoop.ipc.RpcServerException: Unknown out of band call 
 #-2147483647
 Another example is Responder.processResponse, which can also be called by a 
 handler thread. When that happens, the thread name should be the handler 
 thread, not the responder thread.
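For illustration, the fix the description asks for amounts to deriving the log prefix from the thread that is actually running instead of a fixed label. This is a minimal stand-alone sketch, not the actual Server.java change:

```java
public class ThreadNamePrefixSketch {
  // Build the log prefix from whichever thread is executing, so a
  // reader or handler thread logs under its own name.
  public static String logPrefix() {
    return Thread.currentThread().getName() + ": ";
  }

  public static void main(String[] args) {
    Thread.currentThread().setName("Socket Reader #1 for port 8020");
    System.out.println(logPrefix() + "readAndProcess from client threw exception");
  }
}
```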





[jira] [Commented] (HADOOP-10106) Incorrect thread name in RPC log messages

2014-01-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870059#comment-13870059
 ] 

Suresh Srinivas commented on HADOOP-10106:
--

[~mingma], I have added you as a contributor and assigned the jira to you.

 Incorrect thread name in RPC log messages
 -

 Key: HADOOP-10106
 URL: https://issues.apache.org/jira/browse/HADOOP-10106
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
Priority: Minor
 Fix For: 2.4.0

 Attachments: hadoop_10106_trunk.patch, hadoop_10106_trunk_2.patch


 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: 
 readAndProcess from client 10.115.201.46 threw exception 
 org.apache.hadoop.ipc.RpcServerException: Unknown out of band call 
 #-2147483647
 This is thrown by a reader thread, so the message should be like
 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: 
 readAndProcess from client 10.115.201.46 threw exception 
 org.apache.hadoop.ipc.RpcServerException: Unknown out of band call 
 #-2147483647
 Another example is Responder.processResponse, which can also be called by a 
 handler thread. When that happens, the thread name should be the handler 
 thread, not the responder thread.





[jira] [Commented] (HADOOP-10208) Remove duplicate initialization in StringUtils.getStringCollection

2014-01-06 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863930#comment-13863930
 ] 

Suresh Srinivas commented on HADOOP-10208:
--

+1 for the patch. I will commit it soon.

 Remove duplicate initialization in StringUtils.getStringCollection
 --

 Key: HADOOP-10208
 URL: https://issues.apache.org/jira/browse/HADOOP-10208
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Attachments: HADOOP-10208.patch


 The *values* list is initialized twice.
 {code:title=StringUtils.java|borderStyle=solid}
  public static Collection<String> getStringCollection(String str){
    List<String> values = new ArrayList<String>();
    if (str == null)
      return values;
    StringTokenizer tokenizer = new StringTokenizer(str, ",");
    values = new ArrayList<String>();
 {code}
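For reference, a minimal sketch of the deduplicated method (illustrative only, not the actual HADOOP-10208 patch): the second `new ArrayList` assignment is simply dropped, and the tokens are collected.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class StringCollectionSketch {
  // Hypothetical cleaned-up getStringCollection: the redundant second
  // "values = new ArrayList<String>()" assignment is removed.
  public static List<String> getStringCollection(String str) {
    List<String> values = new ArrayList<String>();
    if (str == null) {
      return values;
    }
    StringTokenizer tokenizer = new StringTokenizer(str, ",");
    while (tokenizer.hasMoreTokens()) {
      values.add(tokenizer.nextToken());
    }
    return values;
  }

  public static void main(String[] args) {
    System.out.println(getStringCollection("a,b,c")); // [a, b, c]
  }
}
```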





[jira] [Updated] (HADOOP-10208) Remove duplicate initialization in StringUtils.getStringCollection

2014-01-06 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10208:
-

Status: Patch Available  (was: Open)

 Remove duplicate initialization in StringUtils.getStringCollection
 --

 Key: HADOOP-10208
 URL: https://issues.apache.org/jira/browse/HADOOP-10208
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Attachments: HADOOP-10208.patch


 The *values* list is initialized twice.
 {code:title=StringUtils.java|borderStyle=solid}
  public static Collection<String> getStringCollection(String str){
    List<String> values = new ArrayList<String>();
    if (str == null)
      return values;
    StringTokenizer tokenizer = new StringTokenizer(str, ",");
    values = new ArrayList<String>();
 {code}





[jira] [Commented] (HADOOP-10168) fix javadoc of ReflectionUtils.copy

2013-12-17 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13850231#comment-13850231
 ] 

Suresh Srinivas commented on HADOOP-10168:
--

+1 for the patch. I will commit this shortly.

 fix javadoc of ReflectionUtils.copy 
 

 Key: HADOOP-10168
 URL: https://issues.apache.org/jira/browse/HADOOP-10168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HADOOP-10168.1.patch


 In the javadoc of ReflectionUtils.copy, the return value is not documented, 
 and the arguments are named incorrectly.
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer
    * @param dst the object to copy from
    * @param src the object to copy into, which is destroyed
    * @throws IOException
    */
   @SuppressWarnings("unchecked")
   public static <T> T copy(Configuration conf,
       T src, T dst) throws IOException {
 {code}





[jira] [Updated] (HADOOP-10168) fix javadoc of ReflectionUtils.copy

2013-12-17 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10168:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the patch to trunk and branch-2. Thank you Thejas.

 fix javadoc of ReflectionUtils.copy 
 

 Key: HADOOP-10168
 URL: https://issues.apache.org/jira/browse/HADOOP-10168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 2.4.0

 Attachments: HADOOP-10168.1.patch


 In the javadoc of ReflectionUtils.copy, the return value is not documented, 
 and the arguments are named incorrectly.
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer
    * @param dst the object to copy from
    * @param src the object to copy into, which is destroyed
    * @throws IOException
    */
   @SuppressWarnings("unchecked")
   public static <T> T copy(Configuration conf,
       T src, T dst) throws IOException {
 {code}





[jira] [Updated] (HADOOP-10168) fix javadoc of ReflectionUtils.copy

2013-12-17 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10168:
-

Fix Version/s: 2.4.0

 fix javadoc of ReflectionUtils.copy 
 

 Key: HADOOP-10168
 URL: https://issues.apache.org/jira/browse/HADOOP-10168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 2.4.0

 Attachments: HADOOP-10168.1.patch


 In the javadoc of ReflectionUtils.copy, the return value is not documented, 
 and the arguments are named incorrectly.
 {code}
   /**
    * Make a copy of the writable object using serialization to a buffer
    * @param dst the object to copy from
    * @param src the object to copy into, which is destroyed
    * @throws IOException
    */
   @SuppressWarnings("unchecked")
   public static <T> T copy(Configuration conf,
       T src, T dst) throws IOException {
 {code}





[jira] [Commented] (HADOOP-9640) RPC Congestion Control with FairCallQueue

2013-12-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13844506#comment-13844506
 ] 

Suresh Srinivas commented on HADOOP-9640:
-

I had an in-person meeting with [~chrili] on this. This is excellent work!

bq. Parsing the MapReduce job name out of the DFSClient name is kind of an ugly 
hack. The client name also isn't that reliable since it's formed from the 
client's Configuration
I had suggested this to [~chrili]. I realize that the configuration passed from 
MapReduce is actually a task ID. So the client name based on that will not be 
useful, unless we parse it to get the job ID.

I agree that this is not the way the final solution should work. I propose 
adding some kind of configuration that can be passed to establish the context 
in which access to services is happening. Currently this is done by the 
mapreduce framework: it sets the configuration that gets used in forming the 
DFSClient name.

We could do the following to satisfy the various user requirements:
# Add a new configuration in common called hadoop.application.context to 
HDFS. Other services that want to do the same thing can either use this same 
configuration or find another way to configure it. This information should be 
marshalled from the client to the server. The congestion control can be built 
based on that.
# Let's also make the identities used for accounting configurable. They can be 
either based on context, user, token, or default. That way people who 
do not like the default configuration can make changes.
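A rough illustration of the first idea, using java.util.Properties as a stand-in for Hadoop's Configuration. The key name hadoop.application.context is the one proposed above; the fallback-to-user behavior is a hypothetical sketch, not an implemented Hadoop API:

```java
import java.util.Properties;

public class CallerContextSketch {
  // Resolve the accounting identity for a call: prefer an explicit
  // application context set by the framework, else fall back to the user.
  public static String accountingIdentity(Properties conf, String user) {
    String ctx = conf.getProperty("hadoop.application.context");
    return (ctx != null && !ctx.isEmpty()) ? ctx : user;
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty("hadoop.application.context", "job_201312100001_0042");
    System.out.println(accountingIdentity(conf, "alice"));          // the job context
    System.out.println(accountingIdentity(new Properties(), "alice")); // alice
  }
}
```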

 RPC Congestion Control with FairCallQueue
 -

 Key: HADOOP-9640
 URL: https://issues.apache.org/jira/browse/HADOOP-9640
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.2.0
Reporter: Xiaobo Peng
  Labels: hdfs, qos, rpc
 Attachments: MinorityMajorityPerformance.pdf, 
 NN-denial-of-service-updated-plan.pdf, faircallqueue.patch, 
 faircallqueue2.patch, faircallqueue3.patch, faircallqueue4.patch, 
 faircallqueue5.patch, rpc-congestion-control-draft-plan.pdf


 Several production Hadoop cluster incidents occurred where the Namenode was 
 overloaded and failed to respond. 
 We can improve quality of service for users during namenode peak loads by 
 replacing the FIFO call queue with a [Fair Call 
 Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
  (this plan supersedes rpc-congestion-control-draft-plan).
 Excerpted from the communication of one incident, “The map task of a user was 
 creating huge number of small files in the user directory. Due to the heavy 
 load on NN, the JT also was unable to communicate with NN...The cluster 
 became responsive only once the job was killed.”
 Excerpted from the communication of another incident, “Namenode was 
 overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
 requests. the job had a bug that called getFileInfo for a nonexistent file in 
 an endless loop). All other requests to namenode were also affected by this 
 and hence all jobs slowed down. Cluster almost came to a grinding 
 halt…Eventually killed jobtracker to kill all jobs that are running.”
 Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
 the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
 (60k files) etc.”





[jira] [Commented] (HADOOP-10148) backport hadoop-10107 to branch-0.23

2013-12-06 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13841771#comment-13841771
 ] 

Suresh Srinivas commented on HADOOP-10148:
--

Are there plans for a new 0.23 release?

 backport hadoop-10107 to branch-0.23
 

 Key: HADOOP-10148
 URL: https://issues.apache.org/jira/browse/HADOOP-10148
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Chen He
Assignee: Chen He
Priority: Minor

 Found this in [build 
 #5440|https://builds.apache.org/job/PreCommit-HDFS-Build/5440/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestUnderReplicatedBlocks/testSetrepIncWithUnderReplicatedBlocks/]
 Caused by: java.lang.NullPointerException
   at org.apache.hadoop.ipc.Server.getNumOpenConnections(Server.java:2434)
   at 
 org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections(RpcMetrics.java:74)





[jira] [Commented] (HADOOP-10126) LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB

2013-11-25 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13831726#comment-13831726
 ] 

Suresh Srinivas commented on HADOOP-10126:
--

+1 for the change.

 LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB
 -

 Key: HADOOP-10126
 URL: https://issues.apache.org/jira/browse/HADOOP-10126
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Attachments: HADOOP-10126.patch


 The following log message from LightWeightGSet is confusing:
 {noformat}2013-11-21 18:00:21,198 INFO org.apache.hadoop.util.GSet: 2.0% max 
 memory = 2.0 GB{noformat}
 Here 2.0 GB is the max JVM memory, but the message reads as if 2% of the max 
 memory is 2 GB. 
 It would be clearer as:
 2.0% of max memory 2.0 GB = 40.9 MB
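The arithmetic behind the suggested message, as a quick hypothetical sketch (not the actual patch). Note the JIRA example shows 40.9 MB because it truncates; a plain %.1f format rounds to 41.0:

```java
import java.util.Locale;

public class GSetMessageSketch {
  // Spell out the percentage, the max memory, and the resulting budget.
  public static String capacityMessage(double percent, double maxMemoryMB) {
    double budgetMB = maxMemoryMB * percent / 100.0; // 2% of 2048 MB = 40.96 MB
    return String.format(Locale.ROOT, "%.1f%% of max memory %.1f GB = %.1f MB",
        percent, maxMemoryMB / 1024.0, budgetMB);
  }

  public static void main(String[] args) {
    System.out.println(capacityMessage(2.0, 2048.0));
  }
}
```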





[jira] [Updated] (HADOOP-10126) LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB

2013-11-25 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10126:
-

   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed the patch. Thank you Vinay!

 LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB
 -

 Key: HADOOP-10126
 URL: https://issues.apache.org/jira/browse/HADOOP-10126
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Fix For: 2.3.0

 Attachments: HADOOP-10126.patch


 The following log message from LightWeightGSet is confusing:
 {noformat}2013-11-21 18:00:21,198 INFO org.apache.hadoop.util.GSet: 2.0% max 
 memory = 2.0 GB{noformat}
 Here 2.0 GB is the max JVM memory, but the message reads as if 2% of the max 
 memory is 2 GB. 
 It would be clearer as:
 2.0% of max memory 2.0 GB = 40.9 MB





[jira] [Commented] (HADOOP-10124) Option to shuffle splits of equal size

2013-11-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830384#comment-13830384
 ] 

Suresh Srinivas commented on HADOOP-10124:
--

[~mikeliddell], can you please set the Affects Version/s? I think you want this 
in branch-1. Also, is this functionality relevant for trunk?

I am going to move this to the MapReduce project.

 Option to shuffle splits of equal size
 --

 Key: HADOOP-10124
 URL: https://issues.apache.org/jira/browse/HADOOP-10124
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Mike Liddell
 Attachments: HADOOP-10124.1.patch


 Mapreduce split calculation has the following base logic (via JobClient and 
 the major InputFormat implementations):
 ◾enumerate input files in natural (aka linear) order.
 ◾create one split for each 'block-size' of each input. Apart from 
 rack-awareness, combining and so on, the input file order remains in its 
 natural order.
 ◾sort the splits by size using a stable sort based on splitsize.
 When data from multiple storage services are used in a single hadoop job, we 
 get better I/O utilization if the list of splits does round-robin or 
 random-access across the services. 
 The particular scenario arises in Azure HDInsight, where jobs can easily read 
 from many storage accounts and each storage account has hard limits on 
 throughput. Concurrent access to the accounts gives substantially better 
 throughput than sequential access.
 Two common scenarios can cause a non-ideal access pattern:
  1. many/all input files are the same size
  2. files have different sizes, but many/all input files have size > blocksize.
 In the second scenario, each file will have one or more splits with size 
 exactly equal to the block size, so it basically degenerates to the first 
 scenario.
 There are various ways to solve the problem, but the simplest is to alter the 
 mapreduce JobClient to sort splits by size _and_ randomize the order of 
 splits with equal size. This keeps the old behavior effectively unchanged 
 while also fixing both common problematic scenarios.
 Some rare scenarios will still suffer bad access patterns. For example, if 
 two storage accounts are used and the files from one storage account are all 
 smaller than those from the other, problems can arise. Addressing these 
 scenarios would be further work, perhaps by completely randomizing the split 
 order. These problematic scenarios are considered rare and not requiring 
 immediate attention.
 If further algorithms for split ordering become necessary, the implementation 
 in JobClient will change to being interface-based (e.g. an interface 
 splitOrderer) with various standard implementations. At this time there is 
 only the need for two orderings, so a simple Boolean flag and if/then logic 
 is used.
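The proposed ordering can be sketched as follows: a stable sort by size, then a shuffle applied to each run of equal-size splits. The Split class here is a simplified stand-in, not the real Hadoop InputSplit:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SplitOrderSketch {
  // Hypothetical split: only the fields needed for ordering.
  public static final class Split {
    public final String name;
    public final long size;
    public Split(String name, long size) { this.name = name; this.size = size; }
  }

  // Stable sort by size (largest first), then shuffle each run of
  // equal-size splits so same-size inputs don't stay in file order.
  public static List<Split> order(List<Split> splits, Random rnd) {
    List<Split> out = new ArrayList<>(splits);
    out.sort((a, b) -> Long.compare(b.size, a.size)); // List.sort is stable
    int runStart = 0;
    for (int i = 1; i <= out.size(); i++) {
      if (i == out.size() || out.get(i).size != out.get(runStart).size) {
        Collections.shuffle(out.subList(runStart, i), rnd); // randomize the run
        runStart = i;
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<Split> splits = Arrays.asList(new Split("a", 64), new Split("b", 64),
        new Split("c", 64), new Split("d", 128));
    for (Split s : order(splits, new Random())) {
      System.out.println(s.name + " " + s.size);
    }
  }
}
```

The largest split always comes first; only the relative order of the three 64-byte splits varies from run to run.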





[jira] [Commented] (HADOOP-10020) disable symlinks temporarily

2013-11-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13825618#comment-13825618
 ] 

Suresh Srinivas commented on HADOOP-10020:
--

Can this be done as a separate jira instead of as an addendum to the existing patch?

 disable symlinks temporarily
 

 Key: HADOOP-10020
 URL: https://issues.apache.org/jira/browse/HADOOP-10020
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Assignee: Sanjay Radia
Priority: Blocker
 Fix For: 2.2.0

 Attachments: 
 0001-HADOOP-10020-addendum.-Fix-TestOfflineEditsViewer.patch, 
 Hadoop-10020-2.patch, Hadoop-10020-3.patch, 
 Hadoop-10020-4-forBranch2.1beta.patch, Hadoop-10020-4.patch, 
 Hadoop-10020.patch


 disable symlinks temporarily until we can make them production-ready in 
 Hadoop 2.3





[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-10-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13799167#comment-13799167
 ] 

Suresh Srinivas commented on HADOOP-9652:
-

+1 for reverting this. Let's revisit this later with the other symlink issues. 
[~sanjay.radia], please consider this in the symlink discussions.

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
 hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
 hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFilesystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.





[jira] [Updated] (HADOOP-10005) No need to check INFO severity level is enabled or not

2013-10-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10005:
-

   Resolution: Fixed
Fix Version/s: 2.2.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the change to branch-2.2, branch-2 and trunk. Thank you Jackie 
Chang.

 No need to check INFO severity level is enabled or not
 --

 Key: HADOOP-10005
 URL: https://issues.apache.org/jira/browse/HADOOP-10005
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jackie Chang
Priority: Trivial
 Fix For: 2.2.1

 Attachments: HADOOP-10005.patch


 By convention, INFO is the default level and INFO logs should 
 always be available, so there is no need to guard them in most cases.





[jira] [Updated] (HADOOP-10005) No need to check INFO severity level is enabled or not

2013-10-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10005:
-

Assignee: Jackie Chang

 No need to check INFO severity level is enabled or not
 --

 Key: HADOOP-10005
 URL: https://issues.apache.org/jira/browse/HADOOP-10005
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jackie Chang
Assignee: Jackie Chang
Priority: Trivial
 Fix For: 2.2.1

 Attachments: HADOOP-10005.patch


 By convention, INFO is the default level and INFO logs should 
 always be available, so there is no need to guard them in most cases.





[jira] [Commented] (HADOOP-10043) Convert org.apache.hadoop.security.token.SecretManager to be an AbstractService

2013-10-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13794912#comment-13794912
 ] 

Suresh Srinivas commented on HADOOP-10043:
--

I have not looked at the patch closely. [~ozawa], can you please add more 
details about this? SecretManager and AbstractDelegationTokenSecretManager are 
used in MapReduce/YARN, HDFS, and even in Hive. I am not convinced that a 
YARN-driven change should impact all the other users.

The main motivation seems to be - Convert *SecretManagers in the RM to 
services
Why can't this be done using composition instead of inheritance?
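The composition alternative can be sketched like this. All the types below are simplified stand-ins, not the real Hadoop classes: the secret manager is owned by a service wrapper instead of having to extend AbstractService itself.

```java
public class CompositionSketch {
  // Stand-in for a minimal service lifecycle contract.
  public interface Service { void start(); void stop(); }

  // Stand-in for the existing SecretManager abstract class, unchanged.
  public static class SecretManager {
    public boolean running;
    public void startThreads() { running = true; }
    public void stopThreads() { running = false; }
  }

  // Composition: lifecycle events are forwarded to the wrapped manager,
  // so SecretManager never needs to extend an AbstractService.
  public static class SecretManagerService implements Service {
    private final SecretManager sm;
    public SecretManagerService(SecretManager sm) { this.sm = sm; }
    public void start() { sm.startThreads(); }
    public void stop() { sm.stopThreads(); }
  }

  public static void main(String[] args) {
    SecretManager sm = new SecretManager();
    new SecretManagerService(sm).start();
    System.out.println(sm.running); // true
  }
}
```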

 Convert org.apache.hadoop.security.token.SecretManager to be an 
 AbstractService
 ---

 Key: HADOOP-10043
 URL: https://issues.apache.org/jira/browse/HADOOP-10043
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: HADOOP-10043.1.patch, HADOOP-10043.2.patch, 
 HADOOP-10043.3.patch


 I'm dealing with YARN-1172, a subtask of YARN-1139 (a ResourceManager HA 
 related task). The following is quoted from my comment on YARN-1172:
 {quote}
 I've found that it requires org.apache.hadoop.security.token.SecretManager to 
 be an AbstractService,
 because both AbstractService and 
 org.apache.hadoop.security.token.SecretManager are abstract class and we 
 cannot extend both of them at the same time.
 {quote}





[jira] [Commented] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13792729#comment-13792729
 ] 

Suresh Srinivas commented on HADOOP-10042:
--

Jira is for reporting bugs, not for asking questions. Please use the user 
mailing list for questions. See http://hadoop.apache.org/mailing_lists.html

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on stackoverflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2; does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a mapreduce issue or is it a 
 misconfiguration from my part?





[jira] [Resolved] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-10042.
--

Resolution: Invalid

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on stackoverflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2; does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a mapreduce issue or is it a 
 misconfiguration from my part?





[jira] [Commented] (HADOOP-10042) Heap space error during copy from maptask to reduce task

2013-10-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13792852#comment-13792852
 ] 

Suresh Srinivas commented on HADOOP-10042:
--

bq. But I think it's a bug (see my reference to other JIRA)
Sorry I could not find it. What is the jira number?

 Heap space error during copy from maptask to reduce task
 

 Key: HADOOP-10042
 URL: https://issues.apache.org/jira/browse/HADOOP-10042
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.2.1
 Environment: Ubuntu cluster
Reporter: Dieter De Witte
 Fix For: 1.2.1

 Attachments: mapred-site.OLDxml


 http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
 I've described the problem on stackoverflow as well. It contains a link to 
 another JIRA: 
 http://hadoop-common.472056.n3.nabble.com/Shuffle-In-Memory-OutOfMemoryError-td433197.html
 My errors are exactly the same: an out-of-memory error when 
 mapred.job.shuffle.input.buffer.percent = 0.7. The program does work when I 
 set it to 0.2; does this mean the original JIRA was not resolved?
 Does anybody have an idea whether this is a mapreduce issue or is it a 
 misconfiguration from my part?





[jira] [Updated] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10029:
-

Fix Version/s: (was: 2.2.0)

 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.





[jira] [Updated] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10029:
-

Status: Patch Available  (was: Open)

 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.2.0

 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.





[jira] [Updated] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10029:
-

Affects Version/s: 2.0.0-alpha

 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.





[jira] [Updated] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10029:
-

Attachment: HADOOP-10029.4.patch

The javac warnings are due to the use of deprecated methods. I have added a 
@SuppressWarnings annotation to suppress them.

 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.4.patch, HADOOP-10029.4.patch, 
 HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.





[jira] [Updated] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10029:
-

Attachment: HADOOP-10029.5.patch

Updated patch. For some reason, @SuppressWarnings for deprecation is not 
working for TestHarFileSystem. I removed the explicit declaration of the 
deprecated AccessControlException in the interface to work around it.

 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.4.patch, HADOOP-10029.4.patch, 
 HADOOP-10029.5.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.





[jira] [Commented] (HADOOP-10035) Cleanup TestFilterFileSystem

2013-10-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13791943#comment-13791943
 ] 

Suresh Srinivas commented on HADOOP-10035:
--

Seems reasonable. Let me update the patch.

 Cleanup TestFilterFileSystem
 

 Key: HADOOP-10035
 URL: https://issues.apache.org/jira/browse/HADOOP-10035
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10035.patch


 Currently, TestFilterFileSystem only checks that FilterFileSystem implements the 
 FileSystem methods it must, against a list of methods that are exceptions to 
 this rule. This jira makes the check stricter by adding a test ensuring that the 
 methods on the exception list are not implemented by FilterFileSystem. 
 It also converts the current class holding the exception-list methods into an 
 interface, to avoid having to provide dummy implementations of those methods.
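
The stricter check described above can be sketched with reflection. The class and method names below are illustrative stand-ins, not the actual FileSystem/FilterFileSystem API: the test walks the filter class's declared methods and fails if any method on the "must not implement" list has been re-declared there.

```java
import java.lang.reflect.Method;
import java.util.Set;

// Hypothetical sketch of a stricter filter-class check: methods on the
// exception list must NOT be re-declared by the filtering subclass.
public class FilterCheckDemo {
    static class Base {
        public void open() {}
        public void close() {}
        public void resolve() {}
    }

    static class Filter extends Base {
        @Override public void open() {}
        @Override public void close() {}
        // resolve() is intentionally NOT overridden: it is on the
        // "must not implement" list, so the base implementation is used.
    }

    // Returns true only if the filter class declares none of the
    // excluded methods itself (inherited ones don't count).
    static boolean declaresNone(Class<?> filter, Set<String> mustNotImplement) {
        for (Method m : filter.getDeclaredMethods()) {
            if (mustNotImplement.contains(m.getName())) {
                return false; // filter re-implemented an excluded method
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(declaresNone(Filter.class, Set.of("resolve"))); // prints true
    }
}
```

Declaring the exception-list methods on an interface, as the jira proposes, lets the test enumerate them without dummy bodies.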





[jira] [Commented] (HADOOP-10035) Cleanup TestFilterFileSystem

2013-10-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13791944#comment-13791944
 ] 

Suresh Srinivas commented on HADOOP-10035:
--

I also will do the same change in HADOOP-10029.

 Cleanup TestFilterFileSystem
 

 Key: HADOOP-10035
 URL: https://issues.apache.org/jira/browse/HADOOP-10035
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10035.patch


 Currently, TestFilterFileSystem only checks that FilterFileSystem implements the 
 FileSystem methods it must, against a list of methods that are exceptions to 
 this rule. This jira makes the check stricter by adding a test ensuring that the 
 methods on the exception list are not implemented by FilterFileSystem. 
 It also converts the current class holding the exception-list methods into an 
 interface, to avoid having to provide dummy implementations of those methods.





[jira] [Updated] (HADOOP-10029) Specifying har file to MR job fails in secure cluster

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10029:
-

Attachment: HADOOP-10029.6.patch

Renamed the interface DoNoCheck to MustNotImplement.

 Specifying har file to MR job fails in secure cluster
 -

 Key: HADOOP-10029
 URL: https://issues.apache.org/jira/browse/HADOOP-10029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10029.1.patch, HADOOP-10029.2.patch, 
 HADOOP-10029.3.patch, HADOOP-10029.4.patch, HADOOP-10029.4.patch, 
 HADOOP-10029.5.patch, HADOOP-10029.6.patch, HADOOP-10029.patch


 This is an issue found by [~rramya]. See the exception stack trace in the 
 following comment.





[jira] [Updated] (HADOOP-10035) Cleanup TestFilterFileSystem

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10035:
-

Attachment: HADOOP-10035.1.patch

Updated patch to address the comments.

 Cleanup TestFilterFileSystem
 

 Key: HADOOP-10035
 URL: https://issues.apache.org/jira/browse/HADOOP-10035
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10035.1.patch, HADOOP-10035.patch


 Currently, TestFilterFileSystem only checks that FilterFileSystem implements the 
 FileSystem methods it must, against a list of methods that are exceptions to 
 this rule. This jira makes the check stricter by adding a test ensuring that the 
 methods on the exception list are not implemented by FilterFileSystem. 
 It also converts the current class holding the exception-list methods into an 
 interface, to avoid having to provide dummy implementations of those methods.





[jira] [Updated] (HADOOP-10035) Cleanup TestFilterFileSystem

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10035:
-

Attachment: HADOOP-10035.2.patch

Updated patch.

 Cleanup TestFilterFileSystem
 

 Key: HADOOP-10035
 URL: https://issues.apache.org/jira/browse/HADOOP-10035
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HADOOP-10035.1.patch, HADOOP-10035.2.patch, 
 HADOOP-10035.patch


 Currently, TestFilterFileSystem only checks that FilterFileSystem implements the 
 FileSystem methods it must, against a list of methods that are exceptions to 
 this rule. This jira makes the check stricter by adding a test ensuring that the 
 methods on the exception list are not implemented by FilterFileSystem. 
 It also converts the current class holding the exception-list methods into an 
 interface, to avoid having to provide dummy implementations of those methods.





[jira] [Moved] (HADOOP-10039) Add Hive to the list of projects using AbstractDelegationTokenSecretManager

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HDFS-5340 to HADOOP-10039:


Component/s: (was: security)
 security
Key: HADOOP-10039  (was: HDFS-5340)
Project: Hadoop Common  (was: Hadoop HDFS)

 Add Hive to the list of projects using AbstractDelegationTokenSecretManager
 ---

 Key: HADOOP-10039
 URL: https://issues.apache.org/jira/browse/HADOOP-10039
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas

 org.apache.hadoop.hive.thrift.DelegationTokenSecretManager extends 
 AbstractDelegationTokenSecretManager. This should be captured in the 
 InterfaceAudience annotation of AbstractDelegationTokenSecretManager.
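
The requested change can be sketched as follows. To keep the snippet self-contained it uses a local stand-in annotation; Hadoop's real annotation is org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate, and the class name below is a stub, not the actual AbstractDelegationTokenSecretManager.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;

// Hypothetical sketch: adding a project to the LimitedPrivate audience
// list, as the jira requests for Hive.
public class AudienceDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface LimitedPrivate {
        String[] value();
    }

    // Before the change the list named only the existing consumers;
    // the change appends "Hive" to record the new downstream user.
    @LimitedPrivate({"HDFS", "MapReduce", "Hive"})
    static abstract class AbstractSecretManagerStub {}

    public static void main(String[] args) {
        LimitedPrivate aud =
            AbstractSecretManagerStub.class.getAnnotation(LimitedPrivate.class);
        System.out.println(Arrays.toString(aud.value())); // prints [HDFS, MapReduce, Hive]
    }
}
```

The annotation is documentation-only: it records which projects may use the class, so API changes can account for them.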





[jira] [Updated] (HADOOP-10039) Add Hive to the list of projects using AbstractDelegationTokenSecretManager

2013-10-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10039:
-

Affects Version/s: 2.0.0-alpha

 Add Hive to the list of projects using AbstractDelegationTokenSecretManager
 ---

 Key: HADOOP-10039
 URL: https://issues.apache.org/jira/browse/HADOOP-10039
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas

 org.apache.hadoop.hive.thrift.DelegationTokenSecretManager extends 
 AbstractDelegationTokenSecretManager. This should be captured in the 
 InterfaceAudience annotation of AbstractDelegationTokenSecretManager.




