[jira] [Updated] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction

2012-11-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9064:
-

Status: Patch Available  (was: Open)

 Augment DelegationTokenRenewer API to cancel the tokens on calls to 
 removeRenewAction
 -

 Key: HADOOP-9064
 URL: https://issues.apache.org/jira/browse/HADOOP-9064
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: hadoop-9064.patch


 Post HADOOP-9049, FileSystems register with DelegationTokenRenewer (a 
 singleton) to renew tokens. 
 To avoid a bunch of defunct tokens clogging the NN, we should augment the API to 
 {{#removeRenewAction(boolean cancel)}} and cancel the token appropriately.
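
 A minimal sketch of the proposed semantics; everything except the 
 {{removeRenewAction(boolean cancel)}} signature is an assumption, not the actual patch:

 {code}
 // Hypothetical fragment; queue handling is elided and all names other than
 // removeRenewAction are assumptions.
 <T extends FileSystem & Renewable> void removeRenewAction(T fs, boolean cancel)
     throws IOException, InterruptedException {
   // De-queue the renew action for this FileSystem, then optionally cancel
   // the token so the NN can forget it.
   if (cancel) {
     Token<?> token = fs.getRenewToken();
     token.cancel(fs.getConf());
   }
 }
 {code}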

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction

2012-11-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9064:
-

Attachment: hadoop-9064.patch

Changed the logging in removeRenewAction()

 Augment DelegationTokenRenewer API to cancel the tokens on calls to 
 removeRenewAction
 -

 Key: HADOOP-9064
 URL: https://issues.apache.org/jira/browse/HADOOP-9064
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: hadoop-9064.patch, hadoop-9064.patch


 Post HADOOP-9049, FileSystems register with DelegationTokenRenewer (a 
 singleton) to renew tokens. 
 To avoid a bunch of defunct tokens clogging the NN, we should augment the API to 
 {{#removeRenewAction(boolean cancel)}} and cancel the token appropriately.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction

2012-11-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9064:
-

Attachment: hadoop-9064.patch

Thanks Alejandro. Here is an updated patch incorporating your comments.

 Augment DelegationTokenRenewer API to cancel the tokens on calls to 
 removeRenewAction
 -

 Key: HADOOP-9064
 URL: https://issues.apache.org/jira/browse/HADOOP-9064
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: hadoop-9064.patch, hadoop-9064.patch, hadoop-9064.patch


 Post HADOOP-9049, FileSystems register with DelegationTokenRenewer (a 
 singleton) to renew tokens. 
 To avoid a bunch of defunct tokens clogging the NN, we should augment the API to 
 {{#removeRenewAction(boolean cancel)}} and cancel the token appropriately.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9064) Augment DelegationTokenRenewer API to cancel the tokens on calls to removeRenewAction

2012-11-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9064:
-

Attachment: hadoop-9064.patch

Uploading a patch that does the following:
# Add a findbugs exclusion for the reported warning: {{removeRenewAction}} 
defines a generic type T that extends both {{FileSystem}} and {{Renewable}}, 
and findbugs was warning about casting {{T}} to {{Renewable}}.
# Merge the tests for addRenewAction and removeRenewAction to remove redundancy, 
and bump up the loop counter to accommodate any external delays. This increase 
shouldn't affect the run time, though.

 Augment DelegationTokenRenewer API to cancel the tokens on calls to 
 removeRenewAction
 -

 Key: HADOOP-9064
 URL: https://issues.apache.org/jira/browse/HADOOP-9064
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: hadoop-9064.patch, hadoop-9064.patch, hadoop-9064.patch, 
 hadoop-9064.patch


 Post HADOOP-9049, FileSystems register with DelegationTokenRenewer (a 
 singleton) to renew tokens. 
 To avoid a bunch of defunct tokens clogging the NN, we should augment the API to 
 {{#removeRenewAction(boolean cancel)}} and cancel the token appropriately.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

2012-11-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506853#comment-13506853
 ] 

Karthik Kambatla commented on HADOOP-9107:
--

The things to fix look like:
# document that the method eats up {{InterruptedException}}
# break after setting interrupted to true in the catch block
# throw an appropriate exception in the {{else}} branch of {{if (call.error != 
null)}} (see the sketch below)
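
A rough sketch of fixes 2 and 3 against the wait loop quoted below; surfacing 
the interrupt as {{java.io.InterruptedIOException}} is an assumption, just one 
reasonable choice:

{code}
boolean interrupted = false;
synchronized (call) {
  while (!call.done) {
    try {
      call.wait();                     // wait for the result
    } catch (InterruptedException ie) {
      interrupted = true;
      break;                           // fix 2: stop waiting once interrupted
    }
  }
}
if (interrupted) {
  Thread.currentThread().interrupt();  // preserve the interrupt status
  // fix 3 (one option): surface the interrupt instead of swallowing it
  throw new InterruptedIOException("Interrupted waiting for the RPC response");
}
{code}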

 Hadoop IPC client eats InterruptedException and sets interrupt on the thread 
 which is not documented
 

 Key: HADOOP-9107
 URL: https://issues.apache.org/jira/browse/HADOOP-9107
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 1.1.0, 2.0.2-alpha
Reporter: Hari Shreedharan

 This code in Client.java looks fishy:
 {code}
 public Writable call(RPC.RpcKind rpcKind, Writable rpcRequest,
     ConnectionId remoteId) throws InterruptedException, IOException {
   Call call = new Call(rpcKind, rpcRequest);
   Connection connection = getConnection(remoteId, call);
   connection.sendParam(call);              // send the parameter
   boolean interrupted = false;
   synchronized (call) {
     while (!call.done) {
       try {
         call.wait();                       // wait for the result
       } catch (InterruptedException ie) {
         // save the fact that we were interrupted
         interrupted = true;
       }
     }
     if (interrupted) {
       // set the interrupt flag now that we are done waiting
       Thread.currentThread().interrupt();
     }
     if (call.error != null) {
       if (call.error instanceof RemoteException) {
         call.error.fillInStackTrace();
         throw call.error;
       } else { // local exception
         InetSocketAddress address = connection.getRemoteAddress();
         throw NetUtils.wrapException(address.getHostName(),
             address.getPort(),
             NetUtils.getHostname(),
             0,
             call.error);
       }
     } else {
       return call.getRpcResult();
     }
   }
 }
 {code}
 Blocking calls are expected to throw InterruptedException when the calling 
 thread is interrupted. Also, it seems like this method keeps waiting on the 
 call object even after it is interrupted. Currently, this method does not 
 throw an InterruptedException, nor is it documented that it sets the interrupt 
 flag on the calling thread. If interrupted, this method should still throw 
 InterruptedException; it should not matter whether the call was successful or not.
 This is a major issue for clients which do not call this directly, but call 
 HDFS client API methods to write to HDFS; these may be interrupted due to 
 timeouts, yet no InterruptedException is thrown. Any HDFS client call can 
 interrupt the thread, but this is not documented anywhere. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

2012-11-30 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507701#comment-13507701
 ] 

Karthik Kambatla commented on HADOOP-9107:
--

From HADOOP-6221:
bq. I think a good tactic would be rather than trying to make the old RPC stack 
interruptible, focus on making Avro something that you can interrupt, so that 
going forward you can interrupt client programs trying to talk to unresponsive 
servers.

Steve, is there a reason for not making the old RPC stack interruptible?

I feel we should do both - what Hari is proposing here, and what HADOOP-6221 
addresses.

 Hadoop IPC client eats InterruptedException and sets interrupt on the thread 
 which is not documented
 

 Key: HADOOP-9107
 URL: https://issues.apache.org/jira/browse/HADOOP-9107
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 1.1.0, 2.0.2-alpha
Reporter: Hari Shreedharan

 This code in Client.java looks fishy:
 {code}
 public Writable call(RPC.RpcKind rpcKind, Writable rpcRequest,
     ConnectionId remoteId) throws InterruptedException, IOException {
   Call call = new Call(rpcKind, rpcRequest);
   Connection connection = getConnection(remoteId, call);
   connection.sendParam(call);              // send the parameter
   boolean interrupted = false;
   synchronized (call) {
     while (!call.done) {
       try {
         call.wait();                       // wait for the result
       } catch (InterruptedException ie) {
         // save the fact that we were interrupted
         interrupted = true;
       }
     }
     if (interrupted) {
       // set the interrupt flag now that we are done waiting
       Thread.currentThread().interrupt();
     }
     if (call.error != null) {
       if (call.error instanceof RemoteException) {
         call.error.fillInStackTrace();
         throw call.error;
       } else { // local exception
         InetSocketAddress address = connection.getRemoteAddress();
         throw NetUtils.wrapException(address.getHostName(),
             address.getPort(),
             NetUtils.getHostname(),
             0,
             call.error);
       }
     } else {
       return call.getRpcResult();
     }
   }
 }
 {code}
 Blocking calls are expected to throw InterruptedException when the calling 
 thread is interrupted. Also, it seems like this method keeps waiting on the 
 call object even after it is interrupted. Currently, this method does not 
 throw an InterruptedException, nor is it documented that it sets the interrupt 
 flag on the calling thread. If interrupted, this method should still throw 
 InterruptedException; it should not matter whether the call was successful or not.
 This is a major issue for clients which do not call this directly, but call 
 HDFS client API methods to write to HDFS; these may be interrupted due to 
 timeouts, yet no InterruptedException is thrown. Any HDFS client call can 
 interrupt the thread, but this is not documented anywhere. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9113) o.a.h.fs.TestDelegationTokenRenewer is failing intermittently

2012-12-03 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9113:


 Summary: o.a.h.fs.TestDelegationTokenRenewer is failing 
intermittently
 Key: HADOOP-9113
 URL: https://issues.apache.org/jira/browse/HADOOP-9113
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 2.0.3-alpha


In the following code snippets, the test checks the token's renewCount to 
verify whether the FileSystem has been de-queued.

{code}
@Override
public long renew(Configuration conf) {
  if (renewCount == MAX_RENEWALS) {
    Thread.currentThread().interrupt();
  } else {
    renewCount++;
  }
  return renewCount;
}

testAddRemoveRenewAction() {
  // some test code
  assertTrue("Token not removed", (tfs.testToken.renewCount < MAX_RENEWALS));
}
{code}

On slower machines, the renewCount can actually reach MAX_RENEWALS resulting in 
a test failure.

renewCount should not be used to verify this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9113) o.a.h.fs.TestDelegationTokenRenewer is failing intermittently

2012-12-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9113:
-

Status: Patch Available  (was: Open)

 o.a.h.fs.TestDelegationTokenRenewer is failing intermittently
 -

 Key: HADOOP-9113
 URL: https://issues.apache.org/jira/browse/HADOOP-9113
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 2.0.3-alpha

 Attachments: hadoop-9113.patch


 In the following code snippets, the test checks the token's renewCount to 
 verify whether the FileSystem has been de-queued.
 {code}
 @Override
 public long renew(Configuration conf) {
   if (renewCount == MAX_RENEWALS) {
     Thread.currentThread().interrupt();
   } else {
     renewCount++;
   }
   return renewCount;
 }

 testAddRemoveRenewAction() {
   // some test code
   assertTrue("Token not removed", (tfs.testToken.renewCount < MAX_RENEWALS));
 }
 {code}
 On slower machines, the renewCount can actually reach MAX_RENEWALS resulting 
 in a test failure.
 renewCount should not be used to verify this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9113) o.a.h.fs.TestDelegationTokenRenewer is failing intermittently

2012-12-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9113:
-

Attachment: hadoop-9113.patch

Eli suggested I should annotate the method with @VisibleForTesting - updating 
the patch accordingly.
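
For reference, a minimal illustration of the annotation; the class and method 
here are placeholders, not the actual patch:

{code}
import com.google.common.annotations.VisibleForTesting;

public class RenewerExample {   // placeholder class, not the real renewer
  // Exposed only so tests can inspect internal state; not a public API.
  @VisibleForTesting
  int renewQueueLength() {
    return 0;                   // placeholder body
  }
}
{code}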

 o.a.h.fs.TestDelegationTokenRenewer is failing intermittently
 -

 Key: HADOOP-9113
 URL: https://issues.apache.org/jira/browse/HADOOP-9113
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 2.0.3-alpha

 Attachments: hadoop-9113.patch, hadoop-9113.patch


 In the following code snippets, the test checks the token's renewCount to 
 verify whether the FileSystem has been de-queued.
 {code}
 @Override
 public long renew(Configuration conf) {
   if (renewCount == MAX_RENEWALS) {
     Thread.currentThread().interrupt();
   } else {
     renewCount++;
   }
   return renewCount;
 }

 testAddRemoveRenewAction() {
   // some test code
   assertTrue("Token not removed", (tfs.testToken.renewCount < MAX_RENEWALS));
 }
 {code}
 On slower machines, the renewCount can actually reach MAX_RENEWALS resulting 
 in a test failure.
 renewCount should not be used to verify this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9113) o.a.h.fs.TestDelegationTokenRenewer is failing intermittently

2012-12-04 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13510218#comment-13510218
 ] 

Karthik Kambatla commented on HADOOP-9113:
--

Looks like Hadoop QA is down. Here is the output from a local run of test-patch.sh:

{color:green}+1 overall{color}.  

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version ) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.



 o.a.h.fs.TestDelegationTokenRenewer is failing intermittently
 -

 Key: HADOOP-9113
 URL: https://issues.apache.org/jira/browse/HADOOP-9113
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 2.0.3-alpha

 Attachments: hadoop-9113.patch, hadoop-9113.patch


 In the following code snippets, the test checks the token's renewCount to 
 verify whether the FileSystem has been de-queued.
 {code}
 @Override
 public long renew(Configuration conf) {
   if (renewCount == MAX_RENEWALS) {
     Thread.currentThread().interrupt();
   } else {
     renewCount++;
   }
   return renewCount;
 }

 testAddRemoveRenewAction() {
   // some test code
   assertTrue("Token not removed", (tfs.testToken.renewCount < MAX_RENEWALS));
 }
 {code}
 On slower machines, the renewCount can actually reach MAX_RENEWALS resulting 
 in a test failure.
 renewCount should not be used to verify this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13885862#comment-13885862
 ] 

Karthik Kambatla commented on HADOOP-10085:
---

Sorry for the delay in getting around to this. Looks like the patch doesn't 
apply anymore, and it was hard to see the changes to the tests themselves - 
mind refreshing the patch? I'll try to review it before it goes stale this time. 

Let me make sure I understand the fix here. Without the fix, adding a child 
service to a CompositeService while the CompositeService is initing all its 
child services leads to a ConcurrentModificationException (see the sketch 
below). The patch allows adding these services even during this time, but the 
newly added child service will never be inited if it is not already. So, the 
patch allows adding services but places the onus on the caller to make sure 
the child is in the correct state; otherwise, bad things can happen. 

I am not sure whether it is better to have deterministic behavior where we force 
users to add all the services before CompositeService#serviceInit is called, or 
to allow adding services and leave the onus on the users. 

It would have been nicer to have a check for the child being in at least the 
parent's state. Would it make sense to have the parent service enter INIT only 
after all its child services have been INITed? That way, if the parent is 
already in INIT, we can disallow adding a service in the UNINITED state. Also, 
would the current usage pattern of adding services followed by a call to 
super.serviceInit() remain valid? 
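
For context, a stand-alone sketch (not the CompositeService code) of why the 
add-during-init pattern fails:

{code}
import java.util.ArrayList;
import java.util.List;

public class CmeDemo {
  public static void main(String[] args) {
    List<String> services = new ArrayList<>();
    services.add("child-1");
    for (String s : services) {   // the parent iterating its children
      services.add("child-2");    // an addService() during that iteration
    }                             // throws ConcurrentModificationException
  }
}
{code}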

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887886#comment-13887886
 ] 

Karthik Kambatla commented on HADOOP-10085:
---

Reading through the issues addressed by YARN-117 and the related discussions, I 
now see the reason behind the current approach of the parent service entering a 
state before its child services. I agree we'll probably have to add transient 
states like INITING, STARTING, etc. to go the other way, and that it is too 
large a change to just bring in.

I think there is merit to getting this patch in. Let me take a closer look at 
the tests.

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887911#comment-13887911
 ] 

Karthik Kambatla commented on HADOOP-10085:
---

Really like the use of AddBlockingService.

Comments:
# Is there a need for the static method AddBlockingService#addChildService()? 
Why not just call parent.addService? 
# Nit: Would be nice to have the tests in an order - adding (Uninited, Inited, 
Started, Stopped) children to Uninited parent, Inited parent etc. - 16 tests in 
all. Then, the test serves as a rubric for someone to understand the behavior.  
# Nit: Rename testAddSiblingInStart to testAddStartedSiblingInStart
# Nit: Rename testAddSiblingInStop to testAddStartedSiblingInStop
# Nit: Rename testAddSiblingInInit to testAddInitedSiblingInInit

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-01-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13887951#comment-13887951
 ] 

Karthik Kambatla commented on HADOOP-10085:
---

bq. Nit: Would be nice to have the tests in an order - adding (Uninited, 
Inited, Started, Stopped) children to Uninited parent, Inited parent etc. - 16 
tests in all. Then, the test serves as a rubric for someone to understand the 
behavior.

Given we have already spent enough time on this, we can maybe do this in a 
follow-up JIRA - filed HADOOP-10321 for the same.

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-02-02 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889006#comment-13889006
 ] 

Karthik Kambatla commented on HADOOP-10085:
---

bq. I don't agree with adding new states as (a) it adds a lot more complexity 
and is generally irrelevant. (b) breaks compatibility of an API that is already 
in use in external applications. You are free to file a JIRA on the topic, but 
I won't support it.
As I already said in my previous comment, I agree with your assessment here 
completely. So, yeah, I was/am not planning to file a JIRA. The JIRA I was 
filing was to add more tests - adding a child in every state to a parent in 
every state. 

The latest patch looks good to me. +1. Committing this.

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch, HADOOP-10085-005.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-02-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10085:
--

   Resolution: Fixed
Fix Version/s: (was: 2.3.0)
   2.4.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Steve for the extensive investigation of various options and the fix. 

Just committed this to trunk and branch-2. 

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.4.0

 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch, HADOOP-10085-005.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10085) CompositeService should allow adding services while being inited

2014-02-02 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889048#comment-13889048
 ] 

Karthik Kambatla commented on HADOOP-10085:
---

I don't necessarily see this as a blocker for 2.3. If you think it is, please 
feel free to commit this to branch-2.3 or let me know and I can. 

 CompositeService should allow adding services while being inited
 

 Key: HADOOP-10085
 URL: https://issues.apache.org/jira/browse/HADOOP-10085
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.4.0

 Attachments: HADOOP-10085-002.patch, HADOOP-10085-003.patch, 
 HADOOP-10085-004.patch, HADOOP-10085-005.patch


 We can add services to a CompositeService. However, if we do that while 
 initing the CompositeService, it leads to a ConcurrentModificationException.
 It would be nice to allow adding services even during the init of 
 CompositeService.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2014-03-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-8649.
--

Resolution: Won't Fix

It has been close to 2 years since any activity. Closing this as Won't Fix 
due to inactivity. It can be re-opened, or a new issue filed, if needed. 

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 branch1-HADOOP-8649.patch, trunk-HADOOP-8649.patch, trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum filter.
 By using a composite filter instead, as sketched below, we limit the parsing to one pass.
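
 A sketch of the composite-filter idea (not the attached patch): wrap the 
 caller's filter and the checksum filter so one listing pass applies both.

 {code}
 import org.apache.hadoop.fs.ChecksumFileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;

 public class CompositeFilterExample {
   // Accept a path only if it is not a checksum file and the user filter
   // accepts it; both checks happen in a single pass over the listing.
   static PathFilter compose(final PathFilter userFilter) {
     return new PathFilter() {
       @Override
       public boolean accept(Path path) {
         return !ChecksumFileSystem.isChecksumFile(path)
             && userFilter.accept(path);
       }
     };
   }
 }
 {code}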



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10423) Clarify compatibility policy document for combination of new client and old server.

2014-03-24 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13945842#comment-13945842
 ] 

Karthik Kambatla commented on HADOOP-10423:
---

Looks good. +1. Will commit this later today. Thanks for drafting it, Chris. 

 Clarify compatibility policy document for combination of new client and old 
 server.
 ---

 Key: HADOOP-10423
 URL: https://issues.apache.org/jira/browse/HADOOP-10423
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-10423.1.patch


 As discussed on the dev mailing lists and MAPREDUCE-4052, we need to update 
 the text of the compatibility policy to discuss a new client combined with an 
 old server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10423) Clarify compatibility policy document for combination of new client and old server.

2014-03-24 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10423:
--

   Resolution: Fixed
Fix Version/s: 2.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Chris. Just committed this to trunk, branch-2, and branch-2.4.

 Clarify compatibility policy document for combination of new client and old 
 server.
 ---

 Key: HADOOP-10423
 URL: https://issues.apache.org/jira/browse/HADOOP-10423
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.3.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10423.1.patch


 As discussed on the dev mailing lists and MAPREDUCE-4052, we need to update 
 the text of the compatibility policy to discuss a new client combined with an 
 old server.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10609) .gitignore should ignore .orig and .rej files

2014-05-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10609:
--

Status: Patch Available  (was: Open)

 .gitignore should ignore .orig and .rej files
 -

 Key: HADOOP-10609
 URL: https://issues.apache.org/jira/browse/HADOOP-10609
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: hadoop-10609.patch


 .gitignore file should ignore .orig and .rej files
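
 The change itself is presumably just two patterns in the .gitignore file (the 
 attached patch is not reproduced here):

 {noformat}
 *.orig
 *.rej
 {noformat}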



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10609) .gitignore should ignore .orig and .rej files

2014-05-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved YARN-2058 to HADOOP-10609:
-

 Target Version/s: 2.5.0  (was: 2.5.0)
Affects Version/s: (was: 2.4.0)
   2.4.0
  Key: HADOOP-10609  (was: YARN-2058)
  Project: Hadoop Common  (was: Hadoop YARN)

 .gitignore should ignore .orig and .rej files
 -

 Key: HADOOP-10609
 URL: https://issues.apache.org/jira/browse/HADOOP-10609
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla

 .gitignore file should ignore .orig and .rej files



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10609) .gitignore should ignore .orig and .rej files

2014-05-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10609:
--

Attachment: hadoop-10609.patch

 .gitignore should ignore .orig and .rej files
 -

 Key: HADOOP-10609
 URL: https://issues.apache.org/jira/browse/HADOOP-10609
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: hadoop-10609.patch


 .gitignore file should ignore .orig and .rej files



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2014-05-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10584:
--

Attachment: hadoop-10584-prelim.patch

Preliminary patch that articulates what I have in mind.

 ActiveStandbyElector goes down if ZK quorum become unavailable
 --

 Key: HADOOP-10584
 URL: https://issues.apache.org/jira/browse/HADOOP-10584
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Critical
 Attachments: hadoop-10584-prelim.patch


 ActiveStandbyElector retries operations for a few times. If the ZK quorum 
 itself is down, it goes down and the daemons will have to be brought up 
 again. 
 Instead, it should log the fact that it is unable to talk to ZK, call 
 becomeStandby on its client, and continue to attempt connecting to ZK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2014-05-15 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13993080#comment-13993080
 ] 

Karthik Kambatla commented on HADOOP-10584:
---

More background: We saw this when ZK became inaccessible for a few minutes. 
ZKFC went down and the corresponding master was transitioned to Standby. 

bq. You mean instead of calling fatalError() like it's doing now?
Yes. Or, we should have two retry modes: the retries we have today, followed by 
a call to becomeStandby, within an outer retry-forever loop that sleeps for a 
shorter time between inner loops.
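
A hypothetical sketch of those two retry modes; all names and values are 
assumptions, not the actual ActiveStandbyElector code:

{code}
public class TwoLevelRetrySketch {
  static final int NUM_RETRIES = 3;        // existing bounded inner retries
  static final long INNER_SLEEP_MS = 1000;
  static final long OUTER_SLEEP_MS = 300;  // shorter sleep between inner loops

  void runElectionLoop() throws InterruptedException {
    while (true) {                         // outer loop: retry forever
      for (int i = 0; i < NUM_RETRIES; i++) {
        if (tryZkOperation()) {
          return;                          // connected; resume normal operation
        }
        Thread.sleep(INNER_SLEEP_MS);
      }
      becomeStandby();                     // give up active state, keep trying
      Thread.sleep(OUTER_SLEEP_MS);
    }
  }

  boolean tryZkOperation() { return false; }  // placeholder
  void becomeStandby() { }                    // placeholder
}
{code}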



 ActiveStandbyElector goes down if ZK quorum become unavailable
 --

 Key: HADOOP-10584
 URL: https://issues.apache.org/jira/browse/HADOOP-10584
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Critical

 ActiveStandbyElector retries operations for a few times. If the ZK quorum 
 itself is down, it goes down and the daemons will have to be brought up 
 again. 
 Instead, it should log the fact that it is unable to talk to ZK, call 
 becomeStandby on its client, and continue to attempt connecting to ZK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10609) .gitignore should ignore .orig and .rej files

2014-05-16 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10609:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review, Sandy. Just committed this to trunk and branch-2.

 .gitignore should ignore .orig and .rej files
 -

 Key: HADOOP-10609
 URL: https://issues.apache.org/jira/browse/HADOOP-10609
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 2.5.0

 Attachments: hadoop-10609.patch


 .gitignore file should ignore .orig and .rej files



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10686) Writables are not configured by framework

2014-06-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved MAPREDUCE-5914 to HADOOP-10686:
--

Affects Version/s: (was: 2.4.0)
   2.4.0
  Key: HADOOP-10686  (was: MAPREDUCE-5914)
  Project: Hadoop Common  (was: Hadoop Map/Reduce)

 Writables are not configured by framework
 -

 Key: HADOOP-10686
 URL: https://issues.apache.org/jira/browse/HADOOP-10686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
 Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
 MAPREDUCE-5914.2.patch


 Seeing the following exception:
 {noformat}
 java.lang.Exception: java.lang.NullPointerException
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
   at 
 org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
   at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It turns out that WritableComparator does not configure Writable objects 
 :https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java.
  This is during the sort phase for an MR job.
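
 One common remedy, sketched below under the assumption that the key class 
 implements Configurable, is to create instances through 
 ReflectionUtils.newInstance, which calls setConf(); whether the attached 
 patches take exactly this route is not shown here.

 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.util.ReflectionUtils;

 public class ConfiguredWritableExample {
   // Creating the Writable this way configures it when the class implements
   // Configurable, so readFields() does not see a null Configuration.
   static Writable newKey(Class<? extends Writable> keyClass, Configuration conf) {
     return ReflectionUtils.newInstance(keyClass, conf);
   }
 }
 {code}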



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10686) Writables are not configured by framework

2014-06-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028894#comment-14028894
 ] 

Karthik Kambatla commented on HADOOP-10686:
---

Moved to Hadoop Common since the main changes are in the common code. 

 Writables are not configured by framework
 -

 Key: HADOOP-10686
 URL: https://issues.apache.org/jira/browse/HADOOP-10686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
 Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
 MAPREDUCE-5914.2.patch


 Seeing the following exception:
 {noformat}
 java.lang.Exception: java.lang.NullPointerException
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
   at 
 org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
   at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It turns out that WritableComparator does not configure Writable objects 
 :https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java.
  This is during the sort phase for an MR job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10686) Writables are not always configured

2014-06-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10686:
--

Summary: Writables are not always configured  (was: Writables are not 
configured by framework)

 Writables are not always configured
 ---

 Key: HADOOP-10686
 URL: https://issues.apache.org/jira/browse/HADOOP-10686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
 Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
 MAPREDUCE-5914.2.patch


 Seeing the following exception:
 {noformat}
 java.lang.Exception: java.lang.NullPointerException
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
   at 
 org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
   at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It turns out that WritableComparator does not configure Writable objects 
 :https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java.
  This is during the sort phase for an MR job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10686) Writables are not always configured

2014-06-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10686:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Abe. Just committed this to trunk and branch-2. 

 Writables are not always configured
 ---

 Key: HADOOP-10686
 URL: https://issues.apache.org/jira/browse/HADOOP-10686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
 MAPREDUCE-5914.2.patch


 Seeing the following exception:
 {noformat}
 java.lang.Exception: java.lang.NullPointerException
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
   at 
 org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
   at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It turns out that WritableComparator does not configure Writable objects 
 :https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java.
  This is during the sort phase for an MR job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2014-06-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039564#comment-14039564
 ] 

Karthik Kambatla commented on HADOOP-10584:
---

Logs from when we saw this error:

{noformat}
-yy-xx 06:01:30,039 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 3335ms for sessionid 
0x2459abcbfd0027f, closing socket connection and attempting reconnect
-yy-xx 06:01:30,144 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
-yy-xx 06:01:30,233 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server MASKED-1/10.1.128.51:2181. Will not attempt to 
authenticate using SASL (unknown error)
-yy-xx 06:01:30,233 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to MASKED-1/10.1.128.51:2181, initiating session
-yy-xx 06:01:31,901 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 1667ms for sessionid 
0x2459abcbfd0027f, closing socket connection and attempting reconnect
-yy-xx 06:01:32,405 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server MASKED-2/10.1.128.48:2181. Will not attempt to 
authenticate using SASL (unknown error)
-yy-xx 06:01:32,406 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to MASKED-2/10.1.128.48:2181, initiating session
-yy-xx 06:01:32,409 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server MASKED-2/10.1.128.48:2181, sessionid = 
0x2459abcbfd0027f, negotiated timeout = 5000
-yy-xx 06:01:32,412 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
connected.
-yy-xx 06:01:35,742 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 3334ms for sessionid 
0x2459abcbfd0027f, closing socket connection and attempting reconnect
-yy-xx 06:01:35,850 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
-yy-xx 06:01:35,966 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server MASKED-3/10.1.128.49:2181. Will not attempt to 
authenticate using SASL (unknown error)
-yy-xx 06:01:35,967 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to MASKED-3/10.1.128.49:2181, initiating session
-yy-xx 06:01:35,968 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server MASKED-3/10.1.128.49:2181, sessionid = 
0x2459abcbfd0027f, negotiated timeout = 5000
-yy-xx 06:01:35,972 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
connected.
-yy-xx 06:01:39,303 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 3335ms for sessionid 
0x2459abcbfd0027f, closing socket connection and attempting reconnect
-yy-xx 06:01:39,411 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session 
disconnected. Entering neutral mode...
-yy-xx 06:01:39,904 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server MASKED-1/10.1.128.51:2181. Will not attempt to 
authenticate using SASL (unknown error)
-yy-xx 06:01:39,904 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to MASKED-1/10.1.128.51:2181, initiating session
-yy-xx 06:01:41,572 INFO org.apache.zookeeper.ClientCnxn: Client session 
timed out, have not heard from server in 1668ms for sessionid 
0x2459abcbfd0027f, closing socket connection and attempting reconnect
-yy-xx 06:01:41,678 FATAL org.apache.hadoop.ha.ActiveStandbyElector: 
Received stat error from Zookeeper. code:CONNECTIONLOSS. Not retrying further 
znode monitoring connection errors.
-yy-xx 06:01:41,926 INFO org.apache.zookeeper.ZooKeeper: Session: 
0x2459abcbfd0027f closed
-yy-xx 06:01:41,927 FATAL org.apache.hadoop.ha.ZKFailoverController: Fatal 
error occurred:Received stat error from Zookeeper. code:CONNECTIONLOSS. Not 
retrying further znode monitoring connection errors.
-yy-xx 06:01:41,927 WARN org.apache.hadoop.ha.ActiveStandbyElector: 
Ignoring stale result from old client with sessionId 0x2459abcbfd0027f
-yy-xx 06:01:41,927 INFO org.apache.hadoop.ipc.Server: Stopping server on 
8018
-yy-xx 06:01:41,927 INFO org.apache.zookeeper.ClientCnxn: EventThread shut 
down
-yy-xx 06:01:41,928 INFO org.apache.hadoop.ha.ActiveStandbyElector: 
Yielding from election
-yy-xx 06:01:41,928 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
listener on 8018
-yy-xx 06:01:41,928 INFO org.apache.hadoop.ha.HealthMonitor: Stopping 
HealthMonitor thread
-yy-xx 06:01:41,928 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server 
Responder
{noformat}

 ActiveStandbyElector goes down if ZK quorum become unavailable
 --

 Key: HADOOP-10584
 URL: https://issues.apache.org/jira/browse/HADOOP-10584

[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-06-27 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9361:
-

Priority: Blocker  (was: Major)

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch, HADOOP-9361-012.patch, 
 HADOOP-9361-013.patch, HADOOP-9361-014.patch, HADOOP-9361-015.patch, 
 HADOOP-9361-016.patch, HADOOP-9361.awang-addendum.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough - while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency - test 
 methods can downgrade to skipped test cases if a feature is missing (see the 
 example below).
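
 For item 3, a hypothetical example of such a per-filesystem declaration; the 
 key names are assumptions for illustration, not a defined format:

 {noformat}
 fs.contract.supports-atomic-rename = true
 fs.contract.supports-atomic-directory-delete = true
 fs.contract.is-case-sensitive = true
 {noformat}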



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10506) LimitedPrivate annotation not useful

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10506:
--

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 LimitedPrivate annotation not useful
 

 Key: HADOOP-10506
 URL: https://issues.apache.org/jira/browse/HADOOP-10506
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Thomas Graves

 The LimitedPrivate annotation isn't useful. The intention seems to have been 
 that those interfaces were only intended for use by the named components. But 
 in many cases those components are separate from core Hadoop. This means any 
 changes to them will break backwards compatibility with those components, 
 which breaks the new compatibility rules in Hadoop.  
 Note that many of the annotations are also not marked properly, or have fallen 
 out of date. I see Public interfaces that use LimitedPrivate classes in the 
 API (TokenCache using Credentials is an example). 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10661) Ineffective user/passsword check in FTPFileSystem#initialize()

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10661:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Ineffective user/passsword check in FTPFileSystem#initialize()
 --

 Key: HADOOP-10661
 URL: https://issues.apache.org/jira/browse/HADOOP-10661
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-10661.patch


 Here is related code:
 {code}
   userAndPassword = (conf.get("fs.ftp.user." + host, null) + ":" + conf
       .get("fs.ftp.password." + host, null));
   if (userAndPassword == null) {
     throw new IOException("Invalid user/passsword specified");
   }
 {code}
 The intention seems to be checking that username / password should not be 
 null.
 But because of the colon concatenation, the resulting string is never null, 
 so the above check is not effective.
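 One possible fix (a sketch, not the attached patch) is to validate each 
 value before joining them:
 {code}
 // Hypothetical fix: check each config value separately so a missing
 // user or password is actually detected before concatenation.
 String user = conf.get("fs.ftp.user." + host);
 String password = conf.get("fs.ftp.password." + host);
 if (user == null || password == null) {
   throw new IOException("Invalid user/password specified");
 }
 String userAndPassword = user + ":" + password;
 {code}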



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10392:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.patch


 There are some methods calling Path.makeQualified(FileSystem), which causes 
 a javac warning.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9852) UGI login user keytab and principal should not be static

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9852:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 UGI login user keytab and principal should not be static
 

 Key: HADOOP-9852
 URL: https://issues.apache.org/jira/browse/HADOOP-9852
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9852.patch


 The static keytab and principal for the login user are problematic.  The login 
 conf explicitly references these statics.  As a result, 
 loginUserFromKeytabAndReturnUGI is unnecessarily synchronized on the class to 
 swap out the login user's keytab and principal, login, then restore the 
 keytab/principal.  This method's synchronization blocks further de-synching 
 of other methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10507) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10507:
--

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HADOOP-10507
 URL: https://issues.apache.org/jira/browse/HADOOP-10507
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor
 Attachments: HDFS-6205-0001.patch, HDFS-6205.patch


 If users don't specify the perm of an ACL when using the FsShell's setfacl 
 command, a fatal internal error (ArrayIndexOutOfBoundsException) is thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in aclSpec : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x acl_spec} 
 path]|[--set acl_spec path]
 [root@hdfs-nfs ~]# 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10065) Fix namenode format documentation

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10065:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Fix namenode format documentation
 -

 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10065.2.patch, HADOOP-10065.3.patch, 
 HADOOP-10065.patch


 The current namenode format documentation
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode
 does not list the various options -format can be called with, or their use.
 {code}
 [-format [-clusterid <cid>] [-force] [-nonInteractive]]
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10584:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 ActiveStandbyElector goes down if ZK quorum become unavailable
 --

 Key: HADOOP-10584
 URL: https://issues.apache.org/jira/browse/HADOOP-10584
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Critical
 Attachments: hadoop-10584-prelim.patch


 ActiveStandbyElector retries operations a few times. If the ZK quorum 
 itself is down, the elector goes down too and the daemons will have to be 
 brought up again. 
 Instead, it should log the fact that it is unable to talk to ZK, call 
 becomeStandby on its client, and continue to attempt connecting to ZK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10610) Upgrade S3n s3.fs.buffer.dir to support multi directories

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10610:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Upgrade S3n s3.fs.buffer.dir to support multi directories
 -

 Key: HADOOP-10610
 URL: https://issues.apache.org/jira/browse/HADOOP-10610
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.4.0
Reporter: Ted Malaska
Assignee: Ted Malaska
Priority: Minor
 Attachments: HADOOP-10610.patch, HADOOP_10610-2.patch, HDFS-6383.patch


 s3.fs.buffer.dir defines the tmp folder where files will be written to before 
 getting sent to S3.  Right now this is limited to a single folder, which 
 causes two major issues.
 1. You need a drive with enough space to store all the tmp files at once
 2. You are limited to the IO speeds of a single drive
 This solution will resolve both and has been tested to increase the S3 write 
 speed by 2.5x with 10 mappers on hs1.
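 One plausible approach (a sketch, not necessarily the attached patch) is to 
 reuse LocalDirAllocator, which already round-robins across a comma-separated 
 list of directories:
 {code}
 // If fs.s3.buffer.dir held several comma-separated paths, the allocator
 // would spread tmp files across them and check free space per volume.
 LocalDirAllocator allocator = new LocalDirAllocator("fs.s3.buffer.dir");
 File tmp = allocator.createTmpFileForWrite("output-",
     LocalDirAllocator.SIZE_UNKNOWN, conf);
 {code}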



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9438:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, 
 HADOOP-9438.20130521.1.patch, HADOOP-9438.patch, HADOOP-9438.patch


 According to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already 
 exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.
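 A contract-style test for the promised behavior might look like this 
 (a sketch; assumes JUnit4 and org.apache.hadoop.fs.FileAlreadyExistsException):
 {code}
 // The second mkdir with createParent=false should fail per the javadoc.
 @Test(expected = FileAlreadyExistsException.class)
 public void testMkdirOfExistingDir() throws Exception {
   FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
   Path p = new Path("/tmp/bobby.12345");
   lfc.mkdir(p, new FsPermission((short) 0755), false);
   lfc.mkdir(p, new FsPermission((short) 0755), false); // currently returns silently
 }
 {code}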



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10119) Document hadoop archive -p option

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10119:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Document hadoop archive -p option
 -

 Key: HADOOP-10119
 URL: https://issues.apache.org/jira/browse/HADOOP-10119
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10119.patch


 The hadoop archive -p (relative parent path) option is now required, but the 
 option is not documented.
 See 
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#archive
  .
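 For reference, an invocation showing the -p option (paths illustrative):
 {noformat}
 hadoop archive -archiveName foo.har -p /user/hadoop dir1 dir2 /user/zoo
 {noformat}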



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9238) FsShell -put from stdin auto-creates paths

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9238:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 FsShell -put from stdin auto-creates paths
 --

 Key: HADOOP-9238
 URL: https://issues.apache.org/jira/browse/HADOOP-9238
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
 Attachments: HADOOP-9238.patch, HADOOP-9238.patch, HADOOP-9238.patch


 FsShell put is no longer supposed to auto-create paths.  There's an 
 inconsistency where a put from stdin will still auto-create paths.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10744) LZ4 Compression fails to recognize PowerPC Little Endian Architecture

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10744:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 LZ4 Compression fails to recognize PowerPC Little Endian Architecture
 -

 Key: HADOOP-10744
 URL: https://issues.apache.org/jira/browse/HADOOP-10744
 Project: Hadoop Common
  Issue Type: Test
  Components: io, native
Affects Versions: 2.2.0, 2.3.0, 2.4.0
 Environment: PowerPC Little Endian (ppc64le)
Reporter: Ayappan
 Attachments: HADOOP-10744.patch


 Lz4 compression fails to identify the PowerPC Little Endian architecture. It 
 recognizes it as Big Endian, and several testcases 
 (TestCompressorDecompressor, TestCodec, TestLz4CompressorDecompressor) fail 
 due to this.
 {noformat}
 Running org.apache.hadoop.io.compress.TestCompressorDecompressor
 Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.435 sec 
 <<< FAILURE! - in org.apache.hadoop.io.compress.TestCompressorDecompressor
 testCompressorDecompressor(org.apache.hadoop.io.compress.TestCompressorDecompressor)
   Time elapsed: 0.308 sec  <<< FAILURE!
 org.junit.internal.ArrayComparisonFailure: 
 org.apache.hadoop.io.compress.lz4.Lz4Compressor_org.apache.hadoop.io.compress.lz4.Lz4Decompressor-
   byte arrays not equals error !!!: arrays first differed at element [1428]; 
 expected:<4> but was:<10>
 at 
 org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
 at org.junit.Assert.internalArrayEquals(Assert.java:473)
 at org.junit.Assert.assertArrayEquals(Assert.java:294)
 at 
 org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:325)
 at 
 org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
 at 
 org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:58)
 ...
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10312) Shell.ExitCodeException to have more useful toString

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10312:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Shell.ExitCodeException to have more useful toString
 

 Key: HADOOP-10312
 URL: https://issues.apache.org/jira/browse/HADOOP-10312
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: HADOOP-10312-001.patch, HADOOP-10312-002.patch


 Shell's ExitCodeException doesn't include the exit code in the toString 
 value, so it isn't that useful in diagnosing container start failures in YARN.
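 A sketch of the kind of toString this asks for (not necessarily the 
 attached patch):
 {code}
 @Override
 public String toString() {
   // Include the exit code so container-launch failures are diagnosable.
   return "ExitCodeException exitCode=" + getExitCode() + ": " + getMessage();
 }
 {code}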



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9747:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Reduce unnecessary UGI synchronization
 --

 Key: HADOOP-9747
 URL: https://issues.apache.org/jira/browse/HADOOP-9747
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
 UGI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10692) Update metrics2 document and examples to be case sensitive

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10692:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Update metrics2 document and examples to be case sensitive
 --

 Key: HADOOP-10692
 URL: https://issues.apache.org/jira/browse/HADOOP-10692
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, metrics
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10692.2.patch, HADOOP-10692.patch


 After HADOOP-10468, the prefix of the properties in metrics2 became case 
 sensitive. We should also update the package-info and 
 hadoop-metrics2.properties examples to be case sensitive.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9317) User cannot specify a kerberos keytab for commands

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9317:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 User cannot specify a kerberos keytab for commands
 --

 Key: HADOOP-9317
 URL: https://issues.apache.org/jira/browse/HADOOP-9317
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-9317.branch-23.patch, 
 HADOOP-9317.branch-23.patch, HADOOP-9317.patch, HADOOP-9317.patch, 
 HADOOP-9317.patch, HADOOP-9317.patch


 {{UserGroupInformation}} only allows kerberos users to be logged in via the 
 ticket cache when running hadoop commands.  {{UGI}} allows a keytab to be 
 used, but it's only exposed programmatically.  This forces keytab-based users 
 running hadoop commands to periodically issue a kinit from the keytab.  A 
 race condition exists during the kinit when the ticket cache is deleted and 
 re-created.  Hadoop commands will fail when the ticket cache does not 
 momentarily exist.
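 For reference, the programmatic path mentioned above looks like this 
 (principal and keytab path are illustrative):
 {code}
 // Log in from a keytab via UGI, avoiding the external kinit and its
 // ticket-cache race.
 UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
     "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");
 {code}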



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10048:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 LocalDirAllocator should avoid holding locks while accessing the filesystem
 ---

 Key: HADOOP-10048
 URL: https://issues.apache.org/jira/browse/HADOOP-10048
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.3.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: HADOOP-10048.patch


 As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
 bottleneck for multithreaded setups like the ShuffleHandler.  We should 
 consider moving to a lockless design or minimizing the critical sections to a 
 very small amount of time that does not involve I/O operations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10334) make user home directory customizable

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10334:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 make user home directory customizable
 -

 Key: HADOOP-10334
 URL: https://issues.apache.org/jira/browse/HADOOP-10334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Kevin Odell
Priority: Minor

 The path is currently hardcoded:
 {code}
 public Path getHomeDirectory() {
   return makeQualified(new Path("/user/" + dfs.ugi.getShortUserName()));
 }
 {code}
 It would be nice to have that as a customizable value.  
 Thank you
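 A configurable variant might look like this (a sketch; the config key name 
 is illustrative, not an existing property):
 {code}
 // Read the home-directory prefix from the Configuration instead of
 // hardcoding "/user".
 public Path getHomeDirectory() {
   String prefix = getConf().get("dfs.user.home.dir.prefix", "/user");
   return makeQualified(new Path(prefix + "/" + dfs.ugi.getShortUserName()));
 }
 {code}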



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9856) Avoid Krb5LoginModule.logout issue

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9856:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Avoid Krb5LoginModule.logout issue
 --

 Key: HADOOP-9856
 URL: https://issues.apache.org/jira/browse/HADOOP-9856
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9856.patch


 The kerberos login module's logout method arguably has a bug.  
 {{Subject#getPrivateCredentials()}} returns a synchronized set.  Iterating 
 the set requires explicitly locking the set.  The 
 {{Krb5LoginModule#logout()}} is iterating and modifying the set w/o a lock.  
 This may lead to a {{ConcurrentModificationException}}, which is what led to 
 {{UGI.getCurrentUser()}} being unnecessarily synchronized.
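 For reference, the locking the synchronized-set contract requires (sketch):
 {code}
 Set<Object> creds = subject.getPrivateCredentials();
 synchronized (creds) {
   // Iterate and modify only via the iterator, while holding the set's lock.
   for (Iterator<Object> it = creds.iterator(); it.hasNext();) {
     Object cred = it.next();
     // inspect cred; call it.remove() if it must go
   }
 }
 {code}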



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10623) Provide a utility to be able inspect the config as seen by a hadoop client / daemon

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10623:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Provide a utility to be able inspect the config as seen by a hadoop client / 
 daemon 
 

 Key: HADOOP-10623
 URL: https://issues.apache.org/jira/browse/HADOOP-10623
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HADOOP-10623.v01.patch, HADOOP-10623.v02.patch, 
 HADOOP-10623.v03.patch


 To ease debugging of config issues it is convenient to be able to generate a 
 config as seen by the job client or a hadoop daemon
 {noformat}
 ]$ hadoop org.apache.hadoop.util.ConfigTool -help 
 Usage: ConfigTool [ -xml | -json ] [ -loadDefaults ] [ resource1... ]
   if resource contains '/', load from local filesystem
   otherwise, load from the classpath
 Generic options supported are
 -conf <configuration file>     specify an application configuration file
 -D <property=value>            use value for given property
 -fs <local|namenode:port>      specify a namenode
 -jt <local|jobtracker:port>    specify a job tracker
 -files <comma separated list of files>    specify comma separated files to be 
 copied to the map reduce cluster
 -libjars <comma separated list of jars>    specify comma separated jar files 
 to include in the classpath.
 -archives <comma separated list of archives>    specify comma separated 
 archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
 {noformat}
 {noformat}
 $ hadoop org.apache.hadoop.util.ConfigTool -Dmy.test.conf=val mapred-site.xml 
 ./hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/core-site.xml | python 
 -mjson.tool
 {
   "properties": [
     {
       "isFinal": false,
       "key": "mapreduce.framework.name",
       "resource": "mapred-site.xml",
       "value": "yarn"
     },
     {
       "isFinal": false,
       "key": "mapreduce.client.genericoptionsparser.used",
       "resource": "programatically",
       "value": "true"
     },
     {
       "isFinal": false,
       "key": "my.test.conf",
       "resource": "from command line",
       "value": "val"
     },
     {
       "isFinal": false,
       "key": "from.file.key",
       "resource": 
 "hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/etc/hadoop/core-site.xml",
       "value": "from.file.val"
     },
     {
       "isFinal": false,
       "key": "mapreduce.shuffle.port",
       "resource": "mapred-site.xml",
       "value": "${my.mapreduce.shuffle.port}"
     }
   ]
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8808) Update FsShell documentation to mention deprecation of some of the commands, and mention alternatives

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8808:
-

Target Version/s: 2.6.0  (was: 2.5.0)

 Update FsShell documentation to mention deprecation of some of the commands, 
 and mention alternatives
 -

 Key: HADOOP-8808
 URL: https://issues.apache.org/jira/browse/HADOOP-8808
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, fs
Affects Versions: 2.2.0
Reporter: Hemanth Yamijala
Assignee: Akira AJISAKA
 Attachments: HADOOP-8808.2.patch, HADOOP-8808.patch


 In HADOOP-7286, we deprecated the following 3 commands: dus, lsr and rmr, in 
 favour of du -s, ls -R and rm -r respectively. The FsShell documentation 
 should be updated to mention these, so that users can start switching. Also, 
 there are places where we refer to the deprecated commands as alternatives. 
 This can be changed as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9629) Support Windows Azure Storage - Blob as a file system in Hadoop

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9629:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Support Windows Azure Storage - Blob as a file system in Hadoop
 ---

 Key: HADOOP-9629
 URL: https://issues.apache.org/jira/browse/HADOOP-9629
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Mostafa Elhemali
Assignee: Mike Liddell
 Fix For: 3.0.0

 Attachments: HADOOP-9629 - Azure Filesystem - Information for 
 developers.docx, HADOOP-9629 - Azure Filesystem - Information for 
 developers.pdf, HADOOP-9629.2.patch, HADOOP-9629.3.patch, HADOOP-9629.patch, 
 HADOOP-9629.trunk.1.patch, HADOOP-9629.trunk.2.patch, 
 HADOOP-9629.trunk.3.patch, HADOOP-9629.trunk.4.patch, 
 HADOOP-9629.trunk.5.patch


 h2. Description
 This JIRA incorporates adding a new file system implementation for accessing 
 Windows Azure Storage - Blob from within Hadoop, such as using blobs as input 
 to MR jobs or configuring MR jobs to put their output directly into blob 
 storage.
 h2. High level design
 At a high level, the code here extends the FileSystem class to provide an 
 implementation for accessing blob storage; the scheme wasb is used for 
 accessing it over HTTP, and wasbs for accessing over HTTPS. We use the URI 
 scheme: {code}wasb[s]://container@account/path/to/file{code} to address 
 individual blobs. We use the standard Azure Java SDK 
 (com.microsoft.windowsazure) to do most of the work. In order to map a 
 hierarchical file system over the flat name-value pair nature of blob 
 storage, we create a specially tagged blob named path/to/dir whenever we 
 create a directory called path/to/dir, then files under that are stored as 
 normal blobs path/to/dir/file. We have many metrics implemented for it using 
 the Metrics2 interface. Tests are implemented mostly using a mock 
 implementation for the Azure SDK functionality, with an option to test 
 against a real blob storage if configured (instructions provided inside in 
 README.txt).
 h2. Credits and history
 This has been ongoing work for a while, and the early version of this work 
 can be seen in HADOOP-8079. This JIRA is a significant revision of that and 
 we'll post the patch here for Hadoop trunk first, then post a patch for 
 branch-1 as well for backporting the functionality if accepted. Credit for 
 this work goes to the early team: [~minwei], [~davidlao], [~lengningliu] and 
 [~stojanovic] as well as multiple people who have taken over this work since 
 then (hope I don't forget anyone): [~dexterb], Johannes Klein, [~ivanmi], 
 Michael Rys, [~mostafae], [~brian_swan], [~mikelid], [~xifang], and 
 [~chuanliu].
 h2. Test
 Besides unit tests, we have used WASB as the default file system in our 
 service product. (HDFS is also used but not as default file system.) Various 
 different customer and test workloads have been run against clusters with 
 such configurations for quite some time. The current version reflects the 
 version of the code tested and used in our production environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10121) Fix javadoc spelling for HadoopArchives#writeTopLevelDirs

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10121:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Fix javadoc spelling for HadoopArchives#writeTopLevelDirs
 -

 Key: HADOOP-10121
 URL: https://issues.apache.org/jira/browse/HADOOP-10121
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10121.patch


 There's a misspelling at HadoopArchives.java. It should be fixed as follows: 
 {code}
 -  * @param parentPath the parent path that you wnat the archives
 +  * @param parentPath the parent path that you want the archives
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10615:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.
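 A straightforward fix (sketch) is try-with-resources, which closes the 
 stream on all exit paths, including exceptions:
 {code}
 try (FileInputStream in = new FileInputStream(args[0])) {
   // ... hash the stream contents as before ...
 }
 {code}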



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10059:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 RPC authentication and authorization metrics overflow to negative values on 
 busy clusters
 -

 Key: HADOOP-10059
 URL: https://issues.apache.org/jira/browse/HADOOP-10059
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 0.23.9, 2.2.0
Reporter: Jason Lowe
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch


 The RPC metrics for authorization and authentication successes can easily 
 overflow to negative values on a busy cluster that has been up for a long 
 time.  We should consider providing 64-bit values for these counters.
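 A sketch of the widening using metrics2 types (field name illustrative):
 {code}
 // MutableCounterLong is a 64-bit counter, so it will not wrap to negative
 // values on busy, long-lived daemons.
 @Metric("RPC authentication successes")
 MutableCounterLong rpcAuthenticationSuccesses;
 {code}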



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10525) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10525:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HADOOP-10525
 URL: https://issues.apache.org/jira/browse/HADOOP-10525
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10525.patch


 From [hadoop-user mailing 
 list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3C534FACD3.8040907%40corp.badoo.com%3E].
 {code}
 # 30-day backup
 # log4j.appender.DRFA.MaxBackupIndex=30
 {code}
 In {{log4j.properties}}, the above lines should be removed because 
 DailyRollingFileAppender (DRFA) doesn't support the MaxBackupIndex config.
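 For reference, the DRFA settings that do take effect (rotation is driven 
 solely by DatePattern; lines as they appear in Hadoop's log4j.properties):
 {code}
 log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
 log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
 log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
 {code}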



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9749) Remove synchronization for UGI.getCurrentUser

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9749:
-

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Remove synchronization for UGI.getCurrentUser
 -

 Key: HADOOP-9749
 URL: https://issues.apache.org/jira/browse/HADOOP-9749
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 HADOOP-7854 added synchronization to {{getCurrentUser}} due to 
 {{ConcurrentModificationExceptions}}.  This degrades NN call handler 
 performance.
 The problem was not well understood at the time, but it's caused by a 
 collision between relogin and {{getCurrentUser}} due to a bug in 
 {{Krb5LoginModule}}.  Avoiding the collision will allow removal of the 
 synchronization.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10729) Add tests for PB RPC in case version mismatch of client and server

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10729:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Add tests for PB RPC in case version mismatch of client and server
 --

 Key: HADOOP-10729
 URL: https://issues.apache.org/jira/browse/HADOOP-10729
 Project: Hadoop Common
  Issue Type: Test
  Components: ipc
Affects Versions: 2.4.0
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-10729.patch


 We have ProtocolInfo specified in protocol interfaces with version info, but 
 we don't have unit tests to verify if/how it works. We should add tests to 
 verify that this annotation works as expected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10387) Misspelling of threshold in log4j.properties for tests

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10387:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Misspelling of threshold in log4j.properties for tests
 --

 Key: HADOOP-10387
 URL: https://issues.apache.org/jira/browse/HADOOP-10387
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, test
Affects Versions: 2.3.0
Reporter: Kenji Kikushima
Priority: Minor
 Attachments: HADOOP-10387-2.patch, HADOOP-10387.patch


 The log4j.properties file for tests contains the misspelling 
 log4j.threshhold. We should use log4j.threshold instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10231) Add some components in Native Libraries document

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10231:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Add some components in Native Libraries document
 

 Key: HADOOP-10231
 URL: https://issues.apache.org/jira/browse/HADOOP-10231
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10231.patch


 The only components documented in Native Libraries are zlib and gzip.
 The native libraries now include other components, such as other compression 
 formats (lz4, snappy), libhdfs and the fuse module. These components should 
 be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-6310) bash completion doesn't quite work.

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-6310:
-

Target Version/s: 2.6.0  (was: 2.5.0)

 bash completion doesn't quite work.
 ---

 Key: HADOOP-6310
 URL: https://issues.apache.org/jira/browse/HADOOP-6310
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.18.3
Reporter: Paul Huff
Assignee: Sean Mackrory
Priority: Trivial
 Attachments: HADOOP-6310.patch, HADOOP-6310.patch.1


 The bash completion script in src/contrib/bash-tab-completion/hadoop.sh 
 doesn't quite work the way you'd expect it to against 18.3 (and I assume 
 anything afterwards, since the author claimed compatibility with 16-20).
 It doesn't complete things like you'd expect against HDFS, and it doesn't 
 have job-id completion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10591) Compression codecs must used pooled direct buffers or deallocate direct buffers when stream is closed

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10591:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 Compression codecs must used pooled direct buffers or deallocate direct 
 buffers when stream is closed
 -

 Key: HADOOP-10591
 URL: https://issues.apache.org/jira/browse/HADOOP-10591
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Hari Shreedharan
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10591.001.patch


 Currently direct buffers allocated by compression codecs like Gzip (which 
 allocates 2 direct buffers per instance) are not deallocated when the stream 
 is closed. Eventually for long running processes which create a huge number 
 of files, these direct buffers are left hanging till a full gc, which may or 
 may not happen in a reasonable amount of time - especially if the process 
 does not use a whole lot of heap.
 Either these buffers should be pooled or they should be deallocated when the 
 stream is closed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10714) AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10714:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 AmazonS3Client.deleteObjects() need to be limited to 1000 entries per call
 --

 Key: HADOOP-10714
 URL: https://issues.apache.org/jira/browse/HADOOP-10714
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.5.0
Reporter: David S. Wang
Assignee: David S. Wang
Priority: Critical
  Labels: s3
 Attachments: HADOOP-10714-1.patch


 In the patch for HADOOP-10400, calls to AmazonS3Client.deleteObjects() need 
 to have the number of entries at 1000 or below. Otherwise we get a Malformed 
 XML error similar to:
 com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
 Service: Amazon S3, AWS Request ID: 6626AD56A3C76F5B, AWS Error Code: 
 MalformedXML, AWS Error Message: The XML you provided was not well-formed or 
 did not validate against our published schema, S3 Extended Request ID: 
 DOt6C+Y84mGSoDuaQTCo33893VaoKGEVC3y1k2zFIQRm+AJkFH2mTyrDgnykSL+v
 at 
 com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
 at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
 at 
 com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:1739)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:388)
 at 
 org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:829)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:874)
 at 
 org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:878)
 Note that this is mentioned in the AWS documentation:
 http://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html
 "The Multi-Object Delete request contains a list of up to 1000 keys that you 
 want to delete. In the XML, you provide the object key names, and optionally, 
 version IDs if you want to delete a specific version of the object from a 
 versioning-enabled bucket. For each key, Amazon S3..."
 Thanks to Matteo Bertozzi and Rahul Bhartia from AWS for identifying the 
 problem.
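 A sketch of the batching this implies (method and variable names 
 illustrative; assumes the AWS SDK v1 types AmazonS3 and DeleteObjectsRequest 
 plus java.util.List):
 {code}
 // Issue deleteObjects() in chunks of at most 1000 keys per request.
 void deleteInBatches(AmazonS3 s3, String bucket,
     List<DeleteObjectsRequest.KeyVersion> keys) {
   final int MAX_ENTRIES = 1000;
   for (int i = 0; i < keys.size(); i += MAX_ENTRIES) {
     List<DeleteObjectsRequest.KeyVersion> chunk =
         keys.subList(i, Math.min(i + MAX_ENTRIES, keys.size()));
     s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(chunk));
   }
 }
 {code}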



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8087) Paths that start with a double slash cause No filesystem for scheme: null errors

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8087:
-

Target Version/s: 2.6.0  (was: 2.5.0)

 Paths that start with a double slash cause No filesystem for scheme: null 
 errors
 --

 Key: HADOOP-8087
 URL: https://issues.apache.org/jira/browse/HADOOP-8087
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8087.001.patch, HADOOP-8087.002.patch


 {{Path}} is incorrectly parsing {{//dir/path}} in a very unexpected way.  
 While it should translate to the directory {{$fs.default.name/dir/path}}, it 
 instead discards the {{//dir}} and returns
 {{$fs.default.name/path}}.  The problem is {{Path}} is trying to parse an 
 authority even when a scheme is not present.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10689) InputStream is not closed in AzureNativeFileSystemStore#retrieve()

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10689:
--

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 InputStream is not closed in AzureNativeFileSystemStore#retrieve()
 --

 Key: HADOOP-10689
 URL: https://issues.apache.org/jira/browse/HADOOP-10689
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-10689.patch


 In the catch block:
 {code}
 if (in != null) {
   inDataStream.close();
 }
 {code}
 We check against in, but try to close inDataStream, which should have been 
 closed by the if statement above.
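 The presumed intent (a sketch): close the same stream that was null-checked.
 {code}
 if (in != null) {
   in.close();
 }
 {code}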



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9907) Webapp http://hostname:port/metrics link is not working

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-9907:
-

Target Version/s: 2.6.0  (was: 2.5.0)

 Webapp http://hostname:port/metrics  link is not working 
 -

 Key: HADOOP-9907
 URL: https://issues.apache.org/jira/browse/HADOOP-9907
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Jian He
Assignee: Akira AJISAKA
Priority: Critical
 Attachments: HADOOP-9907.patch


 This link is not working; it just shows a blank page.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10690) Lack of synchronization on access to InputStream in NativeAzureFileSystem#NativeAzureFsInputStream#close()

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10690:
--

Target Version/s: 2.6.0  (was: 3.0.0, 2.5.0)

 Lack of synchronization on access to InputStream in 
 NativeAzureFileSystem#NativeAzureFsInputStream#close()
 --

 Key: HADOOP-10690
 URL: https://issues.apache.org/jira/browse/HADOOP-10690
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-10690.patch


 {code}
 public void close() throws IOException {
   in.close();
 }
 {code}
 The close() method should be protected by the synchronized keyword.
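 A sketch of the suggested change, so close() cannot race with the other 
 synchronized methods touching the underlying stream:
 {code}
 @Override
 public synchronized void close() throws IOException {
   in.close();
 }
 {code}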



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10452) BUILDING.txt needs to be updated

2014-07-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10452:
--

Target Version/s: 2.6.0  (was: 2.5.0)

 BUILDING.txt needs to be updated
 

 Key: HADOOP-10452
 URL: https://issues.apache.org/jira/browse/HADOOP-10452
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Travis Thompson
Priority: Minor

 BUILDING.txt is missing some information about native compression libraries.  
 Notably, if you are missing the zlib/bzip2/snappy devel libraries, those will 
 get silently skipped unless you pass the {{-Drequire.$LIB}} option (e.g. 
 {{-Drequire.snappy}}).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-7738) Document incompatible API changes between 0.20.20x and 0.23.0 release

2014-07-03 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14051122#comment-14051122
 ] 

Karthik Kambatla commented on HADOOP-7738:
--

We should probably change the title to reflect that this is repurposed to 
capture 1.x - 2.x. That said, I believe the status of API compatibility 
between 1.x and 2.x is reasonably well known at this point. I wonder if there 
is much to gain from this documentation at this point. 

In any case, do we want this to block 2.5? 

 Document incompatible API changes between 0.20.20x and 0.23.0 release
 -

 Key: HADOOP-7738
 URL: https://issues.apache.org/jira/browse/HADOOP-7738
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tom White
Assignee: Tom White
Priority: Blocker
 Attachments: apicheck-hadoop-0.20.204.0-0.24.0-SNAPSHOT.txt


 0.20.20x to 0.23.0 will be a common upgrade path, so we should document any 
 incompatible API changes that will affect users.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-7738) Document incompatible API changes between 0.20.20x and 0.23.0 release

2014-07-07 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-7738:
-

Priority: Critical  (was: Blocker)
Target Version/s: 2.6.0  (was: 2.5.0)

Moving to 2.6 and demoting to a Critical issue. Please update if you strongly 
feel differently. 

 Document incompatible API changes between 0.20.20x and 0.23.0 release
 -

 Key: HADOOP-7738
 URL: https://issues.apache.org/jira/browse/HADOOP-7738
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tom White
Assignee: Tom White
Priority: Critical
 Attachments: apicheck-hadoop-0.20.204.0-0.24.0-SNAPSHOT.txt


 0.20.20x to 0.23.0 will be a common upgrade path, so we should document any 
 incompatible API changes that will affect users.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10504) Document proxy server support

2014-07-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056600#comment-14056600
 ] 

Karthik Kambatla commented on HADOOP-10504:
---

Is anyone looking at this? [~daryn] - you seem to know the most about this, 
will you be able to take this up?

 Document proxy server support
 -

 Key: HADOOP-10504
 URL: https://issues.apache.org/jira/browse/HADOOP-10504
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0, 2.5.0
Reporter: Daryn Sharp
Priority: Blocker

 Document http proxy support introduced by HADOOP-10498.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10453) Do not use AuthenticatedURL in hadoop core

2014-07-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14056604#comment-14056604
 ] 

Karthik Kambatla commented on HADOOP-10453:
---

[~wheat9] - is this really a blocker for 2.5? IIUC, this has been a 
long-standing issue. Will it be okay to move this out, since no one seems to be 
working on it? 

 Do not use AuthenticatedURL in hadoop core
 --

 Key: HADOOP-10453
 URL: https://issues.apache.org/jira/browse/HADOOP-10453
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Priority: Blocker

 As [~daryn] has suggested in HDFS-4564:
 {quote}
 AuthenticatedURL is not used because it is buggy in part to causing replay 
 attacks, double attempts to kerberos authenticate with the fallback 
 authenticator if the TGT is expired, incorrectly uses the fallback 
 authenticator (required by oozie servers) to add the username parameter which 
 webhdfs has already included in the uri.
 AuthenticatedURL's attempt to do SPNEGO auth is a no-op because the JDK 
 transparently does SPNEGO when the user's Subject (UGI) contains kerberos 
 principals. Since AuthenticatedURL is now not used, webhdfs has to check the 
 TGT itself for token operations.
 Bottom line is AuthenticatedURL is unnecessary and introduces nothing but 
 problems for webhdfs. It's only useful for oozie's anon/non-anon support.
 {quote}
 However, several functionalities that rely on SPNEGO in secure mode suffer 
 from the same problem. For example, NNs / JNs create HTTP connections to 
 exchange fsimage and edit logs. Currently all of them go through 
 {{AuthenticatedURL}}. This needs to be fixed to avoid security 
 vulnerabilities.
 This jira proposes to remove {{AuthenticatedURL}} from hadoop core and to 
 move it to oozie.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10504) Document proxy server support

2014-07-16 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10504:
--

Target Version/s: 2.6.0  (was: 2.5.0)

Moving this out to 2.6 since no one is actively looking at it. If anyone feels 
strongly about targeting 2.5, we can target 2.5. 

 Document proxy server support
 -

 Key: HADOOP-10504
 URL: https://issues.apache.org/jira/browse/HADOOP-10504
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0, 2.5.0
Reporter: Daryn Sharp
Priority: Blocker

 Document http proxy support introduced by HADOOP-10498.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10453) Do not use AuthenticatedURL in hadoop core

2014-07-16 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10453:
--

Target Version/s: 2.6.0  (was: 2.5.0)

(Moving this out of 2.5)

We can continue this conversation and handle this in 2.6. 

 Do not use AuthenticatedURL in hadoop core
 --

 Key: HADOOP-10453
 URL: https://issues.apache.org/jira/browse/HADOOP-10453
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Priority: Blocker

 As [~daryn] has suggested in HDFS-4564:
 {quote}
 AuthenticatedURL is not used because it is buggy in part to causing replay 
 attacks, double attempts to kerberos authenticate with the fallback 
 authenticator if the TGT is expired, incorrectly uses the fallback 
 authenticator (required by oozie servers) to add the username parameter which 
 webhdfs has already included in the uri.
 AuthenticatedURL's attempt to do SPNEGO auth is a no-op because the JDK 
 transparently does SPNEGO when the user's Subject (UGI) contains kerberos 
 principals. Since AuthenticatedURL is now not used, webhdfs has to check the 
 TGT itself for token operations.
 Bottom line is AuthenticatedURL is unnecessary and introduces nothing but 
 problems for webhdfs. It's only useful for oozie's anon/non-anon support.
 {quote}
 However, several functionalities that rely on SPNEGO in secure mode suffer 
 from the same problem. For example, NNs / JNs create HTTP connections to 
 exchange fsimage and edit logs. Currently all of them go through 
 {{AuthenticatedURL}}. This needs to be fixed to avoid security 
 vulnerabilities.
 This jira proposes to remove {{AuthenticatedURL}} from hadoop core and to 
 move it to oozie.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-16 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-10821:
-

Assignee: Karthik Kambatla

 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Karthik Kambatla
Priority: Blocker

 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10896) Update compatibility doc to capture visibility of un-annotated classes/ methods

2014-07-25 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-10896:
-

 Summary: Update compatibility doc to capture visibility of 
un-annotated classes/ methods
 Key: HADOOP-10896
 URL: https://issues.apache.org/jira/browse/HADOOP-10896
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker


From discussion on email thread, we should add something to the effect of 

Classes without annotations are to be considered @Private. Class members without 
specific annotations inherit the annotations of the class. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10821:
--

Attachment: hadoop-10821-branch2.patch

Here is the updated index page of the documentation for 2.5. I think we should 
commit this only to branch-2. 

The changes include only YARN. [~andrew.wang] offered to address the HDFS 
parts. 


 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10821-branch2.patch


 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10896) Update compatibility doc to capture visibility of un-annotated classes/ methods

2014-07-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10896:
--

Attachment: hadoop-10896.patch

Patch that adds a policy for classes/members not annotated. 

 Update compatibility doc to capture visibility of un-annotated classes/ 
 methods
 ---

 Key: HADOOP-10896
 URL: https://issues.apache.org/jira/browse/HADOOP-10896
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10896.patch


 From discussion on email thread, we should add something to the effect of 
 Classes without annotations are to be considered @Private. Class members 
 without specific annotations inherit the annotations of the class. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10896) Update compatibility doc to capture visibility of un-annotated classes/ methods

2014-07-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10896:
--

Status: Patch Available  (was: Open)

 Update compatibility doc to capture visibility of un-annotated classes/ 
 methods
 ---

 Key: HADOOP-10896
 URL: https://issues.apache.org/jira/browse/HADOOP-10896
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10896.patch


 From the discussion on the email thread, we should add something to the effect of: 
 Classes without annotations are to be considered @Private. Class members 
 without specific annotations inherit the annotations of the class. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10896) Update compatibility doc to capture visibility of un-annotated classes/ methods

2014-07-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074742#comment-14074742
 ] 

Karthik Kambatla commented on HADOOP-10896:
---

Thanks Andrew. Will commit this later today if no one objects. 

 Update compatibility doc to capture visibility of un-annotated classes/ 
 methods
 ---

 Key: HADOOP-10896
 URL: https://issues.apache.org/jira/browse/HADOOP-10896
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10896.patch


 From the discussion on the email thread, we should add something to the effect of: 
 Classes without annotations are to be considered @Private. Class members 
 without specific annotations inherit the annotations of the class. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10896) Update compatibility doc to capture visibility of un-annotated classes/ methods

2014-07-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10896:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Just committed to trunk, branch-2 and branch-2.5.

 Update compatibility doc to capture visibility of un-annotated classes/ 
 methods
 ---

 Key: HADOOP-10896
 URL: https://issues.apache.org/jira/browse/HADOOP-10896
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.1
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Fix For: 2.5.0

 Attachments: hadoop-10896.patch


 From the discussion on the email thread, we should add something to the effect of: 
 Classes without annotations are to be considered @Private. Class members 
 without specific annotations inherit the annotations of the class. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076602#comment-14076602
 ] 

Karthik Kambatla commented on HADOOP-10821:
---

Thanks Andrew. Yes, it makes sense to include more from YARN/MR. Will update 
the patch in a few hours. 

 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10821-branch2.002.patch, 
 hadoop-10821-branch2.patch


 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10821:
--

Attachment: hadoop-10821-branch2-3.patch

Here is an updated patch. Most other new features/improvements are either part 
of larger features coming in 2.6 or too small to include here. 

 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10821-branch2-3.patch, 
 hadoop-10821-branch2.002.patch, hadoop-10821-branch2.patch


 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10821:
--

Attachment: hadoop-10821-branch2-3.patch

Looks like my earlier diff was against trunk. 

Sorry for the scare - I didn't mean to bury so many source changes in this JIRA. 

 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10821-branch2-3.patch, 
 hadoop-10821-branch2-3.patch, hadoop-10821-branch2.002.patch, 
 hadoop-10821-branch2.patch


 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10821:
--

Assignee: Andrew Wang  (was: Karthik Kambatla)

 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Andrew Wang
Priority: Blocker
 Attachments: hadoop-10821-branch2-3.patch, 
 hadoop-10821-branch2-3.patch, hadoop-10821-branch2.002.patch, 
 hadoop-10821-branch2.004.patch, hadoop-10821-branch2.patch


 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10821) Prepare the release notes for Hadoop 2.5.0

2014-07-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-10821.
---

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed

Thanks Andrew for the patches, and Colin and Sandy for the reviews. Just committed 
this to branch-2 and branch-2.5.

 Prepare the release notes for Hadoop 2.5.0
 --

 Key: HADOOP-10821
 URL: https://issues.apache.org/jira/browse/HADOOP-10821
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Andrew Wang
Priority: Blocker
 Fix For: 2.5.0

 Attachments: hadoop-10821-branch2-3.patch, 
 hadoop-10821-branch2-3.patch, hadoop-10821-branch2.002.patch, 
 hadoop-10821-branch2.004.patch, hadoop-10821-branch2.patch


 The release notes for 2.3.0+ 
 (http://hadoop.apache.org/docs/r2.4.1/index.html) still talk about federation 
 and MRv2
 being new features. We should update them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HADOOP-10686) Writables are not always configured

2014-07-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reopened HADOOP-10686:
---


Looks like I messed up backporting to branch-2. 

 Writables are not always configured
 ---

 Key: HADOOP-10686
 URL: https://issues.apache.org/jira/browse/HADOOP-10686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
 MAPREDUCE-5914.2.patch


 Seeing the following exception:
 {noformat}
 java.lang.Exception: java.lang.NullPointerException
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
   at 
 org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
   at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It turns out that WritableComparator does not configure Writable objects: 
 https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java. 
 This happens during the sort phase of an MR job.
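
 As a hedged sketch of the kind of fix this points to - assuming key instances 
 should be created through ReflectionUtils so that Configurable Writables 
 receive a Configuration before readFields() runs - the class below is 
 illustrative, not the committed patch:
 {code:java}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.WritableComparable;
 import org.apache.hadoop.util.ReflectionUtils;

 // Illustrative only. ReflectionUtils.newInstance() calls setConf() on
 // Configurable instances - exactly the step a plain reflective
 // newInstance() skips, leaving conf null when readFields() needs it.
 class ConfiguringComparator {
   private final Class<? extends WritableComparable> keyClass;
   private final Configuration conf;

   ConfiguringComparator(Class<? extends WritableComparable> keyClass,
                         Configuration conf) {
     this.keyClass = keyClass;
     this.conf = conf;
   }

   WritableComparable newKey() {
     return ReflectionUtils.newInstance(keyClass, conf);
   }
 }
 {code}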



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10686) Writables are not always configured

2014-07-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-10686.
---

Resolution: Fixed

Looks like I screwed up the merge to branch-2. I merged the changes to branch-2 
along with YARN-2155, and hence they are hidden under that commit. I was hoping to 
add a dummy commit to capture that information, but it looks like that isn't 
possible. 


 Writables are not always configured
 ---

 Key: HADOOP-10686
 URL: https://issues.apache.org/jira/browse/HADOOP-10686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Abraham Elmahrek
Assignee: Abraham Elmahrek
 Fix For: 2.5.0

 Attachments: MAPREDUCE-5914.0.patch, MAPREDUCE-5914.1.patch, 
 MAPREDUCE-5914.2.patch


 Seeing the following exception:
 {noformat}
 java.lang.Exception: java.lang.NullPointerException
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
 Caused by: java.lang.NullPointerException
   at 
 org.apache.sqoop.job.io.SqoopWritable.readFields(SqoopWritable.java:59)
   at 
 org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:129)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:1248)
   at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:35)
   at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:87)
   at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:63)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1582)
   at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1467)
   at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:699)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:769)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It turns out that WritableComparator does not configure Writable objects: 
 https://github.com/apache/hadoop-common/blob/branch-2.3.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableComparator.java. 
 This happens during the sort phase of an MR job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10910) Increase findbugs maxHeap size

2014-07-30 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14080051#comment-14080051
 ] 

Karthik Kambatla commented on HADOOP-10910:
---

+1. Thanks for finding and fixing this, Andrew. Will commit this shortly. 

 Increase findbugs maxHeap size
 --

 Key: HADOOP-10910
 URL: https://issues.apache.org/jira/browse/HADOOP-10910
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Blocker
 Attachments: hadoop-10910.001.patch


 The release build fails because of an obscure findbugs error. Testing reveals 
 that this is related to the findbugs heap size.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reopened HADOOP-10759:
---


 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
 is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-05 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14086901#comment-14086901
 ] 

Karthik Kambatla commented on HADOOP-10759:
---

[~eyang] - thanks for reviewing and committing this patch. However, we should 
avoid committing anything short of Critical past branch-x. 

Given we are in the middle of RCs for 2.5, I am reverting this from 
branch-2.5.0 and branch-2.5. 

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
 is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-10759.
---

   Resolution: Fixed
Fix Version/s: (was: 2.5.0)
   2.6.0

Updated CHANGES.txt to move it to 2.6.0, and also added it at the end of the section 
instead of the beginning, to preserve chronological order. 

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
 is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-05 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14086951#comment-14086951
 ] 

Karthik Kambatla commented on HADOOP-10759:
---

Thanks Allen. I was planning to follow up on this after taking care of 2.5.0 
branching.

[~eyang]/[~sam liu] - is there a particular reason for doing this? 

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
 is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10402) Configuration.getValByRegex doesn't do variable substitution

2014-08-10 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092214#comment-14092214
 ] 

Karthik Kambatla commented on HADOOP-10402:
---

+1. Checking this in. 

 Configuration.getValByRegex doesn't do variable substitution
 

 Key: HADOOP-10402
 URL: https://issues.apache.org/jira/browse/HADOOP-10402
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Robert Kanter
Assignee: Robert Kanter
 Attachments: HADOOP-10402.patch


 When using Configuration.getValByRegex(...), variables are not resolved.  
 For example:
 {code:xml}
 <property>
   <name>bar</name>
   <value>woot</value>
 </property>
 <property>
   <name>foo3</name>
   <value>${bar}</value>
 </property>
 {code}
 If you then try to do something like {{Configuration.getValByRegex(foo.*)}}, 
 it will return a Map containing foo3=${bar} instead of foo3=woot
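
 A minimal sketch of the behavior a fix should produce, assuming matched keys 
 are routed back through Configuration.get() so the usual ${var} expansion 
 applies; the class and method names are illustrative, not the patch itself:
 {code:java}
 import java.util.HashMap;
 import java.util.Map;
 import java.util.regex.Pattern;

 import org.apache.hadoop.conf.Configuration;

 class SubstitutingRegexLookup {
   // Returns key -> substituted value for every key matching the regex.
   static Map<String, String> valByRegex(Configuration conf, String regex) {
     Pattern p = Pattern.compile(regex);
     Map<String, String> result = new HashMap<String, String>();
     // Configuration iterates over its raw (unsubstituted) entries.
     for (Map.Entry<String, String> e : conf) {
       if (p.matcher(e.getKey()).matches()) {
         // conf.get() performs ${var} substitution; the raw entry value does not.
         result.put(e.getKey(), conf.get(e.getKey()));
       }
     }
     return result;
   }
 }
 {code}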



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10402) Configuration.getValByRegex doesn't do variable substitution

2014-08-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10402:
--

Target Version/s: 2.6.0  (was: 2.4.0)

 Configuration.getValByRegex doesn't do variable substitution
 

 Key: HADOOP-10402
 URL: https://issues.apache.org/jira/browse/HADOOP-10402
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Robert Kanter
Assignee: Robert Kanter
 Attachments: HADOOP-10402.patch


 When using Configuration.getValByRegex(...), variables are not resolved.  
 For example:
 {code:xml}
 <property>
   <name>bar</name>
   <value>woot</value>
 </property>
 <property>
   <name>foo3</name>
   <value>${bar}</value>
 </property>
 {code}
 If you then try to do something like {{Configuration.getValByRegex(foo.*)}}, 
 it will return a Map containing foo3=${bar} instead of foo3=woot



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10402) Configuration.getValByRegex does not substitute for variables

2014-08-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10402:
--

Summary: Configuration.getValByRegex does not substitute for variables  
(was: Configuration.getValByRegex doesn't do variable substitution)

 Configuration.getValByRegex does not substitute for variables
 -

 Key: HADOOP-10402
 URL: https://issues.apache.org/jira/browse/HADOOP-10402
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Robert Kanter
Assignee: Robert Kanter
 Attachments: HADOOP-10402.patch


 When using Configuration.getValByRegex(...), variables are not resolved.  
 For example:
 {code:xml}
 <property>
   <name>bar</name>
   <value>woot</value>
 </property>
 <property>
   <name>foo3</name>
   <value>${bar}</value>
 </property>
 {code}
 If you then try to do something like {{Configuration.getValByRegex(foo.*)}}, 
 it will return a Map containing foo3=${bar} instead of foo3=woot



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10402) Configuration.getValByRegex does not substitute for variables

2014-08-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10402:
--

   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

Thanks for reporting and fixing this, Robert. Just committed this to trunk and 
branch-2. 

 Configuration.getValByRegex does not substitute for variables
 -

 Key: HADOOP-10402
 URL: https://issues.apache.org/jira/browse/HADOOP-10402
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Robert Kanter
Assignee: Robert Kanter
 Fix For: 2.6.0

 Attachments: HADOOP-10402.patch


 When using Configuration.getValByRegex(...), variables are not resolved.  
 For example:
 {code:xml}
 <property>
   <name>bar</name>
   <value>woot</value>
 </property>
 <property>
   <name>foo3</name>
   <value>${bar}</value>
 </property>
 {code}
 If you then try to do something like {{Configuration.getValByRegex(foo.*)}}, 
 it will return a Map containing foo3=${bar} instead of foo3=woot



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10956) Fix create-release script to include docs in the binary

2014-08-11 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-10956:
-

 Summary: Fix create-release script to include docs in the binary
 Key: HADOOP-10956
 URL: https://issues.apache.org/jira/browse/HADOOP-10956
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker


The create-release script doesn't include docs in the binary tarball. We should 
fix that. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

