[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-19 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14102767#comment-14102767
 ] 

Arpit Gupta commented on HADOOP-10759:
--

[~eyang]

At least with JDK 1.6 we saw ZooKeeper taking up around 4GB of heap on a 16GB 
machine, which is why we filed ZOOKEEPER-1670.

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
 is a hard-coded Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14086959#comment-14086959
 ] 

Arpit Gupta commented on HADOOP-10759:
--

Take a look at ZOOKEEPER-1670. We noticed that if no default heap is provided, 
the JVM can end up taking up to 1/4th of the machine's memory.
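
For illustration only (not part of any patch on this issue), a minimal Java check of the JVM's default heap ceiling; launched without an explicit -Xmx, a 64-bit HotSpot JVM typically reports roughly a quarter of physical memory, e.g. around 4GB on a 16GB machine:

{code}
// Illustrative sketch: prints the maximum heap this JVM will attempt to use.
// Without an explicit -Xmx, HotSpot typically defaults this to ~1/4 of
// physical memory, which is the behavior described above.
public class DefaultHeapCheck {
    public static void main(String[] args) {
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxHeapBytes / (1024 * 1024)) + " MB");
    }
}
{code}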

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
 is a hard-coded Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10065) Fix namenode format documentation

2014-06-23 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14041430#comment-14041430
 ] 

Arpit Gupta commented on HADOOP-10065:
--

[~ajisakaa] feel free to take it over.

 Fix namenode format documentation
 -

 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10065.2.patch, HADOOP-10065.3.patch, 
 HADOOP-10065.patch


 Current namenode format doc
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode
 Does not list the various options format can be called with and their use.
 {code}
 [-format [-clusterid cid ] [-force] [-nonInteractive] ]
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10215) Cannot create hftp filesystem when using a proxy user ugi on a secure cluster

2014-01-08 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-10215:


 Summary: Cannot create hftp filesystem when using a proxy user ugi 
on a secure cluster
 Key: HADOOP-10215
 URL: https://issues.apache.org/jira/browse/HADOOP-10215
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Arpit Gupta


Noticed this while debugging issues in another application. We saw an error 
when trying to do a FileSystem.get using an hftp file system on a secure 
cluster using a proxy user ugi.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10215) Cannot create hftp filesystem when using a proxy user ugi on a secure cluster

2014-01-08 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13866323#comment-13866323
 ] 

Arpit Gupta commented on HADOOP-10215:
--

Here is the stack trace from a simple test I wrote:

{code}
java.io.IOException: Unable to obtain remote token
at 
org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.getDTfromRemote(DelegationTokenFetcher.java:233)
at org.apache.hadoop.hdfs.HftpFileSystem$2.run(HftpFileSystem.java:265)
at org.apache.hadoop.hdfs.HftpFileSystem$2.run(HftpFileSystem.java:259)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at 
org.apache.hadoop.hdfs.HftpFileSystem.getDelegationToken(HftpFileSystem.java:259)
at 
org.apache.hadoop.hdfs.HftpFileSystem.initDelegationToken(HftpFileSystem.java:205)
at 
org.apache.hadoop.hdfs.HftpFileSystem.initialize(HftpFileSystem.java:194)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
at org.hw.tests.ProxyUserTests$1.run(ProxyUserTests.java:122)
at org.hw.tests.ProxyUserTests$1.run(ProxyUserTests.java:119)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.hw.tests.ProxyUserTests.getFileSystem(ProxyUserTests.java:119)
at 
org.hw.tests.ProxyUserTests.testProxyUserFileSystems(ProxyUserTests.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:24)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
at 

[jira] [Updated] (HADOOP-10215) Cannot create hftp filesystem when using a proxy user ugi on a secure cluster

2014-01-08 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10215:
-

Assignee: Jing Zhao

 Cannot create hftp filesystem when using a proxy user ugi on a secure cluster
 -

 Key: HADOOP-10215
 URL: https://issues.apache.org/jira/browse/HADOOP-10215
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao

 Noticed this while debugging issues in another application. We saw an error 
 when trying to do a FileSystem.get using an hftp file system on a secure 
 cluster using a proxy user ugi.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10215) Cannot create hftp filesystem when using a proxy user ugi and a doAs on a secure cluster

2014-01-08 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10215:
-

Description: 
Noticed this while debugging issues in another application. We saw an error 
when trying to do a FileSystem.get using an hftp file system on a secure 
cluster using a proxy user ugi.

This is a small snippet used

{code}
FileSystem testFS = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
    @Override
    public FileSystem run() throws IOException {
        return FileSystem.get(hadoopConf);
    }
});
{code}

The same code worked for hdfs and webhdfs, but not for hftp, when the ugi used 
was created via UserGroupInformation.createProxyUser.
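
For context, here is a hedged sketch (names and URIs are made up, this is not code from the affected application) of the proxy-user setup described above, assuming the usual UserGroupInformation API:

{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserHftpSketch {
    public static void main(String[] args) throws Exception {
        // In the failing case fs.defaultFS would point at an hftp:// URI on a
        // secure cluster; this plain Configuration is just a placeholder.
        final Configuration hadoopConf = new Configuration();

        // Build a proxy-user UGI on top of the logged-in (kerberized) user.
        // "someUser" is an example name, not one from the original report.
        UserGroupInformation realUser = UserGroupInformation.getLoginUser();
        UserGroupInformation ugi =
            UserGroupInformation.createProxyUser("someUser", realUser);

        // Same pattern as the snippet above: FileSystem.get inside doAs().
        FileSystem testFS = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
            @Override
            public FileSystem run() throws IOException {
                return FileSystem.get(hadoopConf);
            }
        });
        System.out.println("Got filesystem: " + testFS.getUri());
    }
}
{code}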

  was:Noticed this while debugging issues in another application. We saw an 
error when trying to do a FileSystem.get using an hftp file system on a secure 
cluster using a proxy user ugi.


 Cannot create hftp filesystem when using a proxy user ugi and a doAs on a 
 secure cluster
 

 Key: HADOOP-10215
 URL: https://issues.apache.org/jira/browse/HADOOP-10215
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao

 Noticed this while debugging issues in another application. We saw an error 
 when trying to do a FileSystem.get using an hftp file system on a secure 
 cluster using a proxy user ugi.
 This is a small snippet used
 {code}
  FileSystem testFS = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
      @Override
      public FileSystem run() throws IOException {
          return FileSystem.get(hadoopConf);
      }
  });
 {code}
 The same code worked for hdfs and webhdfs but not for hftp when the ugi used 
 was UserGroupInformation.createProxyUser



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10215) Cannot create hftp filesystem when using a proxy user ugi and a doAs on a secure cluster

2014-01-08 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10215:
-

Summary: Cannot create hftp filesystem when using a proxy user ugi and a 
doAs on a secure cluster  (was: Cannot create hftp filesystem when using a 
proxy user ugi on a secure cluster)

 Cannot create hftp filesystem when using a proxy user ugi and a doAs on a 
 secure cluster
 

 Key: HADOOP-10215
 URL: https://issues.apache.org/jira/browse/HADOOP-10215
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao

 Noticed this while debugging issues in another application. We saw an error 
 when trying to do a FileSystem.get using an hftp file system on a secure 
 cluster using a proxy user ugi.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10065) Fix namenode format documentation

2013-10-23 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-10065:


 Summary: Fix namenode format documentation
 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.2.1


Current namenode format doc

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode

Does not list the various options format can be called with and their use.

{code}
[-format [-clusterid cid ] [-force] [-nonInteractive] ]
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10065) Fix namenode format documentation

2013-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10065:
-

Fix Version/s: (was: 2.2.1)

 Fix namenode format documentation
 -

 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor

 Current namenode format doc
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode
 Does not list the various options format can be called with and their use.
 {code}
 [-format [-clusterid cid ] [-force] [-nonInteractive] ]
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10065) Fix namenode format documentation

2013-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10065:
-

Target Version/s: 2.2.1

 Fix namenode format documentation
 -

 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor

 Current namenode format doc
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode
 Does not list the various options format can be called with and their use.
 {code}
 [-format [-clusterid cid ] [-force] [-nonInteractive] ]
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10065) Fix namenode format documentation

2013-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10065:
-

Attachment: HADOOP-10065.patch

 Fix namenode format documentation
 -

 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-10065.patch


 Current namenode format doc
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode
 Does not list the various options format can be called with and their use.
 {code}
 [-format [-clusterid cid ] [-force] [-nonInteractive] ]
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10065) Fix namenode format documentation

2013-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10065:
-

Status: Patch Available  (was: Open)

 Fix namenode format documentation
 -

 Key: HADOOP-10065
 URL: https://issues.apache.org/jira/browse/HADOOP-10065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-10065.patch


 Current namenode format doc
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#namenode
 Does not list the various options format can be called with and their use.
 {code}
 [-format [-clusterid cid ] [-force] [-nonInteractive] ]
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-22 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801911#comment-13801911
 ] 

Arpit Gupta commented on HADOOP-10050:
--

Does anybody have any thoughts on what config I am missing that causes the 
running AM link to not work?

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: ClusterSetup.html, HADOOP-10050.patch, 
 HADOOP-10050.patch, mapred-site.xml, SingleCluster.html, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-17 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10050:
-

Attachment: HADOOP-10050.patch

Addressed everything except for capacity scheduler changes. Let me know if we 
want to put those back.

Also found a minor issue on the last line of the page 

http://hadoop.apache.org/docs/r2.2.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html

where the property should have been mapred_shufflex instead of mapred.shufflex, 
so I fixed that as well.

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: ClusterSetup.html, HADOOP-10050.patch, 
 HADOOP-10050.patch, mapred-site.xml, SingleCluster.html, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-17 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13798172#comment-13798172
 ] 

Arpit Gupta commented on HADOOP-10050:
--

Actually, for my single-node setup I used 0.0.0.0 for the property and it worked 
better. With it set to 0.0.0.0 the Application Master links for the running job 
worked, whereas when I set it to localhost those links did not work. If I 
changed the IP to localhost it would bring up the page.

So I think leaving it at 0.0.0.0 might be better unless I am missing some configs.


 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: ClusterSetup.html, HADOOP-10050.patch, 
 HADOOP-10050.patch, mapred-site.xml, SingleCluster.html, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-16 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10050:
-

Target Version/s: 2.2.1

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: ClusterSetup.html, HADOOP-10050.patch, mapred-site.xml, 
 SingleCluster.html, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-16 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797432#comment-13797432
 ] 

Arpit Gupta commented on HADOOP-10050:
--

bq. The capacity-scheduler configs are removed from SingleCluster.apt.vm - any 
reason for this? 

I did not have to make any changes to be able to run an example.

We have yarn.scheduler.capacity.root.default.capacity=100 and 
yarn.scheduler.capacity.root.queues=default.

The following two are not in the configs that are part of the release: 
yarn.scheduler.capacity.root.capacity and 
yarn.scheduler.capacity.root.unfunded.capacity.


 bq. Generated SingleCluster.html has Set $$YARN_CONF_DIR - mind fixing this 
too?

Will fix this.

bq. yarn-daemon.sh is not meant to be in HADOOP_MAPRED_HOME.

Actually, when I ran the hadoop-2.2.0 tarball I did not have to set any of 
these env vars; everything was in ./sbin. I can make the change to 
HADOOP_YARN_HOME and HADOOP_MAPRED_HOME.

I cannot find any reference for HADOOP_PREFIX on 
http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-common/SingleCluster.html
 

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: ClusterSetup.html, HADOOP-10050.patch, mapred-site.xml, 
 SingleCluster.html, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-15 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-10050:


 Summary: Update single node and cluster install instructions to 
work with latest bits
 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor


A few things i noticed

1. changes to yarn.nodemanager.aux-services
2. Set the framework to yarn in mapred-site.xml
3. Start the history server

Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-15 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13796255#comment-13796255
 ] 

Arpit Gupta commented on HADOOP-10050:
--

[~sandyr]

Yup, that was already part of the config cleanup I was going to do :). Should 
we resolve it as a dup?

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor

 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-15 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10050:
-

Attachment: yarn-site.xml
mapred-site.xml

Attached are the configs I ended up using. If there are any other changes we 
would like to make, let me know and I can address them as well.

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: mapred-site.xml, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-15 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10050:
-

Status: Patch Available  (was: Open)

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-10050.patch, mapred-site.xml, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-15 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10050:
-

Attachment: HADOOP-10050.patch

Attached an initial patch. 

Also, I would like to rename

hadoop-mapreduce-project/conf/mapred-site.xml.template to 
hadoop-mapreduce-project/conf/mapred-site.xml

Not sure how a rename patch is generated. This is needed so that the generated 
file does not have to be renamed :).

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-10050.patch, mapred-site.xml, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10050) Update single node and cluster install instructions to work with latest bits

2013-10-15 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10050:
-

Attachment: SingleCluster.html
ClusterSetup.html

Attaching the HTML files for the docs in case someone wants to take a look.

 Update single node and cluster install instructions to work with latest bits
 

 Key: HADOOP-10050
 URL: https://issues.apache.org/jira/browse/HADOOP-10050
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: ClusterSetup.html, HADOOP-10050.patch, mapred-site.xml, 
 SingleCluster.html, yarn-site.xml


 A few things i noticed
 1. changes to yarn.nodemanager.aux-services
 2. Set the framework to yarn in mapred-site.xml
 3. Start the history server
 Also noticed no change to the capacity scheduler configs was needed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783277#comment-13783277
 ] 

Arpit Gupta commented on HADOOP-10012:
--

Here is the stack trace:

{code}
2013-08-29 20:07:05,773 INFO  resourcemanager.ClientRMService 
(ClientRMService.java:getNewApplicationId(206)) - Allocated new applicationId: 8
2013-08-29 20:07:06,713 WARN  token.Token (Token.java:getRenewer(352)) - No 
TokenRenewer defined for token kind Localizer
2013-08-29 20:07:06,731 ERROR security.UserGroupInformation 
(UserGroupInformation.java:doAs(1480)) - PriviledgedActionException 
as:rm/hostname:8020;
2013-08-29 20:07:06,731 WARN  resourcemanager.RMAppManager 
(RMAppManager.java:submitApplication(297)) - Unable to add the application to 
the delegation token renewer.
java.io.IOException: Failed on local exception: java.io.EOFException; Host 
Details : local host is: hostname:8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy9.renewDelegationToken(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy9.renewDelegationToken(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewDelegationToken(ClientNamenodeProtocolTranslatorPB.java:820)
at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:932)
at org.apache.hadoop.security.token.Token.renew(Token.java:372)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:385)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:382)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:381)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.addApplication(DelegationTokenRenewer.java:301)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291)
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:315)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:163)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:243)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
2013-08-29 20:07:06,733 INFO  rmapp.RMAppImpl (RMAppImpl.java:handle(565)) - 
application_1377802472892_0008 State change from NEW to FAILED
2013-08-29 20:07:06,734 WARN  resourcemanager.RMAuditLogger 
(RMAuditLogger.java:logFailure(255)) - USER=hrt_qa  OPERATION=Application 
Finished - Failed TARGET=RMAppManager RESULT=FAILURE  DESCRIPTION=App 
failed with state: FAILED   PERMISSIONS=Failed on local exception: 
java.io.EOFException; Host Details : local host is: hostname:8020; 
APPID=application_1377802472892_0008
{code}

 Secure Oozie jobs with delegation token 

[jira] [Created] (HADOOP-10012) Secure Oozie jobs with delegation token renewal exception in HA setup

2013-10-01 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-10012:


 Summary: Secure Oozie jobs with delegation token renewal exception 
in HA setup
 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10012) Secure Oozie jobs fail with delegation token renewal exception in HA setup

2013-10-01 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-10012:
-

Summary: Secure Oozie jobs fail with delegation token renewal exception in 
HA setup  (was: Secure Oozie jobs with delegation token renewal exception in HA 
setup)

 Secure Oozie jobs fail with delegation token renewal exception in HA setup
 --

 Key: HADOOP-10012
 URL: https://issues.apache.org/jira/browse/HADOOP-10012
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Suresh Srinivas





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debug

2013-08-20 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13745223#comment-13745223
 ] 

Arpit Gupta commented on HADOOP-9886:
-

Thanks [~jingzhao]

I will commit it today to the appropriate branches.

 Turn warning message in RetryInvocationHandler to a debug
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debug

2013-08-20 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


Target Version/s: 2.1.1-beta

 Turn warning message in RetryInvocationHandler to a debug
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to debug

2013-08-20 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


Summary: Turn warning message in RetryInvocationHandler to debug  (was: 
Turn warning message in RetryInvocationHandler to a debug)

 Turn warning message in RetryInvocationHandler to debug
 ---

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to debug

2013-08-20 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


Issue Type: Improvement  (was: Bug)

 Turn warning message in RetryInvocationHandler to debug
 ---

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to debug

2013-08-20 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


   Resolution: Fixed
Fix Version/s: 2.1.1-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 Turn warning message in RetryInvocationHandler to debug
 ---

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debub

2013-08-19 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-9886:
---

 Summary: Turn warning message in RetryInvocationHandler to a debub
 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta


Currently if debug is not enabled we display a warning message when the client 
fails over to another namenode.

This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debub

2013-08-19 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744272#comment-13744272
 ] 

Arpit Gupta commented on HADOOP-9886:
-

Any dfs call returns something like this

{code}
WARN retry.RetryInvocationHandler: Exception while invoking delete of class 
ClientNamenodeProtocolTranslatorPB. Trying to fail over immediately.
{code}


 Turn warning message in RetryInvocationHandler to a debub
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debub

2013-08-19 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


Status: Patch Available  (was: Open)

 Turn warning message in RetryInvocationHandler to a debub
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debub

2013-08-19 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


Attachment: HADOOP-9886.patch

Simple patch that removes the logging if debug is not enabled.
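
For reference, a hedged sketch (not the actual patch) of the kind of guard described, assuming the Commons Logging Log used elsewhere in Hadoop; the class, method, and variable names here are illustrative, not copied from RetryInvocationHandler:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hedged sketch only: emit the per-call failover message at debug level
// instead of warning on every invocation that fails over.
public class RetryLogSketch {
    private static final Log LOG = LogFactory.getLog(RetryLogSketch.class);

    static void logFailover(String methodName, Exception cause) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Exception while invoking " + methodName
                + ". Trying to fail over immediately.", cause);
        }
    }

    public static void main(String[] args) {
        logFailover("delete", new RuntimeException("example failure"));
    }
}
{code}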

 Turn warning message in RetryInvocationHandler to a debub
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debug

2013-08-19 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9886:


Summary: Turn warning message in RetryInvocationHandler to a debug  (was: 
Turn warning message in RetryInvocationHandler to a debub)

 Turn warning message in RetryInvocationHandler to a debug
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9886) Turn warning message in RetryInvocationHandler to a debug

2013-08-19 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13744641#comment-13744641
 ] 

Arpit Gupta commented on HADOOP-9886:
-

No tests added as this was just a change to the logging messages.

 Turn warning message in RetryInvocationHandler to a debug
 -

 Key: HADOOP-9886
 URL: https://issues.apache.org/jira/browse/HADOOP-9886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9886.patch


 Currently if debug is not enabled we display a warning message when the 
 client fails over to another namenode.
 This will happen for every call that goes to the failed over namenode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13680632#comment-13680632
 ] 

Arpit Gupta commented on HADOOP-9625:
-

Yes, I am in the process of doing that. We need to merge HADOOP-9532 to those 
branches first so your patch can apply correctly.

 HADOOP_OPTS not picked up by hadoop command
 ---

 Key: HADOOP-9625
 URL: https://issues.apache.org/jira/browse/HADOOP-9625
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin, conf
Affects Versions: 2.0.3-alpha, 2.0.4-alpha
Reporter: Paul Han
Priority: Minor
 Fix For: 2.0.5-alpha

 Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
 HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users grief 
 was the non-backward-compatible changes. This JIRA is to fix one of those 
 changes:
   HADOOP_OPTS is not picked up any more by hadoop command
 With Hadoop 1, HADOOP_OPTS will be picked up by hadoop command. With Hadoop 
 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop_env.sh :
 export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
 We should fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13680634#comment-13680634
 ] 

Arpit Gupta commented on HADOOP-9625:
-

Once done, I will merge it to branch-2 and 2.1.0-beta.

 HADOOP_OPTS not picked up by hadoop command
 ---

 Key: HADOOP-9625
 URL: https://issues.apache.org/jira/browse/HADOOP-9625
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin, conf
Affects Versions: 2.0.3-alpha, 2.0.4-alpha
Reporter: Paul Han
Priority: Minor
 Fix For: 2.0.5-alpha

 Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
 HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users grief 
 was the non-backward-compatible changes. This JIRA is to fix one of those 
 changes:
   HADOOP_OPTS is not picked up any more by hadoop command
 With Hadoop 1, HADOOP_OPTS will be picked up by hadoop command. With Hadoop 
 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop_env.sh :
 export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
 We should fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9532) HADOOP_CLIENT_OPTS is appended twice by Windows cmd scripts

2013-06-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9532:


Fix Version/s: (was: 3.0.0)
   2.1.0-beta

 HADOOP_CLIENT_OPTS is appended twice by Windows cmd scripts
 ---

 Key: HADOOP-9532
 URL: https://issues.apache.org/jira/browse/HADOOP-9532
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9532.1.patch


 This problem was reported initially for the shell scripts in HADOOP-9455.  
 This issue tracks the same problem for the Windows cmd scripts.  Appending 
 HADOOP_CLIENT_OPTS twice can cause an incorrect JVM launch, particularly if 
 trying to set remote debugging flags.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9625:


Fix Version/s: (was: 2.0.5-alpha)
   2.1.0-beta

 HADOOP_OPTS not picked up by hadoop command
 ---

 Key: HADOOP-9625
 URL: https://issues.apache.org/jira/browse/HADOOP-9625
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin, conf
Affects Versions: 2.0.3-alpha, 2.0.4-alpha
Reporter: Paul Han
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
 HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users grief 
 was the non-backward-compatible changes. This JIRA is to fix one of those 
 changes:
   HADOOP_OPTS is not picked up any more by hadoop command
 With Hadoop 1, HADOOP_OPTS will be picked up by hadoop command. With Hadoop 
 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop_env.sh :
 export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
 We should fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9625:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the contribution Paul. This is committed to trunk, branch-2 and 
branch-2.1-beta

 HADOOP_OPTS not picked up by hadoop command
 ---

 Key: HADOOP-9625
 URL: https://issues.apache.org/jira/browse/HADOOP-9625
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin, conf
Affects Versions: 2.0.3-alpha, 2.0.4-alpha
Reporter: Paul Han
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
 HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users grief 
 was the non-backward-compatible changes. This JIRA is to fix one of those 
 changes:
   HADOOP_OPTS is not picked up any more by hadoop command
 With Hadoop 1, HADOOP_OPTS will be picked up by hadoop command. With Hadoop 
 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop_env.sh :
 export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
 We should fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command

2013-06-10 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13679981#comment-13679981
 ] 

Arpit Gupta commented on HADOOP-9625:
-

+1 

 HADOOP_OPTS not picked up by hadoop command
 ---

 Key: HADOOP-9625
 URL: https://issues.apache.org/jira/browse/HADOOP-9625
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin, conf
Affects Versions: 2.0.3-alpha, 2.0.4-alpha
Reporter: Paul Han
Priority: Minor
 Fix For: 2.0.5-alpha

 Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, 
 HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch

   Original Estimate: 12h
  Remaining Estimate: 12h

 When migrating from Hadoop 1 to Hadoop 2, one thing that caused our users grief 
 was the non-backward-compatible changes. This JIRA is to fix one of those 
 changes:
   HADOOP_OPTS is not picked up any more by hadoop command
 With Hadoop 1, HADOOP_OPTS will be picked up by hadoop command. With Hadoop 
 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop_env.sh :
 export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
 We should fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2013-06-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta reassigned HADOOP-9422:
---

Assignee: Arpit Gupta

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Hitesh Shah
Assignee: Arpit Gupta

 Not sure why this is an enforced requirement especially in cases where a 
 deployment is done using multiple tar-balls ( one each for 
 common/hdfs/mapreduce/yarn ). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2013-06-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9422:


Status: Patch Available  (was: Open)

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Hitesh Shah
Assignee: Arpit Gupta
 Attachments: HADOOP-9422.patch


 Not sure why this is an enforced requirement especially in cases where a 
 deployment is done using multiple tar-balls ( one each for 
 common/hdfs/mapreduce/yarn ). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2013-06-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9422:


Attachment: HADOOP-9422.patch

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Hitesh Shah
Assignee: Arpit Gupta
 Attachments: HADOOP-9422.patch


 Not sure why this is an enforced requirement especially in cases where a 
 deployment is done using multiple tar-balls ( one each for 
 common/hdfs/mapreduce/yarn ). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2013-06-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13676278#comment-13676278
 ] 

Arpit Gupta commented on HADOOP-9422:
-

I can take this up.

One thing we could do is not define the static variable but call the 
appropriate method when needed. I will upload a patch for that. Also, it looks 
like getHadoopHome is not used anywhere in the project, and getQualifiedBinPath 
is used only from getWinUtilsPath, and only on Windows.

So we would also remove getHadoopHome and leave getQualifiedBinPath in place.

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Hitesh Shah
Assignee: Arpit Gupta
 Attachments: HADOOP-9422.patch


 Not sure why this is an enforced requirement especially in cases where a 
 deployment is done using multiple tar-balls ( one each for 
 common/hdfs/mapreduce/yarn ). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-05-01 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13646991#comment-13646991
 ] 

Arpit Gupta commented on HADOOP-9438:
-

[~ojoshi] I think you wanted to tag [~arpitagarwal]

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-9438.20130501.1.patch, HADOOP-9438.patch, 
 HADOOP-9438.patch


 according to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9529) It looks like hadoop.tmp.dir is being used both for local and hdfs directories

2013-04-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645875#comment-13645875
 ] 

Arpit Gupta commented on HADOOP-9529:
-

I noticed that mapred.system.dir and the staging dir use hadoop.tmp.dir in their 
defaults, so it ends up being used for both the local file system and HDFS.

 It looks like hadoop.tmp.dir is being used both for local and hdfs directories
 --

 Key: HADOOP-9529
 URL: https://issues.apache.org/jira/browse/HADOOP-9529
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.20.205.0
 Environment: Ubuntu Server 12.04
Reporter: Ronald Kevin Burton
   Original Estimate: 48h
  Remaining Estimate: 48h

 I would like to separate out the files that are written to /tmp so I added a 
 definition for hadoop.tmp.dir which value I understand as a local folder. It 
 apparently also specifies an HDFS folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9529) It looks like hadoop.tmp.dir is being used both for local and hdfs directories

2013-04-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645893#comment-13645893
 ] 

Arpit Gupta commented on HADOOP-9529:
-

This is the default for the staging dir in Hadoop 1:

<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>${hadoop.tmp.dir}/mapred/staging</value>
  <description>The root of the staging area for users' job files.
  In practice, this should be the directory where users' home
  directories are located (usually /user)
  </description>
</property>


 It looks like hadoop.tmp.dir is being used both for local and hdfs directories
 --

 Key: HADOOP-9529
 URL: https://issues.apache.org/jira/browse/HADOOP-9529
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.20.205.0
 Environment: Ubuntu Server 12.04
Reporter: Ronald Kevin Burton
   Original Estimate: 48h
  Remaining Estimate: 48h

 I would like to separate out the files that are written to /tmp so I added a 
 definition for hadoop.tmp.dir which value I understand as a local folder. It 
 apparently also specifies an HDFS folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9529) It looks like hadoop.tmp.dir is being used both for local and hdfs directories

2013-04-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645933#comment-13645933
 ] 

Arpit Gupta commented on HADOOP-9529:
-

It depends on how the paths are used in the code. In this case the config 
values are used to create a path on hdfs.

 It looks like hadoop.tmp.dir is being used both for local and hdfs directories
 --

 Key: HADOOP-9529
 URL: https://issues.apache.org/jira/browse/HADOOP-9529
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.20.205.0
 Environment: Ubuntu Server 12.04
Reporter: Ronald Kevin Burton
   Original Estimate: 48h
  Remaining Estimate: 48h

 I would like to separate out the files that are written to /tmp so I added a 
 definition for hadoop.tmp.dir which value I understand as a local folder. It 
 apparently also specifies an HDFS folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9529) It looks like hadoop.tmp.dir is being used both for local and hdfs directories

2013-04-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13645935#comment-13645935
 ] 

Arpit Gupta commented on HADOOP-9529:
-

[~rkevinburton]

Would you like to submit a patch for this?

 It looks like hadoop.tmp.dir is being used both for local and hdfs directories
 --

 Key: HADOOP-9529
 URL: https://issues.apache.org/jira/browse/HADOOP-9529
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.20.205.0
 Environment: Ubuntu Server 12.04
Reporter: Ronald Kevin Burton
   Original Estimate: 48h
  Remaining Estimate: 48h

 I would like to separate out the files that are written to /tmp so I added a 
 definition for hadoop.tmp.dir which value I understand as a local folder. It 
 apparently also specifies an HDFS folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9529) It looks like hadoop.tmp.dir is being used both for local and hdfs directories

2013-04-30 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta reassigned HADOOP-9529:
---

Assignee: Arpit Gupta

 It looks like hadoop.tmp.dir is being used both for local and hdfs directories
 --

 Key: HADOOP-9529
 URL: https://issues.apache.org/jira/browse/HADOOP-9529
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.20.205.0
 Environment: Ubuntu Server 12.04
Reporter: Ronald Kevin Burton
Assignee: Arpit Gupta
   Original Estimate: 48h
  Remaining Estimate: 48h

 I would like to separate out the files that are written to /tmp so I added a 
 definition for hadoop.tmp.dir which value I understand as a local folder. It 
 apparently also specifies an HDFS folder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9458) In branch-1, RPC.getProxy(..) may call proxy.getProtocolVersion(..) without retry

2013-04-24 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640827#comment-13640827
 ] 

Arpit Gupta commented on HADOOP-9458:
-

I applied this patch to my cluster and ran through the Pig tests, restarting the 
JobTracker within 5 seconds of each Pig job submission. With this patch the 
tests passed consistently.

 In branch-1, RPC.getProxy(..) may call proxy.getProtocolVersion(..) without 
 retry
 -

 Key: HADOOP-9458
 URL: https://issues.apache.org/jira/browse/HADOOP-9458
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: c9458_20130406.patch


 RPC.getProxy(..) may call proxy.getProtocolVersion(..) without retry even 
 when client has specified retry in the conf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-9379:
---

 Summary: capture the ulimit info after printing the log to the 
console
 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9379:


Description: Based on the discussions in HADOOP-9253, people would prefer that we 
don't print the ulimit info to the console but still have it in the logs.

 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial

 Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
 ulimit info to the console but still have it in the logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9379:


Attachment: HADOOP-9379.branch-1.patch

 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
 ulimit info to the console but still have it in the logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9379:


Attachment: HADOOP-9379.patch

Patch for trunk

 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
 ulimit info to the console but still have it in the logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9379:


Status: Patch Available  (was: Open)

 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
 ulimit info to the console but still have it in the logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-03-07 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13596030#comment-13596030
 ] 

Arpit Gupta commented on HADOOP-9253:
-

I logged HADOOP-9379 and uploaded a patch which captures the ulimit info after 
the head statement so console output is cleaner.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.5-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9379:


Description: 
Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
ulimit info to the console but still have it in the logs.

Just need to move the head statement so it runs before the ulimit capture code.

  was:Based on the discussions in HADOOP-9253 people prefer if we dont print 
the ulimit info to the console but still have it in the logs.


 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
 ulimit info to the console but still have it in the logs.
 Just need to move the head statement so it runs before the ulimit capture code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9379) capture the ulimit info after printing the log to the console

2013-03-07 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13596088#comment-13596088
 ] 

Arpit Gupta commented on HADOOP-9379:
-

No tests added as this is a change to the shell scripts. Manually verified that 
the ulimit info is only in the logs.

 capture the ulimit info after printing the log to the console
 -

 Key: HADOOP-9379
 URL: https://issues.apache.org/jira/browse/HADOOP-9379
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0, 2.0.4-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-9379.branch-1.patch, HADOOP-9379.patch


 Based on the discussions in HADOOP-9253, people would prefer that we don't print the 
 ulimit info to the console but still have it in the logs.
 Just need to move the head statement so it runs before the ulimit capture code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9351) Hadoop daemon startup scripts cause duplication of command line arguments

2013-03-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13593578#comment-13593578
 ] 

Arpit Gupta commented on HADOOP-9351:
-

I'll take a stab at it.

 Hadoop daemon startup scripts cause duplication of command line arguments
 -

 Key: HADOOP-9351
 URL: https://issues.apache.org/jira/browse/HADOOP-9351
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Chris Nauroth
Assignee: Arpit Gupta
 Fix For: 3.0.0


 Command line arguments such as -Dhadoop.log.dir are appearing multiple times 
 when launching Hadoop daemons.  This can cause confusion for an operator 
 looking at the process table, especially if there are different values for 
 multiple occurrences of the same argument.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9351) Hadoop daemon startup scripts cause duplication of command line arguments

2013-03-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta reassigned HADOOP-9351:
---

Assignee: Arpit Gupta

 Hadoop daemon startup scripts cause duplication of command line arguments
 -

 Key: HADOOP-9351
 URL: https://issues.apache.org/jira/browse/HADOOP-9351
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Chris Nauroth
Assignee: Arpit Gupta
 Fix For: 3.0.0


 Command line arguments such as -Dhadoop.log.dir are appearing multiple times 
 when launching Hadoop daemons.  This can cause confusion for an operator 
 looking at the process table, especially if there are different values for 
 multiple occurrences of the same argument.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-26 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13587799#comment-13587799
 ] 

Arpit Gupta commented on HADOOP-9253:
-

@Alejandro

Another thing we could do is capture the ulimit info after the head cmd. That 
way the users still get to see the info. Let me know and I can generate a 
new patch.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-26 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13587806#comment-13587806
 ] 

Arpit Gupta commented on HADOOP-9253:
-

Right, if the head cmd is run before the ulimit info is captured, then the ulimit 
output will only be in the log and not in the terminal.
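
A rough sketch of the ordering being discussed (hedged; variable names follow the hadoop-daemon.sh style and the exact lines in the patch may differ): start the daemon, print the head of the log to the console as before, then append the ulimit output so it lands only in the .out file.

{code}
# hadoop-daemon.sh style sketch (assumed variable names)
nohup nice -n $HADOOP_NICENESS $HADOOP_PREFIX/bin/hadoop --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
sleep 1; head "$log"                         # console still shows the first lines of the log
echo "ulimit -a for user $USER" >> "$log"    # captured after head, so it is log-only
ulimit -a >> "$log" 2>&1
{code}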

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-13 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13577676#comment-13577676
 ] 

Arpit Gupta commented on HADOOP-9253:
-

@Alejandro

I added head -30 because Andy suggested that, with the new information captured in 
the logs, we might miss some info in case of errors. Granted, the user can 
still open the .out file and look at it, but I felt this would somewhat preserve 
the behavior we had before.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13567212#comment-13567212
 ] 

Arpit Gupta commented on HADOOP-9253:
-

@Andy

Sounds good, I will change the head to print 30 lines. The reason is that, since I 
have added about 17 lines, in case of an error we will still see at least 10+ lines 
worth of error log.

I will post an update to the trunk and the branch-1 patch. 

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-30 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Attachment: HADOOP-9253.branch-1.patch

updated branch-1 patch

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-30 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Attachment: HADOOP-9253.patch

updated trunk patch

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-30 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Status: Patch Available  (was: Open)

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha, 1.1.1
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564402#comment-13564402
 ] 

Arpit Gupta commented on HADOOP-9253:
-

bq. Does this also work in context of a secure DN startup? Does the logged 
ulimit reflect the actual JVM's instead of the wrapper's?


Good point, let me test this out and see what it will log.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Status: Open  (was: Patch Available)

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha, 1.1.1
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Attachment: HADOOP-9253.branch-1.patch

@Harsh
I have updated the patch to handle a secure datanode startup. I tested on both a 
secure and a non-secure cluster and the appropriate info was captured. Let me know 
if the approach looks good and I will provide a similar patch for trunk.

@Andy
I am not quite sure I understand what you are referring to. The log file that 
is being printed to the console should never have any leftover contents, as the 
start command overwrites it.

{code}
nohup nice -n $HADOOP_NICENESS $HADOOP_PREFIX/bin/hadoop --config $HADOOP_CONF_DIR $command $@ > $log 2>&1 < /dev/null &
{code}

But if you think the problem still exists, we can open another JIRA for it.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-28 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13564972#comment-13564972
 ] 

Arpit Gupta commented on HADOOP-9253:
-

bq. it's unclear why to write ulimit to $log at all

This is being added so we can debug issues related to the limits set for the 
user. We capture them in the log so the user can refer to them at a later time.

bq. 2. If writing ulimit to $log, why use head to truncate the output

{code}
head $log
{code}

is something that existed before, and hence I left it as is. I can certainly 
change it to -20, but as you mention, if there are errors in the nohup command they 
will be logged to this file as well, so changing it to print 20 lines might not 
help in that case.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-26 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-9253:
---

 Summary: Capture ulimit info in the logs at service start time
 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha, 1.1.1
Reporter: Arpit Gupta
Assignee: Arpit Gupta


output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-26 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Attachment: HADOOP-9253.branch-1.patch

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-26 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Attachment: HADOOP-9253.patch

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-26 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563570#comment-13563570
 ] 

Arpit Gupta commented on HADOOP-9253:
-

the following will be captured in the .out file

{code}
ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
file size   (blocks, -f) unlimited
max locked memory   (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files  (-n) 100
pipe size(512 bytes, -p) 1
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 709
virtual memory  (kbytes, -v) unlimited
{code}

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-01-26 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9253:


Status: Patch Available  (was: Open)

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha, 1.1.1
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.patch


 output of ulimit -a is helpful while debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9251) mvn eclipse:eclipse fails on trunk

2013-01-25 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-9251:
---

 Summary: mvn eclipse:eclipse fails on trunk
 Key: HADOOP-9251
 URL: https://issues.apache.org/jira/browse/HADOOP-9251
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Arpit Gupta


[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on 
project hadoop-common: Request to merge when 'filtering' is not identical. 
Original=resource src/main/resources: output=target/classes, include=[], 
exclude=[common-version-info.properties|**/*.java], test=false, 
filtering=false, merging with=resource src/main/resources: 
output=target/classes, include=[common-version-info.properties], 
exclude=[**/*.java], test=false, filtering=true - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-common


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9251) mvn eclipse:eclipse fails on trunk

2013-01-25 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563230#comment-13563230
 ] 

Arpit Gupta commented on HADOOP-9251:
-

I am getting around it by running mvn 
org.apache.maven.plugins:maven-eclipse-plugin:2.6:eclipse

 mvn eclipse:eclipse fails on trunk
 --

 Key: HADOOP-9251
 URL: https://issues.apache.org/jira/browse/HADOOP-9251
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Arpit Gupta

 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on 
 project hadoop-common: Request to merge when 'filtering' is not identical. 
 Original=resource src/main/resources: output=target/classes, include=[], 
 exclude=[common-version-info.properties|**/*.java], test=false, 
 filtering=false, merging with=resource src/main/resources: 
 output=target/classes, include=[common-version-info.properties], 
 exclude=[**/*.java], test=false, filtering=true - [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn <goals> -rf :hadoop-common

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9251) mvn eclipse:eclipse fails on trunk

2013-01-25 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563239#comment-13563239
 ] 

Arpit Gupta commented on HADOOP-9251:
-

It went through after I cleaned up my Maven cache.
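
Putting the two workarounds from this thread together (hedged sketch; the local-repository path assumes the default ~/.m2 location):

{code}
# Pin the older eclipse plugin version explicitly:
mvn org.apache.maven.plugins:maven-eclipse-plugin:2.6:eclipse
# Or clear the cached plugin from the local Maven repository and retry:
rm -rf ~/.m2/repository/org/apache/maven/plugins/maven-eclipse-plugin
mvn eclipse:eclipse
{code}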

 mvn eclipse:eclipse fails on trunk
 --

 Key: HADOOP-9251
 URL: https://issues.apache.org/jira/browse/HADOOP-9251
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Arpit Gupta

 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on 
 project hadoop-common: Request to merge when 'filtering' is not identical. 
 Original=resource src/main/resources: output=target/classes, include=[], 
 exclude=[common-version-info.properties|**/*.java], test=false, 
 filtering=false, merging with=resource src/main/resources: 
 output=target/classes, include=[common-version-info.properties], 
 exclude=[**/*.java], test=false, filtering=true - [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn <goals> -rf :hadoop-common

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9251) mvn eclipse:eclipse fails on trunk

2013-01-25 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta resolved HADOOP-9251.
-

Resolution: Invalid

 mvn eclipse:eclipse fails on trunk
 --

 Key: HADOOP-9251
 URL: https://issues.apache.org/jira/browse/HADOOP-9251
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Arpit Gupta

 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on 
 project hadoop-common: Request to merge when 'filtering' is not identical. 
 Original=resource src/main/resources: output=target/classes, include=[], 
 exclude=[common-version-info.properties|**/*.java], test=false, 
 filtering=false, merging with=resource src/main/resources: 
 output=target/classes, include=[common-version-info.properties], 
 exclude=[**/*.java], test=false, filtering=true - [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn <goals> -rf :hadoop-common

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8106) hadoop-config.sh script defaults to /usr/etc/hadoop rather than /etc/hadoop for the default location of the conf dir

2012-12-19 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13536370#comment-13536370
 ] 

Arpit Gupta commented on HADOOP-8106:
-

@Suresh

We no longer have RPM scripts in trunk and branch-2. Also, the patch in the RPM 
script fixes a different issue. What this JIRA tried to resolve is that even in a 
tarball layout the expected config dir was HADOOP_HOME/etc/hadoop, whereas in 
branch-1 it is HADOOP_HOME/conf.
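
A hedged sketch of the kind of default resolution being described (variable names follow hadoop-config.sh conventions; the actual patch may differ): honor an explicit HADOOP_CONF_DIR, otherwise pick whichever tarball layout is present instead of assuming /usr/etc/hadoop.

{code}
if [ -z "$HADOOP_CONF_DIR" ]; then
  if [ -d "$HADOOP_PREFIX/conf" ]; then           # branch-1 style tarball layout
    HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
  else                                            # branch-2 style tarball layout
    HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
  fi
fi
export HADOOP_CONF_DIR
{code}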

 hadoop-config.sh script defaults to /usr/etc/hadoop rather than /etc/hadoop 
 for the default location of the conf dir
 

 Key: HADOOP-8106
 URL: https://issues.apache.org/jira/browse/HADOOP-8106
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 0.24.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8106.branch-1.0.patch, HADOOP-8106.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9115) Deadlock in configuration when writing configuration to hdfs

2012-12-04 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-9115:
---

 Summary: Deadlock in configuration when writing configuration to 
hdfs
 Key: HADOOP-9115
 URL: https://issues.apache.org/jira/browse/HADOOP-9115
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Gupta
Priority: Blocker
 Attachments: hive-jstack.log

This was noticed when using Hive with hadoop-1.1.1 and running select count(*) 
from tbl;

This would cause a deadlock in Configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9115) Deadlock in configuration when writing configuration to hdfs

2012-12-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9115:


Attachment: hive-jstack.log

Threads 1640 and 1577 show the deadlock.

 Deadlock in configuration when writing configuration to hdfs
 

 Key: HADOOP-9115
 URL: https://issues.apache.org/jira/browse/HADOOP-9115
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Gupta
Priority: Blocker
 Attachments: hive-jstack.log


 This was noticed when using hive with hadoop-1.1.1 and running select 
 count(*) from tbl;
 This would cause a deadlock in Configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9115) Deadlock in configuration when writing configuration to hdfs

2012-12-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9115:


Description: 
This was noticed when using hive with hadoop-1.1.1 and running 

{code}
select count(*) from tbl;
{code}

This would cause a deadlock in Configuration. 

  was:
This was noticed when using hive with hadoop-1.1.1 and running select count(*) 
from tbl;

This would cause a deadlock configuration. 


 Deadlock in configuration when writing configuration to hdfs
 

 Key: HADOOP-9115
 URL: https://issues.apache.org/jira/browse/HADOOP-9115
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arpit Gupta
Priority: Blocker
 Attachments: hive-jstack.log


 This was noticed when using hive with hadoop-1.1.1 and running 
 {code}
 select count(*) from tbl;
 {code}
 This would cause a deadlock in Configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9115) Deadlock in configuration when writing configuration to hdfs

2012-12-04 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-9115:


Affects Version/s: 1.1.1

 Deadlock in configuration when writing configuration to hdfs
 

 Key: HADOOP-9115
 URL: https://issues.apache.org/jira/browse/HADOOP-9115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Arpit Gupta
Assignee: Jing Zhao
Priority: Blocker
 Attachments: hive-jstack.log


 This was noticed when using hive with hadoop-1.1.1 and running 
 {code}
 select count(*) from tbl;
 {code}
 This would cause a deadlock in Configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-06 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8963:


Attachment: HADOOP-8963.branch-1.patch

Attached a new patch where the check is made as part of the else if statement.

Also added more tests to test the copy methods and the overwrite flags.

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
 HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.
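
A reproduction sketch of the behaviour described above (hedged; alice and data.txt are placeholder names):

{code}
# With no /user/alice directory yet:
hadoop fs -copyFromLocal data.txt .           # bug: ends up as a file named /user/alice
hadoop fs -copyFromLocal data.txt data.txt    # works: creates /user/alice/data.txt
# If /user/alice already exists, the relative form also works as expected.
{code}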

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-06 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492005#comment-13492005
 ] 

Arpit Gupta commented on HADOOP-8963:
-

Added tests for the case where, when using FsShell, the destination already exists 
and the command fails, and also added tests for FileUtil.copy with the overwrite flags.
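
A rough sketch of the shape of these tests (illustrative only, not the code from the 
patch; it assumes a running MiniDFSCluster called cluster, a local source file 
localFile, and a pre-existing destination file /user/arpit/existing.txt):

{code}
// Illustrative sketch, not the actual test from the patch.
Configuration conf = cluster.getFileSystem().getConf();
FsShell shell = new FsShell(conf);

// Copying onto a destination that already exists should fail with a
// non-zero exit code when no overwrite is requested.
int exit = shell.run(new String[] {"-copyFromLocal", localFile, "/user/arpit/existing.txt"});
assertTrue("copy onto an existing file should fail", exit != 0);

// FileUtil.copy with overwrite=true should replace the existing destination.
FileSystem dstFs = cluster.getFileSystem();
FileSystem localFs = FileSystem.getLocal(conf);
boolean copied = FileUtil.copy(localFs, new Path(localFile), dstFs,
    new Path("/user/arpit/existing.txt"),
    false /* deleteSource */, true /* overwrite */, conf);
assertTrue(copied);
{code}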

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
 HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-06 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492023#comment-13492023
 ] 

Arpit Gupta commented on HADOOP-8963:
-

Test-patch output:

{code}
 [exec] BUILD SUCCESSFUL
 [exec] Total time: 5 minutes 6 seconds

 [exec] 
 [exec] 
 [exec] 
 [exec] 
 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 6 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] -1 findbugs.  The patch appears to introduce 9 new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] 
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
{code}

Findbugs warnings are not related to this patch.

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
 HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-29 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13486178#comment-13486178
 ] 

Arpit Gupta commented on HADOOP-8963:
-

Thanks for the feedback, Daryn. I will incorporate it and add more tests.

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
 HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-23 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13482449#comment-13482449
 ] 

Arpit Gupta commented on HADOOP-8963:
-

@Daryn

Thanks for the info. I was actually wrong; this problem exists on trunk as well.

{code}
...skipping...
2012-10-23 09:16:57,838 INFO  hdfs.TestDFSShell (TestDFSShell.java:runCmd(835)) 
- RUN: -copyFromLocal 
/Users/arpit/github/hadoop/trunk/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testCopyFromLocal1.txt
 .
2012-10-23 09:16:57,859 DEBUG ipc.Client (Client.java:sendParam(844)) - IPC 
Client (2029445125) connection to localhost/127.0.0.1:56770 from arpit sending 
#10
2012-10-23 09:16:57,859 DEBUG ipc.Server (Server.java:processData(1614)) -  got 
#10
2012-10-23 09:16:57,860 DEBUG ipc.Server (Server.java:run(1726)) - IPC Server 
handler 1 on 56770: has Call#10for RpcKind RPC_PROTOCOL_BUFFER from 
127.0.0.1:56779
2012-10-23 09:16:57,860 DEBUG security.UserGroupInformation 
(UserGroupInformation.java:logPrivilegedAction(1400)) - PrivilegedAction 
as:arpit (auth:SIMPLE) 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1742)
2012-10-23 09:16:57,860 DEBUG security.Groups (Groups.java:getGroups(83)) - 
Returning cached groups for 'arpit'
2012-10-23 09:16:57,862 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(272)) - allowed=true  ugi=arpit (auth:SIMPLE) 
ip=/127.0.0.1   cmd=getfileinfo src=/user/arpit dst=nullperm=null
{code}

src is /user/arpit rather than /user/arpit/testCopyFromLocal1.txt when the 
user uses '.' as the destination and the home dir does not exist. 

I will try to track down where the issue is.
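
For reference, a minimal sketch of how to reproduce what the audit log shows (it 
assumes a MiniDFSCluster and that /user/arpit does not exist yet; this is not code 
from the patch):

{code}
// Reproduction sketch only; names and paths are illustrative.
FileSystem fs = cluster.getFileSystem();
FsShell shell = new FsShell(fs.getConf());
int exit = shell.run(new String[] {"-copyFromLocal", "testCopyFromLocal1.txt", "."});
// With the missing home dir, "." ends up resolving to /user/arpit itself,
// so on branch-1 the copy creates the *file* /user/arpit instead of
// /user/arpit/testCopyFromLocal1.txt.
if (exit == 0) {
  FileStatus st = fs.getFileStatus(new Path("/user/arpit"));
  System.out.println("exit=" + exit + " isDir=" + st.isDir());
}
{code}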

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8963:


Affects Version/s: 2.0.2-alpha

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 2.0.2-alpha
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-23 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13482528#comment-13482528
 ] 

Arpit Gupta commented on HADOOP-8963:
-

@Daryn

Once again you are right :). I had a bug in my test where I was not asserting the 
exit code of the run command. copyFromLocal does indeed fail on trunk if the home 
dir does not exist.
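
For what it's worth, the missing assertion looks roughly like this (a sketch, not 
the actual test change):

{code}
// Sketch only: capture the shell exit code and assert on it instead of
// ignoring it, so a failed copyFromLocal is caught by the test.
int exit = shell.run(new String[] {"-copyFromLocal", localFile, "."});
assertTrue("copyFromLocal should fail when the home dir does not exist", exit != 0);
{code}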

I will bring up a pseudo-distributed cluster with 3.0, try out various commands, and 
update the JIRA. 

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 2.0.2-alpha
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8963:


Affects Version/s: (was: 2.0.2-alpha)

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-23 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13482560#comment-13482560
 ] 

Arpit Gupta commented on HADOOP-8963:
-

Confirmed on a pseudo-distributed cluster that if the home dir does not exist, the 
copyFromLocal command fails.

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-10-23 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8963:


Attachment: HADOOP-8963.branch-1.patch

Updated the tests in branch-1 to assert the exit code.

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


  1   2   >