Re: Hadoop 2.4.0 How to change Configured Capacity

2014-08-02 Thread arthur.hk.c...@gmail.com
Hi, neither "dfs.namenode.name.dir" nor "dfs.datanode.data.dir" is set in my cluster. By the way, I have searched around for these two parameters but cannot find them on the Hadoop defaults page: http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml Can you please

Re: Hadoop 2.4.0 How to change Configured Capacity

2014-08-02 Thread Harsh J
You will need to set them in hdfs-site.xml. P.S. Their default is present in the hdfs-default.xml you linked to: http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml#dfs.datanode.data.dir On Sat, Aug 2, 2014 at 12:29 PM, arthur.hk.c...@gmail.com
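For reference, a minimal hdfs-site.xml sketch of what Harsh describes (the /data/1 and /data/2 paths are placeholders; use the actual mount points on your DataNodes):

    <!-- hdfs-site.xml: explicit storage locations (paths are examples only) -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///data/1/dfs/nn</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///data/1/dfs/dn,file:///data/2/dfs/dn</value>
    </property>

A DataNode only reports the capacity of the volumes listed in dfs.datanode.data.dir, so pointing this property at the intended disks (and restarting the DataNodes) is what changes the Configured Capacity shown by the NameNode.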

RE: ResourceManager debugging

2014-08-02 Thread Naganarasimha G R (Naga)
Hi Yehia, I set YARN_RESOURCEMANAGER_OPTS in <installation folder>/bin/yarn and I was able to debug: YARN_RESOURCEMANAGER_OPTS=$YARN_RESOURCEMANAGER_OPTS -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=7089,suspend=n Regards, Naga
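Spelled out as it would appear in a shell, e.g. in etc/hadoop/yarn-env.sh rather than editing bin/yarn directly (quoting added; port 7089 is simply the value from Naga's mail):

    # attach a JDWP debug agent to the ResourceManager JVM
    export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS \
      -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=7089,suspend=n"

With suspend=n the ResourceManager starts normally and a remote debugger can attach to port 7089 whenever needed; suspend=y would make the JVM wait for the debugger before starting.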

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread Ana Gillan
Hi everyone, I am having an issue with MapReduce jobs running through Hive being killed after 600s timeouts and with very simple jobs taking over 3 hours (or just failing) for a set of files with a compressed size of only 1-2 GB. I will try and provide as much information as I can here, so if

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread hadoop hive
Can you check the ulimit for your user? That might be causing this. On Aug 2, 2014 8:54 PM, Ana Gillan ana.gil...@gmail.com wrote: Hi everyone, I am having an issue with MapReduce jobs running through Hive being killed after 600s timeouts and with very simple jobs taking over 3 hours (or
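A generic way to check this (not taken from the thread) is to look at the limits of the account that actually runs the Hive/MapReduce tasks:

    # all limits for the current user
    ulimit -a
    # open-file limit only
    ulimit -n
    # effective limits of an already-running daemon, by PID (Linux)
    cat /proc/<pid>/limits

The /proc view is useful because it shows what a daemon actually got at start-up, regardless of later shell changes.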

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread Ana Gillan
For my own user? It is as follows:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 483941
max locked memory       (kbytes, -l) 64

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread Ana Gillan
fs.file-max across the cluster is set to over 6 million. I've checked the open file limits for the accounts used by the Hadoop daemons and they have an open file limit of 32K. This is confirmed by the various .out files, e.g. /var/log/hadoop-hdfs/hadoop-hdfs-datanode-slave1.out contains open files
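For completeness, the system-wide counters mentioned here can be inspected like this (generic Linux commands, not from Ana's mail):

    # system-wide ceiling on open file handles
    cat /proc/sys/fs/file-max
    # current usage: allocated, unused, maximum
    cat /proc/sys/fs/file-nr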

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread hadoop hive
32k seems fine for the mapred user (hoping you are using that user for fetching your data), but if you have huge data on your system you can try 64k. Did you try increasing your timeout from 600 sec to, say, 20 mins? Can you also check at which stage it is getting hung or killed? Thanks On Aug 2, 2014 9:38 PM, Ana
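If the 600-second task timeout turns out to be the culprit, one place to raise it is mapred-site.xml (a sketch; the property is in milliseconds, so 20 minutes is 1200000, and mapred.task.timeout is the older, deprecated name for the same setting):

    <!-- mapred-site.xml: task timeout in milliseconds (default 600000 = 10 minutes) -->
    <property>
      <name>mapreduce.task.timeout</name>
      <value>1200000</value>
    </property>

A value of 0 disables the timeout entirely, which is usually unwise because genuinely hung tasks would then never be killed.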

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread Ana Gillan
I'm not sure which user is fetching the data, but I'm assuming no one changed that from the default. The data isn't huge in size, just in number, so I suppose the open files limit is not the issue? I'm running the job again with mapred.task.timeout=120, but containers are still being killed

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread hadoop hive
Hey, try changing the ulimit to 64k for the user running the query, and change the timeout on the scheduler side, which is currently set to 600 sec. Also check the JT logs for further issues. Thanks On Aug 2, 2014 11:09 PM, Ana Gillan ana.gil...@gmail.com wrote: I’m not sure which user is fetching the data, but I’m
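Raising the open-file limit to 64k is usually done in /etc/security/limits.conf (or a drop-in under /etc/security/limits.d/). A sketch, assuming the querying account is called "hive" — substitute the real user name:

    # /etc/security/limits.conf — raise nofile for the user running the queries
    hive    soft    nofile    65536
    hive    hard    nofile    65536

The new limit only applies to sessions started after the change, so the user has to log in again and long-running services need a restart.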

Re: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException)

2014-08-02 Thread Ana Gillan
Ok, I will request this to be done, as I'm not an admin, and then get back to this thread on Monday. Thank you! From: hadoop hive hadooph...@gmail.com Reply-To: user@hadoop.apache.org Date: Saturday, 2 August 2014 18:50 To: user@hadoop.apache.org Subject: Re:

Re: Fair Scheduler issue

2014-08-02 Thread Yehia Elshater
Hi Julien, Did you try changing yarn.nodemanager.resource.memory-mb to 13 GB, for example (leaving the other 3 GB for the OS)? Thanks On 1 August 2014 05:41, Julien Naour julna...@gmail.com wrote: Hello, I'm currently using HDP 2.0, so it's Hadoop 2.2.0. My cluster consists of 4 nodes, 16 cores, 16 GB
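As a sketch of that suggestion, the property lives in yarn-site.xml on each NodeManager (13 GB expressed in MB; the default is 8192, and the right value depends on what else runs on the node):

    <!-- yarn-site.xml: memory the NodeManager may hand out to containers -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>13312</value>
    </property>

The NodeManagers need a restart for the new value to take effect.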

Re: ResourceManager debugging

2014-08-02 Thread Yehia Elshater
Hi Naga, Thanks a lot for your help. I have submitted multiple MapReduce jobs; the debugger attaches successfully from Eclipse and I put a breakpoint in org.apache.hadoop.yarn.server.resourcemanager.ResourceManager, but the Eclipse debugger still waits without any interruption. However, I put

Exception in hadoop and java

2014-08-02 Thread Ekta Agrawal
Hi, I am writing code in Java that connects to Hadoop. Earlier it was running fine. I wanted to add some charts, so I used the JFreeChart API, and it started giving this error. The chart is not using Hadoop. I removed the chart, but the error keeps coming. If anybody can look into it and help me understand why