Re: [ANNOUNCE] Hadoop version 1.2.1 (stable) released

2013-08-05 Thread Chris K Wensel
Any particular reason the 1.1.2 releases were pulled from the mirrors (so quickly)? On Aug 4, 2013, at 2:08 PM, Matt Foley ma...@apache.org wrote: I'm happy to announce that Hadoop version 1.2.1 has passed its release vote and is now available. It has 18 bug fixes and patches over the

Re: [ANNOUNCE] Hadoop version 1.2.1 (stable) released

2013-08-05 Thread Matt Foley
It's still available in the archive at http://archive.apache.org/dist/hadoop/core/. I can put it back on the main download site if desired, but the model is that the main download site is for stuff we actively want people to download. Here is the relevant quote from

Re: [ANNOUNCE] Hadoop version 1.2.1 (stable) released

2013-08-05 Thread Chris K Wensel
Regardless of what was written in a wiki somewhere, it is a bit aggressive, I think. There are a fair number of automated things that link to the former stable releases that are now broken, as they weren't given a grace period to cut over. Not the end of the world or anything, just a bit of a

Re: [ANNOUNCE] Hadoop version 1.2.1 (stable) released

2013-08-05 Thread Matt Foley
Chris, there is a stable link for exactly this purpose: http://www.apache.org/dist/hadoop/core/stable/ --Matt On Mon, Aug 5, 2013 at 11:43 AM, Chris K Wensel ch...@wensel.net wrote: regardless of what was written in a wiki somewhere, it is a bit aggressive I think. there are a fair number

about monitor metrics in hadoop

2013-08-05 Thread ch huang
Hi all: I installed YARN (no MapReduce v1 installed), and there is no /etc/hadoop/conf/hadoop-env.sh file. I use OpenTSDB to monitor my cluster and need to set some options in hadoop-env.sh. What should I do now? Create a new hadoop-env.sh file?

Re: about monitor metrics in hadoop

2013-08-05 Thread 闫昆
I use CDH 4.3; my hadoop-env.sh file is in the following directory: $HADOOP_HOME/etc/hadoop/ 2013/8/5 ch huang justlo...@gmail.com hi,all: i installed yarn ,no mapreducev1 install,and it have no /etc/hadoop/conf/hadoop-env.sh file exist, i use openTSDB to monitor my cluster,and need some option
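If a hadoop-env.sh has to be created or extended for monitoring, a minimal sketch might look like the lines below, assuming the goal is to expose JMX so that an external collector (for example OpenTSDB's tcollector) can poll the daemons; HADOOP_NAMENODE_OPTS and HADOOP_DATANODE_OPTS are the standard hadoop-env.sh hooks, while the port numbers here are illustrative only:

  # hadoop-env.sh (sketch): enable remote JMX on the HDFS daemons.
  # Ports 8004 and 8006 are arbitrary examples; secure them appropriately.
  export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=8004 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false $HADOOP_NAMENODE_OPTS"
  export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=8006 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false $HADOOP_DATANODE_OPTS"

The same pattern applies to the YARN daemons through YARN_RESOURCEMANAGER_OPTS and YARN_NODEMANAGER_OPTS in yarn-env.sh.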

about replication

2013-08-05 Thread Irfan Sayed
Hi, I have set up a two-node Apache Hadoop cluster in a Windows environment; one node is the namenode and the other is a datanode, and everything is working fine. One thing I need to know is how replication starts: if I create a.txt on the namenode, how will it appear on the datanodes? Please suggest

setLocalResources() on ContainerLaunchContext

2013-08-05 Thread Krishna Kishore Bonagiri
Hi, Can someone please tell me what the use of calling setLocalResources() on ContainerLaunchContext is? Also, an example of how to use it would help... I couldn't guess what the String in the map that is passed to setLocalResources() is, as in the snippet below: // Set the local resources

Re: setLocalResources() on ContainerLaunchContext

2013-08-05 Thread Harsh J
The string for each LocalResource in the map can be anything that serves as a common identifier name for your application. At execution time, the passed resource filename will be aliased to the name you've mapped it to, so that the application code need not track special names. The behavior is
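A minimal sketch of the aliasing Harsh describes, in Java against the Hadoop 2.x YARN API; the HDFS path /apps/myapp/AppMaster.jar and the alias "AppMaster.jar" are hypothetical, not taken from the thread:

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
  import org.apache.hadoop.yarn.api.records.LocalResource;
  import org.apache.hadoop.yarn.api.records.LocalResourceType;
  import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
  import org.apache.hadoop.yarn.util.ConverterUtils;
  import org.apache.hadoop.yarn.util.Records;

  public class LocalResourceSketch {
    public static ContainerLaunchContext buildContext(Configuration conf) throws Exception {
      FileSystem fs = FileSystem.get(conf);
      Path jarPath = new Path("/apps/myapp/AppMaster.jar"); // hypothetical HDFS path
      FileStatus stat = fs.getFileStatus(jarPath);

      LocalResource jar = Records.newRecord(LocalResource.class);
      jar.setResource(ConverterUtils.getYarnUrlFromPath(jarPath));
      jar.setSize(stat.getLen());
      jar.setTimestamp(stat.getModificationTime());
      jar.setType(LocalResourceType.FILE);
      jar.setVisibility(LocalResourceVisibility.APPLICATION);

      // The map key is the alias: inside the container the file appears in the
      // working directory under this name, whatever it was called in HDFS.
      Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
      localResources.put("AppMaster.jar", jar);

      ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
      ctx.setLocalResources(localResources);
      return ctx;
    }
  }

Note that the size and timestamp must match what is actually in HDFS, or the NodeManager will refuse to localize the resource.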

Re: about replication

2013-08-05 Thread Mohammad Tariq
Hello Irfan, You can find all the answers in the HDFS architecture guide (http://hadoop.apache.org/docs/stable/hdfs_design.html). See the section Data Organization (http://hadoop.apache.org/docs/stable/hdfs_design.html#Data+Organization) in particular for this question. Warm Regards, Tariq

Re: metrics v1 in hadoop-2.0.5

2013-08-05 Thread lei liu
There is a hadoop-metrics.properties file in the etc/hadoop directory. I configured the file with the content below: dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 dfs.period=10 dfs.servers=dw74:8649 But the configuration does not work. Can I only use metrics v2 in hadoop-2.0.5? 2013/8/5

Re: metrics v1 in hadoop-2.0.5

2013-08-05 Thread Harsh J
Yes. The daemons all use metrics-v2 in Apache Hadoop 2.x. On Mon, Aug 5, 2013 at 3:56 PM, lei liu liulei...@gmail.com wrote: There is hadoop-metrics.properties file in etc/hadoop directory. I config the file with below content: dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
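For the Ganglia case in the question, the metrics2 equivalent lives in hadoop-metrics2.properties rather than hadoop-metrics.properties; a minimal sketch, reusing the dw74:8649 collector from the thread and assuming Ganglia 3.1 (verify the prefixes against your distribution):

  # hadoop-metrics2.properties (sketch)
  *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
  *.sink.ganglia.period=10
  namenode.sink.ganglia.servers=dw74:8649
  datanode.sink.ganglia.servers=dw74:8649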

Re: metrics v1 in hadoop-2.0.5

2013-08-05 Thread Binglin Chang
Metrics v1 is deprecated, both in 1.2 and 2.x. The existence of hadoop-metrics.properties is confusing; I think it should be removed. On Mon, Aug 5, 2013 at 6:26 PM, lei liu liulei...@gmail.com wrote: There is hadoop-metrics.properties file in etc/hadoop directory. I config the file with

Re: about replication

2013-08-05 Thread Irfan Sayed
Thanks Mohammad. I ran the below command on the NameNode: $ ./hadoop dfs -mkdir /wksp and the wksp dir got created in c:\ (as I have a Windows environment). Now, when I log in to one of the DataNodes, I am not able to see c:\wksp. Any issue? Please suggest. Regards On Mon, Aug 5, 2013 at

Re: about replication

2013-08-05 Thread Mohammad Tariq
You cannot physically see the HDFS files and directories through the local FS. Either use the HDFS shell or the HDFS web UI (namenode_machine:50070). Warm Regards, Tariq cloudfront.blogspot.com On Mon, Aug 5, 2013 at 4:46 PM, Irfan Sayed irfu.sa...@gmail.com wrote: thanks mohammad i ran the below command
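A quick illustration of the distinction, reusing the /wksp directory from this thread; the HDFS namespace is only reachable through HDFS commands, while on disk the datanodes keep opaque block files under dfs.data.dir:

  $ ./hadoop dfs -ls /wksp          # HDFS namespace: this is where /wksp exists
  $ ls /cygdrive/c/wksp             # local filesystem: nothing here; HDFS does not
                                    # mirror its directories as local folders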

Re: about replication

2013-08-05 Thread Irfan Sayed
Thanks. Please refer below: Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin $ ./hadoop dfs -ls /wksp Found 1 items drwxr-xr-x - Administrator Domain 0 2013-08-05 16:58 /wksp/New folder Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin $ same command if i

Re: about replication

2013-08-05 Thread manish dunani
You cannot physically access the datanode; you have to understand it logically, and that is how it really happens. Type the jps command to check whether your datanode was started or not. When a user stores a file into HDFS, the request goes to the datanode, and the datanode will divide the file into a number of blocks. Each
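To see the blocks and replica placement being described, a sketch using the stock tools (the /wksp/a.txt path is illustrative):

  $ ./hadoop fsck /wksp/a.txt -files -blocks -locations
  # lists each block of the file and the datanode(s) holding a replica
  $ ./hadoop dfs -setrep -w 2 /wksp/a.txt
  # sets the replication factor of that file to 2 and waits until it is met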

Re: setLocalResources() on ContainerLaunchContext

2013-08-05 Thread Krishna Kishore Bonagiri
Hi Harsh, Thanks for the quick and detailed reply, it really helps. I am trying to use it and getting this error in node manager's log: 2013-08-05 08:57:28,867 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dsadm (auth:SIMPLE)

Large-scale collection of logs from multiple Hadoop nodes

2013-08-05 Thread Public Network Services
Hi... I am facing a large-scale usage scenario of log collection from a Hadoop cluster and am examining how it should be implemented. More specifically, imagine a cluster that has hundreds of nodes, each of which constantly produces Syslog events that need to be gathered and analyzed at

Re: setLocalResources() on ContainerLaunchContext

2013-08-05 Thread Harsh J
The detail is insufficient to answer why. You should also have gotten a trace after it; can you post that? If possible, also post the relevant snippets of code. On Mon, Aug 5, 2013 at 6:36 PM, Krishna Kishore Bonagiri write2kish...@gmail.com wrote: Hi Harsh, Thanks for the quick and detailed reply,

Re: Large-scale collection of logs from multiple Hadoop nodes

2013-08-05 Thread Inder Pall
We have been using a Flume-like system for such use cases at significantly large scale, and it has been working quite well. Would like to hear thoughts/challenges around using ZeroMQ-like systems at good enough scale. inder you are the average of 5 people you spend the most time with On Aug 5,

Re: about replication

2013-08-05 Thread Irfan Sayed
Thanks. I verified that the datanode is up and running. I ran the below command: Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin $ ./hadoop dfs -copyFromLocal C:\\Users\\Administrator\\Desktop\\hadoop-1.1.2.tar /wksp copyFromLocal: File C:/Users/Administrator/Desktop/hadoop-1.1.2.tar does

Re: setLocalResources() on ContainerLaunchContext

2013-08-05 Thread Krishna Kishore Bonagiri
Hi Harsh, Please see if this is useful; I got a stack trace after the error occurred: 2013-08-06 00:55:30,559 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set to /tmp/nm-local-dir/usercache/dsadm/appcache/application_1375716148174_0004 =