Ubuntu 12.04 - Which JDK?

2012-11-07 Thread a...@hsk.hk
Hi, I am planning to use Ubuntu 12.04. From http://wiki.apache.org/hadoop/HadoopJavaVersions, about OpenJDK: "Note*: OpenJDK6 has some open bugs w.r.t handling of generics... so OpenJDK cannot be used to compile hadoop mapreduce code in branch-0.23 and beyond, please use other JDKs." Is it OK

Re: Ubuntu 12.04 - Which JDK?

2012-11-08 Thread a...@hsk.hk
find plenty, like here - http://www.ubuntututorials.com/install-oracle-java-jdk-7-ubuntu-12-04/. These are for jdk 7, but you can follow the same to install jdk 6. Enjoy! On Nov 8, 2012 11:30 AM, "a...@hsk.hk" wrote:

Re: Ubuntu 12.04 - Which JDK?

2012-11-08 Thread a...@hsk.hk
n/java java /usr/lib/jvm/jdk1.6.0_37/bin/java 1
7. sudo update-alternatives --install /usr/bin/javaws javaws /usr/lib/jvm/jdk1.6.0_37/bin/javaws 1
Then choose which java to use:
sudo update-alternatives --config java
choose the no. for java6
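The update-alternatives steps quoted above can be collected into a short script. This is a sketch: the JDK path is the one mentioned in the thread and should be adjusted to wherever your JDK was actually extracted.

```shell
# Register a manually extracted JDK with Debian/Ubuntu's alternatives
# system, then pick it interactively. Path follows the thread; adjust
# to your own install location (assumption).
JDK=/usr/lib/jvm/jdk1.6.0_37

sudo update-alternatives --install /usr/bin/java   java   "$JDK/bin/java"   1
sudo update-alternatives --install /usr/bin/javac  javac  "$JDK/bin/javac"  1
sudo update-alternatives --install /usr/bin/javaws javaws "$JDK/bin/javaws" 1

# Choose which registered java /usr/bin/java should point to.
sudo update-alternatives --config java
```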

Re: Ubuntu 12.04 - Which JDK? and some more

2012-11-10 Thread a...@hsk.hk
buntu, should NOT use LVM (Linux Logical Volume Manager) for Hadoop data disks! (There will be performance issues between the filesystem and the device; LVM is the default for some Linux packages, so be careful not to select LVM.) regards On 8 Nov 2012, at 6:12 PM, a...@hsk.hk wrote: > Hi, thank you
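As a quick way to check whether a candidate Hadoop data disk is already sitting on LVM (a diagnostic sketch; device layout will vary per machine):

```shell
# TYPE shows "lvm" for logical volumes and "part" for plain partitions.
# Per the advice above, Hadoop data directories should live on plain
# partitions, not LVM logical volumes.
lsblk -o NAME,TYPE,MOUNTPOINT
```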

Re: HA for hadoop-0.20.2

2012-11-13 Thread a...@hsk.hk
Hi, A question, is 2.x ready for production deployment? Thanks On 13 Nov 2012, at 5:19 PM, Harsh J wrote: > Hi, > > Why not just use the 2.x releases for HA-NNs? There is quite a wide > delta between 0.20.x and 2.x, especially around the edit log areas > after HDFS-1073. > > In any case, I thi

High Availability - second namenode (master2) issue: Incompatible namespaceIDs

2012-11-15 Thread a...@hsk.hk
Hi, Please help! I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS. Now I am setting up the second master (master2) in order to form HA. When I used jps to check the cluster, I found:
2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. The d

Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

2012-11-16 Thread a...@hsk.hk
vailability - second namenode (master2) issue: Incompatible namespaceIDs > Seems like you haven't formatted your cluster (if it's the 1st time made). > On Fri, Nov 16, 2012 at 9:58 AM, a...@hsk.hk wrote: > Hi, > Please help! > I have installed a Hadoop Cl
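The "Incompatible namespaceIDs" error means the datanode's stored namespaceID no longer matches the namenode's, typically because the namenode was re-formatted. A small sketch of the comparison Hadoop performs against the `VERSION` files (this is illustrative Python, not Hadoop's actual code; the sample file contents are made up):

```python
def read_namespace_id(version_text):
    """Extract the namespaceID from the contents of a Hadoop VERSION file."""
    for line in version_text.splitlines():
        if line.startswith("namespaceID="):
            return int(line.split("=", 1)[1])
    return None

# Example VERSION contents: a namenode formatted after the datanode last
# registered ends up with a different namespaceID than the datanode stored.
nn_version = "namespaceID=123456789\ncTime=0\nstorageType=NAME_NODE\n"
dn_version = "namespaceID=987654321\ncTime=0\nstorageType=DATA_NODE\n"

if read_namespace_id(nn_version) != read_namespace_id(dn_version):
    print("Incompatible namespaceIDs")  # the datanode refuses to start
```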

Datanode: "Cannot start secure cluster without privileged resources"

2012-11-26 Thread a...@hsk.hk
Hi, I am setting up HDFS security with Kerberos: When I manually started the first datanode, I got the following messages (the namenode is started): 1) INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user 2) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:

Re: Datanode: "Cannot start secure cluster without privileged resources"

2012-11-26 Thread a...@hsk.hk
available. Are you using tarballs or packages (and if packages, are they from Bigtop)? > On Mon, Nov 26, 2012 at 5:21 PM, a...@hsk.hk wrote: >> Hi, >> I am setting up HDFS security with Kerberos: When I manually started the first datanode, I got the follo

Re: Datanode: "Cannot start secure cluster without privileged resources"

2012-11-26 Thread a...@hsk.hk
check DN in secure mode? Thanks On 26 Nov 2012, at 9:03 PM, a...@hsk.hk wrote: > Hi Harsh, > > Thank you very much for your reply, got it! > > Thanks > ac > > On 26 Nov 2012, at 8:32 PM, Harsh J wrote: > >> Secure DN needs to be started as root (it runs a
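For reference, the reason a secure datanode must be started as root is that it binds privileged ports (below 1024). An hdfs-site.xml fragment for that setup looks roughly like this; the port numbers are the conventional choices, an assumption here, and HADOOP_SECURE_DN_USER must also be set in hadoop-env.sh:

```xml
<!-- Secure datanode: privileged data and HTTP ports (sketch). -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
```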

Re: Datanode: "Cannot start secure cluster without privileged resources"

2012-11-26 Thread a...@hsk.hk
started with a custom launcher, the class name (or JAR file name) and the arguments to the main method will not be available. In this case, the jps command will output the string Unknown for the class name or JAR file name and for the arguments to the main method." >
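Since jps prints "Unknown" for jsvc-launched daemons, grepping ps output is a more reliable way to spot a secure datanode. A sketch, using a hard-coded sample line (an assumption) in place of real `ps -ef` output:

```shell
# jsvc-launched secure datanodes show up in ps as jsvc.exec with
# -Dproc_datanode, even though jps reports them as "Unknown".
sample='root 16152 1 0 19:30 ? 00:00:05 jsvc.exec -Dproc_datanode'
echo "$sample" | grep -o 'proc_datanode'  # → proc_datanode
```

In practice you would pipe real output: `ps -ef | grep proc_datanode`.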

Re: Datanode: "Cannot start secure cluster without privileged resources"

2012-11-26 Thread a...@hsk.hk
Mon, Nov 26, 2012 at 7:35 PM, a...@hsk.hk wrote: >> Hi, >> Thanks for your reply. >> However, I think 16152 should not be the DN, since >> 1) my second try of "/usr/local/hadoop/bin/hadoop-daemon.sh start datanode" says 16117 (

Failed To Start SecondaryNameNode in Secure Mode

2012-11-27 Thread a...@hsk.hk
Hi, Please help! I tried to start SecondaryNameNode in secure mode with the command: ${HADOOP_HOME}/bin/hadoop-daemon.sh start secondarynamenode 1) from the log, I saw "Login successful" 2012-11-27 22:05:23,120 INFO or

Re: Failed To Start SecondaryNameNode in Secure Mode

2012-11-28 Thread a...@hsk.hk
> > and this principal needs to be available in your /etc/hadoop/hadoop.keytab. > From the logs it looks like you only have the following configured > "dfs.secondary.namenode.kerberos.principal" > > > -- > Arpit Gupta > Hortonworks Inc. > http://horton
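Following Arpit's pointer, the secondary namenode needs both a principal and a keytab configured, not just the principal. A rough hdfs-site.xml sketch; the principal and keytab values below are placeholders:

```xml
<!-- Secondary namenode Kerberos settings (sketch; values are placeholders). -->
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/etc/hadoop/hadoop.keytab</value>
</property>
```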

Re: Failed To Start SecondaryNameNode in Secure Mode

2012-11-29 Thread a...@hsk.hk
Hi, I found this error message in the .out file after trying to start SecondaryNameNode in secure mode Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: m146:m146:0 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.jav
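The "m146:m146:0" authority usually means a host was prepended twice while building the address (e.g. a misconfigured secondary-namenode HTTP address). A sketch of the kind of validation NetUtils.createSocketAddr performs, much simplified and not the actual Hadoop code:

```python
def create_socket_addr(target):
    """Parse 'host:port'; reject anything with extra colons (simplified,
    ignoring IPv6 and default-port handling that the real code supports)."""
    parts = target.split(":")
    if len(parts) != 2 or not parts[1].isdigit():
        raise ValueError(
            "Does not contain a valid host:port authority: " + target)
    return parts[0], int(parts[1])

print(create_socket_addr("m146:50090"))  # a well-formed address parses
# create_socket_addr("m146:m146:0") raises ValueError, matching the log
```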

Re: Failed To Start SecondaryNameNode in Secure Mode

2012-11-29 Thread a...@hsk.hk
n both cases, the port was not 50090, very strange. Thanks AC On 29 Nov 2012, at 5:46 PM, a...@hsk.hk wrote: > Hi, > > I found this error message in the .out file after trying to start > SecondaryNameNode in secure mode > > Exception in thread "main" java.

Re: CheckPoint Node

2012-11-30 Thread a...@hsk.hk
Hi JM, If you migrate 1.0.3 to 2.0.x, would you mind sharing your migration steps? I also have a 1.0.4 cluster (Ubuntu 12.04, Hadoop 1.0.4, HBase 0.94.2 and ZooKeeper 3.4.4) and want to migrate it to 2.0.x in order to avoid hardware failure of the NameNode. I have a testing

Re: Map Reduce jobs taking a long time at the end

2012-12-04 Thread a...@hsk.hk
Hi, Have you also checked the .out file of the TaskTracker in the logs? It could contain some useful information about the issue. Thanks ac On 4 Dec 2012, at 8:27 PM, Jay Whittaker wrote: > Hey, > > We are running Map reduce jobs against a 12 machine hbase cluster and > for a long time they took appro

Re: Strange machine behavior

2012-12-09 Thread a...@hsk.hk
Hi, I always set "vm.swappiness = 0" for my Hadoop servers (PostgreSQL servers too). The reason is that Linux moves memory pages to swap space if they have not been accessed for a period of time (swapping). The Java virtual machine (JVM) does not behave well under swapping, which will make
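Setting swappiness as described is a one-liner plus a persistent entry (a sketch; both commands require root):

```shell
# Apply immediately to the running kernel:
sudo sysctl vm.swappiness=0
# Persist across reboots:
echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf
```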

Re: IOException:Error Recovery for block

2012-12-09 Thread a...@hsk.hk
Hi, Can you let us know which Hadoop version you are using? Thanks ac On 9 Dec 2012, at 3:03 PM, Manoj Babu wrote: > Hi All, > > When grepping the error logs I could see the below for a job which processes some 500GB of data. What would be the cause and how to avoid it further? > > java.io.I