On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong <[email protected]> wrote:
> Hello guys,
>
> I set up Hadoop and HBase on EC2. My settings are as follows:
> Apache official version
> Hadoop 0.20.203.0
HBase won't work on this version of hadoop. See
http://hbase.apache.org/book.html#hadoop

> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
>

The jars are not normally named as you have them above; usually the jar
name carries a version.

> org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
> connect to ZooKeeper but the connection closes immediately. This could
> be a sign that the server has too many connections (30 is the
> default). Consider inspecting your ZK server logs for that error and
> then make sure you are reusing HBaseConfiguration as often as you can.
> See HTable's javadoc for more information.
>         at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>         at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
> """

Search this mailing list's archives for similar reports. As a workaround,
raise the maximum count of concurrent zookeeper connections, and make sure
your code reuses HBaseConfiguration rather than creating a new one per
table or per task (there is a sketch of both at the end of this mail).

> 2)
> When running another mapreduce job:
>
> /usr/local/hadoop-0.20.203.0/bin/hadoop jar
> ./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
> 80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
> Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
> -filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
> -alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
> -allLookup -clean -cleanResultsTable
>
> The JobTracker shows this error:
> """
> 12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
> 12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
> 12/01/23 00:51:40 INFO mapred.JobClient: Task Id : attempt_201201212243_0009_m_000174_0, Status : FAILED
> java.lang.Throwable: Child Error
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>         at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
> """
>
> TaskTracker log:
> """
> Could not find the main class: . Program will exit.
> Exception in thread "main" java.lang.NoClassDefFoundError:
> Caused by: java.lang.ClassNotFoundException:
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
> Could not find the main class: . Program will exit.
> """

That's a pretty basic failure; the child JVM couldn't find a basic java
class on its classpath. Can you dig in more on this? You've seen this:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
(there is a classpath sketch at the end of this mail as well).

St.Ack

>
> The real entry point is the main() in SmartRunner.class:
> jar tf ./bin/../dist/xxxxxx.jar | grep SmartRunner
> pMapReduce/SmartRunner.class
>
> Can anyone help me? Thanks a lot.
> --
> Best Regards,
> --
> Fei Dong
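
For the zookeeper limit above: a minimal sketch of the HBaseConfiguration
reuse the exception message asks for. The class name is invented, the table
names are just the ones from your job arguments, and it uses the 0.90.x-era
client API, so treat it as an illustration rather than a drop-in fix:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class SharedConfExample {
  public static void main(String[] args) throws Exception {
    // Create the configuration once and share it. Each separately created
    // HBaseConfiguration ends up with its own zookeeper session, which is
    // what exhausts the default server-side limit of 30 connections.
    Configuration conf = HBaseConfiguration.create();

    // Reuse the same conf for every table handle instead of building a
    // fresh one per HTable or per task.
    HTable landmarks = new HTable(conf, "Landmarks");
    HTable results = new HTable(conf, "test_one");

    // ... reads and writes go here ...

    landmarks.close();
    results.close();
  }
}

To raise the cap itself, bump maxClientCnxns on the zookeeper servers; if
HBase manages zookeeper for you, setting
hbase.zookeeper.property.maxClientCnxns in hbase-site.xml should carry it
through to the quorum.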

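On the classpath question, the page linked above gives two options: put the
output of 'bin/hbase classpath' on HADOOP_CLASSPATH for every tasktracker,
or have the job driver ship the dependent jars itself. A rough sketch of the
latter, assuming your hbase client has TableMapReduceUtil.addDependencyJars;
the driver class name is invented and all of SmartRunner's own option
handling is left out:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class SmartRunnerDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "SmartRunner");

    // Point the job at the jar holding the job classes so the child JVMs
    // on the tasktrackers can load them.
    job.setJarByClass(SmartRunnerDriver.class);

    // Ship the hbase and zookeeper jars (and their dependencies) with the
    // job instead of relying on each tasktracker's HADOOP_CLASSPATH.
    TableMapReduceUtil.addDependencyJars(job);

    // ... configure mappers, reducers, input and output paths, then ...
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}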