Hi,
I solved my problem above related to the zookeeper KeeperException and the
other errors.

The solution: I added zookeeper-3.3.2.jar and log4j-1.2.5.jar to the
classpath of HBase, i.e. I set HBASE_CLASSPATH in the
{HBASE_HOME}/conf/hbase-env.sh file to include the two jars above. That
solved my problem. I applied this setting on all the machines of my cluster.
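For reference, the change was along these lines (the /usr/local/lib paths
below are an assumption for illustration; use wherever the jars actually
live on your machines):

```shell
# {HBASE_HOME}/conf/hbase-env.sh
# Append the ZooKeeper and log4j jars to HBase's classpath so task JVMs
# can resolve org.apache.zookeeper.KeeperException and friends.
# NOTE: the jar locations below are illustrative, not canonical.
export HBASE_CLASSPATH="$HBASE_CLASSPATH:/usr/local/lib/zookeeper-3.3.2.jar:/usr/local/lib/log4j-1.2.5.jar"
```

The same edit has to be made on every node, since each tasktracker launches
its task JVMs with its own local classpath.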

Thank you.
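As a postscript for anyone following the thread: Harsh's suggestion below to
use the Tool interface can be sketched roughly as follows. This is only an
outline, not a tested driver; the class names MyHset and setInsertionMapper
are taken from earlier in the thread, and the elided job settings ("...")
stay elided.

```java
// Sketch of a Tool-based driver: ToolRunner parses generic options such as
// "-jt <host:port>", "-fs <namenode>" and "-D key=value" before the
// job-specific arguments reach run(), so no hard-coded
// config.set("mapred.job.tracker", ...) is needed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyHset extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already carries whatever -jt/-fs/-D options
        // ToolRunner parsed from the command line.
        Configuration config = HBaseConfiguration.create(getConf());
        Job job = new Job(config, "SET-Insertion");
        job.setJarByClass(MyHset.class);
        job.setMapperClass(setInsertionMapper.class);
        // ... remaining input/output settings as in the original program ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyHset(), args));
    }
}
```

It is then invoked the same way as discussed below:
{Hadoop_home}/bin/hadoop jar project.jar MyHset -jt <host:port> argument_1 argument_2 ..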

On Wed, Dec 14, 2011 at 8:13 PM, Vamshi Krishna <[email protected]> wrote:

> Hi, thank you. All these days I have been coding in Eclipse and trying to
> run the program from Eclipse only, but I never saw the program run on the
> cluster; it only ran on the LocalJobRunner, even though I set
> config.set("mapred.job.tracker", "jthost:port");
>
> Now I realized one thing; just correct me if I am wrong: "write the code
> in Eclipse, then build it, then jar it, then run it through the command
> line from the hadoop home folder".
>
> For example: {Hadoop_home}/bin/hadoop jar project.jar program_name -jt
> <host:port> argument_1 argument_2 ..
> Is this the correct way? Please correct me if I am wrong.
>
> Now, I did the same thing as mentioned in the lines above. I started
> running the program from one of the datanode machines (one of the machines
> in my 2-node cluster), and now I observe that the program is running on
> the cluster. I specified 4 map tasks and 2 reduce tasks. But out of the 4
> map tasks, only the 2 tasks on the datanode machine are running; the other
> 2 map tasks submitted to the namenode machine are not. I got the following
> error on the console and on the jobtracker web UI page for those
> corresponding tasks.
>
> Please, what is the problem? Help...
>
>
>
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>     at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:569)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>     at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Caused by: java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>     at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>     at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113)
>     ... 3 more
> Caused by: java.lang.NoClassDefFoundError:
> org/apache/zookeeper/KeeperException
>     at SetImplementation.MyHset$setInsertionMapper.(MyHset.java:138)
>     ... 8 more
> Caused by: java.lang.ClassNotFoundException:
> org.apache.zookeeper.KeeperException
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>     ... 9 more
>
>
>
>
>
>
> On Tue, Dec 13, 2011 at 9:22 AM, Harsh J <[email protected]> wrote:
>
>> Vamsi,
>>
>> One easy hack is to:
>> config.set("mapred.job.tracker", "jthost:port");
>>
>> Or better yet, always use the Tool interface to write your Hadoop jobs;
>> then you can simply pass "-jt <host:port>" on the command line when you
>> want the job to run against a cluster.
>>
>> On 13-Dec-2011, at 8:43 AM, Vamshi Krishna wrote:
>>
>> > What should I set in the job's classpath? Where and how should I set
>> > the classpath for the job? My requirement is to run the MR jobs on the
>> > cluster of nodes and NOT via the LocalJobRunner when I start the
>> > program from Eclipse. Please help me.
>> > My snippet of code for the job settings is here; are there any more
>> > settings I need to add?
>> >
>> > public static void main(String args[]) throws IOException,
>> > InterruptedException, ClassNotFoundException
>> >    {
>> >
>> >        Configuration config = HBaseConfiguration.create();
>> >        Job job = new Job(config, "SET-Insertion");
>> >        job.setJarByClass(MyHset.class);
>> >        job.setMapperClass(setInsertionMapper.class);
>> >
>> >        ...
>> >        ...
>> >
>> > On Mon, Dec 12, 2011 at 11:35 PM, Jean-Daniel Cryans <
>> [email protected]>wrote:
>> >
>> >> That setting also needs to be in your job's classpath; it won't guess
>> >> it.
>> >>
>> >> J-D
>> >>
>> >> On Thu, Dec 8, 2011 at 10:14 PM, Vamshi Krishna <[email protected]>
>> >> wrote:
>> >>> Hi Harsh,
>> >>> Yes, no jobs are seen on that jobtracker page: under RUNNING JOBS it
>> >>> is none, under FINISHED JOBS it is none, under FAILED JOBS it is
>> >>> none. It is just as if no job is running. In Eclipse, while the
>> >>> mapreduce program was running, I could see "LocalJobRunner", so, as
>> >>> you said, Eclipse may be merely launching the program via a
>> >>> LocalJobRunner.
>> >>> I ran it like this:
>> >>>
>> >>> 1) Right-click on my main java file -> Run As -> Java Application.
>> >>> So it happened as I mentioned.
>> >>>
>> >>> So I even tried this:
>> >>>
>> >>> 2) Right-click on my main java file -> Run As -> Run on Hadoop. Now
>> >>> nothing happens: no job is created and no process seems to be
>> >>> started. I then checked the jobtracker and tasktracker pages as
>> >>> well, and there too I could see no jobs running; all are none.
>> >>>
>> >>> But if I actually look at the mapred-site.xml file in the conf
>> >>> directory of hadoop, it has this:
>> >>>
>> >>> <name>mapred.job.tracker</name>
>> >>> <value>hadoop-namenode:9001</value>
>> >>>
>> >>> This hadoop-namenode's IP address is 10.0.1.54, and I am running my
>> >>> mapreduce job from an Eclipse residing on the same machine. So
>> >>> mapred.job.tracker is set to one machine and port, and the job
>> >>> should therefore be submitted as a distributed job, right? But why
>> >>> is this not happening? On all machines, all daemons are running.
>> >>> What should I do to run it on the cluster from Eclipse? Please help.
>> >>> On Thu, Dec 8, 2011 at 12:12 PM, Harsh J <[email protected]> wrote:
>> >>>
>> >>>> Do you not see progress, or do you not see a job at all?
>> >>>>
>> >>>> Perhaps the problem is that your Eclipse is merely launching the
>> >>>> program via a LocalJobRunner, and not submitting to the cluster.
>> >>>> This is caused by an improper config setup (you need
>> >>>> "mapred.job.tracker" set, at minimum, to submit a distributed job).
>> >>>>
>> >>>> On 08-Dec-2011, at 12:10 PM, Vamshi Krishna wrote:
>> >>>>
>> >>>>> Hi all,
>> >>>>> I am running HBase on a 3-machine cluster. I am running a
>> >>>>> mapreduce program from Eclipse to insert data into an HBase table,
>> >>>>> so while it was running I opened the hadoop jobtracker and
>> >>>>> tasktracker pages
>> >>>>> (http://10.0.1.54:50030 and http://10.0.1.54:50060) in a browser,
>> >>>>> but I could find no changes or progress of the mapreduce jobs,
>> >>>>> such as the map tasks' progress. What is the problem? How can I
>> >>>>> see their progress in the browser while the mapreduce program is
>> >>>>> running from Eclipse? I am using ubuntu-10.04.
>> >>>>>
>> >>>>> Can anybody help?
>> >>>>>
>> >>>>> --
>> >>>>> *Regards*
>> >>>>> *
>> >>>>> Vamshi Krishna
>> >>>>> *
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>> --
>> >>> *Regards*
>> >>> *
>> >>> Vamshi Krishna
>> >>> *
>> >>
>> >
>> >
>> >
>> > --
>> > *Regards*
>> > *
>> > Vamshi Krishna
>> > *
>>
>>
>
>
>



--
*Regards*
*Vamshi Krishna
MTech, CS, SSSIHL
Prashanthi Nilayam
INDIA*
