Our doc. on this topic can be found here:
http://hadoop.apache.org/hbase/docs/r0.20.3/api/org/apache/hadoop/hbase/mapreduce/package-summary.html#package_description

St.Ack

On Thu, Apr 8, 2010 at 9:40 PM, Todd Lipcon <t...@cloudera.com> wrote:
> On Thu, Apr 8, 2010 at 4:39 PM, Jean-Daniel Cryans <jdcry...@apache.org>wrote:
>
>> Ted, please make sure you keep the conversation on the mailing list;
>> private answers have no value for an open source community.
>>
>> About JT, it would be awesome not to have to restart it but I don't
>> know enough about its internals to comment on how hard it would be to
>> modify. Maybe there's even an open jira.
>>
>> Or maybe someone else who listens could comment? Cloudera people?
>>
>>
> Rather than adding the jars to lib/ or HADOOP_CLASSPATH, you can also insert
> them directly into a lib/ directory inside your job jar. They will then be
> added to the classpath of your MR task child JVM when your job is localized
> on the task trackers. I think this is best practice for any jars that you
> anticipate will change on even a semi-regular basis.
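>
> For example, a minimal sketch of that layout (the jar names, build paths,
> and output jar name here are illustrative, not taken from this thread):
>
>   # bundle dependency jars inside the job jar under lib/
>   mkdir -p build/jobjar/lib
>   cp -r build/classes/* build/jobjar/
>   cp hbase-0.20.3.jar zookeeper-3.2.1.jar build/jobjar/lib/
>   jar cf myjob.jar -C build/jobjar .
>
> The exact build mechanics don't matter; the point is just that the
> dependency jars end up under lib/ inside the jar you submit with your job.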
>
> -Todd
>
>
>> J-D
>>
>> On Thu, Apr 8, 2010 at 4:33 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>> > J-D:
>> > Do you think it makes sense for the job tracker to dynamically refresh
>> > its classpath? I can see this need because HBase, Hive, etc. may
>> > initiate requests in cluster mode and we don't want to restart the job
>> > tracker in production often.
>> >
>> > Cheers
>> >
>> > On Thu, Apr 8, 2010 at 3:52 PM, Jean-Daniel Cryans <jdcry...@apache.org>
>> > wrote:
>> >>
>> >> Actually the preferred method is putting the hbase and zookeeper jars
>> >> on the HADOOP_CLASSPATH in conf/hadoop-env.sh (putting the hbase conf
>> >> folder there is also good).
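>> >>
>> >> For example, a minimal sketch of the relevant lines in conf/hadoop-env.sh
>> >> (the install path and jar versions are illustrative):
>> >>
>> >>   export HBASE_HOME=/opt/hbase
>> >>   export HADOOP_CLASSPATH=$HBASE_HOME/hbase-0.20.3.jar:$HBASE_HOME/lib/zookeeper-3.2.1.jar:$HBASE_HOME/conf:$HADOOP_CLASSPATH
>> >>
>> >> Adding $HBASE_HOME/conf is what lets the job pick up your hbase-site.xml.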
>> >>
>> >> Yeah, you have to restart the JT, but AFAIK it's not HBase's fault, right? ;)
>> >>
>> >> J-D
>> >>
>> >> On Thu, Apr 8, 2010 at 3:47 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>> >> > I copied zookeeper-3.2.1.jar after seeing the exception below.
>> >> > Turns out that I have to restart job tracker to refresh classpath :-(
>> >> >
>> >> > On Thu, Apr 8, 2010 at 3:03 PM, Jean-Daniel Cryans
>> >> > <jdcry...@apache.org>wrote:
>> >> >
>> >> >> The ZooKeeper jar isn't on Hadoop's classpath?
>> >> >>
>> >> >> btw https://issues.apache.org/jira/browse/HBASE-2335
>> >> >>
>> >> >> J-D
>> >> >>
>> >> >> On Thu, Apr 8, 2010 at 2:58 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>> >> >> > I ran hbase export in cluster mode.
>> >> >> >
>> >> >> > From Saturn:
>> >> >> > 10/04/07 14:18:22 INFO zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
>> >> >> > 10/04/07 14:18:22 INFO zookeeper.ClientCnxn: Attempting connection to server uranus/10.10.31.18:2181
>> >> >> > 10/04/07 14:18:22 INFO zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/10.10.31.16:34170 remote=uranus/10.10.31.18:2181]
>> >> >> > 10/04/07 14:18:22 INFO zookeeper.ClientCnxn: Server connection successful
>> >> >> > 10/04/07 14:18:22 INFO mapreduce.TableInputFormatBase: split: 0*->Neptune*:,
>> >> >> > 10/04/07 14:18:23 INFO mapred.JobClient: Running job: job_201004051529_0015
>> >> >> > 10/04/07 14:18:24 INFO mapred.JobClient:  map 0% reduce 0%
>> >> >> > 10/04/07 14:18:36 INFO mapred.JobClient: Task Id : attempt_201004051529_0015_m_000000_0, Status : FAILED
>> >> >> > Error: java.lang.ClassNotFoundException: org.apache.zookeeper.Watcher
>> >> >> >        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
>> >> >> >        at java.security.AccessController.doPrivileged(Native Method)
>> >> >> >        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
>> >> >> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>> >> >> >        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>> >> >> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
>> >> >> >        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
>> >> >> >        at java.lang.ClassLoader.defineClass1(Native Method)
>> >> >> >        at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
>> >> >> >        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
>> >> >> >        at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
>> >> >> >        at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
>> >> >> >        at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
>> >> >> >        at java.security.AccessController.doPrivileged(Native Method)
>> >> >> >        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
>> >> >> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>> >> >> >        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>> >> >> >        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
>> >> >> >        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager.getClientZooKeeperWatcher(HConnectionManager.java:151)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getZooKeeperWrapper(HConnectionManager.java:885)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRootRegion(HConnectionManager.java:901)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:580)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:556)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:630)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:589)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:556)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:630)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:593)
>> >> >> >        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:556)
>> >> >> >        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:127)
>> >> >> >        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:105)
>> >> >> >        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:73)
>> >> >> >        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>> >> >> >        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>> >> >> >        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:536)
>> >> >> >        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>> >> >> >        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>> >> >> >
>> >> >> > I wonder why the above exception was thrown, because we have the
>> >> >> > same hbase deployment on all the servers:
>> >> >> >
>> >> >> > [r...@neptune software]# ls -l lib/zookeeper-3.2.1.jar
>> >> >> > -rwxr-xr-x 1 hbaseadmin hbaseadmin 913093 Oct  7 11:55 lib/zookeeper-3.2.1.jar
>> >> >> >
>> >> >> > Can someone shed some light on how to get past the above exception?
>> >> >> >
>> >> >> > Thanks
>> >> >> >
>> >> >>
>> >> >
>> >
>> >
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>
