For testing purposes, it is possible to run HBase without HDFS, forgoing
HDFS's durability benefits. Benoit Sigoure has a good writeup here:
http://opentsdb.net/setup-hbase.html

But for larger deployments, HDFS is the way to go. Another approach you
might consider is the pseudo-distributed option, where you get Hadoop+HBase
running all on the same node (http://goo.gl/Rytnp).

Norbert

On Sat, Feb 5, 2011 at 9:51 AM, Norbert Burger <[email protected]> wrote:

> Mike, you'll also need access to an installation of Hadoop, whether
> this is on the same machines as your HBase install (common), or somewhere
> else. Often, people install Hadoop first and then layer HBase over it.
>
> HBase depends on core Hadoop functionality like HDFS, and uses the Hadoop
> JAR in lib/ to support this. But this is library code only; what you're
> missing is the rest of the Hadoop ecosystem (config files, directory
> structure, command-line tools, etc.)
>
> Norbert
>
> On Sat, Feb 5, 2011 at 9:21 AM, Ted Yu <[email protected]> wrote:
>
>> On a related note:
>> http://wiki.apache.org/hadoop/Hadoop%20Upgrade (referenced by
>> http://wiki.apache.org/hadoop/Hbase/HowToMigrate#90) needs to be filled
>> out.
>>
>> On Fri, Feb 4, 2011 at 11:47 PM, Mike Spreitzer <[email protected]>
>> wrote:
>>
>>> Hi, I'm new to HBase and have a stupid question about its dependency on
>>> Hadoop. Section 1.3.1.2 of (http://hbase.apache.org/notsoquick.html)
>>> says there is an "instance" of Hadoop in the lib directory of HBase.
>>> What exactly is meant by "instance"? Is it all I need, or do I need to
>>> get a "full" copy of Hadoop from elsewhere? If HBase already has all I
>>> need, I am having trouble finding it. The Hadoop instructions refer to
>>> commands, for example, that I can't find.
>>>
>>> Thanks,
>>> Mike
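For concreteness, the standalone-vs-HDFS choice above mostly comes down to the hbase.rootdir property in conf/hbase-site.xml. A minimal sketch — the paths and the NameNode address are placeholders, not values from this thread:

```xml
<!-- conf/hbase-site.xml (sketch; paths and hosts are illustrative) -->
<configuration>
  <!-- Standalone mode for testing: HBase writes to the local
       filesystem, so you get none of HDFS's durability guarantees. -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///tmp/hbase</value>
  </property>

  <!-- For (pseudo-)distributed mode, point hbase.rootdir at a running
       HDFS NameNode instead, and mark the cluster as distributed:

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  -->
</configuration>
```

In the HDFS case the NameNode must be up before HBase starts, which is why people typically install and verify Hadoop first and then layer HBase over it, as noted above.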
