The default out-of-the-box configuration of HBase does not require Hadoop and
stores data temporarily in my /tmp directory. This is a great way for a
newbie to quickly set up a dev environment.
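For reference, this standalone behavior comes from the default hbase.rootdir setting, which (if I understand the 0.20-era defaults correctly) points at a local file path rather than HDFS. A minimal sketch of overriding it in conf/hbase-site.xml to use HDFS instead might look like this; the host/port values here are just placeholders, not something from my actual setup:

```xml
<!-- conf/hbase-site.xml: point HBase at HDFS instead of local /tmp.
     The namenode host and port below are assumptions for illustration. -->
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- default in standalone mode is something like
         file:///tmp/hbase-${user.name}/hbase -->
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
```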

I guess if I have to run both HBase and Hadoop in at least pseudo-distributed
mode, then I need to reconsider whether I can avoid the use of map/reduce in
my app.
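If I do go that route, my understanding (from the Hadoop docs, so treat this as a sketch rather than something I have verified) is that "starting the mapreduce daemons" just means running a couple of extra scripts from the Hadoop installation before start-hbase.sh. The paths below assume HADOOP_HOME and HBASE_HOME are set to the respective install directories:

```shell
# Bring up HDFS (namenode + datanode) -- required for a
# pseudo-distributed HBase that stores data in HDFS.
$HADOOP_HOME/bin/start-dfs.sh

# Bring up the MapReduce daemons (jobtracker + tasktracker),
# needed only if you actually run MR jobs over HBase tables.
$HADOOP_HOME/bin/start-mapred.sh

# Finally start HBase on top of the running HDFS.
$HBASE_HOME/bin/start-hbase.sh
```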

Thanks.


Amandeep Khurana wrote:
> 
> Where do you store your hbase data? Aren't you using Hadoop and HDFS?
> 
> If you want to run MR jobs over data stored in HBase, you would need a
> Hadoop instance...
> 
> 
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
> 
> 
> On Tue, Sep 8, 2009 at 12:52 PM, Keith Thomas
> <[email protected]>wrote:
> 
>>
>> I have had a great time working through the awesome Hbase 0.20.0 client
>> api
>> as I write my first web app with data persisted by HBase on HDFS. However
>> the time has come to write my first map/reduce job for use by the web
>> app.
>>
>> Until now I've been starting HBase with 'start-hbase.sh' but I see that
>> in
>> these instructions,
>>
>>
>> http://hadoop.apache.org/hbase/docs/current/api/overview-summary.html#runandconfirm
>> that I now need to start "the mapreduce daemons". I'm not clear what this
>> means (this is entirely because I am a newbie).
>>
>> Does this mean I need to move away from the standalone HBase mode I've
>> been
>> working in and now bring up Hadoop as well? Or maybe this means I can
>> keep
>> working in standalone mode and just execute another script in addition to
>> 'start-hbase.sh'? I'm afraid I just don't know.
>>
>> Any pointers offered would be brilliant.
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Mapreduce-dameons--tp25352905p25352905.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Mapreduce-dameons--tp25352905p25353647.html
Sent from the HBase User mailing list archive at Nabble.com.
