Hey guys,

If you are running on hadoop 0.20.2, you are going to lose data when
you crash.  So don't do it :-)

You will need to either use a cdh3 beta (we use b2) or build the
hadoop-20-append branch.  We have built the hadoop-20-append tip and
included the JAR with the default distribution. It is not compatible
with hadoop 0.20.2 (stock/native) or cdh3 beta*.
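Concretely, the swap looks something like the sketch below. This is only an illustration: the `swap_hadoop_jar` helper name is mine, the example paths are made up, and the globs assume the usual hadoop-core jar naming, so check the exact jar name under HBASE_HOME/lib before running anything.

```shell
# Replace the hadoop core jar in a hadoop install with the
# append-capable jar bundled under HBase's lib directory,
# so the cluster and HBase run the exact same hadoop build.
# (Helper name and paths are illustrative, not from any distro.)
swap_hadoop_jar() {
  hadoop_home="$1"
  hbase_home="$2"
  # Remove the stock (non-append) hadoop core jar...
  rm -f "$hadoop_home"/hadoop-*core*.jar
  # ...and drop in the jar HBase ships with in its lib dir.
  cp "$hbase_home"/lib/hadoop-core-*.jar "$hadoop_home"/
}

# e.g.: swap_hadoop_jar /usr/local/hadoop /usr/local/hbase
```

After the swap, restart HDFS so the daemons pick up the replaced jar.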

It's really confusing, but the basic fact is there is no ASF-released
version of hadoop that runs HBase properly. My best suggestion is to
complain to general@, and file JIRAs if you can. It helps when users
complain, since I think everyone has gone tone deaf from me
complaining :-)

-ryan

On Thu, Feb 10, 2011 at 6:13 AM, Mike Spreitzer <[email protected]> wrote:
> Yes, you've got it right.  Let me emphasize that what I did was *much*
> easier than the other way around --- which I tried first and in which I
> had problems.  The Cloudera release specifically depends on Sun security
> classes that are not in the Java (IBM's) that I used.  I tried building
> Hadoop's 0.20-append branch but had some difficulties and it took a long
> time.  The various build instructions I found all talked about running the
> regression test suite once or twice --- and a single run takes hours.  The
> first time I ran it, from a clean download and build, it had problems. And
> the instructions are confusing regarding building the native part.  The
> instructions seem to say you can build and test without building the
> native support; how can that be?
>
> Regards,
> Mike Spreitzer
> SMTP: [email protected], Lotus Notes: Mike Spreitzer/Watson/IBM
> Office phone: +1-914-784-6424 (IBM T/L 863-)
> AOL Instant Messaging: M1k3Sprtzr
>
>
>
> From:   Suraj Varma <[email protected]>
> To:     [email protected]
> Date:   02/10/2011 08:02 AM
> Subject:        Re: Using the Hadoop bundled in the lib directory of HBase
>
>
>
> This procedure does seem a bit opposite of what I've seen folks recommend
> (and the way it is documented in the notsoquick.html). But it might be
> equivalent in this specific case (not completely sure as scripts etc are
> different). I'll let one of the experts comment on that.
>
> If I understood you right, you took the hadoop 0.20.2 release (which does
> not have the append support needed to prevent data loss in some
> situations) and installed that. Next you took hbase 0.90.0's
> hadoop-core.jar (which is from a separately built branch-0.20-append) and
> copied that over to the hadoop installation.
>
> What folks usually do is copy the hadoop install's jar file over to
> hbase - so, if you have a Cloudera install, you would copy the
> Cloudera-built hadoop jar over to your hbase install (replacing the
> hbase hadoop jar).
>
> I'm guessing that in your specific situation since branch-0.20-append and
> hadoop 0.20.2 are fairly close (other than the append changes), it "might"
> work. But - not sure if this is what folks normally do ...
>
> Can someone clarify this? The above procedure Mike followed certainly is
> much simpler in this specific case as he doesn't have to build out his own
> branch-0.20-append and can rather "reuse" the one that was built for
> hbase-0.90.
>
> Thanks,
> --Suraj
>
>
> On Mon, Feb 7, 2011 at 9:17 AM, Mike Spreitzer <[email protected]>
> wrote:
>
>> After a few false starts, what I have done is: fetch the 0.20.2 release
>> of hadoop core (which appears to be common + dfs + mapred), install it,
>> delete hadoop/hadoop-core.jar, unpack the hbase distribution, copy its
>> lib/hadoop-core-...jar file to hadoop/hadoop-...-core.jar, configure,
>> and test.  It seems to be working.  Is that what you expected?  Should I
>> expect subtle problems?
>>
>> If that was the right procedure, this could be explained a little more
>> clearly at (http://hbase.apache.org/notsoquick.html#hadoop).  The first
>> thing that set me on the wrong path was the statement that I have to
>> either build my own Hadoop or use Cloudera; apparently that's not right,
>> I can use a built release if I replace one jar in it.  That web page says
>> "If you want to run HBase on an Hadoop cluster that is other than a
>> version made from branch-0.20.append" (which is my case, using a
>> standard release) "you must replace the hadoop jar found in the HBase
>> lib directory with the hadoop jar you are running out on your cluster to
>> avoid version mismatch issues" --- but I think it's the other way around
>> in my case.
>>
>> Thanks,
>> Mike Spreitzer
>> SMTP: [email protected], Lotus Notes: Mike Spreitzer/Watson/IBM
>> Office phone: +1-914-784-6424 (IBM T/L 863-)
>> AOL Instant Messaging: M1k3Sprtzr
>>
>>
>>
>> From:   Stack <[email protected]>
>> To:     [email protected]
>> Date:   02/07/2011 12:07 PM
>> Subject:        Re: Using the Hadoop bundled in the lib directory of
>> HBase
>> Sent by:        [email protected]
>>
>>
>>
>> On Sun, Feb 6, 2011 at 9:31 PM, Vijay Raj <[email protected]>
>> wrote:
>> > Hadoop core contained hdfs / mapreduce, all bundled together until
>> > 0.20.x.  Since 0.21, it got forked into common, hdfs and mapreduce
>> > sub-projects.
>> >
>>
>> What Vijay said.
>>
>> > In this case - what is needed is a 0.20.2 download from hadoop and
>> > configuring the same. The hadoop-0.20.2.jar needs to be replaced by
>> > the patched hadoop-0.20.2-xxxx.jar available in the HBASE_HOME/lib
>> > directory, to make things work.
>> >
>>
>> This is a little off.
>>
>> Here is our Hadoop story for 0.90.0:
>> http://hbase.apache.org/notsoquick.html#hadoop
>>
>> It links to the branch.  If you need instructions on how to check out
>> and build, just say (do we need to add pointers to the book?)
>>
>> St.Ack
>>
>>
>
>
