I just tried it here, Ted, and it passed (on a Mac):

....
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.hbase.util.TestMergeTool
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.422 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

It passes on Linux too?  What do you think the difference is?
St.Ack

P.S. Thanks for noticing my failure committing hbase-3403 to TRUNK.
Well-spotted!

On Sat, Jan 8, 2011 at 9:02 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> I downloaded RC3 and encountered a repeatable unit test failure on my Mac
> laptop
>
> Test set: org.apache.hadoop.hbase.util.TestMergeTool
> -------------------------------------------------------------------------------
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 6.087 sec
> <<< FAILURE!
> testMergeTool(org.apache.hadoop.hbase.util.TestMergeTool)  Time elapsed:
> 6.06 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: 'merging regions 0 and 1' failed
>        at junit.framework.Assert.fail(Assert.java:47)
>        at junit.framework.Assert.assertTrue(Assert.java:20)
>        at
> org.apache.hadoop.hbase.util.TestMergeTool.mergeAndVerify(TestMergeTool.java:182)
>        at
> org.apache.hadoop.hbase.util.TestMergeTool.testMergeTool(TestMergeTool.java:257)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>
> My OS:
> Darwin tyumac.local 10.5.0 Darwin Kernel Version 10.5.0: Fri Nov  5 23:20:39
> PDT 2010; root:xnu-1504.9.17~1/RELEASE_I386 i386 i386
>
> From org.apache.hadoop.hbase.util.TestMergeTool-output.txt:
>
> 2011-01-08 20:57:44,469 WARN
> [org.apache.hadoop.hdfs.server.datanode.dataxceiverser...@1afd1810]
> datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:60366,
> storageID=DS-788800082-192.168.2.107-60366-1294549062630, infoPort=60367,
> ipcPort=60368):DataXceiveServer:
> java.nio.channels.AsynchronousCloseException
>        at
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
>        at
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
>        at
> sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
>        at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
>        at java.lang.Thread.run(Thread.java:680)
>
> Shutting down DataNode 0
>
> FYI
>
> On Fri, Jan 7, 2011 at 5:03 PM, Stack <st...@duboce.net> wrote:
>
>> The fourth hbase 0.90.0 release candidate is available for download:
>>
>>  http://people.apache.org/~stack/hbase-0.90.0-candidate-3/
>>
>> This is going to be the one!
>>
>> Should we release this candidate as hbase 0.90.0?  Take it for a spin.
>> Check out the doc., etc.  Vote +1/-1 by next Friday, the 14th of January.
>>
>> HBase 0.90.0 is the major HBase release that follows 0.20.0 and the
>> fruit of the 0.89.x development release series we've been running of
>> late.
>>
>> Over 1k issues have been closed since 0.20.0.  Release notes are
>> available here: http://su.pr/8LbgvK.
>>
>> HBase 0.90.0 runs on Hadoop 0.20.x.  It does not currently run on
>> Hadoop 0.21.0 nor on Hadoop TRUNK.  HBase will lose data unless it is
>> running on a Hadoop HDFS 0.20.x that has a durable sync. Currently
>> only the branch-0.20-append branch [1] has this attribute (see
>> CHANGES.txt [3] in branch-0.20-append for the list of patches
>> involved in adding append). No official releases have been made from
>> this branch as yet, so you will have to build your own Hadoop from the
>> tip of this branch, OR install Cloudera's CDH3 [2] (it's currently in
>> beta).  CDH3b2 and CDH3b3 have the 0.20-append patches needed for a
>> durable sync. If using CDH, be sure to replace the hadoop jars that
>> are bundled with HBase with those from your CDH distribution.
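[A minimal sketch of that jar swap.  The helper name and the install
paths below are my assumptions, not HBase tooling; adjust HBASE_HOME
and CDH_HOME to your own layout.]

```shell
# Sketch: replace HBase's bundled hadoop-core jar with the one from a
# CDH install.  The HBASE_HOME/CDH_HOME defaults are assumptions.
HBASE_HOME="${HBASE_HOME:-/usr/local/hbase-0.90.0}"
CDH_HOME="${CDH_HOME:-/usr/lib/hadoop-0.20}"

swap_hadoop_jar() {
  # Move the bundled jar aside (kept around in case of rollback),
  # then copy CDH's hadoop-core jar into HBase's lib/ directory.
  mkdir -p "$HBASE_HOME/lib/bundled-hadoop"
  mv "$HBASE_HOME"/lib/hadoop-core-*.jar "$HBASE_HOME/lib/bundled-hadoop/" &&
    cp "$CDH_HOME"/hadoop-core-*.jar "$HBASE_HOME/lib/"
}
```

Run `swap_hadoop_jar` after unpacking the release and before starting
the cluster.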
>>
>> There is no migration necessary.  Your data written with HBase 0.20.x
>> (or with HBase 0.89.x) is readable by HBase 0.90.0.  A shutdown and
>> restart after putting the new HBase in place should be all that's
>> involved.  That said, there is no going back to 0.20.x once the
>> transition has been made.  HBase 0.90.0 and HBase 0.89.x write
>> region names differently in the filesystem, so a rolling restart from
>> 0.20.x or 0.89.x to 0.90.0 will not work.
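[A sketch of that shutdown-and-restart upgrade.  The stop-hbase.sh and
start-hbase.sh scripts ship with HBase, but the helper function and the
two-directory layout are my own assumptions.]

```shell
# Sketch: full-stop upgrade from 0.20.x/0.89.x to 0.90.0.  A rolling
# restart will not work because region names changed on the filesystem.
upgrade_hbase() {
  local old_home="$1" new_home="$2"           # old and new install dirs
  "$old_home/bin/stop-hbase.sh"               # full cluster shutdown first
  cp -r "$old_home/conf/." "$new_home/conf/"  # carry site config over
  "$new_home/bin/start-hbase.sh"              # 0.90.0 reads old data as-is
}
```

e.g. `upgrade_hbase /usr/local/hbase-0.20.6 /usr/local/hbase-0.90.0`.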
>>
>> Yours,
>> The HBasistas
>> P.S. For why the version is 0.90 and what's new in HBase 0.90, see
>> slides 4-10 in this deck [4].
>>
>> 1. http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append
>> 2. http://archive.cloudera.com/docs/
>> 3.
>> http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/CHANGES.txt
>> 4. http://hbaseblog.com/2010/07/04/hug11-hbase-0-90-preview-wrap-up/
>>
>
