Yes, Hadoop 0.18.3 and HBase 0.18.1 load the same data just fine.  The only
issue I have with 0.18.x is a few map tasks that time out on one of the many
processing MR jobs, but that is normal; I just have to set the timeout
higher.  The data itself loads into the HBase tables just fine.
Obviously I'm not using the same code, but it's logically the same; the only
difference is using the new MapReduce API classes in 0.19.
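
For anyone hitting the same map-task timeouts: the knob I mean is
mapred.task.timeout.  A minimal hadoop-site.xml sketch, raising the
10-minute default to 30 minutes (pick whatever fits your jobs):

{{{
<property>
  <name>mapred.task.timeout</name>
  <!-- Milliseconds a task may go without reporting progress before the
       framework kills it.  The default is 600000 (10 minutes). -->
  <value>1800000</value>
</property>
}}}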

As far as the ScannerTimeoutException in HBase 0.19.0 goes, I'll test it
again on HBase 0.19.1 once it's released and let you know.

-alphaomega


On Wed, Mar 4, 2009 at 2:49 AM, stack <[email protected]> wrote:

> Well, it messes up your client, right?  It's not completing its scan?
>
> Is your client processing the return from HBase and taking longer than one
> minute to complete?  If so, its lease on the remote server will have timed
> out in the meantime.  Otherwise, do any of the questions above to Ryan
> apply to you?
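>
> If the answer is yes, the failure pattern looks roughly like the sketch
> below; it assumes the 0.19-era client API (Scanner/RowResult), with method
> and package names from memory, so treat them as assumptions rather than
> gospel:
>
> {{{
> import java.io.IOException;
>
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.HTable;
> import org.apache.hadoop.hbase.client.Scanner;
> import org.apache.hadoop.hbase.io.RowResult;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class SlowScanExample {
>   public static void main(String[] args) throws IOException {
>     HTable table = new HTable(new HBaseConfiguration(), "table_name");
>     Scanner scanner =
>         table.getScanner(new byte[][] { Bytes.toBytes("colfam:") });
>     try {
>       for (RowResult row : scanner) {
>         // If the work here takes longer than
>         // hbase.regionserver.lease.period (default 60000 ms) between
>         // fetches, the regionserver expires the scanner's lease and the
>         // next fetch surfaces as a ScannerTimeoutException.
>         Thread.sleep(61000);  // stand-in for slow per-row processing
>       }
>     } catch (InterruptedException e) {
>       Thread.currentThread().interrupt();  // restore interrupt status
>     } finally {
>       scanner.close();  // always release the server-side scanner
>     }
>   }
> }
> }}}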
>
> Thanks,
> St.Ack
>
> On Tue, Mar 3, 2009 at 10:23 PM, Liu Yan <[email protected]> wrote:
>
> > Can I just ignore this error? I assume it is only a scanner error. What
> > caused it? Memory, disk space, max open files? I checked all of these,
> > but it seems very unlikely that any of those conditions caused it.
> >
> > Regards,
> > Yan
> >
> > 2009/3/1 Ryan Smith <[email protected]>
> >
> > > Hi guys,
> > >
> > > http://pastebin.com/m1c7b01da
> > >
> > > I am also getting the ScannerTimeoutException on HBase/Hadoop 0.19.0.  I
> > > am loading millions of records in 20GB of data in a MapReduce job, just
> > > trying to map (write) one text file to one HBase table.  This job would
> > > complete fine on 0.18.x; this is the first time I've tried to run it on
> > > 0.19.0.  I've repeated this test on several 0.19.0 clusters and I keep
> > > getting this ScannerTimeoutException.  I am going to test it once again
> > > on 0.18.x to make sure that is the problem.
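> > >
> > > In case it helps to reproduce: per input line, the map does essentially
> > > this (a stripped-down sketch, not the actual job; the 0.19 BatchUpdate
> > > API names are from memory, so treat them as assumptions):
> > >
> > > {{{
> > > // "table" is an HTable opened once in the task's configure() method;
> > > // rowKey and lineValue are hypothetical strings parsed from the input.
> > > BatchUpdate update = new BatchUpdate(Bytes.toBytes(rowKey));
> > > update.put(Bytes.toBytes("colfam:qualifier"), Bytes.toBytes(lineValue));
> > > table.commit(update);  // one round trip per row unless batched
> > > }}}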
> > >
> > > -alphaomega
> > >
> > >
> > >
> > > On Sun, Mar 1, 2009 at 3:39 AM, Liu Yan <[email protected]> wrote:
> > >
> > > > I have 2 tables, each with about 350K rows (I ran "count" on them).
> > > >
> > > > For the first table, the scan kept running for about 5 hours before
> > > > failing. For the second table, it took about 5 minutes to hit the same
> > > > error.
> > > >
> > > > How do I increase the timeout? I searched for the "timeout" and "scan"
> > > > keywords under the ./conf folder but didn't find anything too helpful.
> > > >
> > > > Regards,
> > > > Yan
> > > >
> > > > 2009/3/1 stack <[email protected]>
> > > >
> > > > > Before you hit the exception, had it been scanning fine?
> > > > > The exception from the regionserver is a complaint that an outstanding
> > > > > scanner didn't check in within the configured client timeout.  Was
> > > > > something happening on the client?  Was the cell being fetched extra
> > > > > large?  You could up the timeout and see if that helps.
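> > > > >
> > > > > For example, in hbase-site.xml on the regionservers (a sketch; 60000
> > > > > ms is the default lease period, tripled here), followed by a
> > > > > regionserver restart:
> > > > >
> > > > > {{{
> > > > > <property>
> > > > >   <name>hbase.regionserver.lease.period</name>
> > > > >   <!-- Milliseconds a scanner may go between next() calls before the
> > > > >        regionserver expires its lease. -->
> > > > >   <value>180000</value>
> > > > > </property>
> > > > > }}}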
> > > > >
> > > > > St.Ack
> > > > >
> > > > >
> > > > > On Sat, Feb 28, 2009 at 8:00 AM, Liu Yan <[email protected]>
> > > wrote:
> > > > >
> > > > > > hi,
> > > > > >
> > > > > > I tried to scan a table with the "scan 'table_name'" command in
> > > > > > the HBase shell, and hit the following exception:
> > > > > >
> > > > > > {{{
> > > > > > NativeException: java.lang.RuntimeException:
> > > > > > org.apache.hadoop.hbase.client.ScannerTimeoutException
> > > > > >    from org/apache/hadoop/hbase/client/HTable.java:1704:in `hasNext'
> > > > > >    from sun.reflect.GeneratedMethodAccessor10:-1:in `invoke'
> > > > > >    from sun/reflect/DelegatingMethodAccessorImpl.java:25:in `invoke'
> > > > > >    from java/lang/reflect/Method.java:597:in `invoke'
> > > > > >    from org/jruby/javasupport/JavaMethod.java:250:in `invokeWithExceptionHandling'
> > > > > >    from org/jruby/javasupport/JavaMethod.java:219:in `invoke'
> > > > > >    from org/jruby/javasupport/JavaClass.java:416:in `execute'
> > > > > >    from org/jruby/internal/runtime/methods/SimpleCallbackMethod.java:67:in `call'
> > > > > >    from org/jruby/internal/runtime/methods/DynamicMethod.java:70:in `call'
> > > > > >    from org/jruby/runtime/CallSite.java:295:in `call'
> > > > > >    from org/jruby/evaluator/ASTInterpreter.java:646:in `callNode'
> > > > > >    from org/jruby/evaluator/ASTInterpreter.java:324:in `evalInternal'
> > > > > >    from org/jruby/evaluator/ASTInterpreter.java:1790:in `whileNode'
> > > > > >    from org/jruby/evaluator/ASTInterpreter.java:505:in `evalInternal'
> > > > > >    from org/jruby/evaluator/ASTInterpreter.java:620:in `blockNode'
> > > > > >    from org/jruby/evaluator/ASTInterpreter.java:318:in `evalInternal'
> > > > > > ... 121 levels...
> > > > > >    from ruby.usr.local.hbase_minus_0_dot_19_dot_0.bin.hirbInvokermethod__32$RUBY$startOpt:-1:in `call'
> > > > > >    from org/jruby/internal/runtime/methods/DynamicMethod.java:74:in `call'
> > > > > >    from org/jruby/internal/runtime/methods/CompiledMethod.java:48:in `call'
> > > > > >    from org/jruby/runtime/CallSite.java:123:in `cacheAndCall'
> > > > > >    from org/jruby/runtime/CallSite.java:298:in `call'
> > > > > >    from ruby/usr/local/hbase_minus_0_dot_19_dot_0/bin//usr/local/hbase/bin/../bin/hirb.rb:429:in `__file__'
> > > > > >    from ruby/usr/local/hbase_minus_0_dot_19_dot_0/bin//usr/local/hbase/bin/../bin/hirb.rb:-1:in `__file__'
> > > > > >    from ruby/usr/local/hbase_minus_0_dot_19_dot_0/bin//usr/local/hbase/bin/../bin/hirb.rb:-1:in `load'
> > > > > >    from org/jruby/Ruby.java:512:in `runScript'
> > > > > >    from org/jruby/Ruby.java:432:in `runNormally'
> > > > > >    from org/jruby/Ruby.java:312:in `runFromMain'
> > > > > >    from org/jruby/Main.java:144:in `run'
> > > > > >    from org/jruby/Main.java:89:in `run'
> > > > > >    from org/jruby/Main.java:80:in `main'
> > > > > >    from /usr/local/hbase/bin/../bin/hirb.rb:334:in `scan'
> > > > > >    from (hbase):3:in `binding'
> > > > > > }}}
> > > > > >
> > > > > > In the regionserver log file, I found this:
> > > > > >
> > > > > > {{{
> > > > > > 2009-02-28 10:50:48,685 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region 1002_profiles,113495530684109376,1235659992234 in 0sec
> > > > > > 2009-02-28 10:50:54,779 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 310687422691357774 lease expired
> > > > > > 2009-02-28 10:50:55,869 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: org.apache.hadoop.hbase.UnknownScannerException: Name: 310687422691357774
> > > > > > 2009-02-28 10:50:55,871 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020, call next(310687422691357774, 30) from 10.254.51.127:54821: error: org.apache.hadoop.hbase.UnknownScannerException: Name: 310687422691357774
> > > > > > org.apache.hadoop.hbase.UnknownScannerException: Name: 310687422691357774
> > > > > >         at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1568)
> > > > > >         at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> > > > > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > > > >         at java.lang.reflect.Method.invoke(Method.java:597)
> > > > > >         at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
> > > > > >         at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:895)
> > > > > > 2009-02-28 10:51:08,687 INFO org.apache.hadoop.hbase.regionserver.HRegion: starting compaction on region 1002_profiles,266291625891696512,1235713612502
> > > > > > 2009-02-28 10:51:08,688 DEBUG org.apache.hadoop.hbase.regionserver.HStore: 2131509311/fetl: no store files to compact
> > > > > > 2009-02-28 10:51:08,689 DEBUG org.apache.hadoop.hbase.regionserver.HStore: 2131509311/pre_fetl: no store files to compact
> > > > > > 2009-02-28 10:51:08,691 DEBUG org.apache.hadoop.hbase.regionserver.HStore: 2131509311/scored: no store files to compact
> > > > > > 2009-02-28 10:51:08,692 DEBUG org.apache.hadoop.hbase.regionserver.HStore: 2131509311/reverse_edge: no store files to compact
> > > > > > 2009-02-28 10:51:08,692 DEBUG org.apache.hadoop.hbase.regionserver.HStore: 2131509311/inferred: no store files to compact
> > > > > > }}}
> > > > > >
> > > > > > I didn't find any exception in the master log file.
> > > > > >
> > > > > > Can any one help?
> > > > > >
> > > > > > Regards,
> > > > > > Yan
> > > > > >
> > > > >
> > > >
> > >
> >
>
