Thanks. I think the problem is that I potentially have millions of columns: a single RowResult can hold millions of column-to-value mappings. That's why Map/Reduce is having problems as well (Java heap exception). I've upped mapred.child.java.opts, but the problem persists.
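
For what it's worth, here is roughly how I'm raising the child heap. This is only a minimal sketch assuming the Hadoop 0.19-era JobConf API; the class name and the -Xmx value are placeholders, not my real job:

import org.apache.hadoop.mapred.JobConf;

public class HeapConfig {
  public static JobConf withBiggerChildHeap(Class<?> jobClass) {
    JobConf conf = new JobConf(jobClass);
    // Raise the heap for every map/reduce child JVM (placeholder value).
    conf.set("mapred.child.java.opts", "-Xmx2048m");
    // Even so, if a single RowResult materializes millions of cells, no
    // realistic -Xmx may be enough; the row probably has to come back in
    // smaller column batches instead of one giant map.
    return conf;
  }
}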
Ryan Rawson wrote:
>
> Hey,
>
> A scanner's lease expires in 60 seconds. I'm not sure what version you are
> using, but try:
> table.setScannerCaching(1);
>
> This way you won't retrieve 60 rows that each take 1-2 seconds to process.
>
> This is the new default value in 0.20, but I don't know if it ended up in
> 0.19.x anywhere.
>
>
> On Wed, Jun 10, 2009 at 2:14 PM, llpind <[email protected]> wrote:
>
>> Okay, I think I got it figured out.
>>
>> Although when scanning large row keys I do get the following exception:
>>
>> NativeException: java.lang.RuntimeException:
>> org.apache.hadoop.hbase.UnknownScannerException:
>> org.apache.hadoop.hbase.UnknownScannerException: -4424757523660246367
>>     at org.apache.hadoop.hbase.regionserver.HRegionServer.close(HRegionServer.java:1745)
>>     at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
>>     at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)
>>
>>   from org/apache/hadoop/hbase/client/HTable.java:1741:in `hasNext'
>>   from sun/reflect/NativeMethodAccessorImpl.java:-2:in `invoke0'
>>   from sun/reflect/NativeMethodAccessorImpl.java:39:in `invoke'
>>   from sun/reflect/DelegatingMethodAccessorImpl.java:25:in `invoke'
>>   from java/lang/reflect/Method.java:597:in `invoke'
>>   from org/jruby/javasupport/JavaMethod.java:298:in `invokeWithExceptionHandling'
>>   from org/jruby/javasupport/JavaMethod.java:259:in `invoke'
>>   from org/jruby/java/invokers/InstanceMethodInvoker.java:36:in `call'
>>   from org/jruby/runtime/callsite/CachingCallSite.java:73:in `call'
>>   from org/jruby/ast/CallNoArgNode.java:61:in `interpret'
>>   from org/jruby/ast/WhileNode.java:124:in `interpret'
>>   from org/jruby/ast/NewlineNode.java:101:in `interpret'
>>   from org/jruby/ast/BlockNode.java:68:in `interpret'
>>   from org/jruby/internal/runtime/methods/DefaultMethod.java:156:in `interpretedCall'
>>   from org/jruby/internal/runtime/methods/DefaultMethod.java:133:in `call'
>>   from org/jruby/internal/runtime/methods/DefaultMethod.java:246:in `call'
>>   ... 108 levels...
>>   from org/jruby/internal/runtime/methods/DynamicMethod.java:226:in `call'
>>   from org/jruby/internal/runtime/methods/CompiledMethod.java:216:in `call'
>>   from org/jruby/internal/runtime/methods/CompiledMethod.java:71:in `call'
>>   from org/jruby/runtime/callsite/CachingCallSite.java:260:in `cacheAndCall'
>>   from org/jruby/runtime/callsite/CachingCallSite.java:75:in `call'
>>   from home/hadoop/hbase193/bin/$_dot_dot_/bin/hirb.rb:441:in `__file__'
>>   from home/hadoop/hbase193/bin/$_dot_dot_/bin/hirb.rb:-1:in `__file__'
>>   from home/hadoop/hbase193/bin/$_dot_dot_/bin/hirb.rb:-1:in `load'
>>   from org/jruby/Ruby.java:564:in `runScript'
>>   from org/jruby/Ruby.java:467:in `runNormally'
>>   from org/jruby/Ruby.java:340:in `runFromMain'
>>   from org/jruby/Main.java:214:in `run'
>>   from org/jruby/Main.java:100:in `run'
>>   from org/jruby/Main.java:84:in `main'
>>   from /home/hadoop/hbase193/bin/../bin/hirb.rb:346:in `scan'
>>
>> ===================================================
>>
>> Is there an easy way around this problem?
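
Just to check I'm reading the suggestion right, here is roughly what my scan loop would look like with caching set to 1. This is only a sketch assuming the 0.19/0.20-era HTable/Scanner/RowResult client API (setScannerCaching may not exist in 0.19.x, and the exact getScanner overloads differ a bit between versions); the table and column names just match the schema further down:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class SlowRowScan {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "tableA");
    // Fetch one row per next() RPC, so a row that takes 1-2 seconds to
    // process cannot let the 60-second scanner lease expire on rows that
    // were prefetched but not yet consumed.
    table.setScannerCaching(1);
    Scanner scanner = table.getScanner(new byte[][] { Bytes.toBytes("colFam1:") });
    try {
      for (RowResult row : scanner) {
        // slow per-row processing goes here
      }
    } finally {
      scanner.close(); // release the server-side scanner and its lease
    }
  }
}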
>>
>>
>> Billy Pearson-2 wrote:
>> >
>> > Yes, that's what scanners are good for: they will return all the
>> > column:label combos for a row.
>> > What do the MR job stats say for rows processed for the maps and
>> > reduces?
>> >
>> > Billy Pearson
>> >
>> > "llpind" <[email protected]> wrote in message
>> > news:[email protected]...
>> >>
>> >> Also,
>> >>
>> >> I think what we want is a way to wildcard everything after colFam1:
>> >> (e.g. colFam1:*). Is there a way to do this in HBase?
>> >>
>> >> This is assuming we don't know the column names; we want them all.
>> >>
>> >> llpind wrote:
>> >>>
>> >>> Thanks.
>> >>>
>> >>> Yeah, I've got that column family for sure in the HBase table:
>> >>>
>> >>> {NAME => 'tableA', FAMILIES => [{NAME => 'colFam1', VERSIONS => '3',
>> >>> COMPRESSION => 'NONE', LENGTH => '2147483647', TTL => '-1',
>> >>> IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'colFam2',
>> >>> VERSIONS => '3', COMPRESSION => 'NONE', LENGTH => '2147483647',
>> >>> TTL => '-1', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]}
>> >>>
>> >>> I've been trying to play with rowcounter, and not having much luck
>> >>> either.
>> >>>
>> >>> I run the command:
>> >>> hadoop19/bin/hadoop org.apache.hadoop.hbase.mapred.Driver rowcounter
>> >>> /home/hadoop/dev/rowcounter7 tableA colFam1:
>> >>>
>> >>> The map/reduce finishes just like it does with my own program, but
>> >>> with all part files empty in /home/hadoop/dev/rowcounter7.
>> >>>
>> >>> Any ideas?
>> >>>

--
View this message in context:
http://www.nabble.com/Help-with-Map-Reduce-program-tp23952252p23973170.html
Sent from the HBase User mailing list archive at Nabble.com.
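
P.S. On the colFam1:* question buried in the quotes above: as Billy says, asking for just the family name with a trailing colon returns every column under it. A minimal sketch of enumerating those columns, again assuming the 0.19/0.20-era client API where RowResult behaves as a SortedMap of column name to Cell; names here are illustrative:

import java.util.Map;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyWildcardScan {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "tableA");
    // "colFam1:" with no qualifier asks for every column in the family,
    // which is the colFam1:* behaviour being asked about.
    Scanner scanner = table.getScanner(new byte[][] { Bytes.toBytes("colFam1:") });
    try {
      for (RowResult row : scanner) {
        for (Map.Entry<byte[], Cell> e : row.entrySet()) {
          String column = Bytes.toString(e.getKey()); // "colFam1:<label>"
          byte[] value = e.getValue().getValue();
          System.out.println(column + " = " + Bytes.toString(value));
        }
      }
    } finally {
      scanner.close();
    }
  }
}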
