Build failed in Jenkins: HBase-TRUNK #2017

2011-07-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/HBase-TRUNK/2017/changes Changes: [stack] Added note on diff between snappy in hbase and snappy in hadoop [stack] HBASE-4019 troubleshooting.xml - adding section under NameNode for where to find hbase objects on HDFS --

Re: Converting byte[] to ByteBuffer

2011-07-09 Thread Ted Dunning
MapR does help with the GC because it *does* have a JNI interface into an external block cache. Typical configurations with MapR trim HBase down to the minimal viable size and increase the file system cache correspondingly. On Fri, Jul 8, 2011 at 7:52 PM, Jason Rutherglen
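
A minimal Java sketch of the idea Ted describes: a block cache reached through JNI, so the cached bytes live in native memory rather than on the Java heap and the GC only has small keys to track. The class, method names, and native library are hypothetical; neither HBase nor MapR exposes this exact interface.

    // Hypothetical JNI-backed block cache; block bytes are held in native memory.
    public final class NativeBlockCache {
        static {
            System.loadLibrary("nativeblockcache"); // hypothetical native library
        }

        // Copies the block into memory owned by the native cache; only the small
        // key string stays on the Java heap.
        public native void cacheBlock(String blockKey, byte[] block);

        // Returns null on a miss; on a hit the bytes are copied back into a fresh
        // array (a direct ByteBuffer could be returned instead to avoid the copy).
        public native byte[] getBlock(String blockKey);

        public native void evictBlock(String blockKey);
    }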

Build failed in Jenkins: HBase-TRUNK #2018

2011-07-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/HBase-TRUNK/2018/changes Changes: [tedyu] Added timeout for tests in TestScannerTimeout -- [...truncated 1286 lines...] [INFO] Surefire report directory:

Re: Converting byte[] to ByteBuffer

2011-07-09 Thread M. C. Srivas
On Fri, Jul 8, 2011 at 6:47 PM, Jason Rutherglen jason.rutherg...@gmail.com wrote: There are a couple of things here: one is using direct byte buffers to put the blocks outside of the heap, the other is mmap'ing the blocks directly from the underlying HDFS file. I think they both make sense. And I'm
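
A self-contained Java sketch of the two approaches Jason mentions: copying a block into a direct ByteBuffer allocated outside the heap, and mmap'ing a region of a local file so the pages live in the OS page cache. The file path and offsets are illustrative; a plain DFS client cannot normally see local HDFS block files, which is what the HDFS-347 discussion further down is about.

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class OffHeapBlockSketch {
        // Option 1: copy the block into a direct buffer outside the Java heap.
        static ByteBuffer toDirect(byte[] block) {
            ByteBuffer buf = ByteBuffer.allocateDirect(block.length);
            buf.put(block);
            buf.flip();
            return buf;
        }

        // Option 2: mmap a region of a local file; the mapped pages are backed by
        // the OS page cache rather than the Java heap.
        static MappedByteBuffer mapRegion(String path, long offset, long length) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile(path, "r");
                 FileChannel ch = raf.getChannel()) {
                return ch.map(FileChannel.MapMode.READ_ONLY, offset, length);
            }
        }
    }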

Re: HBASE-3904

2011-07-09 Thread M. C. Srivas
It's not clear from HBASE-3904 what the issues are. If there's some code relying on isTableAvailable, that code is inherently broken. 1. isTableAvailable() is never reliable, because (a) if it returns true, the table can disappear immediately after the call finishes, or (b) the table can
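
A short Java illustration of the check-then-act race Srivas is pointing at, using the 0.90-era HBaseAdmin/HTable client API; the table name is illustrative. The answer from isTableAvailable() is only valid at the instant the call returns, so code that branches on it can still fail immediately afterwards.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;

    public class CheckThenActRace {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            // Check-then-act: the table can be disabled or dropped (or only just
            // finish coming online) between the check and the use below.
            if (admin.isTableAvailable("mytable")) {
                HTable table = new HTable(conf, "mytable"); // may still fail here
                // ... use the table ...
                table.close();
            }
        }
    }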

Re: HBASE-3904

2011-07-09 Thread Ted Yu
I resolved HBASE-3904 because there was no solution that everyone agreed on. On Sat, Jul 9, 2011 at 12:48 PM, M. C. Srivas mcsri...@gmail.com wrote: It's not clear from HBASE-3904 what the issues are. If there's some code relying on isTableAvailable, that code is inherently broken. 1.

Re: Converting byte[] to ByteBuffer

2011-07-09 Thread Ryan Rawson
I think my general point is we could hack up the HBase source, add refcounting, circumvent the GC, etc., or we could demand more from the DFS. If a variant of HDFS-347 were committed, reads could come from the Linux buffer cache and life would be good. The choice isn't fast HBase vs slow HBase,
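
A rough Java sketch of why reads from the Linux buffer cache are attractive: if the client can open the local block file directly (the short-circuit idea behind HDFS-347), repeated reads of hot blocks are served from the page cache instead of being copied through DataNode sockets. This is only an illustration of the concept, not the actual patch; the block file path would really be negotiated with the DataNode.

    import java.io.FileInputStream;
    import java.io.IOException;

    public class LocalBlockReadSketch {
        static int readLocalBlock(String blockFilePath, long offset, byte[] dest) throws IOException {
            try (FileInputStream in = new FileInputStream(blockFilePath)) {
                // Seek to the requested offset within the block file.
                long skipped = in.skip(offset);
                if (skipped != offset) {
                    throw new IOException("could not seek to offset " + offset);
                }
                // Bytes come straight from the local filesystem, so hot blocks are
                // served out of the OS page cache.
                return in.read(dest, 0, dest.length);
            }
        }
    }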

Re: Converting byte[] to ByteBuffer

2011-07-09 Thread Jason Rutherglen
I'm a little confused: I was told none of the HBase code changed with MapR, but if the HBase (not the OS) block cache has a JNI implementation, then that part of the HBase code changed. On Jul 9, 2011 11:19 AM, Ted Dunning tdunn...@maprtech.com wrote: MapR does help with the GC because it *does* have

Re: Converting byte[] to ByteBuffer

2011-07-09 Thread Doug Meil
re: If a variant of hdfs-347 was committed, I agree with what Ryan is saying here, and I'd like to second (third? fourth?) the call to keep pushing for HDFS improvements. Anything else is coding around the bigger I/O issue. On 7/9/11 6:13 PM, Ryan Rawson ryano...@gmail.com wrote: I think my general

Re: Converting byte[] to ByteBuffer

2011-07-09 Thread Ryan Rawson
No lines of HBase were changed to run on MapR. MapR implements the HDFS API and uses JNI to get local data. If HDFS wanted to, it could use more sophisticated methods to get data rapidly from local disk to a client's memory space...as MapR does. On Jul 9, 2011 6:05 PM, Doug Meil
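
A minimal Java sketch of why no HBase lines need to change: HBase codes against the org.apache.hadoop.fs.FileSystem abstraction, so whichever implementation the configuration registers for the default filesystem (stock HDFS, a JNI-backed client, or anything else) is picked up transparently by the same calling code. The path below is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileSystemAbstractionDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Returns whatever FileSystem implementation is configured; the code
            // below is identical regardless of which one that is.
            FileSystem fs = FileSystem.get(conf);
            try (FSDataInputStream in = fs.open(new Path("/hbase/sometable/somefile"))) {
                byte[] buf = new byte[4096];
                int n = in.read(buf);
                System.out.println("read " + n + " bytes via " + fs.getClass().getName());
            }
        }
    }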