Hi,

I've had some limited experience with Oracle, SQL Server,
Informix and at least one commercial in-memory database.

More recently, I've been using MySQL memory tables for fun, speeding
up bulk read-write operations such as:

set max_heap_table_size = 250*1024*1024;

create table mem_proptbl (
    field_one varchar(32),
    value_one varchar(100),
    index using hash (value_one)
) engine=memory;

The downside is the I/O time and churn when later writing the data back to disk.
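
For context, the round trip looks roughly like this (just a sketch;
"proptbl" is a made-up name for the on-disk table, and the middle step
stands in for whatever bulk work you're doing):

-- bulk load from the on-disk table into RAM
insert into mem_proptbl (field_one, value_one)
    select field_one, value_one from proptbl;

-- ... heavy read-write work against mem_proptbl goes here ...

-- the slow part: writing everything back out to disk
insert into proptbl (field_one, value_one)
    select field_one, value_one from mem_proptbl;

drop table mem_proptbl;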

Column-oriented approaches and query languages like SPARQL remind me of
XQuery: good for specific uses, but with limited adoption.

HBase looks to be a component for distributed, RAM- and log-based
byte arrays that should be compressible simply by bzip2ing the logs...

It's a much-needed scalability tool, complementary to an RDBMS,
and its columns don't affect how I store the data offline.

Thanks to its contributors for Rocking the House.

Later,

Peter W.

Jonathan Hendler wrote:

One of the valid points ... has to do with
compression (and null values).  For example - does HBase also offer
tools, or a strategy for compression?
