The description of the HBase table is:
'hbase_table_name', {NAME => 'f', DATA_BLOCK_ENCODING => 'NONE',
BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1',
COMPRESSION => 'LZO', MIN_VERSIONS => '0', TTL => '2147483647',
KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false',
BLOCKCACHE => 'true'}
I want to estimate the size of "hadoop fs -ls
/user/hbase/data/default/table_name"
with either COMPRESSION => 'LZO' or COMPRESSION => 'NONE'.
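Lacking a measured ratio for this data set, only a rough back-of-envelope estimate is possible. The sketch below assumes a roughly 2x LZO compression ratio; that number is an assumption, not a measurement, and actual ratios vary widely with the data:

```shell
#!/bin/sh
# Rough on-disk size estimate for the table directory.
# ASSUMPTION: LZO compresses typical HBase row data by roughly 2x.
# When the table already exists, measure instead with:
#   hadoop fs -du -s /user/hbase/data/default/table_name
RAW_BYTES=$((100 * 1024 * 1024 * 1024))   # 100 GB of raw data
LZO_RATIO=2                               # assumed compression ratio
EST_LZO=$((RAW_BYTES / LZO_RATIO))
echo "uncompressed: ${RAW_BYTES} bytes"
echo "LZO (assumed ${LZO_RATIO}x): ${EST_LZO} bytes"
```

Note that even with COMPRESSION => 'NONE' the HFiles can be larger than the raw 100 GB, because each cell is stored as a full KeyValue (row key, column family, qualifier, and timestamp are repeated per cell), so the actual answer also depends heavily on key and qualifier lengths.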

2016-07-20 12:09 GMT+08:00 Ted Yu <[email protected]>:

> What format are the one billion records saved in at the moment?
>
> The answer would depend on the compression scheme used for the table:
>
> http://hbase.apache.org/book.html#compression
>
> On Tue, Jul 19, 2016 at 8:59 PM, Jone Zhang <[email protected]>
> wrote:
>
> > There is 100 GB of data with one billion records.
> > If I save it to HBase,
> > what will the size of "hadoop fs -ls /user/hbase/data/default/table_name" be?
> >
> > Best wishes.
> >
>
