"Normal" depends a lot of the KeyValues that get generated.

See the KeyValue section here:

http://hbase.apache.org/book.html#store

... because the space usage has a lot to do with the rowkey length, the CF
name length, the attribute (column qualifier) lengths, and whether you're
using compression on the CF.




On 11/22/11 1:45 PM, "Denis Kreis" <[email protected]> wrote:

>Hi,
>
>I loaded a 2GB log file using importtsv. Each row has 54 values, which are
>all stored in one column family. The disk space consumed on HDFS is about
>46GB. Is it normal?
>I am using HBase on HDFS in pseudo-distributed mode.
>
>Thanks
>Denis
