Hi Lukas,

The table split constructor expects startRow, endRow, and location; we won't have information about any of these. Moreover, we need the size of the table as a whole, not the split size.
We will use the table size to look for a threshold breach in the metadata table. If the threshold is breached, we have to trigger a delete operation on the table whose threshold was breached, deleting LRU records until the table size is back within the limit (~50-60 GB).

On Mon, Feb 10, 2014 at 6:01 PM, Vikram Singh Chandel <[email protected]> wrote:

> Hi
>
> The requirement is to get the HBase table size (using the API) and save
> this size for each table in a metadata table.
>
> I tried the HDFS command to check the table size, but I need an API
> method (if one is available):
>
> hadoop fs -du -h hdfs://
>
> Thanks
>
> --
> *Regards*
>
> *VIKRAM SINGH CHANDEL*
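For what it's worth, the programmatic equivalent of `hadoop fs -du -s` on a table directory is `FileSystem.getContentSummary(tableDir).getLength()` from the HDFS client API. The sketch below illustrates the threshold-check logic using a local-filesystem directory walk as a stand-in for the HDFS call (so it runs without a cluster); the ~55 GB threshold constant and the class/method names are assumptions for illustration, not anything from the thread.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class TableSizeCheck {
    // Assumed midpoint of the ~50-60 GB limit mentioned in the thread.
    static final long THRESHOLD_BYTES = 55L * 1024 * 1024 * 1024;

    // Sum the size of every regular file under a directory, mirroring
    // what `hadoop fs -du -s` reports for that path. Against HDFS you
    // would instead call:
    //   fs.getContentSummary(new Path("/hbase/<table>")).getLength()
    static long directorySize(Path dir) throws IOException {
        try (Stream<Path> files = Files.walk(dir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> {
                            try {
                                return Files.size(p);
                            } catch (IOException e) {
                                return 0L; // file vanished mid-walk; skip it
                            }
                        })
                        .sum();
        }
    }

    // True when the table's on-disk size breaches the threshold, i.e.
    // when the LRU-record delete should be triggered.
    static boolean breachesThreshold(long tableSizeBytes) {
        return tableSizeBytes > THRESHOLD_BYTES;
    }
}
```

The metadata-table job would then store `directorySize(...)` (or the `getContentSummary` result) per table and fire the delete whenever `breachesThreshold` returns true.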
