For #1,
You really don’t want to do what is suggested by the HBase book.
Yes you can do it, but then again, just because you can do something doesn’t
mean you should. It's really bad advice.
HBase is IRT, not CRUD
(IRT == Insert, Read, Tombstone).
If there is a temporal component to your
For #1, see http://hbase.apache.org/book.html#versions and
http://hbase.apache.org/book.html#schema.versions
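For reference, the maximum number of versions is a per-column-family setting; a quick sketch in the HBase shell (table, family, and qualifier names here are made up):

```
create 'docs', {NAME => 'd', VERSIONS => 3}
# read back up to 3 versions of a cell
get 'docs', 'some-row', {COLUMN => 'd:body', VERSIONS => 3}
```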
Cheers
On Fri, May 1, 2015 at 9:17 PM, Arun Patel arunp.bigd...@gmail.com wrote:
1) Are there any problems having many versions for a column family? What's
the recommended limit?
2) We have created a table for storing document-related data. All
applications in our company store their document data in the same table,
with rowkey as SHA1+Document ID. Table is growing
Hi,
I am pretty new to HBase, so it would be great if someone could help me out
with the queries below:
(Ours is a time series data and all the queries will be range scan on
composite row keys)
a) What is the usual practice for storing data types?
We have noticed that converting
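On (a), one common convention (not from this thread, just a sketch) is to encode each component of a composite row key fixed-width and big-endian, so that HBase's lexicographic byte ordering matches numeric ordering and range scans behave as expected. A minimal illustration using only the JDK (the class and values are made up; it assumes non-negative ids and timestamps, since the sign bit would otherwise invert the byte order):

```java
import java.nio.ByteBuffer;

public class CompositeKey {
    // Composite row key: 8-byte entity id followed by 8-byte timestamp,
    // both big-endian. Fixed width keeps components aligned across rows.
    public static byte[] rowKey(long entityId, long timestampMs) {
        return ByteBuffer.allocate(16)
                .putLong(entityId)     // ByteBuffer is big-endian by default
                .putLong(timestampMs)
                .array();
    }

    // Unsigned lexicographic comparison -- the order HBase sorts row keys in.
    public static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // Within one entity, earlier timestamps sort first, so a range scan
        // from rowKey(id, t1) to rowKey(id, t2) covers exactly [t1, t2).
        System.out.println(compare(rowKey(42L, 1000L), rowKey(42L, 2000L)) < 0);
        System.out.println(compare(rowKey(42L, 1000L), rowKey(43L, 0L)) < 0);
    }
}
```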
For #b, take a look
at
src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java
in 0.94
It supports avg, max, min and sum operations through calling coprocessors.
Here is a snippet from its javadoc:
* This client class is for invoking the aggregate functions deployed on the
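For what it's worth, a minimal caller sketch against the 0.94 API (table and column names here are made up; it assumes the AggregateImplementation coprocessor is loaded on the table, the column holds 8-byte longs, and a live cluster is reachable):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class SumExample {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggClient = new AggregationClient(conf);

        // Restrict the scan to the single long-encoded column to aggregate.
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"));

        // The sum is computed on the region servers via the coprocessor.
        long sum = aggClient.sum(Bytes.toBytes("metrics"),
                new LongColumnInterpreter(), scan);
        System.out.println("sum = " + sum);
    }
}
```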
Hi Vivek,
Take a look at the SQL skin for HBase called Phoenix
(https://github.com/forcedotcom/phoenix). Instead of using the native
HBase client, you use regular JDBC and Phoenix takes care of making the
native HBase calls for you.
We support composite row keys, so you could form your row
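A minimal JDBC sketch of what that looks like from the client side (the connection string, table, and columns are made up; it assumes the Phoenix client jar is on the classpath and the ZooKeeper quorum is reachable):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixExample {
    public static void main(String[] args) throws Exception {
        // URL format is jdbc:phoenix:<zookeeper quorum>
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        Statement stmt = conn.createStatement();
        // Phoenix compiles this into native HBase scans under the hood.
        ResultSet rs = stmt.executeQuery(
                "SELECT host, AVG(cpu) FROM metrics GROUP BY host");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getDouble(2));
        }
        conn.close();
    }
}
```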