Apache HBase 0.98.9 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1.
Are there any plans for including a Filter for Delete?
Currently, the only way seems to be via checkAndDelete in HTable/Table.
This is helpful but does not cover all use cases.
For example, I use column qualifier prefixes as a sort of poor man's 2nd level
of indexing (i.e., 3 levels of indexing
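To make the checkAndDelete limitation concrete, here is a minimal sketch of that call against the 0.98 client API. The table, family, qualifier, and values are hypothetical illustrations, not anything from this thread; it requires a running HBase cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndDeleteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table");  // hypothetical table name
    try {
      Delete delete = new Delete(Bytes.toBytes("row1"));
      // Atomic server-side check: the delete is applied only if cf:status
      // currently equals "obsolete" on that row. This is the closest thing
      // to a "filtered delete" the plain client API offers.
      boolean applied = table.checkAndDelete(
          Bytes.toBytes("row1"),      // row the check is evaluated on
          Bytes.toBytes("cf"),        // column family to check
          Bytes.toBytes("status"),    // qualifier to check
          Bytes.toBytes("obsolete"),  // expected current value
          delete);
      System.out.println("delete applied: " + applied);
    } finally {
      table.close();
    }
  }
}
```

Note that the check is limited to a single column's value on a single known row, which is why it does not cover filter-style use cases such as qualifier prefixes.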
Have you looked at
hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/BulkDeleteEndpoint.java
to see if it fits your need?
Cheers
On Wed, Dec 24, 2014 at 1:34 PM, Devaraja Swami devarajasw...@gmail.com
wrote:
Are there any plans for including a Filter for Delete?
Thanks for your reply, Ted. I looked into the coprocessor example you
provided. It will definitely address my specific need. However, two aspects
of this approach seem less than ideal to me:
1. Being a coprocessor service, I believe the endpoint needs to be
pre-installed on the region servers.
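When pre-installing the coprocessor is not an option, the same effect can be approximated entirely client-side: scan with the desired filter and batch up Deletes for the matching cells. A hypothetical sketch against the 0.98 client API (table name, prefix, and batch size are made up), at the cost of shipping the matching cells over the wire:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilteredDeleteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table");  // hypothetical table name
    Scan scan = new Scan();
    // Match only cells whose qualifier carries the (made-up) index prefix.
    scan.setFilter(new ColumnPrefixFilter(Bytes.toBytes("idx2_")));
    ResultScanner scanner = table.getScanner(scan);
    List<Delete> deletes = new ArrayList<Delete>();
    try {
      for (Result r : scanner) {
        Delete d = new Delete(r.getRow());
        for (Cell cell : r.rawCells()) {
          // Delete all versions of each matched column.
          d.deleteColumns(CellUtil.cloneFamily(cell),
                          CellUtil.cloneQualifier(cell));
        }
        deletes.add(d);
        if (deletes.size() >= 100) {  // arbitrary batch size
          table.delete(deletes);
          deletes.clear();
        }
      }
      if (!deletes.isEmpty()) {
        table.delete(deletes);
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}
```

Unlike BulkDeleteEndpoint, this round-trips matched data to the client, but it needs no server-side installation.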
I would like to add my perspective as a user. (Thanks to Aaron Beppu for
uncovering this hidden issue.) In my applications, I have some tables for
which I need autoflushing, and others for which I need a write buffer.
Moreover, the write buffer size differs from table to table.
All these
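For reference, per-table flush behavior in the 0.98 client can be set like the sketch below. The table names and buffer size are hypothetical; it assumes a running cluster with those tables.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PerTableBufferingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Table where every Put must reach the server immediately.
    HTable metadata = new HTable(conf, "metadata");  // hypothetical table
    metadata.setAutoFlush(true);

    // Table where writes are buffered client-side, with its own buffer size.
    HTable events = new HTable(conf, "events");      // hypothetical table
    events.setAutoFlush(false);
    events.setWriteBufferSize(8 * 1024 * 1024);      // 8 MB, arbitrary choice

    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    events.put(put);        // sits in the client-side buffer until it fills
    events.flushCommits();  // push any buffered Puts explicitly

    metadata.close();
    events.close();
  }
}
```

Because both settings live on the HTable instance, each table handle can carry its own flush policy and buffer size independently.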
bq. Using a scan for just one known row
Can you batch some deletions in one invocation of the endpoint?
Supporting a filter in the delete path requires a non-trivial amount of work.
So for the time being, please use BulkDeleteEndpoint.
Cheers
On Wed, Dec 24, 2014 at 6:23 PM, Devaraja Swami
Thanks, Ted. I can work around my problem by changing other aspects of my
application. Worst case, I can use the BulkDeleteEndpoint and batch up my
deletes like you said.
It's just that the lack of filter support in Delete frequently forces me to
adjust my data model and data access approach.
I understand