Hi,
I am trying to build a secondary index with hbase-transactional-tableindexed.
My HBase version is hbase-0.90.1-cdh3u0.jar.
Where can I download the hbase-transactional-tableindexed jar?
And how can I deploy that jar (to each region server?)
Or is there a better secondary index component?
This is now maintained for 0.90.RC3 on
https://github.com/hbase-trx/hbase-transactional-tableindexed/
I guess it is compatible with 0.90.3 (I plan to test it one of these days).
Tks,
Eric
On 13/06/11 03:25, Something Something wrote:
What's the best way of implementing transaction management
Hi,
This is the place for the source:
https://github.com/hbase-trx/hbase-transactional-tableindexed
You will have to build it (mvn package) and validate that it is still
compatible with 0.90.1-cdh3u0.
About the installation, they say "Drop the jar in the classpath of your
application." I guess
I've met with the same problem.
Update operations are blocked by memstore flushing, and memstore flushing is
blocked by a compaction (too many store files, so flushing is delayed for 90s).
Have you got any solutions?
2011/5/23 Wayne wav...@gmail.com
We have 4 CFs, but only 1 is ever used for a given
Hi,
It seems that it is not compatible with 0.90.1-cdh3u0.
There are compile errors in openHRegion and replayRecoveredEditsIfAny.
Any ideas?
Thank you
public class TransactionalRegion extends HRegion {
@Override
protected HRegion openHRegion(final Progressable reporter) throws
I guess the github project is no longer compatible with the latest HBase API
changes.
Maybe contact the author (James Kennedy) directly on Github (the last commit
was 2 months ago), or better still, fork it, fix it, and open a pull request on
github :)
Tks,
- Eric
On 13/06/11 09:42, hmch...@tsmc.com wrote:
Hi,
Can anyone please give a working example of the completebulkload tool? How do
we specify the column names and the row key?
--
With Regards,
Jr.
On Sat, Jun 11, 2011 at 5:33 PM, Stack st...@duboce.net wrote:
On Sat, Jun 11, 2011 at 6:57 AM, James Hammerton
james.hammer...@mendeley.com wrote:
We've now come up against another problem, namely that on our cluster
attempts to merge regions don't seem to do anything. I tested my merging
Hi All.
I have a question about the logical division of an hbase cluster, meaning dividing
the region servers with respect to tables. This means that if I have, let's say, 10
computers in the cluster, then tables t1, t2 should be handled by the c1, c3, c7 region
servers and all data related to these tables should be placed on
From: Jason Rutherglen jason.rutherg...@gmail.com
Right, thanks. I think replication is fairly simple; I don't know much
about the HDFS sync code, but if one has sync'd on the HLog writer, then
an HLog reader should be able to read from there?
See my comments on HBASE-2357 regarding the
Everything in HBase is stored as binary arrays of bytes, and it is up to the application to
interpret that data. In PHP, strings are effectively binary arrays of bytes. So to store other
types of data I would look at pack and unpack to produce binary ints, longs, doubles, etc.
If you want to
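The byte-packing idea above can be sketched in Java as well. This is a minimal illustration using plain java.nio.ByteBuffer (not HBase's own Bytes utility, and not the PHP pack/unpack the poster mentions): the cell is just 8 opaque bytes, and only the application knows they encode a long.

```java
import java.nio.ByteBuffer;

public class PackDemo {
    // Pack a long into an 8-byte big-endian array, the kind of
    // opaque byte[] an HBase cell would hold.
    static byte[] packLong(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }

    // Unpack it again; the application must know these bytes hold a long.
    static long unpackLong(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    public static void main(String[] args) {
        byte[] cell = packLong(42L);
        System.out.println(cell.length);       // 8
        System.out.println(unpackLong(cell));  // 42
    }
}
```

The same round-trip idea applies to ints, doubles, etc. via the corresponding ByteBuffer put/get methods.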
Hi,
I'm a newbie to HBase. Can you tell me if HBase supports increment counters?
And do the counters support TTL?
Does it have a duplicate problem when doing retries?
Thanks!
Donal
I am trying to find a way to get the name of the HFile which contains a certain
row id of a table. I can find the region location of the HFile with
table.getRegionLocation(rowid), but HRegionLocation doesn't seem to contain the
name of the file.
Thanks for the help,
Nikos
You will have to use the hfile tool:
http://hbase.apache.org/book.html#hfile_tool
Point it at the region that you've figured contains the row.
St.Ack
On Sat, Jun 11, 2011 at 12:56 PM, nikos pap n_i_k_o_...@yahoo.gr wrote:
I am trying to find a way to get the name of the HFile which contains a
Sharing store files will require a coordination dance between master and
slaves upon compaction and flushes. Sharing active HLogs is more evil given
the [HMaster] may become involved
The log rollover happens relatively infrequently, however yes, when a
log rolls over, it could be tricky on
On Mon, Jun 13, 2011 at 9:03 AM, donal donal0...@gmail.com wrote:
I'm a newbie to HBase. Can you tell me if HBase supports increment counters?
Yes.
(There is an issue filed against decrements with a fix yet to be committed)
And do the counters support TTL?
Yes.
Does it have duplicate
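The "duplicate on retry" concern can be illustrated with a toy counter in plain Java (this is a conceptual sketch, not HBase's Increment API): an increment is not idempotent, so if a client times out waiting for an ack that was actually applied and then retries, the value is bumped twice.

```java
import java.util.concurrent.atomic.AtomicLong;

public class RetryDemo {
    // Stand-in for a server-side counter cell.
    static final AtomicLong counter = new AtomicLong();

    // A non-idempotent increment: applying it twice counts twice.
    static long increment(long delta) {
        return counter.addAndGet(delta);
    }

    public static void main(String[] args) {
        increment(1);  // server applies the increment...
        // ...but the ack is lost, so the client times out and retries:
        increment(1);
        System.out.println(counter.get());  // 2, not 1 -- the retry double-counted
    }
}
```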
You'd be fighting with HBase if you want one HBase instance and try
something like you described. You'd be better off splitting your
machines into multiple clusters...
J-D
On Mon, Jun 13, 2011 at 6:29 AM, Shuja Rehman shujamug...@gmail.com wrote:
Hi All.
I have a question about logical division
On Mon, Jun 13, 2011 at 6:29 AM, Shuja Rehman shujamug...@gmail.com wrote:
I have a question about the logical division of an hbase cluster, meaning dividing
the region servers with respect to tables. This means that if I have, let's say, 10
computers in the cluster, then tables t1, t2 should be handled by the c1, c3, c7 region
You don't need to specify row keys or columns; that's supposed to be
already done by the time you run completebulkload, since the
previous step will output files that will be given to HBase. See
http://hbase.apache.org/bulk-loads.html
J-D
On Mon, Jun 13, 2011 at 2:16 AM, James Ram
Unless your normal workload is very heavy on writes (which is Wayne's
case), you're better off using bulk loading:
http://hbase.apache.org/bulk-loads.html
J-D
On Mon, Jun 13, 2011 at 12:26 AM, Sheng Chen chensheng2...@gmail.com wrote:
I've met with the same problem.
Update operations are
The below seems right. Do you have a patch Jieshan?
St.Ack
On Sun, Jun 12, 2011 at 2:41 AM, bijieshan bijies...@huawei.com wrote:
From the HMaster logs, I found something weird:
2011-05-24 11:12:11,152 INFO org.apache.hadoop.hbase.master.HMaster: balance
If they have divergent read and write patterns why not put them in separate
tables?
That's an entirely fair question. I'm new to this. I figured if the data
was related to the same thing and could have the same key, then it ought to
go into various CFs on that key in a single table. I got
Awesome, this is a very good bug. I filed
https://issues.apache.org/jira/browse/HBASE-3984
J-D
On Sun, Jun 12, 2011 at 2:03 AM, bijieshan bijies...@huawei.com wrote:
Thanks J-D.
I have got the reasons.
The original .META. location is 100-9. So after verifying the .META. region location
and failing,
That's an entirely fair question. I'm new to this. I figured if the data
was related to the same thing and could have the same key, then it ought to
go into various CFs on that key in a single table. I got the feeling from
reading the BigTable paper that the typical design approach was to
Re: keyed by something like [timestamp, action details, session ID]
Read the part about monotonically increasing keys in the HBase book. There
have been lots of other threads in the dist-list about this topic too.
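One common workaround discussed in those threads is salting the row key so that monotonically increasing keys (like timestamps) spread across buckets instead of hammering one region. A minimal sketch, assuming a simple modulo salt and a zero-padded prefix (both of which are illustrative choices, not a prescribed scheme):

```java
public class SaltedKey {
    // Spread monotonically increasing keys (e.g. timestamps) across
    // `buckets` key-prefix ranges by deriving a salt from the key itself.
    static String salt(long timestamp, int buckets) {
        long bucket = Math.floorMod(timestamp, (long) buckets);
        // Zero-padded prefix so keys still sort correctly within a bucket.
        return String.format("%02d-%d", bucket, timestamp);
    }

    public static void main(String[] args) {
        // Consecutive timestamps land in different buckets
        // instead of all writes hitting one hot region.
        System.out.println(salt(1000L, 8));  // 00-1000
        System.out.println(salt(1001L, 8));  // 01-1001
    }
}
```

The trade-off is that a time-range scan must now fan out over all buckets, which is why the HBase book recommends understanding the access pattern first.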
-Original Message-
From: Leif Wickland
I think you are confusing a few things, I'll try to clear this up inline.
J-D
On Fri, Jun 10, 2011 at 8:27 PM, Sam Seigal selek...@yahoo.com wrote:
Hi All,
I had a question about a certain kind of query I would like to do in hbase.
I am storing records in HBase that transition from an
Thanks Stack. We will try those patches and upgrade to 0.90.3 and see how
things improve. I will update in a few days.
GC pauses don't follow any increasing pattern, so we can eliminate that.
On store files, I gave confusing input earlier. We have 300 regions, 2 column families,
and _total_ 1300-1500 files.
Read the part about monotonically increasing keys in the HBase book. There
have been lots of other threads in the dist-list about this topic too.
Thanks for mentioning that, Doug. I did see that in the HBase book.
My wording was poor. I meant that the column names would be derived from
Table 2 provides some actual CF/table numbers. One of the crawl tables has
16 CFs and one of the Google Base tables had 29 CFs
What's Google doing in BigTable that enables so many CFs?
Is the cost in HBase the seek to each individual key in the CFs, or is
it the cost of loading each block
Re: monotonically increasing column names.
No problem with that.
-Original Message-
From: Leif Wickland [mailto:leifwickl...@gmail.com]
Sent: Monday, June 13, 2011 5:29 PM
To: user@hbase.apache.org
Subject: Re: Question from HBase book: HBase currently does not do well with
Dear all,
I want to import data from Cassandra to HBase.
I think the way may be:
Customize ImportTsv.java to read the Cassandra data files (*.dbf) and convert
them to HBase data files, then use the completebulkload tool.
Could you give me some advice?
Thanks a lot for your support.
On Mon, Jun 13, 2011 at 8:17 PM, King JKing beuk...@gmail.com wrote:
Dear all,
I want to import data from Cassandra to HBase.
That's what we like to hear! ;-)
I think the way may be:
Customize ImportTsv.java to read the Cassandra data files (*.dbf) and convert
them to HBase data files, and use
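The conversion step being proposed (read Cassandra records, emit something the ImportTsv/bulk-load pipeline can consume) boils down to producing tab-separated lines: row key first, then the column values. A minimal sketch, where the in-memory map is a hypothetical stand-in for records read out of Cassandra:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TsvEmitter {
    // One (rowKey, value) record becomes one tab-separated line;
    // ImportTsv is configured separately to map the positions to columns.
    static String toTsvLine(String rowKey, String value) {
        return rowKey + "\t" + value;
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for records read out of Cassandra.
        Map<String, String> records = new LinkedHashMap<>();
        records.put("row1", "alpha");
        records.put("row2", "beta");
        for (Map.Entry<String, String> e : records.entrySet()) {
            System.out.println(toTsvLine(e.getKey(), e.getValue()));
        }
    }
}
```

The real work, of course, is the Cassandra-side reader; this only shows the output shape the bulk-load step expects.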