Could you tell me please, in detail, the parameters you'd like to see so I
can look for them and learn the important ones? I'm using Cloudera: CDH4 in
one cluster and CDH5 in the other.
Best,
Flavio
On May 17, 2014 2:48 AM, prince_mithi...@yahoo.co.in wrote:
Can you
Mu,
I think rowid is metadata in an RDBMS storage engine and is not related to
user data, but in HBase the rowkey usually contains user data, serving the
same purpose as a primary index in an RDBMS
(http://hbase.apache.org/book.html#rowkey.design).
There is more explanation in Google's Bigtable paper.
On Fri, May 16,
Hi
I have a requirement to query my data based on date and user category.
User category can be Supreme, Normal, or Medium.
I want to query how many new users there are in my table in the date range
(2014-01-01) to (2014-05-16), category wise.
Another requirement is to query how many users of Supreme
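One common way to serve that kind of query in HBase is a composite rowkey. Below is a minimal sketch, assuming a hypothetical key layout `<category>|<signup-date>|<user-id>` (names and layout are my own, not from the thread): rows for one category then sort contiguously by date, so "how many new Supreme users in a date range" becomes a single range scan per category. The HBase Scan is simulated here with a sorted in-memory list.

```python
from datetime import date

def make_rowkey(category, signup_date, user_id):
    # Hypothetical composite rowkey: <category>|<YYYY-MM-DD>|<user-id>.
    return "%s|%s|%s" % (category, signup_date.isoformat(), user_id)

def count_new_users(rows, category, start, stop):
    # In HBase this would be a Scan with startRow/stopRow set to these
    # bounds; here we simulate it over a sorted list of rowkeys.
    lo = make_rowkey(category, start, "")
    hi = make_rowkey(category, stop, "\xff")
    return sum(1 for k in rows if lo <= k <= hi)

rows = sorted([
    make_rowkey("Supreme", date(2014, 1, 15), "u1"),
    make_rowkey("Supreme", date(2014, 6, 1), "u2"),
    make_rowkey("Normal", date(2014, 2, 2), "u3"),
])
print(count_new_users(rows, "Supreme", date(2014, 1, 1), date(2014, 5, 16)))  # 1
```

The trade-off is that the leading category byte groups all writes for one category together, so this layout favors read patterns over write distribution.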
Please take a look at
hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
Cheers
On Fri, May 16, 2014 at 9:09 PM, Software Dev static.void@gmail.com wrote:
Where could I find the thrift2.thrift file? Is it even officially
supported? If not, what is the expected
Thanks, that's what I was looking for.
On Sat, May 17, 2014 at 7:34 AM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at
hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
Cheers
On Fri, May 16, 2014 at 9:09 PM, Software Dev
static.void@gmail.com wrote:
Thanks for the confirmation, Ted ;) I figured out afterward that some
emails were not coming in the right order.
Vinay, can you please confirm the client call you are doing?
Thanks,
JM
2014-05-16 19:19 GMT-04:00 Ted Yu yuzhih...@gmail.com:
JMS:
I saw your earlier email.
There are some
Moving the discussion to the user list.
Hi Vikas,
You can use coprocessors to do something similar, but there are some
drawbacks; you need to be pretty careful with them. That's the way to
create processes similar to stored procedures. Otherwise, resort to MR jobs?
JM
2014-05-08 5:47 GMT-04:00 Vikas Jadhav
I know there is support for receiving multiple rows with one query but
at what number of rows does this start to fall apart?
For example, say we hash some time-ordered series data to avoid
hotspotting. Since all of the rowkeys are hashed we can no longer
perform a scan over a time range. I think,
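One common workaround, when the salt is a bounded prefix drawn from a fixed set of buckets, is to fan a time-range query out into one range scan per bucket and merge the results. Here is a rough sketch under assumptions of my own (8 buckets, an MD5-derived salt, an in-memory sorted list standing in for the table):

```python
import hashlib

NUM_BUCKETS = 8  # assumed bucket count, not from the thread

def salted_key(rowkey):
    # Deterministic salt: a small bucket number derived from the key
    # itself, prepended so writes spread across NUM_BUCKETS key ranges.
    salt = int(hashlib.md5(rowkey.encode()).hexdigest(), 16) % NUM_BUCKETS
    return "%02d-%s" % (salt, rowkey)

def time_range_scan(table, start_ts, stop_ts):
    # Fan out: one range scan per salt bucket, then merge. In HBase this
    # would be NUM_BUCKETS parallel Scans with per-bucket start/stop rows.
    results = []
    for bucket in range(NUM_BUCKETS):
        lo = "%02d-%s" % (bucket, start_ts)
        hi = "%02d-%s" % (bucket, stop_ts)
        results.extend(k for k in table if lo <= k <= hi)
    return sorted(results)

table = sorted(salted_key(ts) for ts in ["20140101", "20140301", "20140601"])
print(time_range_scan(table, "20140101", "20140516"))
```

The cost scales with the bucket count: every time-range query pays for one scan per bucket, which is why the choice of bucket count matters.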
I recently came across the pattern of adding a salting prefix to the
row keys to prevent hotspotting. Still trying to wrap my head around
it and I have a few questions.
- Is there ever a reason to salt to more buckets than there are region
servers? The only reason why I think that may be
Well, I kept reading on this subject and realized my second question may
not be appropriate, since this prefix salting pattern assumes that the
prefix is random. I thought it was actually based on a hash that
could be predetermined so you could always, if needed, get to the
exact row key with one
No, there's nothing wrong with your thinking. That's exactly what Phoenix
does - use the modulo of the hash of the key. It's important that you can
calculate the prefix byte so that you can still do fast point lookups.
Using a modulo that's bigger than the number of region servers can make
sense
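The point-lookup property described above can be sketched as follows. This is an illustration of the technique, not Phoenix's actual code: the bucket count and the use of MD5 are assumptions of mine (Phoenix's real hash function differs), and a plain dict stands in for the table. Because the prefix byte is computed from the key, a point read recomputes it and issues a single exact Get.

```python
import hashlib

SALT_BUCKETS = 16  # illustrative; can exceed the number of region servers

def salt_byte(rowkey):
    # Deterministic prefix: modulo of a hash of the key. (MD5 here is an
    # assumption for illustration; Phoenix uses its own hash function.)
    return int(hashlib.md5(rowkey.encode()).hexdigest(), 16) % SALT_BUCKETS

def stored_key(rowkey):
    # The key as physically stored: one salt byte, then the logical key.
    return bytes([salt_byte(rowkey)]) + rowkey.encode()

def point_get(table, rowkey):
    # Because the prefix is computable, a point lookup is still a single
    # Get: recompute the salt and read the exact salted key.
    return table.get(stored_key(rowkey))

table = {stored_key("user-42"): "payload"}
print(point_get(table, "user-42"))  # prints "payload"
```

This is also why a bucket count larger than the region-server count can make sense: it leaves headroom to keep writes spread out as the cluster grows, without resalting existing data.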