On Tue, Mar 13, 2012 at 10:48 PM, Jiajun Chen cjjvict...@gmail.com wrote:
0040c405caac920528286d210a6162daStoryWaitFetchQ,,1331124996873.0040c405caac920528286d210a6162da.
state=OFFLINE, ts=Wed Mar 14 13:42:38 CST 2012 (206s ago), server=null
Is scanning supported using part of the row key?
If I have a row key as name + timestamp + id + type, can I scan the rows with the start key as name + timestamp? Will it fetch any rows from HBase? I'm stuck and it doesn't work; I'm always getting null rows.
Can someone please help me.
Thanks
Hi,
You can perform a scan this way:
scan 'tablename', {STARTROW => 'name + start timestamp', ENDROW => 'name + end timestamp'}
The idea is that you can scan with prefix values, but if you want to scan with name + * + id, then I doubt it.
Thanks
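The reason a partial (prefix) key works: HBase stores rows sorted lexicographically by rowkey, and a scan is just a half-open range [STARTROW, STOPROW). A toy stand-in using a TreeMap (the rowkeys below are made up, not from this thread) shows the same semantics, including the usual "bump the last byte of the prefix" trick for the stop row:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy stand-in for an HBase prefix scan: rows are kept sorted by key,
// and a "scan" is the half-open range [prefix, prefix-with-last-char-bumped).
public class PrefixScanDemo {
    static SortedMap<String, String> prefixScan(TreeMap<String, String> rows, String prefix) {
        // Stop key = prefix with its last character incremented by one.
        String stop = prefix.substring(0, prefix.length() - 1)
                + (char) (prefix.charAt(prefix.length() - 1) + 1);
        return rows.subMap(prefix, stop); // from inclusive, to exclusive
    }

    public static void main(String[] args) {
        TreeMap<String, String> rows = new TreeMap<>();
        // rowkey = name + timestamp + id + type, as in the question (values made up)
        rows.put("alice_1331124996_7_A", "v1");
        rows.put("alice_1331125000_9_B", "v2");
        rows.put("bob_1331124996_3_A", "v3");
        System.out.println(prefixScan(rows, "alice_").size()); // 2
    }
}
```

This is also why name + * + id does not work: the wildcard in the middle breaks the single contiguous key range; you would need a filter (e.g. a server-side row filter) rather than start/stop rows alone.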
On Wed, Mar 14, 2012 at 6:09 AM, newbie24 shripri...@hotmail.com
Yes, upgraded to 0.92.1, using hadoop-1.0.1.
Regions in Transition
Region: 0040c405caac920528286d210a6162da StoryWaitFetchQ,,1331124996873.0040c405caac920528286d210a6162da.
State: state=PENDING_OPEN, ts=Wed Mar 14 14:13:05 CST 2012 (51s ago), server=slave183.uc.com,60020,1331702165414
To know what takes your CPU, you can use java -Xprof to start the Java process.
It can give you a report of which methods take the most time.
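A minimal sketch of such a launch (the jar and class names here are placeholders, not from this thread; note that -Xprof was a HotSpot-only flag, later removed in JDK 10):

```shell
# Prints a flat per-thread, per-method profile to stdout when the JVM exits.
# myjob.jar and com.example.AggregationClient are hypothetical placeholders.
java -Xprof -cp myjob.jar com.example.AggregationClient
```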
On Wed, Mar 14, 2012 at 2:08 PM, raghavendhra rahul
raghavendhrara...@gmail.com wrote:
Hi,
I'm running coprocessor aggregation over some million rows. When ...
Hi all:
From blogs I know the HTablePool class is like a ConnectionPool. But when I use the method putTable(), my IDE warned me:
The method putTable(HTableInterface) from the type HTablePool is deprecated
Has this class been abandoned? Is there any class like HTablePool to use instead?
Hi
Don't use the HTablePut.put() method; just call HTable.close() on the table you got from the pool when you no longer need it. Don't worry, the table will be put back into the pool after you close it.
On 03/14/2012 10:14 AM, 韶隆吴 wrote:
Hi all:
From blogs I know HTablePool class is like ...
Sounds like your CPU is blocked waiting on your disks.
What does your cluster look like? How many cores per node? How many spindles?
On Mar 14, 2012, at 1:08 AM, raghavendhra rahul wrote:
Hi,
I'm running coprocessor aggregation over some million rows. When execution CPU is waiting for ...
Scans are also described in the RefGuide here...
http://hbase.apache.org/book.html#data_model_operations
On 3/14/12 2:22 AM, Akbar Gadhiya akbar.gadh...@gmail.com wrote:
Hi,
You can perform a scan this way:
scan 'tablename', {STARTROW => 'name + start timestamp', ENDROW => 'name + end time ...
Sorry for the bad typing. Don't use HTablePool.putTable()...
The class is not abandoned as far as I know; only the putTable method should be avoided.
On 03/14/2012 10:32 AM, Daniel Iancu wrote:
Hi
Don't use HTablePut.put() method, just HTable.close on the table you got from pool when you no ...
Hi All,
I am using cdh3u2. I am having a problem accessing the table.jsp page from the web interface of the HBase cluster. Two days ago I was able to view HBase table information with the help of table.jsp, but after a cluster crash it started giving the following error:
HTTP ERROR 404
Problem ...
There might be some issue with the table itself.
Can you issue get and scan from the HBase shell?
Please examine the master / region server logs for relevant information.
On Wed, Mar 14, 2012 at 2:25 PM, anil gupta anilgupt...@gmail.com wrote:
Hi All,
I am using cdh3u2. I am having problem accessing ...
Yes, everything works fine in the HBase shell. I even ran count on the table, which has 32 million rows. So apart from table.jsp everything looks good.
Is there a place where I can find the logs for Jetty?
On Wed, Mar 14, 2012 at 2:32 PM, Ted Yu yuzhih...@gmail.com wrote:
There might be ...
Hi all:
I'm trying to import data from Oracle to HBase and now I have a problem.
In some tables there is more than one primary key, and they are all used a lot.
How should I design the rowkey for these tables to make them quick to search?
Presumably you mean there is more than one UNIQUE key (only one PK per table, even in an RDBMS).
That's a core difference between an RDBMS and HBase: secondary indexes are not supported per se in the latter. There are various options here, one of which may be:
- you might see whether you can ...
Thanks for your suggestions, and I have another question.
For example, in an RDBMS I have 3 tables: orders, item, order_item, and the table order_item has two PK columns: order_id and item_id.
Usually I use order_id to find item_id, and item_id to find order_id.
If I use order_id + "_" + item_id as the rowkey in ...
Hi there-
You probably want to see this in the RefGuide...
http://hbase.apache.org/book.html#schema
On 3/14/12 9:47 PM, 韶隆吴 yechen1...@gmail.com wrote:
Hi all:
I'm trying to import data from oracle to hbase and now I have a problem.
In some tables, there is more than one primary ...
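For the order_item question above, one common pattern is a composite rowkey order_id + "_" + item_id in the main table, plus a second "index" table keyed item_id + "_" + order_id that is written at the same time; each lookup direction then becomes a prefix scan. A sketch with made-up IDs, using two TreeMaps as stand-ins for the two HBase tables (and assuming IDs never contain "_"):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch: two sorted "tables" emulate the composite-rowkey plus
// manually-maintained secondary-index pattern. IDs/values are made up.
public class OrderItemDemo {
    final TreeMap<String, String> orderItem = new TreeMap<>(); // key: order_id + "_" + item_id
    final TreeMap<String, String> itemOrder = new TreeMap<>(); // key: item_id + "_" + order_id

    void put(String orderId, String itemId, String value) {
        orderItem.put(orderId + "_" + itemId, value); // forward direction
        itemOrder.put(itemId + "_" + orderId, "");    // index row, written together
    }

    // All rows whose key starts with prefix + "_" (half-open range trick).
    static SortedMap<String, String> byPrefix(TreeMap<String, String> t, String prefix) {
        return t.subMap(prefix + "_", prefix + "_" + '\uffff');
    }

    public static void main(String[] args) {
        OrderItemDemo d = new OrderItemDemo();
        d.put("o1", "i1", "2 units");
        d.put("o1", "i2", "1 unit");
        d.put("o2", "i1", "5 units");
        System.out.println(byPrefix(d.orderItem, "o1").size()); // items in order o1: 2
        System.out.println(byPrefix(d.itemOrder, "i1").size()); // orders containing i1: 2
    }
}
```

In real HBase the two writes are not atomic across tables, so the client (or a coprocessor) has to keep the index table in sync; that trade-off is the price of secondary-index-style lookups.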