Jon.
Were you running cdh3 with security turned on?
LoadIncrementalHFiles.doBulkLoad() seems to split an HFile into pieces in the
same folder and then load them. In your case, the MR job created the HFile
under the geoff directory, where the hbase user doesn't have write permission.
You can try to put the
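The splitting behavior described above can be illustrated with a small sketch (plain Python, not HBase code; the function name and the use of strings for keys are simplifications): an HFile whose key range straddles a region boundary is cut into a bottom half and a top half, written into the same directory.

```python
# Illustrative sketch of what doBulkLoad does when an HFile straddles a
# region boundary: split the sorted key range into a "bottom" piece (keys
# below the boundary) and a "top" piece (keys at or above it).
# Real HFile splitting works on sorted byte[] keys; strings stand in here.

def split_at_boundary(hfile_keys, region_end_key):
    """Return (bottom, top) key lists, split at the region boundary."""
    ordered = sorted(hfile_keys)
    bottom = [k for k in ordered if k < region_end_key]
    top = [k for k in ordered if k >= region_end_key]
    return bottom, top

bottom, top = split_at_boundary(["a", "c", "m", "z"], "k")
print(bottom)  # ['a', 'c']
print(top)     # ['m', 'z']
```

Since the split pieces are written back into the same directory, the user doing the load (here, hbase) needs write permission on that directory.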
Nichole.
Himanshu is right. Your coprocessor took too long to complete on some
regions, which caused the timeout.
Can you do some profiling on the problematic region? Just adding some
log messages in the coprocessor might be enough.
If all coprocessors (one per region) at a RS are performing a
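A minimal way to do that profiling is to log elapsed time around the hook body. This sketch is illustrative Python, not the coprocessor API; in a real Java coprocessor you would wrap the hook the same way with System.nanoTime() and a logger:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("coprocessor")

def timed(fn):
    """Log how long each invocation of fn takes, to spot slow regions."""
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            log.info("%s took %.3f s", fn.__name__, time.monotonic() - start)
    return wrapper

@timed
def pre_put_hook(row):
    # stand-in for the real per-Put coprocessor work
    return row.upper()

pre_put_hook("row-1")
```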
I am trying to set up a local Hudson CI for HBase.
Once I configure it and start a build, I get the following problem:
Started by user anonymous
Updating http://svn.apache.org/repos/asf/hbase/trunk
http://svn.apache.org/repos/asf/hbase/trunk
At revision 1141411
no change for
Hi,
I have a cluster of 5 nodes with one large table that currently has around
12,000 regions. Everything was working fine for a relatively long time, until
now.
Yesterday I significantly reduced the TTL on the table and initiated a major
compaction. This should have reduced the table size to about 20%
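The expected effect of the TTL change can be sketched like this (plain Python; HBase applies the equivalent rule per cell during major compaction, dropping cells whose timestamp is older than now minus the TTL):

```python
import time

def survives_compaction(cell_ts_ms, ttl_seconds, now_ms=None):
    """A cell survives a major compaction if it is younger than the TTL."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms - cell_ts_ms < ttl_seconds * 1000

now = 1_000_000_000_000                 # fixed "now" in ms for the example
old_cell = now - 10 * 86_400_000        # cell written 10 days ago
new_cell = now - 1 * 86_400_000         # cell written 1 day ago

print(survives_compaction(old_cell, 7 * 86_400, now))  # False: older than 7-day TTL
print(survives_compaction(new_cell, 7 * 86_400, now))  # True
```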
Hi,
I am doing bulk insertion into HBase using MapReduce, reading from a lot of
small (~10 MB) files, so the number of mappers equals the number of files. I am
also monitoring performance using Ganglia. The machines are c1.xlarge for
processing the files (task trackers + data nodes) and m1.xlarge for
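The one-mapper-per-file behavior follows from how input splits are computed: a file smaller than the HDFS block size yields a single split. A rough sketch (hypothetical numbers; the real split computation also considers min/max split size settings):

```python
import math

def num_splits(file_sizes_mb, block_size_mb=64):
    """Each file contributes ceil(size / block_size) splits; small files give 1 each."""
    return sum(max(1, math.ceil(size / block_size_mb)) for size in file_sizes_mb)

# 100 files of ~10 MB each -> 100 mappers, even though the same ~1 GB of
# data in a single file would produce only ~16 splits.
print(num_splits([10] * 100))  # 100
print(num_splits([1000]))      # 16
```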
We added an argument on that issue here:
Can somebody shed a little light on why we need to disable the
table in order to add a CF?
It seems to me that adding a CF should be simple: there is a new one, and
it's in a different file anyway?
What am I missing here?
Thanks!
Ophir
On Sat, Jun 18,
http://hadoop.apache.org/common/docs/stable/file_system_shell.html#du
On Fri, Jul 1, 2011 at 12:02 AM, Shuja Rehman shujamug...@gmail.com wrote:
What do -s and -h mean? When I execute this command, I get this kind of
result:
2649
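For context: -s prints a summary (one aggregate line per path instead of one line per file), and -h prints sizes in human-readable units instead of raw bytes. What -h does to a raw byte count like the one above can be sketched as (illustrative Python, not Hadoop code):

```python
def human_readable(num_bytes):
    """Format a byte count in du -h style, using binary (1024-based) units."""
    for unit in ("B", "K", "M", "G", "T"):
        if num_bytes < 1024:
            return f"{num_bytes:.1f}{unit}"
        num_bytes /= 1024
    return f"{num_bytes:.1f}P"

print(human_readable(2649))        # 2.6K
print(human_readable(123456789))   # 117.7M
```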
Re: http://hbase.apache.org/book.html#regions.arch
The high-level hand-waving answer is that it's not just one file. Since
clients talk directly to RegionServers, all the regions need to be in sync
in terms of table/region/CF metadata.
On 6/30/11 11:44 AM, Ophir Cohen oph...@gmail.com
I'd start with the HBase book:
http://hbase.apache.org/book.html#gc
On 6/29/11 10:50 PM, xiujin yang xiujiny...@hotmail.com wrote:
Hi all,
Background:
Hadoop : CDH3u0
HBase : CDH3u0
ZK : CDH3u0
Servers: 30
Now our HBase server is more than 64 GB, and we want to use HBase for
online
Hi,
I'm debugging a prePut hook which I've implemented as part of the coprocessor
work being developed. This hook is loaded via a table COPROCESSOR attribute and
I've noticed that the prePut method is being called twice for a single Put.
After setting up the region server to run in a debugger,
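What double invocation from double registration looks like, as a language-neutral sketch (a plain Python observer pattern, not the HBase coprocessor framework; this only illustrates the suspected cause, namely the same table coprocessor being loaded twice during region initialization):

```python
class Region:
    """Toy region that notifies registered pre-put hooks before each put."""

    def __init__(self):
        self.hooks = []
        self.store = {}

    def load_coprocessor(self, hook):
        # If initialization runs this twice for the same coprocessor,
        # the hook ends up registered twice.
        self.hooks.append(hook)

    def put(self, row, value):
        for hook in self.hooks:
            hook(row, value)
        self.store[row] = value

calls = []
region = Region()
region.load_coprocessor(lambda row, value: calls.append(row))
region.load_coprocessor(lambda row, value: calls.append(row))  # duplicate load

region.put("r1", "v1")
print(len(calls))  # 2: the hook fired twice for a single Put
```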
Sounds like a bug. File a JIRA.
-Joey
On Jun 30, 2011 2:32 PM, Terry Siu terry@datasphere.com wrote:
Hi,
I'm debugging a prePut hook which I've implemented as part of the
coprocessor work being developed. This hook is loaded via a table
COPROCESSOR attribute and I've noticed that the
Done. Filed HBASE-4051.
-Terry
-Original Message-
From: Joey Echeverria [mailto:j...@cloudera.com]
Sent: Thursday, June 30, 2011 2:53 PM
To: user@hbase.apache.org
Subject: Re: Table coprocessor loaded twice when region is initialized?
Sounds like a bug. File a JIRA.
-Joey
On Jun 30,
Hi,
I am working on a research project which uses Hadoop 0.21.0, and I want
to add HBase on top of that. But it seems there is no released version
of HBase that works with Hadoop 0.21.x. I saw that 0.92.0 will support Hadoop
0.21.x, but when I tried to check out that version from SVN, I cannot
HBase trunk will be 0.92.0 when released.
HBASE-2233 (Support both Hadoop 0.20 and 0.22) went into trunk on June 9th. I
have not personally tried it, though.
Best regards,
- Andy
Problems worthy of attack prove their worth by hitting back. - Piet Hein (via
Tom White)
- Original
http://www.slideshare.net/jacque74/hug-hbase-presentation
My presentation from today's talk, posted on slideshare. FYI.
Best,
-Jack
On Fri, Jun 10, 2011 at 3:11 PM, Jack Levin magn...@gmail.com wrote:
Yep. I'd do HUG; it's probably a larger building/room anyway :).
-Jack
On Fri, Jun 10, 2011
From the issue mentioned by Andy: '..we do not work against hadoop
0.21, not w/o backport of HADOOP-7351 Regression:
HttpServer#getWebAppsPath used to be protected so subclasses could
supply alternate webapps path but it was made private by HADOOP-6461
and HDFS-1948 Forward port 'hdfs-1520