I am happy to document this in a wiki someplace if it might help others.
We could add this to the document I referred to.
A) Are there any other 'best practices' to keep DNS / HOST LOOKUPs straight?
I'd ask a sysadmin, but I think you just need to keep things
consistent, meaning always in the
The zk stuff is ok, it's just that hadoop1 doesn't have a zk server
but hadoop2 does (so review your configuration).
You need to replace the hadoop jar since right now you have
/hadoop-core-0.20-append-r1056497.jar
Like the doc says http://hbase.apache.org/book.html#hadoop
It is critical that
https://issues.apache.org/jira/browse/HBASE-3149 for flushes, not sure
about compactions.
J-D
On Thu, Jun 2, 2011 at 2:57 PM, Vidhyashankar Venkataraman
vidhy...@yahoo-inc.com wrote:
Is there a JIRA for issuing flushes and compactions on a per column family
basis?
On 6/2/11 2:48 PM, Stack
I have 2 questions:
1. Does HBase support scanning rowkeys by column?
You mean secondary indexes? No:
http://hbase.apache.org/book.html#secondary.indices
2. Which design is better? I think that design 2 is better when a user has a
large number of followers.
I cover a bunch of designs in this
This is discussed in the book: http://hbase.apache.org/book.html#hadoop
J-D
On Thu, Jun 2, 2011 at 6:09 PM, 吕鹏 lvpengd...@gmail.com wrote:
Hi
I want to build an HBase cluster in the production environment. Which
versions of HBase and Hadoop are recommended? The Apache release or CDH3? Is CDH3
a
Inline.
J-D
On Wed, Jun 1, 2011 at 6:34 AM, Xu, Richard richard...@citi.com wrote:
Hi folks,
I need to load 1 million queue messages into an HBase table in 30 mins.
As HBase: The Definitive Guide suggests, I use the client API with flushCommits().
I launched, say, 20 threads, each thread has its
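Just to sanity-check the target (my own arithmetic, not from the thread): 1 million rows in 30 minutes is about 556 rows/s overall, or roughly 28 rows/s for each of 20 threads, which is a very modest per-thread rate:

```java
public class LoadTarget {
    public static void main(String[] args) {
        long rows = 1_000_000L;       // messages to load
        long windowSec = 30 * 60;     // 30 minutes
        int threads = 20;

        double clusterRate = (double) rows / windowSec;  // rows/s overall
        double threadRate = clusterRate / threads;       // rows/s per thread

        System.out.printf("cluster: %.0f rows/s, per thread: %.0f rows/s%n",
                clusterRate, threadRate);
    }
}
```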
Oh I'm so dumb, I didn't see that this is coming from _another_ region
server. What happens when splitting is that we try to update the
.META. table which requires talking to another region server, except
that in this case the RS was aborting meaning that the split
transaction had to be rolled
Inline.
J-D
2011/5/31 Gaojinchao gaojinc...@huawei.com:
As far as I know:
1. ZooKeeper is sensitive to resources (memory, disk, CPU, network).
Mostly just disk.
If the server is underprovisioned, then
a) the server may not respond to client requests in time.
b) the client assumes the server is
HBase noob question: do compactions (major/minor) always work within the
scope of a single region, i.e. they never do region merges?
That's what HBASE-1621 is about, merges can't be done while the
cluster is running and compactions only happen when hbase is running.
J-D
Can you post the full log somewhere? You talk about several Exceptions
but we can't see them.
J-D
On Tue, May 31, 2011 at 4:41 AM, bijieshan bijies...@huawei.com wrote:
It occurred on a RegionServer for an unknown reason. I have checked this
RegionServer's logs; there was no prior aborting, and
)) failed = false;
}
Jieshan Bean
- -
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] on behalf of Jean-Daniel Cryans
Sent: May 27, 2011 1:02
To: user@hbase.apache.org
Subject: Re: HRegion.openHRegion IOException caused an endless loop of
opening--opening failed
What's going
families with followers_id and following_id as index..
if one person has 1000 followers and only 100 followings, then how
will I store it?
Can anyone help in designing a schema of this type?
Thanks.
On Thu, May 26, 2011 at 11:05 PM, Jean-Daniel Cryans
jdcry...@apache.orgwrote
What's going on in the master? When a region times out in OPENING this
is what the TimeoutMonitor does:
// Attempt to transition node into OFFLINE
try {
data = new RegionTransitionData(
EventType.M_ZK_REGION_OFFLINE,
or delete of a row: it exists if there are cells with values
in them).
does it mean that defining TTL is relevant only in case I will be using
versions?
Thanks in advance
Oleg.
On Wed, May 25, 2011 at 8:26 PM, Jean-Daniel Cryans
jdcry...@apache.orgwrote:
As you saw it's family based, there's
You can call flush on the table with either the shell or HBaseAdmin
which will persist the Memstore data. What's not so good about this
trick is that if any region server died before you called flush you
need to re-import.
J-D
On Thu, May 26, 2011 at 12:38 AM, Weihua JIANG weihua.ji...@gmail.com
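For the record, the shell form of the flush described above is simply (table name is made up):

```
hbase> flush 'mytable'
```

From Java, HBaseAdmin's flush method with the table name does the same thing.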
Inline.
J-D
On Thu, May 26, 2011 at 5:12 AM, elton sky eltonsky9...@gmail.com wrote:
I am new to HBase and I have a few questions:
1. Tables are divided into regions and allocated to region servers. What is
the allocation policy?
Ted Yu recently wrote up a blog post about balancing in HBase:
1 row per tweet? The row key might be something like user+timestamp.
J-D
On Thu, May 26, 2011 at 12:02 AM, praveenesh kumar praveen...@gmail.com wrote:
Hey guys!
I am new to HBase and basically want to use HBase to store Twitter data.
Is anyone working on a similar kind of problem?
If yes,
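To make the user+timestamp idea concrete, here is a sketch (the null-byte separator and the reversed-timestamp trick are my own additions, a common pattern for newest-first scans, not something prescribed in the thread):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TweetRowKey {
    // Build a row key of the form <user>\0<reversedTimestamp>.
    // Reversing the timestamp (Long.MAX_VALUE - ts) makes a scan on the
    // user prefix return the newest tweets first.
    static byte[] rowKey(String user, long timestampMillis) {
        byte[] userBytes = user.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(userBytes.length + 1 + Long.BYTES);
        buf.put(userBytes);
        buf.put((byte) 0);                             // separator
        buf.putLong(Long.MAX_VALUE - timestampMillis); // big-endian, newest-first
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] older = rowKey("alice", 1000L);
        byte[] newer = rowKey("alice", 2000L);
        // The newer tweet's key sorts lexicographically before the older one.
        System.out.println(compare(older, newer) > 0);
    }

    // Unsigned lexicographic comparison, like HBase's row ordering.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```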
As you saw it's family based, there's no cross-family schemas.
Can you tell us more about your use case?
J-D
On Wed, May 25, 2011 at 5:58 AM, Oleg Ruchovets oruchov...@gmail.com wrote:
Hi ,
Is it possible to define a TTL for an HBase row (I found TTL only for the
column family)?
In case it is
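For completeness, since TTL lives on the family: in the 0.90 shell you would set it roughly like this (table/family names and the value are made up, and the table must be disabled before altering):

```
hbase> disable 'mytable'
hbase> alter 'mytable', {NAME => 'mycf', TTL => 86400}
hbase> enable 'mytable'
```

TTL is given in seconds; 86400 is one day.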
There's no recommended size, I guess as long as it fits in memory
it's ok given that not all JVMs are given the same amount of heap.
J-D
On Wed, May 25, 2011 at 6:48 AM, Lucian Iordache
lucian.george.iorda...@gmail.com wrote:
To be more specific, I was thinking of a recommended number of
From the doc at http://hbase.apache.org/bulk-loads.html
The -c config-file option can be used to specify a file containing the
appropriate hbase parameters (e.g., hbase-site.xml) if not supplied
already on the CLASSPATH (In addition, the CLASSPATH must contain the
directory that has the zookeeper
I'm not sure why you are asking this question on the hbase user
mailing list, it seems like you have a log4j issue.
J-D
On Wed, May 25, 2011 at 1:03 PM, Himanish Kushary himan...@gmail.com wrote:
Could anybody please help me with this.
On Tue, May 24, 2011 at 10:17 AM, Himanish Kushary
Zookeeper doesn't query addresses, it's all done in HBase which in
turn stores it in ZK.
Also http://hbase.apache.org/book.html#dns
J-D
On Tue, May 24, 2011 at 4:37 PM, Jack Levin magn...@gmail.com wrote:
figured it out... the /etc/hosts file has the IP-to-name mapping that was
used by zookeeper, which was
It was fixed in 0.90.3, before that we didn't clear the list.
J-D
On Mon, May 23, 2011 at 9:27 AM, Daniel Iancu daniel.ia...@1and1.ro wrote:
Hello everybody
I've run into this strange problem. We run a 6 RS cluster and suddenly the
client application started reporting errors, region not
Quick guess, you let HBase run ZK and didn't change
hbase.zookeeper.property.dataDir so it's being stored in /tmp and was
cleared. Be sure to checkout
http://hbase.apache.org/book/notsoquick.html#ZooKeeper
BTW running 2 ZK nodes doesn't make sense, read the section titled
How many ZooKeepers
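Moving the ZK data out of /tmp is a one-property change in hbase-site.xml (the path below is just an example):

```xml
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <!-- any durable location outside /tmp -->
  <value>/var/hbase/zookeeper</value>
</property>
```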
Your attachment didn't make it, it rarely does on the mailing lists.
I suggest you use a gist.github or a pastebin.
Regarding the error, looks like something closed the HCM and someone
else is trying to use it. Since this is client side, it would point to
a Pig problem.
J-D
On Thu, May 19,
)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
It would move blocks that are used by the local region servers,
messing up your block locality. That's the first reason I can think of.
J-D
On Mon, May 16, 2011 at 11:14 AM, Erik Onnen eon...@gmail.com wrote:
Is there any reason why running an HDFS balancer on the filesystem
used for HBase would
You are giving us the mile high overview of the problem, pointing to a
specific culprit could be very time consuming. Instead, can you run
some system tests and make sure things work the way they should? Are
the disks strangely slow? Any switches acting up?
Regarding your CPUs, counting is mostly
Hi,
I have two questions:
1. Does HBase know how to handle blocks moving?
E.g., can HBase recognize that some local block was deleted from a machine and
move that region to a machine with that block?
No, transparent.
2. What happens if the region server hosting .META. fails? Does HBase have
If you have a high insert rate then maybe log rolling (which blocks
inserts a little) makes it that the calls get queued enough (occupying
heap) to make you enter a GC loop of death? Can you enable RPC logging
and see if you can confirm that?
Thx,
J-D
On Sun, May 15, 2011 at 5:37 PM, Jack Levin
All in all, probably something like 3 minutes should warrant that
everybody has found the new master one way or another, right? If not,
we have a problem, right?
Thanks.
-Dmitriy
On Fri, May 13, 2011 at 12:34 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Maybe there is something else
Would you be able to patch in
https://issues.apache.org/jira/browse/HBASE-3695 and see what hbck
tells you now? Else you could try using the 0.90.3 rc0 which has it
too: http://people.apache.org/~stack/hbase-0.90.3-candidate-0/
J-D
On Sun, May 15, 2011 at 9:11 AM, Andy Sautins
do with get / and
or scan.
Thanks!
On May 12, 2011, at 9:37 PM, Jean-Daniel Cryans wrote:
You'd have to hack it up into the thrift server, shouldn't be so bad
but there's no such doc.
J-D
On Thu, May 12, 2011 at 8:26 PM, Matthew Ward m...@imageshack.net wrote:
Oh interesting
Ok I see... so the only thing that changed is the HW right? No
upgrades to a new version? Also could it be possible that you changed
some configs (or missed them)? BTW counting has a parameter for
scanner caching; you would write: count 'myTable', CACHE => 1000
and it should stream through your
Inline.
J-D
On Wed, May 11, 2011 at 4:22 PM, Miles Spielberg mi...@box.net wrote:
We're planning out our first HBase cluster, and we'd like to get some
feedback on our proposed hardware configuration. We're intending to use this
cluster purely for HBase; it will not generally be running
That contrib was moved to github, see
https://github.com/hbase-trx/hbase-transactional-tableindexed
J-D
On Thu, May 12, 2011 at 8:28 AM, Praveen Bathala pbatha...@gmail.com wrote:
Hi All,
I started using HBase a while ago and I am working on an open source
project which has some old jars
MB, max=199.18 MB, blocks=115,
accesses=526913, hits=37278, hitRatio=7.07%%, cachingAccesses=37393,
cachingHits=37278, cachingHitsRatio=99.69%%, evictions=0, evicted=0,
evictedPerRun=NaN
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] on behalf of Jean-Daniel Cryans
Sent: May 2011
You'd have to hack it up into the thrift server, shouldn't be so bad
but there's no such doc.
J-D
On Thu, May 12, 2011 at 8:26 PM, Matthew Ward m...@imageshack.net wrote:
Oh interesting, is there a way to access it via thrift (from PHP)? Are there
some docs I can read up on it?
Thanks!
The error message in the datanode log is pretty obvious about the
config if you really hit it. The error message you pasted from the DN
doesn't look complete either.
J-D
On Wed, May 11, 2011 at 7:30 AM, Stanley Xu wenhao...@gmail.com wrote:
Dear all,
We are using hadoop 0.20.2 with a couple
Your zookeeper stuff is fine, as you can see it's INFO level.
What's fatal though is that the master can't connect to your namenode (on
port 9000).
J-D
2011/5/11 Άρμεν Αρσακιάν arsak...@ee.duth.gr:
I installed HBase on my Mac OS 10.6 machine and when I try to run hbase
master start I get the
could I identify if it comes from a corruption in data or just some
mis-hit in the scenario you mentioned?
On Tue, May 10, 2011 at 6:23 AM, Jean-Daniel Cryans
jdcry...@apache.orgwrote:
Very often the cannot open filename happens when the region in
question was reopened somewhere else
Most users I know rolled out their own since it doesn't require a very
big layer on top of HBase (since it's all simple queries) and it's
tailored to their own environment.
For JDO there's DataNucleus that supports HBase.
J-D
On Tue, May 10, 2011 at 2:34 AM, Kobla Gbenyo ko...@riastudio.fr
port failed
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] on behalf of Jean-Daniel Cryans
Sent: May 10, 2011 1:17
To: user@hbase.apache.org
Subject: Re: A question about client
TreeMap isn't concurrent and it seems it was used that way? I know you
guys are testing a bunch of different things at the same time so which
HBase version and which patches were you using when you got that?
Thx,
J-D
On Mon, May 9, 2011 at 5:22 AM, Gaojinchao gaojinc...@huawei.com wrote:
I
It looks like the master entered a GC loop of death (since there are a
lot of "We slept 76166ms" messages) and finally died. Was it splitting
logs? Did you get a heap dump? Did you inspect it and can you tell
what was using all that space?
Thx,
J-D
2011/5/8 Gaojinchao gaojinc...@huawei.com:
That could be easily done with bulk loads
http://hbase.apache.org/bulk-loads.html it will really depend on how
fast you can get the data out of Sybase, provided that you have the
appropriate hardware for HBase.
More than a year ago we loaded a bit more than 1TB (pre-replication)
into 20
...@gmail.com [mailto:jdcry...@gmail.com] on behalf of Jean-Daniel Cryans
Sent: May 6, 2011 1:27
To: user@hbase.apache.org
Subject: Re: Hmaster has some warn logs.
The master was trying to talk to the region server at 10.85.129.177
but it took more than 60 seconds to get an answer.
J-D
On Thu, May 5, 2011
Very often the cannot open filename happens when the region in
question was reopened somewhere else and that region was compacted. As
to why it was reassigned, most of the time it's because of garbage
collections taking too long. The master log should have all the
required evidence, and the region
As I said before the regions aren't replicated, it is not the right
way to see it.
The data for those regions is replicated, but only 1 region server
does the management of that data.
If a RS crashes, like I said before, the data is unavailable until the
logs are replayed. In the context of a MR
a workaround?
thx,
Sean
On Thu, May 5, 2011 at 11:49 AM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
This sounds like https://issues.apache.org/jira/browse/HBASE-3545
which was fixed in 0.90.2, which version are you testing?
J-D
On Thu, May 5, 2011 at 9:23 AM, sean barden sbar
Inline.
On Thu, May 5, 2011 at 6:03 AM, Eric Burin des Roziers
eric_...@yahoo.com wrote:
Hi,
I am currently looking at adding a transactional consistency aspect to HBase
and had 2 questions:
1. My understanding is that when the client performs an operation (put,
delete, incr), it is sent
The master was trying to talk to the region server at 10.85.129.177
but it took more than 60 seconds to get an answer.
J-D
On Thu, May 5, 2011 at 5:40 AM, Gaojinchao gaojinc...@huawei.com wrote:
In our test cluster there are a lot of logs for socket timeouts.
I tried to dig into it, but found nothing.
Who
You can start by reading the book entry on this:
http://hbase.apache.org/book/important_configurations.html#disable.splitting
But what you might wanna do instead is bulk loading:
http://hbase.apache.org/bulk-loads.html
J-D
On Thu, May 5, 2011 at 11:02 AM, hbaseuser hbaseuser
(please don't cross-post to multiple lists, bcc'ed dev@ since this is
really a user question)
There's only 1 scanner in HBase, can you be more precise?
J-D
On Thu, May 5, 2011 at 9:15 PM, Ramkrishna S Vasudevan
ramakrish...@huawei.com wrote:
Hi
There are different scanners in HBase.
Any
On Tue, May 3, 2011 at 6:20 AM, Eran Kutner e...@gigya.com wrote:
Flushing, at least when I try it now, long after I stopped writing, doesn't
seem to have any effect.
Bummer.
In my log I see this:
2011-05-03 08:57:55,384 DEBUG
org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats:
Ian,
Regarding your first point, I understand where the concern is coming
from but I'd like to point out that with the new MemStore-Local
Allocation Buffers the full GCs taking minutes might not be as much of
an issue as it used to be. That said, I haven't tested that out yet
and I don't know of
That's a very vague question... what's slow exactly? How much data
do you have to copy? What transfer rate are you expecting? What's the
hardware like? How big is the pipe between the two clusters?
The best I could tell you would be to make sure that the target table
is already pre-split, that
, why?
My first message didn't appear in the list for a long time and I
thought it could happen because I had sent it before I actually
subscribed to the list. Really sorry for the inconvenience.
On 4/27/11, Jean-Daniel Cryans jdcry...@apache.org wrote:
Hi Alex,
Before answering I made sure
Can you give an example of what you're trying to do?
BTW what we mean when we say that filters don't work across region
servers (actually it's more across regions, so it's also a problem on
a single machine) is that if you happened to have some sort of state
in your filter, it wouldn't be carried
,
Harold
--- On Thu, 4/28/11, Jean-Daniel Cryans jdcry...@apache.org wrote:
From: Jean-Daniel Cryans jdcry...@apache.org
Subject: Re: Write Operations on a Table Hangs
To: user@hbase.apache.org
Date: Thursday, April 28, 2011, 4:55 PM
That message only means that the
region is being
of state in your
filter? As far as I can see only the reset() and filterRow() methods seem
to alter the state. Are there more methods that alter the state? If so could
you please point me to the relevant documentation?
thanks very much
-ajay
From: Jean-Daniel
Last time I heard about this issue, it was because of a DNS problem.
Look at your master and region server logs.
J-D
On Thu, Apr 28, 2011 at 7:49 PM, bijieshan bijies...@huawei.com wrote:
Hi,
When I execute command of 'list' in hbase shell, it told me that the
HRegioninfo was null or empty
...@cs.ualberta.ca wrote:
Could it be the /tmp/hbase-userID directory that is playing the culprit?
Just a wild guess, though.
On Tue, Apr 26, 2011 at 5:56 PM, Jean-Daniel Cryans
jdcry...@apache.orgwrote:
Unless HBase was running when you wiped that out (and even then), I
don't see how this could happen
Hi Alex,
Before answering I made sure it was working for me and it does. In
your master log after killing the -ROOT- region server you should see
lines like this:
INFO org.apache.hadoop.hbase.zookeeper.RegionServerTracker:
RegionServer ephemeral node deleted, processing expiration
[servername]
gstat...@traackr.comwrote:
On Thu, Apr 21, 2011 at 1:56 PM, Jean-Daniel Cryans
jdcry...@apache.orgwrote:
On Thu, Apr 21, 2011 at 10:49 AM, George P. Stathis
gstat...@traackr.com wrote:
I gave the thread that name because that was the best way I could come
up
with to describe the symptoms
Looks like I missed it, will commit the fix. Thanks for figuring this one out.
Also in the future please address your development-related questions
to the dev@ mailing list and not the user@ mailing list.
J-D
On Tue, Apr 26, 2011 at 2:29 AM, Gaojinchao gaojinc...@huawei.com wrote:
It has
of
the parameters related to storeheap, and this one I really don't understand.
Is there a disk access for loading every block? If adding data to a
region (randomly), is the cache important, or is it only for random get/scan and
delete?
Thank you,
Iulia
On 04/22/2011 09:21 PM, Jean-Daniel Cryans
From what I can tell it's not that the master can't find the region
servers, it's the region servers that never check in with the master.
You might want to look at their logs.
J-D
On Mon, Apr 25, 2011 at 11:03 PM, Rakesh Kumar Rakshit
ihavethepotent...@gmail.com wrote:
Hi,
Here is the HMaster
servers). RS heap is 8G and DN is 12G.
I haven't done much testing changing the DN heap, but in my experience
it's not really of use to have 12GB there since the data never goes
through the DN. Max 2GB maybe, give the rest to the region server or
even the OS cache (ie don't allocate some GBs on
Ah yeah the ConnectionLoss bubbled all the way up and it should have
been retried, see https://issues.apache.org/jira/browse/HBASE-3065
J-D
On Mon, Apr 25, 2011 at 11:25 PM, Gaojinchao gaojinc...@huawei.com wrote:
Sorry, I don't know about zk. Please help me.
Thanks.
Do you mean that need
Just that line? Nothing about regions in transition that timed out?
Usually there's a few other lines coming with this one, might give us
a hint.
Thx,
J-D
On Tue, Apr 26, 2011 at 12:24 PM, Buttler, David buttl...@llnl.gov wrote:
Hi all,
Looking back at my logs, it looks like I upgraded to
Unless HBase was running when you wiped that out (and even then), I
don't see how this could happen. Could you match those blocks to the
files using fsck and figure when the files were created and if they
were part of the old install?
Thx,
J-D
On Tue, Apr 26, 2011 at 4:53 PM, Jonathan Bender
MetaEditor.addRegionToMeta(catalogTracker, region.getRegionInfo());
// 4. Close the new region to flush to disk. Close log file too.
region.close();
region.getLog().closeAndDelete();
}
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] on behalf of Jean-Daniel Cryans
Can't tell what it was because it OOME'd while reading whatever was coming in.
Did you bump the number of handlers in that cluster too? Because you
might hit what we talked about in this jira:
https://issues.apache.org/jira/browse/HBASE-3813
Chatting w/ J-D this morning, he asked if the queues
that breaks
things, how can we avoid it?
-Jack
On Mon, Apr 25, 2011 at 10:37 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Can't tell what it was because it OOME'd while reading whatever was coming
in.
Did you bump the number of handlers in that cluster too? Because you
might hit what we
Probably the same ConnectionLossException that others have been
describing on this list? I don't see it in your stack trace (in fact I
can't really see anything), but it sounds like what you describe.
J-D
On Fri, Apr 22, 2011 at 10:32 AM, Pete Tyler peteralanty...@gmail.com wrote:
Seeing this
The datanodes don't consume much memory, we run ours with 1GB and give
the rest to the region servers.
BTW if you want to serve the whole dataset, depending on your SLA, you
might want to try HDFS-347 since concurrent HDFS access is rather
slow. The other choice would be to make sure you can hold
The splitting is based on when a region reaches a configured size
(default is 256MB). A table starts with 1 region, and splits as needed
when you insert. For a bit more info see:
http://hbase.apache.org/book.html#regions.arch
J-D
On Fri, Apr 22, 2011 at 10:40 AM, Peter Haidinyak
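The size that triggers a split is governed by hbase.hregion.max.filesize in hbase-site.xml; a sketch of the relevant property (the value shown is the 0.90 default of 256MB):

```xml
<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- 268435456 bytes = 256MB, the 0.90 default; raise for fewer, larger regions -->
  <value>268435456</value>
</property>
```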
What exactly happened here? As much as I enjoy reading logs, I also
enjoy short descriptions of the context of what I'm looking at.
J-D
On Thu, Apr 21, 2011 at 8:36 PM, Gaojinchao gaojinc...@huawei.com wrote:
Is there any issue about this ?
2011-04-21 14:48:24,676 INFO
, 2011, at 12:05 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Which HTable instantiation is giving you the error here?
Are you starting multiple jobs from the same jvm?
J-D
On Fri, Apr 22, 2011 at 11:16 AM, Pete Tyler peteralanty...@gmail.com
wrote:
Is it possible my use of map
That's almost exactly what Mozilla is doing with Socorro (google for
their presentations).
Also you seem to assume things about the region balancer that are, at
least at the moment, untrue:
Then the assumption is this process would continue until every server in the
cluster has one region of
reproducible at will for a few hours.
Are there any known cases where a deliberate delete on an entire row will
still leave data behind? Could we be messing up our timestamps in such a way
that we could be causing this?
-GS
On Wed, Apr 20, 2011 at 6:58 PM, Jean-Daniel Cryans
jdcry
Hey Eran,
Glad you could go back to debugging performance :)
The scalability issues you are seeing are unknown to me, it sounds
like the client isn't pushing it enough. It reminded me of when we
switched to using the native Thrift PHP extension instead of the
normal one and we saw huge speedups.
On Thu, Apr 21, 2011 at 10:49 AM, George P. Stathis
gstat...@traackr.com wrote:
I gave the thread that name because that was the best way I could come up
with to describe the symptoms. We still have the problem; it just may end up
being timestamp related after all.
No worries.
This does look
The get average time is the average time for a Get to execute.
The other metric... I don't see it. Are you talking about
fsReadLatency_avg_time?
J-D
On Wed, Apr 20, 2011 at 5:15 PM, Jack Levin magn...@gmail.com wrote:
How does get ave. time differ from read average time? What is
the
fsReadLatency_avg_time is the one.
-Jack
On Thu, Apr 21, 2011 at 11:06 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
The get average time is the average time for a Get to execute.
The other metric... I don't see it. Are you talking about
fsReadLatency_avg_time?
J-D
On Wed, Apr 20, 2011 at 5:15
Are you sharing a single HTable between multiple threads that do puts?
J-D
On Wed, Apr 20, 2011 at 6:03 AM, Venkatesh vramanatha...@aol.com wrote:
Using hbase-0.90.2..(sigh..) Any tip? thanks
java.lang.IndexOutOfBoundsException: Index: 4, Size: 3
at
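An HTable instance isn't thread-safe, which is the usual cause of that IndexOutOfBoundsException. A minimal sketch of the one-instance-per-thread pattern (using a stand-in class, since a real HTable needs a live cluster):

```java
public class PerThreadTableDemo {
    // Stand-in for org.apache.hadoop.hbase.client.HTable, which is not
    // thread-safe and needs a running cluster; the pattern is what matters.
    static class FakeTable {
        final long ownerThread = Thread.currentThread().getId();
    }

    // One instance per thread instead of one shared instance.
    static final ThreadLocal<FakeTable> TABLE =
            ThreadLocal.withInitial(FakeTable::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            FakeTable t = TABLE.get();
            // Each thread lazily creates and sees its own instance.
            System.out.println(t.ownerThread == Thread.currentThread().getId());
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
    }
}
```

An HTablePool, which the list often recommends, achieves the same isolation with bounded resource use.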
Hey George,
Sorry for the late answer, there's nothing that comes to mind when
reading your email.
HBASE_SLAVE_SLEEP is only used by the bash scripts, like when you do
hbase-daemons.sh it will wait that sleep time between each machine.
Would you be able to come up with a test that shows the
Take a look at this section of the book:
http://hbase.apache.org/book/performance.html
J-D
On Wed, Apr 20, 2011 at 12:44 PM, Weishung Chung weish...@gmail.com wrote:
Hello Stack,
Thank you.
You are right, it's on the client side. I was populating 30 batches of
datasets one after another
Regarding the test:
- Try to only keep one HBaseAdmin, one HTablePool and always reuse
the same conf between tests, creating a new HBA or HTP creates a new
HBaseConfiguration thus a new connection. Use methods like
setUpBeforeClass. Another option is to close the connection once you
used those
So you have your special lucene region that's opened on some region
server and when the master starts shutting down, it doesn't seem to
see it because while closing regions it says:
2011-04-18 21:35:09,221 INFO [IPC Server handler 4 on 32141]
master.ServerManager(283): Only catalog regions
and started
waiting. Instead we should wait in increments (basically always sleep
a small amount of time, up to the specified timeout). On a future loop
it would have seen that the server was stopped.
J-D
On Tue, Apr 19, 2011 at 11:04 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
So you have your
We have something on the menu:
https://issues.apache.org/jira/browse/HBASE-2357 Coprocessors: Add
read-only region replicas (slaves) for availability and fast region
recovery
Something to keep in mind is that you have to cache the data for each
replica, so a row could be in 3 different caches
Message
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Tue, April 19, 2011 5:10:07 PM
Subject: Re: Region replication?
We have something on the menu:
https://issues.apache.org/jira/browse/HBASE-2357 Coprocessors: Add
read-only region replicas (slaves
- Original Message
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Tue, April 19, 2011 5:28:46 PM
Subject: Re: Region replication?
I don't know why you would
I see what you are saying, and I understand the deadlock, but what
escapes me is why ResourceBundle has to go touch all the classes every
time to find the locale as I see 2 threads doing the same. Maybe my
understanding of what it does is just poor, but I also see that you
are using the yourkit
You can confirm if it's really a GC issue by taking a look at the
master log; if the log splitting started before the errors in the
region server log, then something happened to the region server. It's
usually GC, but it could be something else.
The fact that mslab helped does point to a GC
Yes, all 1.6 updates should be compatible with each other.
J-D
2011/4/17 Zhoushuaifeng zhoushuaif...@huawei.com:
I found that there is information about the JDK version in HBASE-3703 in the
CHANGES.txt file (hbase-config.sh needs to be updated so it can auto-detect
the sun jdk provided by
See HBASE-3744, createTable shouldn't be using the startup bulk assigner.
J-D
On Mon, Apr 18, 2011 at 8:43 PM, Gaojinchao gaojinc...@huawei.com wrote:
I created a table with some regions.
HMaster crashed because one region server crashed.
I dug into the code. It may be a bug.
Startup or
Vague question, I'll try my best answer.
This was committed as part of HBASE-1816 Master rewrite, and in the
comments I can read:
+ Move fs methods out of HMaster to FSUtils.
And if you look at FSUtils, you'll see how it's done now.
J-D
On Mon, Apr 18, 2011 at 8:48 PM, Gaojinchao