Currently bloom filters and hfile indexes are in storefileIndexSize
Why do you want bigger blocks? Just for less index size? Be aware it
may lower random read performance.
J-D
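To make the tradeoff J-D mentions concrete, here is a rough back-of-envelope sketch (plain Java; the numbers are illustrative assumptions, not HBase internals): roughly one index entry is kept per HFile block, so bigger blocks shrink the index but mean more data to read per random lookup.

```java
public class BlockIndexEstimate {
    // One index entry per block is an approximation of the HFile block index.
    static long indexEntries(long storefileBytes, long blockBytes) {
        return (storefileBytes + blockBytes - 1) / blockBytes;
    }

    public static void main(String[] args) {
        long storefile = 1L << 30; // a hypothetical 1 GB storefile
        System.out.println("64KB blocks -> " + indexEntries(storefile, 64 * 1024) + " index entries");
        System.out.println("1MB blocks  -> " + indexEntries(storefile, 1024 * 1024) + " index entries");
    }
}
```

Going from 64KB to 1MB blocks cuts the index size 16x, but each random Get then touches a block 16x bigger, which is the random-read cost J-D warns about.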
On Fri, Apr 15, 2011 at 8:23 AM, Matt Corgan mcor...@hotpads.com wrote:
Some of our servers have 5.2gb hbase heaps
Yeah it's not clear from the logs why it did that, and looking through
my logs I can't see any occurrence. Can you reproduce it easily?
J-D
On Fri, Apr 15, 2011 at 12:46 AM, Gaojinchao gaojinc...@huawei.com wrote:
Thanks for your reply.
Hbase version : 0.90.1
I find other abnormal logs.
Which hbase version?
J-D
On Fri, Apr 15, 2011 at 11:12 AM, Ajay Govindarajan
agovindara...@yahoo.com wrote:
I was doing a bulk update on a table when the RegionServer crashed. It failed
because we had not allocated enough memory to the process. So we restarted
the master and region servers
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org; Ajay Govindarajan agovindara...@yahoo.com
Sent: Friday, April 15, 2011 11:15 AM
Subject: Re: unable to disable tables
Which hbase version?
J-D
On Fri, Apr 15, 2011 at 11:12 AM, Ajay Govindarajan
agovindara...@yahoo.com
This is probably a red herring, for example if the region server had a
big GC pause then the master could have already split the log and the
region server wouldn't be able to close it (that's our version of IO
fencing). So from that exception look back in the log and see if
there's anything like :
is
now set to 2gb instead of 512mb. This could be causing long compaction
times. If a compaction takes too long, the region server won't respond and can be
marked as dead. I have had this happen on my dev cluster a few times.
-Ben
On Thu, Apr 14, 2011 at 11:20 AM, Jean-Daniel Cryans
jdcry
the compaction is in another thread, it
would still fail to respond during major compaction.
-Ben
On Thu, Apr 14, 2011 at 11:26 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Ben, the compaction is done in a background thread, it doesn't block
anything. Now if you had a heap close to 2GB, you could
(please don't leave unrelated discussions at the tail of your emails)
So I thought I never got that issue, but I wanted to make sure, so I
grepped my logs and indeed saw that I got it. What I did then was grep
the name of one of the regions that had the issue and look
at what was happening at
I guess we should, there's
https://issues.apache.org/jira/browse/HBASE-3065 that's open, but in
your case like I mentioned in your other email there seems to be
something weird in your environment.
J-D
On Thu, Apr 14, 2011 at 12:51 AM, bijieshan bijies...@huawei.com wrote:
Hi,
The
Ok so this is the result of a chain of events, without which we can't
really tell what was going on. You need to find more information about
that region in the master log and region server logs; try to find its
story.
BTW which version of HBase is this?
J-D
On Thu, Apr 14, 2011 at 6:09 AM,
the message (I
verified that our clocks are in perfect sync). This means this is
another queue that's too slow (the one that processes all the events
coming from zookeeper).
I'll keep digging.
J-D
On Thu, Apr 14, 2011 at 10:58 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
(please don't leave
a look though, J-D, and I'm sorry if you wasted too much time on it.
Sandy
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-
Daniel Cryans
Sent: Monday, April 11, 2011 17:34
To: user@hbase.apache.org
Subject: Re: Catching ZK ConnectionLoss
This could be HBASE-2077
J-D
On Wed, Apr 13, 2011 at 9:15 AM, Gary Helmling ghelml...@gmail.com wrote:
Hi Vidhya,
So it sounds like the timeout thread is timing out the scanner when it takes
more than 60 seconds reading through the large column family store file
without returning anything
this
with the usage counter. That may be the more correct approach, but I was
wondering if we could do something simpler by periodically renewing the
lease down in the RegionScanner iteration? Sort of like calling progress()
within an MR job.
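Short of renewing the lease in code, the scanner/lease timeout itself is configurable; a sketch for hbase-site.xml, assuming the 0.90-era property name:

```xml
<!-- hbase-site.xml: region server lease period (ms); scanners that do not
     call next() within this window are expired. 60000 is the default. -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>120000</value>
</property>
```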
On Wed, Apr 13, 2011 at 9:42 AM, Jean-Daniel
HConnectionManager needed some modifications in order to make it work,
it's not just about backporting that job.
J-D
On Wed, Apr 13, 2011 at 7:27 AM, Manuel de Ferran
manuel.defer...@gmail.com wrote:
Greetings,
I'm trying to backport CopyTable to HBase 0.20.6.
In other words, the challenge
upgrade - zookeeper exception in mapreduce job
Thanks J-D
I made sure to pass conf objects around in places where I wasn't..
will give it a try
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Tue, Apr 12
is bounce.
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Wed, Apr 13, 2011 3:22 pm
Subject: Re: hbase -0.90.x upgrade - zookeeper exception in mapreduce job
Like I said, it's a zookeeper configuration that you can change
and try to match configs
Venkatesh - unless you have other processes in your JVM accessing HBase (I
have
one), #1 might be the best bet.
- Ruben
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Wed, April 13, 2011 3:22:48
Is there anything in particular you'd like to know?
I recently answered a more specific (but still general) question about
Hive/HBase here: http://search-hadoop.com/m/YZe7h1zxxoc1
I will also be giving a presentation at OSCON Data in July about our
experience using both together.
J-D
On Mon,
That usually means that your datanode refuses to start or isn't able
to connect for some reason. Have a look at its log.
J-D
On Tue, Apr 12, 2011 at 11:13 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
I'm running into an error when setting the DFS block size to be larger
than the
No need to specify the method for adding; it wasn't required in 0.20.6
either, so if it accepted it, that was a bug.
J-D
On Tue, Apr 12, 2011 at 1:54 AM, 陈加俊 cjjvict...@gmail.com wrote:
I can add family by follow command In HBase-0.20.6
alter 'cjjHTML', {NAME => 'responseHeader', METHOD =>
YouAreDead means that the master is already processing the death of
those region servers when the region server talks back to the master.
Network split?
J-D
On Tue, Apr 12, 2011 at 11:33 AM, Vidhyashankar Venkataraman
vidhy...@yahoo-inc.com wrote:
This was something that happened a week back in
Could you upgrade to the newly released CDH3 instead? It has a few more fixes.
So regarding your issue, I don't see regions stuck. The first one did
timeout on opening but then it was reassigned (and then I can't see
anything in the log that says it timed out again).
By the way can you check
under an
hour,
and sure enough there are a ton of Zookeeper threads just sitting there.
Here's
a pastebin link: http://pastebin.com/MccEuvrc
I'm running 0.90.0 right now.
- Ruben
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Yep.
J-D
On Tue, Apr 12, 2011 at 3:55 PM, Chris Tarnas c...@email.com wrote:
I was looking at the metrics column in the regionserver web UI and had a
question:
If I understand correctly, hbase.hregion.max.filesize is the max size of a
single column family's storefile. If I have 4
It says:
2011-04-12 16:16:17,157 DEBUG [IPC Server handler 7 on 51372]
namenode.ReplicationTargetChooser(408): Node
/default-rack/127.0.0.1:22967 is not chosen because the node does not
have enough space
J-D
On Tue, Apr 12, 2011 at 4:24 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
)
MIN_BLOCKS_FOR_WRITE defaults to 5.
J-D
On Tue, Apr 12, 2011 at 4:35 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Hmm... There's no physical limitation, is there an artificial setting?
On Tue, Apr 12, 2011 at 4:27 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
It says:
2011-04-12 16:16
it If I want to add another family to a
table ? How to keep the data in the table before?
On Wed, Apr 13, 2011 at 2:31 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
No need to specify the method for adding; it wasn't required in 0.20.6
either, so if it accepted it, that was a bug.
J-D
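For reference, a minimal shell session for adding a family without METHOD (assuming the 0.20-era shell syntax; table and family names taken from the thread):

```
hbase> disable 'cjjHTML'
hbase> alter 'cjjHTML', {NAME => 'responseHeader'}
hbase> enable 'cjjHTML'
```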
It's possible under some bugs, which HBase version are you using?
J-D
On Mon, Apr 11, 2011 at 4:50 AM, 茅旭峰 m9s...@gmail.com wrote:
Hi,
Is it possible that some table cannot cover the whole key space? What I saw
was like
hbase(main):006:0 put 'table1', 'abc', 'cfEStore:dasd', '123'
I think it changed somewhere between 0.20 and 0.90, as I remember being
able to use a separate ZK with a standalone HBase. So for the moment
you can just set hbase.cluster.distributed to true, which will spawn
the master and the region server as 2 processes, but it will still
work without HDFS
Same for the issue where the master isn't shutting down, we should be
shutting down region servers once checkFilesystem is called... at
least until we can find a way to ride over NN restarts.
gets/puts are probably working for files that are already opened since
HBase doesn't have to talk to the
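The hbase.cluster.distributed setting suggested above goes in hbase-site.xml; a sketch:

```xml
<!-- hbase-site.xml: run the master and region server as separate
     processes, as suggested above -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
```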
I'm cleaning this up in this jira
https://issues.apache.org/jira/browse/HBASE-3755
But it's a failure case I haven't seen before, really interesting.
There's an HTable that's created in the guts of HCM that will throw a
ZookeeperConnectionException, but it will bubble up as an IOE. I'll try
to
0x12ee42283320050, closing socket connection and attempting
reconnect
Which is a 2m20s GC pause. The HDFS errors come from the fact that the
master split the logs _while_ the region server was sleeping.
J-D
On Mon, Apr 11, 2011 at 11:47 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
So my
What does the log look like when you start hbase without starting your
own zookeeper? The "Couldnt start ZK at requested address" message
means that it does fall into that part of the code, but something must
be blocking it from starting... The log should tell you.
J-D
On Mon, Apr 11, 2011 at
0.20 is the name of the branch, if you generate a tar from it you'll
see that it's called 0.20.3-SNAPSHOT
J-D
On Mon, Apr 11, 2011 at 2:16 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Well, just the difference in versions, the one in HBase is listed as
0.20 whereas the latest is
keep an eye on the Jira.
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-
Daniel Cryans
Sent: Monday, April 11, 2011 11:52
To: user@hbase.apache.org
Subject: Re: Catching ZK ConnectionLoss with HTable
I'm cleaning this up in this jira
https
It's really a lot, yes, but it could also be weird configurations or
values that are too big.
J-D
On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 cjjvict...@gmail.com wrote:
Is it too many regions? Is the memory enough?
HBase-0.20.6
2011-04-12 00:16:31,844 FATAL
...@gmail.com wrote:
Can I limit the numbers of regions on one RegionServer ?
On Tue, Apr 12, 2011 at 8:37 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
It's really a lot, yes, but it could also be weird configurations or
values that are too big.
J-D
On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 cjjvict
/property
On Tue, Apr 12, 2011 at 8:43 AM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
And where will they go? The issue isn't the number of regions per se,
it's the amount of data being served by that region server. Also I
still don't know if that's really your issue or it's a configuration
Were they opening the same region by any chance?
On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 cjjvict...@gmail.com wrote:
There is no big scan, just normal load. Also strange is that when one RS exited,
then another RS exited, and other RSs followed like that.
On Tue, Apr 12, 2011 at 8:55 AM, Jean-Daniel Cryans
My experience in debugging those kind of issues is that 95% of the
time it's a configuration issue, 4.99% of the time it's environment
and network issues (network splits, lost packets, etc), and the
remaining 0.01% is actual HDFS issues.
The fact that you're saying that you had issues even with
It should be working, what's the stack trace like?
Thx,
J-D
On Sun, Apr 10, 2011 at 7:00 PM, 茅旭峰 m9s...@gmail.com wrote:
I've tried using the new hbase 0.90.2 on our hadoop cluster setup with
CDH3B4.
I have replaced the hadoop jar file under hbase-0.90.2/lib by the CDH3B4
one.
It works
Looks like your zookeeper isn't running for some reason; that's
usually what "connection refused" means. Also notice it connects on
localhost.
J-D
On Fri, Apr 8, 2011 at 11:54 PM, William Kang weliam.cl...@gmail.com wrote:
Hi folks,
I recently upgraded to hbase 0.90.2 that runs with hadoop
You cannot have more mappers than you have regions, but you can have
less. Try going that way.
Also 149,624 regions is insane, is that really the case? I don't think
I've ever seen such a large deploy, and it's probably bound to hit some
issues...
J-D
On Sat, Apr 9, 2011 at 9:15 AM, Avery Ching
, then I'll only have 100 map tasks. Just
was wondering if anyone else faced these issues.
Thanks for your quick response on a Saturday morning =),
Avery
On Apr 9, 2011, at 9:26 AM, Jean-Daniel Cryans wrote:
You cannot have more mappers than you have regions, but you can have
less. Try going
Unfortunately it seems that there's nothing in the OutputFormat
interface that we could implement (like getSplits in the InputFormat)
to inform the JobTracker of the location of the regions. It kinda makes
sense, since when you're writing to HDFS in a normal MR job you
always write to the local
I guess you only have 1 table and you write to it sequentially such
that the regions that get moved are the ones you're not writing to?
Then yeah it's unusable, but you might also be doing it wrong (if that's
really your situation). You need to make sure you don't end up writing
to the latest regions,
See https://issues.apache.org/jira/browse/HBASE-3071
On Fri, Apr 8, 2011 at 2:22 PM, Vivek Krishna vivekris...@gmail.com wrote:
Is there a procedure to decommission a region server?
If I just kill a region server process, the master tries to reconnect again
and again and the master logs looks
Well not really a bug since that was the original intention (see how
that server has a different start code, meaning it's a different
instance of that server).
This jira was created with the intention of making it more like
hadoop: https://issues.apache.org/jira/browse/HBASE-3580
J-D
On Fri,
Check the log of the zookeeper at the address that's printed; it may be
a problem of too many connections (in which case you need to make sure
you reuse the configuration objects).
J-D
On Thu, Apr 7, 2011 at 9:49 AM, Shahnawaz Saifi shahsa...@gmail.com wrote:
Hi,
While executing MR with 472G
Another question: why would the dfsclient setting for socket timeout (for
data reading) be set so high by default if HBase is expected to
be real time? Shouldn't it be a few seconds (5?).
Not all clusters are used for real time applications, also usually
users first try to cram as much data as
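For reference, the DFSClient read timeout being discussed is configurable; a sketch for hdfs-site.xml, assuming the 0.20-era property name:

```xml
<!-- hdfs-site.xml: DFSClient socket read timeout in ms (60000 is the
     default being questioned above) -->
<property>
  <name>dfs.socket.timeout</name>
  <value>60000</value>
</property>
```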
You should be seeing more log lines related to ZooKeeper before that.
Also make sure your client connects to the zk server.
J-D
On Thu, Apr 7, 2011 at 9:11 AM, Shuja Rehman shujamug...@gmail.com wrote:
Hi
I am trying to read from hbase with the following code:
http://pastebin.com/wvVVUT3p
it
There's nothing of use in the pasted logs unfortunately, and the log
didn't get attached to your mail (happens often). Consider putting it on
a web server or pastebin.
Also I see you are on an older version; upgrading isn't going to fix
your issue (which is probably related to your environment or
That's the one published by Facebook, the one maintained by Apache is
https://github.com/apache/hadoop-common/tree/branch-0.20-append
J-D
On Thu, Apr 7, 2011 at 11:04 AM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Is https://github.com/facebook/hadoop-20-append the Github branch for
)
at DAO.AbstractDAO.connect(AbstractDAO.java:31)
at DAO.TimeWindowDAO.getTimeWindowReport(TimeWindowDAO.java:132)
On Thu, Apr 7, 2011 at 10:27 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
You should be seeing more log lines related to ZooKeeper before that.
Also make sure your
(TimeWindowDAO.java:132)
On Thu, Apr 7, 2011 at 10:27 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
You should be seeing more log lines related to ZooKeeper before that.
Also make sure your client connects to the zk server.
J-D
On Thu, Apr 7, 2011 at 9:11 AM, Shuja Rehman shujamug
To help usability, I created https://issues.apache.org/jira/browse/HBASE-3755
J-D
On Thu, Apr 7, 2011 at 11:39 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
So regarding finding your logs and other stuff related to that, since
you are using CDH you should always check their documentation
As far as I can tell, they are at the same revision.
J-D
On Thu, Apr 7, 2011 at 1:19 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
Is http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append
different than the Github one at
: 2011-01-10 11:01:36 -0800 (Mon, 10 Jan 2011)
On Thu, Apr 7, 2011 at 2:05 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
How did you compare?
On Thu, Apr 7, 2011 at 1:37 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
As far as I can tell, they are at the same revision.
J-D
: hairong
Last Changed Rev: 1057313
Last Changed Date: 2011-01-10 11:01:36 -0800 (Mon, 10 Jan 2011)
On Thu, Apr 7, 2011 at 2:05 PM, Jason Rutherglen
jason.rutherg...@gmail.com wrote:
How did you compare?
On Thu, Apr 7, 2011 at 1:37 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
As far
Check the region server log, you probably missed something when
configuring your table.
J-D
On Wed, Apr 6, 2011 at 2:15 AM, Jameson Li hovlj...@gmail.com wrote:
Hi,
Today I have added the lzo to the hadoop cluster, and also the
hbase(0.20.6).
Then I create a test table:
create 'mytable',
Google will give you what you're asking for.
Look for how Facebook is using HBase for messages. Also look for how
we have been using HBase at StumbleUpon for 2 years now and for both
live and batch queries. Numbers are usually included in the decks.
J-D
On Wed, Apr 6, 2011 at 2:18 PM, Shantian
Google for "could only be replicated to 0 nodes, instead of 1"; this
is usually caused by a basic HDFS configuration problem.
J-D
On Mon, Apr 4, 2011 at 11:04 PM, prasunb prasun.bhattachar...@tcs.com wrote:
Hello,
I am new to Hadoop and I am struggling to configure it in fully distributed
What I usually tell people is that if time is part of your model, then put
it in a key.
J-D
On Tue, Apr 5, 2011 at 2:16 AM, Miguel Costa miguel-co...@telecom.pt wrote:
Hi,
I want to have my data aggregated by day, so I would like to know which is
the best option to query my data. To put The
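A minimal sketch of J-D's advice above (plain Java; the entity/day key layout is our assumption, not from the thread): putting the day into the row key makes one day's data a contiguous scan range.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DayKey {
    private static final DateTimeFormatter DAY = DateTimeFormatter.ofPattern("yyyyMMdd");

    // Rows for one entity sort by day; scan [key(day), key(day + 1)) to read a day.
    static String rowKey(String entityId, LocalDate day) {
        return entityId + "/" + DAY.format(day);
    }

    public static void main(String[] args) {
        System.out.println(rowKey("site42", LocalDate.of(2011, 4, 5))); // site42/20110405
    }
}
```

With that layout a per-day aggregation is a plain prefix scan rather than a timestamp filter over the whole table.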
Those configs are about the interaction between the hbase client and
region server.
What you are trying to do doesn't make much sense IMO, there's no such
thing as a primary datanode.
J-D
On Mon, Apr 4, 2011 at 2:43 PM, Jack Levin magn...@gmail.com wrote:
property
As far as I can tell the async nature of those operations has nothing
to do with what you see since it's not even able to get a session from
ZooKeeper (so it's not even talking to the region servers). If you
look at the stack trace:
From http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html
Instances of HTable passed the same Configuration instance will share
connections to servers out on the cluster and to the zookeeper
ensemble as well as caches of region locations. This is usually a
*good* thing. This
Tarnas c...@email.com wrote:
Hi JD,
Sorry for taking a while - I was traveling. Thank you very much for
looking through these.
See answers below:
On Apr 1, 2011, at 11:19 AM, Jean-Daniel Cryans wrote:
Thanks for taking the time to upload all those logs, I really appreciate it.
So from
Hi users,
I just want to share a useful tip when storing very fat values into
HBase: we were able to make some of our MR jobs an order of magnitude
faster by simply using Java's Deflater and then passing the byte[] to
Put (and the equivalent when retrieving the values with Inflater). We
also use
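A self-contained sketch of that tip using pure java.util.zip (method names are ours; the Put/Get plumbing is omitted): deflate the fat value before handing the byte[] to Put, inflate it again after Get/Scan.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ValueCodec {
    // Compress a value before storing it in a Put.
    static byte[] deflate(byte[] raw) {
        Deflater deflater = new Deflater();
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(raw.length);
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Decompress a value read back from a Get or Scan.
    static byte[] inflate(byte[] compressed) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] fat = new String(new char[10000]).replace('\0', 'x').getBytes();
        byte[] small = deflate(fat);
        System.out.println(fat.length + " -> " + small.length + " bytes");
    }
}
```

This trades a little CPU on the client for much smaller cells, which is where the MR speedup in the tip comes from.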
this error. When I had done this, I was
assuming something related to the asynchronous nature of flush(), but that
doesn't make sense at all now. So why is it getting fixed (seems to) when I
do this?
Thanks,
Hari
On Mon, Apr 4, 2011 at 10:40 PM, Jean-Daniel Cryans
jdcry...@apache.orgwrote
2 Internets for you Doug, that's awesome!
Thx
J-D
On Apr 2, 2011 11:59 AM, Doug Meil doug.m...@explorysmedical.com wrote:
Hi there everybody-
Just thought I'd let everybody know about this... Stack and I have been
working on updating the HBase book and porting portions of the very-out-of-date
right? More like, all the data in
one family was missing for those 11B rows? Is that right?
Thx!
J-D
On Thu, Mar 31, 2011 at 7:15 PM, Chris Tarnas c...@email.com wrote:
Thanks for your help J.D., answers inline:
On Mar 31, 2011, at 8:00 PM, Jean-Daniel Cryans wrote:
I wouldn't worry too much
(TableMapReduceUtil.java:172)
Regards
Stuart
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel
Cryans
Sent: 30 March 2011 17:34
To: user@hbase.apache.org
Subject: Re: Changing Zookeeper address programmatically for reduces
That's basically
);
}
}
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel Cryans
Sent: March 30, 2011 1:39
To: user@hbase.apache.org
Cc: Gaojinchao; Chenjian
Subject: Re: A lot of data is lost when name node crashed
I was expecting it would die, strange it didn't. Could you provide
Inline.
J-D
I assume the block cache tuning key you talk about is
hfile.block.cache.size, right? If it is only 20% by default then
what is the rest of the heap used for? Since there are no fancy
operations like joins, and since I'm not using memory tables, the only
thing I can think of is
...@gmail.com wrote:
thank you JD
the type of key is Long, and the family's max versions is 5.
On Thu, Mar 31, 2011 at 12:42 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
(Trying to answer with the very little information you gave us)
So in HBase every cell is stored along with its row key, family
That's the correct guess.
J-D
On Thu, Mar 31, 2011 at 4:59 PM, Joseph Boyd
joseph.b...@cbsinteractive.com wrote:
We're using hbase 0.90.0 here, and I'm seeing a curious behavior with my
scans.
I have some code that does a scan over a table, and for each row
returned some work to verify the
Where is that hadoop.log.file you're talking about?
J-D
On Thu, Mar 31, 2011 at 3:22 PM, Geoff Hendrey ghend...@decarta.com wrote:
Hi -
I was wondering where I can find an explanation of what hbase logs to
hadoop.log.file. This file is defined in log4j.properties. I see
DFSClient logging
Sub-second responses for 100MB files? You sure that's right?
Regarding proper case studies, I don't think a single one exists.
You'll find presentations decks about some use cases if you google a
bit tho.
J-D
On Thu, Mar 31, 2011 at 12:20 PM, Shantian Purkad
shantian_pur...@yahoo.com wrote:
).
-geoff
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of
Jean-Daniel Cryans
Sent: Thursday, March 31, 2011 5:26 PM
To: user@hbase.apache.org
Subject: Re: hadoop.log.file
Where is that hadoop.log.file you're talking about?
J-D
On Thu, Mar 31
I wouldn't worry too much at the moment for what seems to be double
deletes of blocks, I'd like to concentrate on the state of your
cluster first.
So if you run hbck, do you see any inconsistencies?
In the datanode logs, do you see any exceptions regarding xcievers
(just in case).
In the region
...@decarta.com wrote:
whoops, yep that's the one. Just trying to understand how it relates to
the master logfile, regionserver logfile, and zookeeper logfile.
-geoff
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of
Jean-Daniel Cryans
Sent: Thursday
That's basically what CopyTable does if I understand your need properly:
https://github.com/apache/hbase/blob/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
J-D
On Wed, Mar 30, 2011 at 8:34 AM, Stuart Scott stuart.sc...@e-mis.com wrote:
Hi,
I have a map/reduce job that
Put your data on hard drives, ship them to the other data center?
On Wed, Mar 30, 2011 at 12:44 AM, 陈加俊 cjjvict...@gmail.com wrote:
My cluster 1 is in Shanghai and cluster 2 is in Beijing, so cluster 1
can't see cluster 2.
any hints?
2011/3/10 shixing paradise...@gmail.com
Because
Just as a baseline comparison, I've looked at our own environment and
our ~64MB flushes usually take ~700ms. Now, there could be a whole lot
of reasons why it's faster in our case and not yours, starting with
hardware, so I'm not saying that you necessarily have an issue, but
I'd like to cover the
Are you logging GC activity for the datanodes?
On Mon, Mar 28, 2011 at 9:28 PM, Jack Levin magn...@gmail.com wrote:
Good evening, has anyone seen this in your logs? It could be something
simple that we are missing. We are also seeing that datanodes can't be
accessed from web port 50075 every
Are you talking about 0.90.1? 0.91.0 is far from being released so I
guess it can't be it.
0.90.1 is a minor revision; it's supposed to be forward and backward
compatible, so code compiled against 0.90.0 should work on 0.90.1 (if
not, then it's a bug).
J-D
On Wed, Mar 30, 2011 at 12:22 PM,
(Trying to answer with the very little information you gave us)
So in HBase every cell is stored along with its row key, family name,
qualifier and timestamp (plus the length of each). Depending on how big
your keys are, this can grow your total dataset. So it's not just a
function of value sizes.
J-D
On
cluster... makes sense now.
~Jeff
On 3/28/2011 3:34 PM, Jean-Daniel Cryans wrote:
The slave cluster is saying that the table user-session doesn't
exist... is that the case?
J-D
On Mon, Mar 28, 2011 at 1:38 PM, Jeff Whiting je...@qualtrics.com wrote:
My regions weren't very well balanced
I was expecting it would die, strange it didn't. Could you provide a
bigger log, this one basically tells us the NN is gone but that's
about it. Please put it on a web server or something else that's
easily reachable for anyone (eg don't post the full thing here).
Thx,
J-D
On Tue, Mar 29, 2011
Hey Eran,
Usually this mailing list doesn't accept attachments (or it works for
voodoo reasons), so you'd be better off pastebin'ing them.
Some thoughts:
- Inserting into a new table without pre-splitting it is bound to be a
red herring of bad performance. Please pre-split it with methods such
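Pre-splitting needs split points; a small sketch (plain Java, assuming roughly uniformly distributed binary row keys) that computes evenly spaced single-byte split points one could pass to a pre-split table creation:

```java
public class SplitPoints {
    // N regions need N-1 split points; spread them over the 0x00..0xFF
    // range of the leading key byte.
    static byte[][] splits(int numRegions) {
        byte[][] points = new byte[numRegions - 1][];
        for (int i = 1; i < numRegions; i++) {
            points[i - 1] = new byte[] { (byte) (i * 256 / numRegions) };
        }
        return points;
    }

    public static void main(String[] args) {
        for (byte[] p : splits(4)) {
            System.out.printf("split at 0x%02X%n", p[0]); // 0x40, 0x80, 0xC0
        }
    }
}
```

The same idea extends to multi-byte or hex-string prefixes; the point is that the writes land on all regions from the start instead of hammering one.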
The 60 secs timeout means that the client was waiting on the master
for some operation but the master took longer than 60 secs to do it,
so its log should be the next place to look for something whack.
BTW deleting the rows from .META. directly is probably the worst thing
you can do.
J-D
On
(regionInfo, null);
It seems like some bug in a special scenario when the HMaster restarts or
fails over
} else if (!serverManager.isServerOnline(
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel Cryans
Sent: March 29, 2011 1:02
To
the wrong decision and that code was
fixed in 0.89, but then completely redone in 0.90
J-D
On Mon, Mar 28, 2011 at 7:01 PM, 陈加俊 cjjvict...@gmail.com wrote:
HBase-0.20.6
On Tue, Mar 29, 2011 at 1:27 AM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
Which HBase version?
J-D
On Mon, Mar 28, 2011
recommended. But we were
facing the disable problem way too often. So we thought we'd make it
in-built. What other alternatives do I have?
Thx,
Hari
On Tue, Mar 29, 2011 at 11:30 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
The 60 secs timeout means that the client was waiting on the master
. Here is the HMaster log before the IOException was
thrown: http://pastebin.com/x1BUuPpQ
thanks,
Hari
On Wed, Mar 30, 2011 at 12:21 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
There's a reason why disabling takes time, if you delete rows from
.META. you might end up in an inconsistent
Yes, but you'll start with a single region; instead of truncating you
probably want to create a pre-split table.
J-D
On Tue, Mar 29, 2011 at 2:27 PM, Venkatesh vramanatha...@aol.com wrote:
Hi,
If I export existing table using Export MR job, truncate the table, increase
region
this exercise is to reduce the # of regions in our cluster (in the
absence of additional hardware;
25K regions on 20 nodes)
-Original Message-
From: Jean-Daniel Cryans jdcry...@apache.org
To: user@hbase.apache.org
Sent: Tue, Mar 29, 2011 5:29 pm
Subject: Re: Export/Import
Inline.
J-D
Hi J-D,
I can't paste the entire file because it's 126K. Trying to attach it
now as a zip, let's see if that has more luck.
In the jstack you posted, all the Gets were hitting HDFS which is
probably why it's slow. Until you can get something like HDFS-347 in
your Hadoop you'll have
Hey Joe,
That TPE is used to do batch operations from a single HTable, but
those pools cannot be shared the way the code works right now. If you
don't need batch operations, you can set hbase.htable.threads.max to
1.
It seems that when you call htable.close it doesn't close the TPE,
which is a
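The setting J-D mentions goes in the client-side hbase-site.xml; a sketch:

```xml
<!-- client-side hbase-site.xml: shrink HTable's batch-operation thread
     pool to a single thread when batch operations aren't needed -->
<property>
  <name>hbase.htable.threads.max</name>
  <value>1</value>
</property>
```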