Hi All, does anyone happen to know of or use a script or tool that can
read the regioninfo column from .META. rows and create .regioninfo files?
Thanks.
-Jack
re why you need this? hbck can create
> .regioninfo files if missing, if that's what you need.
>
> thanks,
> esteban.
> On Mon, Oct 5, 2015 at 1:23 PM, Jack Levin <magn...@gmail.com> wrote:
>
>> Hi All, does anyone h
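(A minimal sketch of the kind of tool being asked about, against the 0.90-era Java client API: scan .META. for info:regioninfo and rewrite each region's .regioninfo. The /hbase root path and the bare-Writable file format are assumptions here; the file HBase itself writes may contain more, so hbck's fix options are the safer route where they apply.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Writables;

public class RegenRegioninfo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);            // assumes fs.defaultFS points at the cluster HDFS
    HTable meta = new HTable(conf, ".META.");
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("regioninfo"));
    ResultScanner results = meta.getScanner(scan);
    for (Result r : results) {
      byte[] bytes = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("regioninfo"));
      if (bytes == null) continue;
      HRegionInfo info = Writables.getHRegionInfo(bytes);
      // Assumed layout: <hbase.rootdir>/<table>/<encoded-region-name>/.regioninfo
      Path out = new Path("/hbase/" + info.getTableDesc().getNameAsString()
          + "/" + info.getEncodedName() + "/.regioninfo");
      FSDataOutputStream os = fs.create(out, true);
      info.write(os);                                // HRegionInfo was a Writable in 0.90.x
      os.close();
    }
    results.close();
    meta.close();
  }
}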
We at ImageShack use HBase to store all of our images, currently at ~2 billion
rows with about 350+ TB.
Jack
On Friday, December 5, 2014, iain wright iainw...@gmail.com wrote:
Hi Jeremy,
pinterest is using it for their feeds:
http://www.slideshare.net/cloudera/case-studies-session-3a
Hey All, I was wondering if anyone had this issue with 0.90.5 HBASE.
I have a table 'img611', I issue delete of keys like this:
hbase(main):004:0> describe 'img611'
DESCRIPTION                                                        ENABLED
 {NAME => 'img611', FAMILIES => [{NAME => 'att',
: Compaction (major) requested for
img611,u4rpx.jpg,1329700235569.cf0a557ff4030c238fc5a6ad732be45f.
because User-triggered major compaction; priority=1, compaction queue size=2
However, the split never occurred, and the data did not get cleaned up.
-Jack
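(For reference, a minimal sketch of the delete plus user-triggered major compaction described in this thread, using the old Java client API; the row key here is just the one visible in the region name above, and on 0.90.x deleted cells are only physically removed once the major compaction actually runs.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteAndCompact {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Tombstone an entire row (all families and versions).
    HTable table = new HTable(conf, "img611");
    table.delete(new Delete(Bytes.toBytes("u4rpx.jpg")));
    table.close();

    // Request a major compaction so the tombstoned data is rewritten out of the store files.
    HBaseAdmin admin = new HBaseAdmin(conf);
    admin.majorCompact("img611");
  }
}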
On Tue, Apr 22, 2014 at 10:34 AM, Jack Levin magn
Submitted JIRA patch: https://issues.apache.org/jira/browse/HDFS-6022
(with test)
On Mon, Feb 24, 2014 at 12:16 PM, Jack Levin magn...@gmail.com wrote:
I will do that.
-Jack
On Mon, Feb 24, 2014 at 6:23 AM, Steve Loughran ste...@hortonworks.com
wrote:
that's a very old version
February 2014 04:48, Stack st...@duboce.net wrote:
On Sat, Feb 15, 2014 at 8:01 PM, Jack Levin magn...@gmail.com wrote:
Looks like I patched it in DFSClient.java, here is the patch:
https://gist.github.com/anonymous/9028934
I moved the 'deadNodes' list outside as a global field.
I can submit a JIRA for this if you feel that's appropriate.
On Feb 18, 2014 8:49 PM, Stack st...@duboce.net wrote:
On Sat, Feb 15, 2014 at 8:01 PM, Jack Levin magn...@gmail.com wrote:
Looks like I patched it in DFSClient.java, here is the patch:
https://gist.github.com/anonymous/9028934
, Feb 13, 2014 at 10:55 PM, Stack st...@duboce.net wrote:
On Thu, Feb 13, 2014 at 9:18 PM, Jack Levin magn...@gmail.com wrote:
One other question, we get this:
2014-02-13 02:46:12,768 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
connect to /10.101.5.5:50010 for file
/hbase/img32
I found the code path that does not work and patched it. Will report whether it
fixes the problem.
On Feb 14, 2014 8:19 AM, Jack Levin magn...@gmail.com wrote:
0.20.2-cdh3u2 --
add to deadNodes and continue would solve this issue. For some reason
it's not getting into this code path.
If it's a matter
Good morning --
I had a question: we have had a datanode go down, and it's been down for a few
days; however, HBase is still trying to talk to that dead datanode.
2014-02-13 08:57:23,073 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
connect to /10.101.5.5:50010 for file
I meant it's in the 'dead' list on the HDFS namenode page. hadoop fsck / shows
no issues.
On Thu, Feb 13, 2014 at 10:38 AM, Jack Levin magn...@gmail.com wrote:
Good morning --
I had a question: we have had a datanode go down, and it's been down for a
few days; however, HBase is still trying to talk
and then keeps them open as long as the RS is alive. We're
failing read of this replica and then we succeed getting the block
elsewhere? You get that exception every time? What hadoop version Jack?
You have short-circuit reads on?
St.Ack
On Thu, Feb 13, 2014 at 10:41 AM, Jack Levin magn
at 1:41 PM, Jack Levin magn...@gmail.com wrote:
As far as I can tell I am hitting this issue:
http://grepcode.com/search/usages?type=methodid=repository.cloudera.com%24content%24repositories%24releases@com.cloudera.hadoop%24hadoop-core@0.20.2-320@org%24apache%24hadoop%24hdfs%24protocol
Can upgrade now but I would take suggestions on how to deal with this
On Feb 13, 2014 2:02 PM, Stack st...@duboce.net wrote:
Can you upgrade Jack? This stuff is better in later versions (dfsclient
keeps running list of bad datanodes...)
St.Ack
On Thu, Feb 13, 2014 at 1:41 PM, Jack Levin
I meant to say, I can't upgrade now; it's a petabyte storage system. A
little hard to keep a copy of something like that.
On Thu, Feb 13, 2014 at 3:20 PM, Jack Levin magn...@gmail.com wrote:
Can upgrade now but I would take suggestions on how to deal with this
On Feb 13, 2014 2:02 PM, Stack st
: Failed to connect to /10.103.8.109:50010, add to deadNodes and continue
'add to deadNodes and continue' specifically?
-Jack
On Thu, Feb 13, 2014 at 8:55 PM, Jack Levin magn...@gmail.com wrote:
I meant to say, I can't upgrade now; it's a petabyte storage system. A
little hard to keep a copy
I've never tried it, HBASE worked out nicely for this task, caching
and all is a bonus for files.
-jack
On Mon, Jan 28, 2013 at 2:01 AM, Adrien Mogenet
adrien.moge...@gmail.com wrote:
Could HCatalog be an option ?
On 26 Jan 2013 at 21:56, Jack Levin magn...@gmail.com wrote:
AFAIK, namenode
?
thanks and regards,
Yiyu
On Sat, Jan 26, 2013 at 9:56 PM, Jack Levin magn...@gmail.com wrote:
AFAIK, namenode would not like tracking 20 billion small files :)
-jack
On Sat, Jan 26, 2013 at 6:00 PM, S Ahmed sahmed1...@gmail.com wrote:
That's pretty amazing.
What I am confused
learn it.
Also, do you store the metadata of each video clip directly in HDFS, or do you
have other storage like memcache?
thanks and regards,
Yiyu
On Sun, Jan 27, 2013 at 11:56 AM, Jack Levin magn...@gmail.com wrote:
We did some experiments; the open source project HOOP works well with
interfacing
, or we can rent our own cluster with Restful
API. If anyone's interested, ping me off the list please.
Thanks.
-Jack
On Sun, Jan 27, 2013 at 8:06 PM, Jack Levin magn...@gmail.com wrote:
We store image/media data in a second HBase cluster, but I don't see a
reason why it would not work
along).
On Wed, Jan 23, 2013 at 11:53 PM, Jack Levin magn...@gmail.com wrote:
It's best to keep some RAM for filesystem caching; besides, we
also run the datanode, which takes heap as well.
Now, please keep in mind that even if you specify a heap of, say, 5GB, if
your server opens
and
bloom.cacheonwrite) to cache the index/bloom blocks upon hfile writes,
though I find it unlikely that they could be impacting my setup.
On Thu, Jan 24, 2013 at 11:46 PM, Jack Levin magn...@gmail.com wrote:
Generally, the larger the flush, the harder the GC will work. Flush more
often to avoid this. What
PM, Jack Levin magn...@gmail.com wrote:
It's best to keep some RAM for filesystem caching; besides, we
also run the datanode, which takes heap as well.
Now, please keep in mind that even if you specify a heap of, say, 5GB, if
your server opens threads to communicate with other systems via RPC
Generally, the larger the flush, the harder the GC will work. Flush more
often to avoid this. What is your total heap size set at?
On Jan 24, 2013 9:02 PM, Varun Sharma va...@pinterest.com wrote:
I do have significant block cache churn, and this issue is typically
correlated with a huge increase in
? Is there a reason not to use all of it (the
DataNode typically takes 1G of RAM)
On Sun, Jan 20, 2013 at 11:49 AM, Jack Levin magn...@gmail.com wrote:
I forgot to mention that I also have this setup:
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>33554432</value>
</property>
%, do you mean
the memstore flush size? 15% would mean close to 1G; have you seen any
issues with flushes taking too long?
Thanks
Varun
On Sun, Jan 13, 2013 at 8:17 AM, Jack Levin magn...@gmail.com wrote:
That's right, the memstore size, not the flush size, is increased. The file size is
10G
in
addition to increasing the memstore size:
hbase.hregion.max.filesize
hbase.hregion.memstore.flush.size
On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin magn...@gmail.com wrote:
We buffer all accesses to HBASE with a Varnish SSD-based caching layer,
so the impact on reads is negligible. We have 70 node
...@apache.org
wrote:
Interesting. That's close to a PB if my math is correct.
Is there a write up about this somewhere? Something that we could link
from the HBase homepage?
-- Lars
- Original Message -
From: Jack Levin magn...@gmail.com
To: user@hbase.apache.org
http://img338.imageshack.us/img338/6831/screenshot20130111at949.png
this shows how often we flush, and how large the region files are. We
do have bloom filters turned on, so that we don't incur extra seeks across
multiple RS files.
-Jack
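(A minimal sketch, in the Java API, of what "bloom filters turned on" looks like at table creation; the table name is made up, 'att' is the family seen elsewhere in these threads, and the BloomType enum location varies between versions. A row-level bloom filter lets a store file answer "this row is not here" without a disk seek, which is what avoids the extra seeks mentioned above.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class CreateImageTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("img999");   // hypothetical table name
    HColumnDescriptor att = new HColumnDescriptor("att");
    att.setBloomFilterType(StoreFile.BloomType.ROW);          // row-level bloom filter
    desc.addFamily(att);
    admin.createTable(desc);
  }
}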
On Fri, Jan 11, 2013 at 9:47 AM, Jack Levin magn...@gmail.com
We stored about 1 billion images in HBase, with file sizes up to 10MB.
It's been running for close to 2 years without issues and serves
delivery of images for Yfrog and ImageShack. If you have any
questions about the setup, I would be glad to answer them.
-Jack
On Sun, Jan 6, 2013 at 1:09 PM,
Please note that if you have a fast-growing datastore, you may end up
with very large region files if you limit the number of regions. If
that happens (and you can tell by simply examining your HDFS), your
compactions (which you can't avoid) will end up rewriting a lot of
data. In our case (we
What's wrong with that size? We routinely store 15MB values in our image HBase.
-Jack
On Tue, Apr 17, 2012 at 10:46 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Make sure the config is changed client-side not server-side.
Also you might not want to store 12MB values in HBase.
J-D
On Tue,
I would do a panel session on the subject if there is interest.
-Jack
On Thu, Feb 23, 2012 at 11:50 AM, Andrew Purtell apurt...@apache.org wrote:
As a disclaimer, I'm on the program committee for HBC2012 but this is
strictly my personal opinion.
I think we should steer clear of HBase vs.
Images (JPGs) are bytes; there is no difference. You just need to add the
appropriate HTTP headers using nginx or any other proxy of your choice and
put it on top of the REST HBASE API.
-Jack
On Tue, Jan 17, 2012 at 10:11 AM, shashwat shriparv
dwivedishash...@gmail.com wrote:
You can not store image as such
you're looking for an online solution.
On Sun, Dec 25, 2011 at 7:15 PM, Jack Levin magn...@gmail.com wrote:
Yep
-Jack
On Dec 25, 2011, at 4:39 PM, Ted Yu yuzhih...@gmail.com wrote:
Which version of HBase ?
I guess 0.90.4 ?
Cheers
On Sun, Dec 25, 2011 at 3:55 PM, Jack Levin magn
Some time ago, we had a situation where our REST server was slammed
with queries that did not find any matches for rows in HBase. When
that happened, we sustained 50k rpc/sec to the META region server, as
reported by the master web page. After digging deeper, we found that
each request with a 'wrong'
Greetings all. How does one deal with holes in tables between
regions nowadays?
Name                                                              Region Server   Start Key   End Key
img644,,1317474152909.02f379ab6f08f4d7609ef1245cb7033a.           not deployed                1cce.jpg
img644,1cce.jpg,1317474152909.ebb8778fc1e67965c518e357125678ea.   not deployed    1cce.jpg
3. enable 'img644'
Does not solve the problem.
-Jack
On Sun, Dec 25, 2011 at 3:54 PM, Jack Levin magn...@gmail.com wrote:
Greetings all. How does one deal with holes in tables between
regions nowadays?
Name Region Server Start Key End Key
img644
Yep
-Jack
On Dec 25, 2011, at 4:39 PM, Ted Yu yuzhih...@gmail.com wrote:
Which version of HBase ?
I guess 0.90.4 ?
Cheers
On Sun, Dec 25, 2011 at 3:55 PM, Jack Levin magn...@gmail.com wrote:
3. enable 'img644'
Does not solve the problem.
-Jack
On Sun, Dec 25, 2011 at 3:54
Hello All. I've set up an HBase (0.90.4) sandbox running on servers
where we have some excess capacity. Feel free to play with it, e.g.
create tables, run load tests, benchmarks, essentially do whatever you
want, just don't put your production services there, because while we
do have it up due
Anyone seen this before? We continue to have this on several of our clusters.
Thanks.
-Jack
On Wed, Nov 9, 2011 at 7:24 PM, Jack Levin magn...@gmail.com wrote:
Hey guys, I am getting those errors after moving into 0.90.4:
2011-11-09 19:22:51,220 ERROR
...@duboce.net wrote:
On Wed, Nov 9, 2011 at 7:24 PM, Jack Levin magn...@gmail.com wrote:
Hey guys, I am getting those errors after moving into 0.90.4:
You have custom code on the server-side Jack? A filter or something?
You could turn on rpc logging. It could give you more clues on what
is messing
Nope, there are no timeouts; the queries are fast and 95% in cache.
This looks like a region server tried to read some memory buffer and
got 0 bytes in return.
-Jack
On Mon, Nov 14, 2011 at 3:10 PM, Stack st...@duboce.net wrote:
On Mon, Nov 14, 2011 at 3:05 PM, Jack Levin magn...@gmail.com
Hey guys, I am getting those errors after moving into 0.90.4:
2011-11-09 19:22:51,220 ERROR
org.apache.hadoop.hbase.io.HbaseObjectWritable: Error in readFields
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at
You likely have the wrong hadoop-core jar in the hbase/lib dir; delete it, and copy
the one from your Hadoop dir.
-Jack
On Nov 2, 2011, at 10:14 AM, LoveIR shiva2...@gmail.com wrote:
Hi,
I am using HBase 0.90.4 and my Hadoop version is 0.20.203. I am
getting the following exception in my
Project: HBase
Issue Type: Bug
Components: wal
Affects Versions: 0.90.4
Reporter: jack levin
Assignee: gaojinchao
Priority: Blocker
Fix For: 0.92.0, 0.90.5
Attachments: HBASE-4695_Trunk_V2.patch,
HBASE
The master will detect that the RS is down by periodically checking
ZooKeeper (it will say in the master log that the znode expired). Afterwards, it
will check to see if there is anything in the /hbase/.logs directory for
that region server; if something is found, the master will replay the log
records and 'push'
Make hbase.hregion.max.filesize very large; then your regions
won't split. We use this method when copying a 'live' hbase to make a
backup.
-Jack
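(The same effect can be had per table through the Java admin API by raising MAX_FILESIZE far above any realistic region size; a minimal sketch, with an illustrative table name, and with the disable/enable around modifyTable as the conservative route on older versions. The message above is about the cluster-wide hbase.hregion.max.filesize in hbase-site.xml.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class FreezeSplits {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    byte[] name = Bytes.toBytes("img644");              // illustrative table name
    HTableDescriptor desc = admin.getTableDescriptor(name);
    desc.setMaxFileSize(1024L * 1024 * 1024 * 1024);    // 1 TB: regions effectively never split
    admin.disableTable(name);
    admin.modifyTable(name, desc);
    admin.enableTable(name);
  }
}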
On Sat, Sep 3, 2011 at 4:32 PM, Geoff Hendrey ghend...@decarta.com wrote:
Is there a way to prevent regions from splitting while we are running
I don't think anyone except Facebook actually uses it. Their
case is special, as they have millions and millions of files in HDFS.
-Jack
On Thu, Aug 11, 2011 at 11:19 AM, shanmuganathan.r
shanmuganatha...@zohocorp.com wrote:
Hi All,
I am running the HBase distributed mode in
Here is a connection pool implementation that works for us:
http://pastebin.com/KBv9t7S8
-Jack
On Sun, Jul 17, 2011 at 11:16 PM, aadish kotwal.aad...@gmail.com wrote:
Hey people,
I am very new to HBase, and I would appreciate it if someone gave me guidance
regarding connection pooling.
Thanks a
? It
is significantly faster with (de)serialization. You may want to grab the
latest nightly build of thrift as it has quite a few bug fixes in the php
thrift extension.
~Jeff
On 7/11/2011 11:22 PM, Jack Levin wrote:
For those who are interested, I did some loadtesting of Puts and Gets
speeds using PHP
Hello, we are thinking about using an HBase table as a simple queue that
will dispatch the work for a mapreduce job, as well as for real-time
fetching of data to present to the end user. In simple terms, suppose you
had a data source table and a queue table. The queue table has a
smaller set of rows that
out from the middle of the table.
-Jack
On Sat, Jul 16, 2011 at 9:38 AM, Jack Levin magn...@gmail.com wrote:
Hello, we are thinking about using Hbase table as a simple queue which
will dispatch the work for a mapreduce job, as well as real time
fetching of data to present to end user
Be mindful that if you are using a scanner with filters, the row key
remains the index of the table, and the filter just filters your
results based on how you run your scanner, similarly to cat file |
grep filter: if the file is your table and has many lines (rows),
your scan might be very
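(A minimal sketch of that point in the Java API, with made-up row bounds and filter: the start/stop row uses the table's row-key index and skips whole regions, while the filter only discards non-matching cells as the scanner walks whatever range you gave it.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilteredScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "img611");

    // Bounding by row key is the part that uses the index.
    Scan scan = new Scan(Bytes.toBytes("u4"), Bytes.toBytes("u5"));
    // The filter only trims what comes back; it does not narrow the scan by itself.
    scan.setFilter(new ValueFilter(CompareFilter.CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes("jpg"))));

    ResultScanner results = table.getScanner(scan);
    for (Result r : results) {
      System.out.println(Bytes.toString(r.getRow()));
    }
    results.close();
    table.close();
  }
}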
For those who are interested, I did some load testing of Put and Get
speeds using PHP -> Thrift Server -> HBASE, and Java API client ->
HBASE.
Writing and reading 5-10 byte cells (from cache) is 30 times faster
using the Java API client. So I am going to assume that writing near
realtime
).
Ravi
On Thursday, June 30, 2011, Mark Kerzner markkerz...@gmail.com wrote:
Thank you, looks very interesting.
Mark
On Thu, Jun 30, 2011 at 11:29 PM, Jack Levin magn...@gmail.com wrote:
http://www.slideshare.net/jacque74/hug-hbase-presentation
My presentation posted
http://www.slideshare.net/jacque74/hug-hbase-presentation
My presentation posted on slideshare, from todays talk. FYI.
Best,
-Jack
On Fri, Jun 10, 2011 at 3:11 PM, Jack Levin magn...@gmail.com wrote:
Yep. I'd do HUG, its probably larger building/room anyway :).
-Jack
On Fri, Jun 10, 2011
Yep. I'd do HUG, its probably larger building/room anyway :).
-Jack
On Fri, Jun 10, 2011 at 11:39 AM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
Would you like to transform this into a HUG? See
http://www.meetup.com/hbaseusergroup/
J-D
On Wed, Jun 8, 2011 at 12:07 AM, Jack Levin magn
Depends on the load. We have a huge cluster running: 4 x 2 TB disks,
Core 2 Duo 2.5 GHz, 8 GB RAM, 60 nodes, used mostly for
binary cold storage of photos, with very low access rates and
moderate write rates.
The second cluster is Core i7 quad (hyperthreaded) 3.0 GHz, with 16GB
RAM, 4x2TB
That would be a really nice feature though; imagine going to the shell
and requesting a dump of your table:
load into outfile/hdfs '/tmp/table_a' from table_a
Or something similar.
load from outfile/hdfs -- could power bulk uploader
-Jack
On Mon, Jun 6, 2011 at 9:09 PM, Jack Levin magn
...@duboce.net wrote:
On Tue, Jun 7, 2011 at 10:20 AM, Jack Levin magn...@gmail.com wrote:
That would be a real nice feature though, imagine going to the shell,
and requesting a dump of your table.
But it would only work for mickey mouse tables, no? If your
input/output is of any substantial size
Hey Guys, I plan to do a tech talk here at ImageShack on how we store
and serve about 200M images from HBASE. The stats of our cluster are:
60 Region Servers running HBASE
Configured Capacity : 517.44 TB
DFS Used : 286.93 TB
Table count: 1,000
Average file size: 500KB
I am
Hello, does anyone have any tools you could share that would take a
table and dump the contents in TSV text format? We want it in TSV
for quick HIVE processing that we have in another datamining
cluster. We do not want to write custom map-reduce jobs for hbase
because we already have an
need to know the names of your column families, but besides that it
could be done fairly generically.
On Mon, Jun 6, 2011 at 3:57 PM, Jack Levin magn...@gmail.com wrote:
Hello, does anyone have any tools you could share that would take a
table, and dump the contents as TSV text format? We
Can you hook hive to hbase?
Yes, we have used hbase-to-hive and back before, but it's not really flexible,
especially going the hbase -> hive route. We would much prefer a bulk uploader
tool for modified tables via a hive map-reduce of TSV or CSV.
-Jack
I have a feature request: there should be a native function called
'count' that produces a count of rows based on a specific family filter,
that is internal to HBASE and won't be required to read cells off the
disk/cache. Just count up the rows in the most efficient way
possible. I realize that
of the row count as new rows are created is also not as easy as
it seems - this is because a Put does not know if a row already exists
or not. Making it aware of that fact would require doing a get before
a put - not cheap.
-ryan
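(Until something native exists, the usual workaround is a scan that returns as little as possible per row; a minimal sketch with FirstKeyOnlyFilter, which is roughly what the shell's count command and the RowCounter MapReduce job do. Note it still touches every row server-side, which is exactly the cost the feature request wants to avoid.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class RowCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, args[0]);            // table name on the command line
    Scan scan = new Scan();
    scan.setCaching(1000);                               // many rows per RPC
    scan.setFilter(new FirstKeyOnlyFilter());            // one cell per row is enough to count it
    ResultScanner results = table.getScanner(scan);
    long count = 0;
    for (Result r : results) {
      count++;
    }
    results.close();
    table.close();
    System.out.println("rows: " + count);
  }
}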
On Fri, Jun 3, 2011 at 3:20 PM, Jack Levin magn...@gmail.com
Hello, is there a git repo URL I could use to check out that code version?
-Jack
On Thu, May 19, 2011 at 2:35 PM, Stack st...@duboce.net wrote:
The Apache HBase team is happy to announce that HBase 0.90.3 is
available from the Apache mirror of choice:
Wayne, we get CMS failures also, I am pretty sure they are
fragmentation related:
2011-05-26T09:20:00.304-0700: 206371.599: [GC 206371.599: [ParNew
(promotion failed): 76633K->76023K(76672K), 0.0924180 secs]206371.692:
[CMS: 11452308K->7142504K(12202816K), 13.5870310 secs]
It might sound crazy, but if you have plenty of CPU, consider lowering
your NewSize to something like 30MB. If you do that, your ParNews will be more
frequent, but hitting a CMS failure will be less likely; this is what we have
seen.
-Jack
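(For reference, the smaller young generation described above is set with flags along these lines, typically via HBASE_OPTS in hbase-env.sh; 30m is just the figure from this message, so tune it for your own workload.)
-XX:NewSize=30m \
-XX:MaxNewSize=30m \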
On Thu, May 26, 2011 at 10:51 AM, Jack Levin magn...@gmail.com wrote
Just put the new hbase version on our test cluster and have been testing it...
So far, if I shut down an RS, the master does not reassign its regions, and
we remain inconsistent forever; likewise, when a new RS comes up, it does
not get regions assigned to it. This is the master log:
2011-05-24 15:30:57,724 DEBUG
details:
http://hbase.apache.org/book.html#decommission
Dave
On Tue, May 24, 2011 at 3:33 PM, Jack Levin magn...@gmail.com wrote:
just put new hbase version on our test cluster. and been testing it...
so far if I shutdown an RS, master does not reassign its regions, and
we remain inconsistent
img645.prod.imageshack.us and img645.imageshack.us both point to
the same IP.
-Jack
On Tue, May 24, 2011 at 3:50 PM, Jack Levin magn...@gmail.com wrote:
Looks like our balancer is on:
hbase(main):001:0> balance_switch true
true
0 row(s) in 0.3700 seconds
I simply kill the PID for the RS.
... it's gotta be consistent;
otherwise aliases end up screwing things up and people will end up
guessing why things don't work.
-Jack
On Tue, May 24, 2011 at 4:04 PM, Jack Levin magn...@gmail.com wrote:
img645.prod.imageshack.us and img645.imageshack.us both point to
the same IP.
-Jack
On Tue
Then I recommend scratching hostname use in lieu of reverse lookup only.
-Jack
On May 24, 2011, at 5:45 PM, Andrew Purtell apurt...@apache.org wrote:
From: Jack Levin magn...@gmail.com
Figured it out... the /etc/hosts file has the IP-to-name mapping; the name used by
zookeeper was *.prod.imageshack.com, while
which in
turn stores it in ZK.
Also http://hbase.apache.org/book.html#dns
J-D
On Tue, May 24, 2011 at 4:37 PM, Jack Levin magn...@gmail.com wrote:
Figured it out... the /etc/hosts file has the IP-to-name mapping; the name used by
zookeeper was *.prod.imageshack.com, while the hostname was
imgXX.imageshack.us
...@duboce.net wrote:
Are you running at INFO-level logging, Jack? Can you pastebin more log
context? I'd like to take a look.
Thanks,
St.Ack
On Thu, May 19, 2011 at 11:36 PM, Jack Levin magn...@gmail.com wrote:
Thanks, now with setting that value to 2, we still get slow DN death
master recovery
/3 between young/old, so that would be 1gb young, but that
is probably bigger than you need. I'm far from an expert though... what
size do other people use?
Matt
On Tue, May 17, 2011 at 12:55 AM, Jack Levin magn...@gmail.com wrote:
This is the way I read it. Low processors == high CPU
When we change VERSIONS from 3 to 1 in the hbase table schema, things
appear to work right.
On Mon, May 16, 2011 at 12:14 PM, Jean-Daniel Cryans
jdcry...@apache.org wrote:
It doesn't look like you are doing something wrong; also, I looked at
the unit tests and they seem to cover the basic usage
We had issues moving to a 32-core AMD box also. The issue
revolved around the datanode getting slow after about 12 hours. What you
need to do is check the fsreadlatency_ave_time graph; if it appears spiky,
then you have a problem with IO. Next, get a graph of runnable
threads; they should be
What is the clock rate of your CPUs (desktop vs blade)?
-Jack
On Mon, May 16, 2011 at 1:24 PM, Himanish Kushary himan...@gmail.com wrote:
Yes, it is only the HW that was changed. All the configurations are kept at
the defaults from the Cloudera installer.
The regionserver logs seem OK.
On
, May 16, 2011 at 12:44 PM, Stack st...@duboce.net wrote:
What is the size of your new gen? Is it growing? Does it level off?
St.Ack
On Mon, May 16, 2011 at 12:39 PM, Jack Levin magn...@gmail.com wrote:
The heap would be 8G used out of 12G total. The GC log would be
full of ParNew
-hbase.log \
-XX:+CMSIncrementalMode \
-XX:+CMSIncrementalPacing \ ---
-XX:-TraceClassUnloading ---
This way GC statistically adapts, and it lessens the load on the CPU.
-Jack
On Mon, May 16, 2011 at 3:17 PM, Jack Levin magn...@gmail.com wrote:
hlog rolled, and qps was at 1000
GC previous?
ParNew is much bigger now.
St.Ack
On Mon, May 16, 2011 at 4:11 PM, Jack Levin magn...@gmail.com wrote:
I think this will resolve my issue, here is the output:
14 2011-05-16T15:58
13 2011-05-16T15:59
12 2011-05-16T16:00
14 2011-05-16T16:01
14 2011-05
the chance of stop-the-world GC and should be avoided.
- Andy
(who always gets nervous when we start talking about GC black magic)
From: Jack Levin magn...@gmail.com
Subject: Re: GC and High CPU
To: user@hbase.apache.org
Date: Monday, May 16, 2011, 5:06 PM
Those are the lines I added
the GC logs you show.
See below:
On Mon, May 16, 2011 at 5:06 PM, Jack Levin magn...@gmail.com wrote:
Those are the lines I added:
-XX:+CMSIncrementalMode \
From the doc, it says about i-CMS and Java 6: This feature is useful
when applications that need the low pause times provided
to be waiting on appends to log4j.
-Todd
On Mon, May 2, 2011 at 7:29 PM, Jack Levin magn...@gmail.com wrote:
As requested:
http://pastebin.com/aySaTADp
Note, blocked threads.
-Jack
On Mon, May 2, 2011 at 2:39 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
I think Todd was asking to have
Levin magn...@gmail.com wrote:
my yourkit version expired :)... but here is the jstack when it
happens: http://pastebin.com/5v6mHg3t
On Mon, May 2, 2011 at 1:00 PM, Todd Lipcon t...@cloudera.com wrote:
On Mon, May 2, 2011 at 12:56 PM, Jack Levin magn...@gmail.com wrote:
Tried removing yourkit
I took a jstack (http://pastebin.com/5v6mHg3t). After a few hours, it
literally staggers to a halt and gets very, very slow... Any ideas
what it's blocking on?
(The main issue is that fs reads for the RS get really slow when that happens.)
Version: 0.20.2+320 hdfs
.89 HBASE
ulimit is 32k
xcievers is 5k
Note from the jstack, I am not exceeding xcievers.
-Jack
On Sun, May 1, 2011 at 6:19 PM, Michael Segel michael_se...@hotmail.com wrote:
What's your xceivers set to?
What's the ulimit -n set for hdfs/hadoop user...
, 2011 at 12:47 AM, Jack Levin magn...@gmail.com wrote:
Shouldn't the RS just shut down then? Because it stays half alive and
none of the puts succeed. Also, the OOME happens right after a
flush/compaction/split... so clearly the RS was busy, and it could be
just a matter of hitting the heap ceiling
memory could be used by
the queues.
J-D
On Mon, Apr 25, 2011 at 10:32 AM, Jack Levin magn...@gmail.com wrote:
Stack:
Exception in thread pool-1-thread-9 java.lang.OutOfMemoryError: Java
heap space
at
org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation.readFields(HBaseRPC.java:120
On Thu, Apr 21, 2011 at 12:21 AM, Stack st...@duboce.net wrote:
On Wed, Apr 20, 2011 at 3:45 PM, Jack Levin magn...@gmail.com wrote:
Hello -- we have an issue that looks like this. We have a PHP app
front end and Thrift servers that live on separate boxes away from the
HBASE cluster. Every time we
threads
done by the hosting executorservice.
St.Ack
On Wed, Apr 20, 2011 at 10:11 PM, Jack Levin magn...@gmail.com wrote:
Hello, with 0.89 HBASE we see the following: all REST servers get
locked up trying to connect to one of our RS servers. The error in the
.out file on that Region Server
, Apr 20, 2011 at 5:15 PM, Jack Levin magn...@gmail.com wrote:
How does 'get ave. time' differ from 'read average time'? What is
the definition thereof?
-Jack
Hello -- we have an issue that looks like this. We have a PHP app
front end and Thrift servers that live on separate boxes away from the
HBASE cluster. Every time we do a compaction on one of our 8 RS servers,
we cause a thread pile-up on the Thrift servers that delays _all_ queries
to HBASE. Our usual
How does 'get ave. time' differ from 'read average time'? What is
the definition thereof?
-Jack
Hello, with 0.89 HBASE we see the following: all REST servers get
locked up trying to connect to one of our RS servers. The error in the
.out file on that Region Server looks like this:
Exception in thread pool-1-thread-3 java.lang.OutOfMemoryError: Java
heap space
at
In some cases it's important to bring hbase up after an hdfs crash without
recovering hlogs first. Is it possible to have a tool that just takes the
hlogs and replays them after HBASE has already been started?
-Jack