Hi, the problem is gone.
I did what you said :)
Thanks!
2014-12-13 22:38 GMT+03:00 Serega Sheypak serega.shey...@gmail.com:
Great, I'll refactor the code and report back.
2014-12-13 22:36 GMT+03:00 Stack st...@duboce.net:
On Sat, Dec 13, 2014 at 11:33 AM, Serega Sheypak
I've done a bit of digging and hope someone can shed some light on my
particular issue. One and only one of my region servers, after each restart,
is randomly plagued with a single maxed-out CPU core and a read-request
chart registering around 40k read requests per second. The remaining 13
dance
Sounds like your access patterns are not well balanced; you have a
hotspot. Have a look at the metrics emitted from that machine. It will tell
you which region is winning the popularity contest.
On Monday, December 15, 2014, uamadman uamadm...@gmail.com wrote:
I've done a bit of digging and
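Concretely, one way to spot the region winning the popularity contest is to pull the region server's JMX metrics dump and look for the read-request counters. A minimal sketch, assuming the 0.94-era default info port 60030 and a hypothetical host `rs-host`; the inline JSON sample stands in for real server output so the full pipeline can be shown:

```shell
# Filter a region server metrics dump for read-request counters.
# Against a live cluster you would fetch the dump with:
#   curl -s http://rs-host:60030/jmx    (60030 = 0.94-era default info port)
# Here a tiny inline sample stands in for the real JSON output.
cat <<'EOF' | grep -o '"readRequestsCount": *[0-9]*'
{"beans":[{"name":"hadoop:service=RegionServer","readRequestsCount": 40231}]}
EOF
```

If you prefer not to parse JSON, the hbase shell's `status 'detailed'` prints similar per-region request counts.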
On January 15th, we're meeting at AppDynamics in San Francisco. We have
some nice talks lined up [1]. On Feb 17th, let's meet around Strata+Hadoop
World in San Jose. If you are interested in hosting or speaking, write the
organizers.
Thanks,
St.Ack
1.
Over the past few months the rate of change into 0.94 has slowed
significantly.
0.94.25 was released on Nov 15th, and since then we have had only 4 changes.
This could mean one of two things: (1) 0.94 is very stable now, or (2) nobody
is using it (at least nobody is contributing to it anymore).
If
Excellent! Should be quite a bit faster too.
-- Lars
From: Serega Sheypak serega.shey...@gmail.com
To: user user@hbase.apache.org
Cc: lars hofhansl la...@apache.org
Sent: Monday, December 15, 2014 5:57 AM
Subject: Re: HConnectionManager leaks with zookeeper connection: too many
given that CDH4 is hbase 0.94, I don't believe nobody is using it. for our
clients the majority is on 0.94 (versus 0.96 and up).
so I am going with (1), it's very stable!
On Mon, Dec 15, 2014 at 1:53 PM, lars hofhansl la...@apache.org wrote:
Over the past few months the rate of the change into
was: 200-300 ms per request
now: 80 ms per request
request = full trip from servlet to HBase and back to response.
2014-12-15 22:40 GMT+03:00 lars hofhansl la...@apache.org:
Excellent! Should be quite a bit faster too.
-- Lars
From: Serega Sheypak serega.shey...@gmail.com
To: user
This is the table that stores information about all the tables. It is
normal when a cluster is recovering for reads to be high on this table
while all the table information is being loaded into the regionservers.
http://hbase.apache.org/book/arch.catalog.html
-Pere
On Mon, Dec 15, 2014 at 12:21
Hi Lars,
Thanks for bringing this up for discussion. From my experience I can tell that
0.94 is very stable, but that shouldn't be a blocker to consider EOL'ing it.
Are you considering any specific timeframe for that?
thanks,
esteban.
--
Cloudera, Inc.
On Mon, Dec 15, 2014 at 11:46 AM, Koert
Meta is getting pegged? Sounds like your client applications are not being
friendly. Are you reusing cluster configurations? You should have one per
process for its lifetime. Basically, how often are you calling
HConnectionFactory.createConnection() ?
On Mon, Dec 15, 2014 at 12:30 PM, Pere Kyle
Minor correction:
HConnectionFactory is in the upcoming 1.0 release.
How often is HConnectionManager.createConnection() called ?
Cheers
On Mon, Dec 15, 2014 at 3:36 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Meta is getting pegged? Sounds like your client applications are not being
friendly.
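The one-connection-per-process advice above can be sketched as a process-wide holder. In a real 0.94-era client, the `get()` body would call `HConnectionManager.createConnection(conf)` exactly once and every servlet or thread would reuse the returned `HConnection`; a plain placeholder type stands in here so the sketch stays self-contained:

```java
// Sketch of the "one connection per process" pattern discussed above.
// Conn is a placeholder for org.apache.hadoop.hbase.client.HConnection.
public final class SharedConnection {
    public static final class Conn {}

    private static volatile Conn instance;

    private SharedConnection() {}

    // Lazily create the single process-wide connection; double-checked
    // locking keeps creation thread-safe without locking on every call.
    public static Conn get() {
        if (instance == null) {
            synchronized (SharedConnection.class) {
                if (instance == null) {
                    // Real code: instance = HConnectionManager.createConnection(conf);
                    instance = new Conn();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // Every caller sees the same connection object.
        System.out.println(SharedConnection.get() == SharedConnection.get());
    }
}
```

Calling `createConnection()` per request is what leaks zookeeper connections; with this shape it is called once for the process lifetime.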
I am trying to import data into an HBase table and tried the following as an
example:
bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
-Dimporttsv.columns=HBASE_ROW_KEY,b,c datatsv hdfs://data.tsv
This command complains about data.tsv not existing in HDFS, but when I run
hadoop fs -ls, I do
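One likely culprit in the command above is the URI itself: `hdfs://data.tsv` parses as scheme `hdfs` with authority `data.tsv` and an empty path, so the client looks for a namenode called `data.tsv` rather than a file. A sketch of the usual shape, where the namenode host/port and HDFS paths are assumptions to adjust for your cluster:

```shell
# A full HDFS URI needs a namenode authority and an absolute path, e.g.:
#
#   hadoop fs -put data.tsv /user/me/data.tsv
#   bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
#     '-Dimporttsv.columns=HBASE_ROW_KEY,b,c' \
#     datatsv hdfs://namenode:8020/user/me/data.tsv
#
# Quick shape check: does a URI carry both an authority and a path?
check_uri() {
  case "$1" in
    hdfs://*/*) echo "ok: authority and path present" ;;
    *)          echo "bad: missing authority or path" ;;
  esac
}
check_uri "hdfs://data.tsv"
check_uri "hdfs://namenode:8020/user/me/data.tsv"
```

A relative path (just `data.tsv`, resolved against the default filesystem and your HDFS home directory) also works if the file was uploaded there.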
Looking for guidance on how to do a zero downtime upgrade from 0.94 to 0.98
(or 1.0 if it launches soon). As soon as we can figure this out, we will
migrate over.
On Mon, Dec 15, 2014 at 1:37 PM, Esteban Gutierrez este...@cloudera.com
wrote:
Hi Lars,
Thanks for bringing this up for discussion.
Zero downtime upgrade from 0.94 won't be possible. See
http://hbase.apache.org/book.html#d0e5199
On Mon, Dec 15, 2014 at 4:44 PM, Jeremy Carroll phobos...@gmail.com wrote:
Looking for guidance on how to do a zero downtime upgrade from 0.94 to 0.98
(or 1.0 if it launches soon). As soon as we
Which is why I feel that a lot of customers are still on 0.94. Pretty much
trapped unless you want to take downtime for your site. Any type of
guidance would be helpful. We are currently in the process of designing our
own system to deal with this.
On Mon, Dec 15, 2014 at 4:47 PM, Andrew Purtell
Does replication and snapshot export work from 0.94.6+ to a 0.96 or 0.98
cluster?
Presuming it does, shouldn't a site be able to use a multiple cluster set
up to do a cut over of a client application?
That doesn't help with needing downtime to do the eventual upgrade, but
it mitigates the
Thanks, Lars. We have customers still using 0.94. It is indeed stable now.
Jieshan.
From: Sean Busbey [bus...@cloudera.com]
Sent: Tuesday, December 16, 2014 9:04 AM
To: user
Subject: Re: 0.94 going forward
Does replication and snapshot export work from
Replication from 0.94 to 0.96+ won't work out of the box unless you use a
bridge: HBASE-9360. Maybe before EOL'ing 0.94 we should ship it as part of
0.94.
cheers,
esteban.
--
Cloudera, Inc.
On Mon, Dec 15, 2014 at 5:23 PM, Bijieshan bijies...@huawei.com wrote:
Thanks, Lars. We have customers
Nope :( Replication uses RPC and that was changed to protobufs. AFAIK snapshots
can also not be exported from 0.94 to 0.98. We have a really shitty story
here.
From: Sean Busbey bus...@cloudera.com
To: user user@hbase.apache.org
Sent: Monday, December 15, 2014 5:04 PM
Subject: Re: 0.94
Yep. That's also why I've been doing 0.94 releases all this time. 0.92 had a
no-downtime path to 0.94. And 0.96 had a no-downtime path to 0.98. So both
could be EOL'ed with relatively little annoyance. 0.94 is different, as going to
0.96 or later (including 0.98) is a big change and requires downtime.