Hi James,
I spoke to my manager and he is fine with the idea of giving the talk. Now he
is going to ask higher management for final approval. I am assuming there is still
a slot for my talk in the use case section, so I should go ahead with my approval
process. Correct?
Thanks,
Anil Gupta
We invite you to attend the inaugural PhoenixCon on Wed, May 25th 9am-1pm
(the day after HBaseCon) hosted by Salesforce.com in San Francisco. There
will be two tracks: one for use cases and one for internals. Drop me a note
if you're interested in giving a talk. To RSVP and for more details, see
That is interesting. Would it be possible for you to share what GC settings
you ended up with that gave you the most predictable performance?
Thanks.
Saad
On Tue, Apr 26, 2016 at 11:56 AM, Bryan Beaudreault <
bbeaudrea...@hubspot.com> wrote:
> We were seeing this for a while with our CDH5
I'm looking forward to your talk, Vlad.
In the meantime, I filed HBASE-15712. We'll get our implementation posted
up there. We have these deployed on one of the masters, running daily with
cron.
@Mikhail, to get this feature into the normalizer, how about this: let's
add a min number of regions
We were seeing this for a while with our CDH5 HBase clusters too. We
eventually correlated it very closely with GC pauses. By heavily tuning
our GC we were able to drastically reduce the log spam, keeping most GCs
under 100ms.
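For anyone curious, a starting point for that kind of tuning usually lives in conf/hbase-env.sh. The flags and values below are purely illustrative assumptions (the poster did not share their actual settings); they sketch a G1 setup aimed at short pauses:

```shell
# Illustrative G1GC options for a regionserver, added to conf/hbase-env.sh.
# All values here are assumptions for the sketch -- tune against your own GC logs.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100 \
  -XX:InitiatingHeapOccupancyPercent=65 \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"
```

MaxGCPauseMillis is a target, not a guarantee; the GC log output is what tells you whether pauses actually stay under the threshold that triggers the slow-sync warnings.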
On Tue, Apr 26, 2016 at 6:25 AM Saad Mufti
Please see HBASE-4298 where this feature was introduced.
On Tue, Apr 26, 2016 at 5:12 AM, WangYQ wrote:
> yes, there is a tool graceful_stop.sh to graceful stop regionserver, and
> can move the regions back to the rs after rs come back.
> but i can not find the
Yes, there is a tool, graceful_stop.sh, to gracefully stop a regionserver, and it
can move the regions back to the rs after the rs comes back.
But I cannot find the relation with drain region servers...
I think the drain region servers function is good, but I cannot think of a
practical use case.
From what I can see in the source code, the default is actually even lower
at 100 ms (can be overridden with hbase.regionserver.hlog.slowsync.ms).
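For reference, overriding that threshold is a plain hbase-site.xml property; the value below is just an example, not a recommendation:

```xml
<!-- hbase-site.xml: raise the WAL slow-sync warning threshold.
     500 ms is an arbitrary example value. -->
<property>
  <name>hbase.regionserver.hlog.slowsync.ms</name>
  <value>500</value>
</property>
```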
Saad
On Tue, Apr 26, 2016 at 3:13 AM, Kevin Bowling
wrote:
> I see similar log spam while system has reasonable
One of the use cases we use it for is graceful stop of a regionserver - you unload
regions from the server before you restart it. Of course, after restart you
expect HBase to move the regions back.
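The unload/restart flow described above is roughly the following invocation (the hostname is a placeholder, and flag behavior can vary by HBase version, so check `bin/graceful_stop.sh --help` on your install):

```shell
# Sketch of a graceful regionserver restart; rs1.example.com is a placeholder.
# --restart brings the regionserver back up after its regions are unloaded;
# --reload moves the unloaded regions back onto it afterwards.
bin/graceful_stop.sh --restart --reload rs1.example.com
```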
Now I don't remember exactly, but I do recall that one of
the features was at least that it will
I see similar log spam while the system has reasonable performance. Was the
250ms default chosen with SSDs and 10GbE in mind or something? I guess I'm
surprised a sync write passing several times through JVMs to 2 remote datanodes
would be expected to consistently happen that fast.
Regards,
On Mon, Apr 25,
thanks
In HBase 0.99.0, I found the rb file draining_servers.rb.
I have some suggestions on this tool:
1. If I add rs hs1 to draining_servers, when hs1 restarts, the zk node still
exists in zk, but HMaster will not treat hs1 as a draining server.
I think when we add a hs to draining_servers,
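For anyone following along, the script is run through the HBase JRuby runner with add/remove/list actions (hs1 below is a placeholder server name, matching the example in the mail):

```shell
# Mark hs1 as draining so the master stops assigning new regions to it.
# hs1 is a placeholder; the script ships in the HBase bin directory.
hbase org.jruby.Main bin/draining_servers.rb add hs1

# Show servers currently marked as draining.
hbase org.jruby.Main bin/draining_servers.rb list

# Clear the mark, e.g. once maintenance on hs1 is finished.
hbase org.jruby.Main bin/draining_servers.rb remove hs1
```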