On Fri, Jun 28, 2013 at 9:31 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
> It's not random, it picks the region with the most data in its memstores.
That's weird, because I see some of my regions which receive the least
amount of data in a given time period flushing before the regions ...
Viral,
Basically, when you increase the memstore flush size (your aim there is to
reduce flushes and make data sit in memory for a longer time), you need to
carefully consider two things:
1. What is the max heap, and what percentage of memory you have allocated at
most for all the memstores in a RS.
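For reference, these are the two knobs in play here; a minimal hbase-site.xml sketch (the values shown are the 0.94-era defaults, for illustration, not recommendations):

```xml
<!-- Per-memstore flush threshold: a memstore is flushed once it holds this
     many bytes (128 MB here). -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value>
</property>
<!-- Global cap: fraction of RS heap all memstores together may use before
     writes are blocked / flushes are forced regardless of per-region size. -->
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.4</value>
</property>
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.35</value>
</property>
```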
Hi All,
I wanted some help on understanding what's going on with my current setup.
I updated my config to the following settings:

<property>
  <name>hbase.hregion.max.filesize</name>
  <value>107374182400</value>
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>...</value>
</property>
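As a quick sanity check, the hbase.hregion.max.filesize value above works out to exactly 100 GB expressed in bytes:

```python
# 107374182400 bytes == 100 * 1024^3, i.e. a 100 GB max region size
max_filesize_bytes = 100 * 1024 ** 3
print(max_filesize_bytes)  # 107374182400
```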
> the flush size is at 128m and there is no memory pressure
You mean there is enough memstore-reserved heap in the RS, so that there
won't be premature flushes because of global heap pressure? What is the RS
max mem, and how many regions and CFs in each? Can you check whether the
flushes happening ...
Thanks for the quick response Anoop.
The current memstore reserve (IIRC) would be 0.35 of the total heap, right?
The RS total heap is 10231MB, used is at 5000MB. Total number of regions is
217, and there are approx 150 regions with 2 families, ~60 with 1 family and
the remaining with 3 families.
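A back-of-the-envelope check with these numbers (assuming the 0.35 global memstore fraction mentioned above) shows why memstores can be forced to flush well before any of them reaches the 128 MB flush size:

```python
heap_mb = 10231            # RS total heap from the numbers above
memstore_fraction = 0.35   # assumed global memstore fraction of heap
regions = 217              # total regions on this RS
flush_size_mb = 128        # configured memstore flush size

global_limit_mb = heap_mb * memstore_fraction
per_region_mb = global_limit_mb / regions          # even share per region
full_memstores = global_limit_mb / flush_size_mb   # 128 MB memstores that fit

print(round(global_limit_mb))   # 3581 -> ~3.5 GB reserved for all memstores
print(round(per_region_mb, 1))  # 16.5 -> MB per region if spread evenly
print(int(full_memstores))      # 27 -> memstores that can reach 128 MB at once
```

So under a write burst, the global limit can trip long before any single memstore reaches 128 MB, which matches the premature-flush behavior being discussed.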
How to ...
From: Viral Bajaria [viral.baja...@gmail.com]
Sent: June 27, 2013 16:18
To: user@hbase.apache.org
Subject: Re: flushing + compactions after config change
Thanks Liang!
Found the logs. I had gone overboard with my greps and missed the "Too
many hlogs" line for the regions that I was trying to debug.
A few sample log lines:
2013-06-27 07:42:49,602 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Too many hlogs: logs=33, maxlogs=32; forcing flush
The config hbase.regionserver.maxlogs specifies the max number of WAL files
and defaults to 32. But remember, if there are that many log files to replay,
the MTTR will become longer (RS-down case).
-Anoop-
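If the "Too many hlogs" forced flushes are the bottleneck, one option is to raise that limit in hbase-site.xml; a sketch only (the value 64 is illustrative; the right number depends on how much MTTR you can tolerate, per Anoop's point above):

```xml
<!-- Allow more WAL files to accumulate before forcing memstore flushes.
     Trade-off: more logs to replay after a region server failure = longer MTTR. -->
<property>
  <name>hbase.regionserver.maxlogs</name>
  <value>64</value>
</property>
```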
On Thu, Jun 27, 2013 at 1:59 PM, Viral Bajaria viral.baja...@gmail.com wrote:
> Thanks Liang!
0.94.4 with plans to upgrade to the latest 0.94 release.
On Thu, Jun 27, 2013 at 2:22 AM, Azuryy Yu azury...@gmail.com wrote:
> hey Viral,
> Which hbase version are you using?
> Can you paste your JVM options here? And do you have an extensive write load
> on your hbase cluster?
I do have a heavy write operation going on. Actually, heavy is relative. Not
all tables/regions are seeing the same amount of writes at the same time.
There is definitely a burst of writes that can happen on some regions. In
addition to that, there are some processing jobs which play catch-up and ...
btw, don't use CMSIncrementalMode; iirc, it has been removed from hotspot
upstream actually.
From: Viral Bajaria [viral.baja...@gmail.com]
Sent: June 27, 2013 18:08
To: user@hbase.apache.org
Subject: Re: Reply: flushing + compactions after config change
> I do have ...
your JVM options are not enough. I will give you some details when I get back
to the office tomorrow.
--Sent from my Sony mobile.
On Jun 27, 2013 6:09 PM, Viral Bajaria viral.baja...@gmail.com wrote:
> I do have a heavy write operation going on. Actually heavy is relative. Not
> all tables/regions are ...
Thanks Azuryy. Look forward to it.
Does DEFERRED_LOG_FLUSH impact the number of WAL files that will be created?
Tried looking around but could not find the details.
On Thu, Jun 27, 2013 at 7:53 AM, Azuryy Yu azury...@gmail.com wrote:
> your JVM options are not enough. I will give you some detail ...
No, all your data eventually makes it into the log, just potentially
not as quickly :)
J-D
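For context, DEFERRED_LOG_FLUSH is a per-table attribute in 0.94 that batches WAL syncs rather than skipping them, which is consistent with J-D's point that the data still reaches the log. A sketch of toggling it from the hbase shell, from memory and worth double-checking against your exact version ('mytable' is a hypothetical table name):

```ruby
# hbase shell: set the deferred-log-flush table attribute
alter 'mytable', METHOD => 'table_att', DEFERRED_LOG_FLUSH => true
```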
On Thu, Jun 27, 2013 at 2:06 PM, Viral Bajaria viral.baja...@gmail.com wrote:
> Thanks Azuryy. Look forward to it.
> Does DEFERRED_LOG_FLUSH impact the number of WAL files that will be created?
> Tried ...
Hey JD,
Thanks for the clarification. I also came across a previous thread which
sort of talks about a similar problem.
http://mail-archives.apache.org/mod_mbox/hbase-user/201204.mbox/%3ccagptdnfwnrsnqv7n3wgje-ichzpx-cxn1tbchgwrpohgcos...@mail.gmail.com%3E
I guess my problem is also similar to ...
Hi Viral,
the following are all needed for CMS:
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=70
-XX:+UseCMSCompactAtFullCollection
-XX:CMSFullGCsBeforeCompaction=0
-XX:+CMSClassUnloadingEnabled
-XX:CMSMaxAbortablePrecleanTime=300
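These flags would typically be wired into hbase-env.sh on the region servers; a sketch only, using the standard HBASE_OPTS variable (the flag set itself is Azuryy's suggestion above, not a verified recommendation):

```sh
# hbase-env.sh: append the CMS GC settings to the JVM options for HBase daemons
export HBASE_OPTS="$HBASE_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 \
  -XX:+CMSClassUnloadingEnabled -XX:CMSMaxAbortablePrecleanTime=300"
```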