> -Xms4000m -Xmx4000m -Xmn256m
> -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=85
> -XX:+AggressiveOpts -XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails
> -XX:+PrintGCTimeStamps -Xloggc:/mnt/hbase/logs/hbase-regionserver-gc.log"
>
> YMMV, but hope that helps.
>
Best regards,
- Andy
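(For reference: flags like these usually go into conf/hbase-env.sh. A
minimal sketch, assuming the per-daemon HBASE_REGIONSERVER_OPTS hook; on
versions without it, HBASE_OPTS works as well. Sizes and the log path
are taken from Andy's post, not recommendations:)

  # conf/hbase-env.sh -- sketch only, adjust sizes and paths to your machines
  export HBASE_REGIONSERVER_OPTS="-Xms4000m -Xmx4000m -Xmn256m \
    -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=85 \
    -XX:+AggressiveOpts -XX:+UseCompressedOops -verbose:gc \
    -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
    -Xloggc:/mnt/hbase/logs/hbase-regionserver-gc.log"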
--- On Thu, 11/18/10, Michael Segel wrote:
> From: Michael Segel
> Subject: RE: Xceiver problem
>
a soft 64K and a hard 128K.
YMMV
-Mike
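(Limits like those are typically raised in /etc/security/limits.conf; a
minimal sketch, assuming the daemons run as a user named "hadoop", which
is a placeholder, with soft 64K and hard 128K as Mike describes. The
daemons need to log in again for the new limits to take effect:)

  # /etc/security/limits.conf -- "hadoop" is a placeholder user name
  hadoop  soft  nofile  65536
  hadoop  hard  nofile  131072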
> Date: Wed, 17 Nov 2010 23:02:41 +0100
> Subject: Re: Xceiver problem
> From: lars.geo...@gmail.com
> To: user@hbase.apache.org
>
> That is what I was also thinking about, thanks for jumping in Todd.
>
> I was simply not sure if that is just on .27 or all after that one and
> the defaults have never been increased.
You haven't answered all questions yet :) Are you running this on EC2?
What instance types?
On Thu, Nov 18, 2010 at 12:12 AM, Lucas Nazário dos Santos
wrote:
> It seems that newer Linux versions don't have the
> file /proc/sys/fs/epoll/max_user_instances, but instead
> /proc/sys/fs/epoll/max_user_watches. I'm not quite sure about what to do.
It seems that newer Linux versions don't have the
file /proc/sys/fs/epoll/max_user_instances, but instead
/proc/sys/fs/epoll/max_user_watches. I'm not quite sure about what to do.
Can I favor max_user_watches over max_user_instances? With what value?
I also tried to play with the Xss argument and
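(One way to inspect and raise the epoll watch limit on such a kernel; a
sketch only, and the 1048576 value is purely illustrative, pick
something comfortably above whatever "cat" reports:)

  # see what the kernel currently allows
  cat /proc/sys/fs/epoll/max_user_watches
  # raise it for the running system (illustrative value)
  sudo sysctl -w fs.epoll.max_user_watches=1048576
  # persist it across reboots
  echo "fs.epoll.max_user_watches = 1048576" | sudo tee -a /etc/sysctl.conf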
That is what I was also thinking about, thanks for jumping in Todd.
I was simply not sure if that is just on .27 or all after that one and
the defaults have never been increased.
On Wed, Nov 17, 2010 at 8:24 PM, Todd Lipcon wrote:
> On that new of a kernel you'll also need to increase your epoll limit.
On that new of a kernel you'll also need to increase your epoll limit. Some
tips about that here:
http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/
Thanks
-Todd
On Wed, Nov 17, 2010 at 9:10 AM, Lars George wrote:
> Are you running on EC2? Couldn't you simply up the heap size for the
> java processes?
Are you running on EC2? Couldn't you simply up the heap size for the
java processes?
I do not think there is a hard and fast rule to how many xcievers you
need; trial and error is common. Or if you have enough heap, simply set
it high, like 4096, and that usually works fine. It all depends on
ho
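(For reference, that setting is dfs.datanode.max.xcievers, the
misspelling is part of the property name, and it goes into the
datanodes' hdfs-site.xml; a sketch using Lars' suggested 4096, and the
datanodes need a restart for it to take effect:)

  <!-- conf/hdfs-site.xml on each datanode -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>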
I'm using Linux, the Amazon beta version that they recently released. I'm
not very familiar with Linux, so I think the kernel version
is 2.6.34.7-56.40.amzn1.x86_64. Hadoop version is 0.20.2 and HBase version
is 0.20.6. Hadoop and HBase have 2 GB of heap each and they are not swapping.
Besides all other q
Hi Lucas,
What OS are you on? What kernel version? What is your Hadoop and HBase
version? How much heap do you assign to each Java process?
Lars
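(A quick way to collect those answers; a sketch assuming stock
command-line tools on the PATH, and the release files vary by
distribution:)

  uname -r                  # kernel version
  cat /etc/*release*        # OS/distribution, where such files exist
  hadoop version            # Hadoop build
  hbase version             # HBase build
  free -m                   # memory and swap in use
  ps -ef | grep '[j]ava'    # running JVMs, shows the -Xmx each was given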
On Wed, Nov 17, 2010 at 3:05 PM, Lucas Nazário dos Santos
wrote:
> Hi,
>
> This problem is widely known, but I'm not able to come up with a decent
> solution for it.
Hi,
This problem is widely known, but I'm not able to come up with a decent
solution for it.
I'm scanning 1,000,000+ rows from one table in order to index their content.
Each row is around 100 KB. The problem is that I keep getting the
exception:
Exception in thread "org.apache.hadoop.dfs.datano