Hi,
Are you trying to do parallel scans? If yes, check the time taken for GC and
the number of calls that can be served at your endpoint.
Best Regards
N.Hari Kumar
On Tue, Sep 11, 2012 at 8:22 AM, Dhirendra Singh dps...@gmail.com wrote:
I tried with a smaller caching, i.e. 10, and it failed again.
For GC monitoring, add the following to hbase-env.sh:

export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

and then view the resulting gc-hbase.log with a tool like GCViewer, or use
VisualVM to look at your GC consumption.
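If you also want a quick number from inside the scan client itself (to rule
out client-side GC pauses) without attaching VisualVM, a small helper using the
standard java.lang.management beans is enough. Just a sketch; the class name
GcStats is made up:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Hypothetical helper: call it from the JVM that runs the scan,
// e.g. every few thousand rows, and watch whether the cumulative
// collection time jumps between calls.
public final class GcStats {
    public static void log() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                + ": collections=" + gc.getCollectionCount()
                + ", totalGcTimeMs=" + gc.getCollectionTime());
        }
    }

    public static void main(String[] args) {
        log();
    }
}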
./hari
For pretty graphs with JVM GC info + system + HBase metrics, you could also
easily hook up SPM to your cluster. See the URL in my signature.
Otis
--
Performance Monitoring - http://sematext.com/spm
On Sep 11, 2012 6:30 AM, HARI KUMAR harikum2...@gmail.com wrote:
For GC Monitoring, Add Parameters
Could someone please clarify: when I say caching 100 (or any other number),
where does this actually happen, on the server (cluster) or on the client? If I
assume it happens on the cluster, is this ScannerTimeout then caused by the
caching, because the server might have run out of memory and hence is not able
to respond?
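To be concrete, this is all I do on the client side. Just a sketch: the table
name is made up and the per-row processing is left out.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class CachingQuestion {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table"); // placeholder table name

        Scan scan = new Scan();
        // The only place caching appears is here, on the client-side Scan;
        // what I don't know is whether those 100 rows are buffered on the
        // region server or shipped to the client in one next() RPC.
        scan.setCaching(100);

        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                // per-row processing goes here
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}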
Subject: Re: Getting ScannerTimeoutException even after several calls in the
specified time limit
On Mon, Sep 10, 2012 at 10:13 AM, Dhirendra Singh dps...@gmail.com wrote:
I am facing this exception while iterating over a big table; by default I have
specified caching as 100. I am getting the below exception, even though I
checked that several calls were made to the scanner before it occurred.
I tried with a smaller caching, i.e. 10, and it failed again. No, it is not
really a big cell. This small cluster (4 nodes) is only used for HBase, and I
am currently using hbase-0.92.1-cdh4.0.1. Could you let me know how I could
debug this issue?
Caused by:
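One thing I am planning to try is to time how long each cached batch of rows
takes to process, since (as far as I understand) the scanner lease on the
region server is only renewed when the client fetches the next batch. A rough
sketch below; the table name and per-row work are placeholders, and I am
assuming that on 0.92 the client reads the timeout from
hbase.regionserver.lease.period (default 60000 ms).

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScanBatchTiming {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Assumption: on 0.92 this is the period the client compares against.
        long leaseMs = conf.getLong("hbase.regionserver.lease.period", 60000);

        int caching = 100;
        HTable table = new HTable(conf, "my_table"); // placeholder table name
        Scan scan = new Scan();
        scan.setCaching(caching);

        ResultScanner scanner = table.getScanner(scan);
        long batchStart = System.currentTimeMillis();
        long worstBatchMs = 0;
        long rows = 0;
        try {
            for (Result r : scanner) {
                // ... real per-row processing here ...
                rows++;
                // A new RPC (and lease renewal) happens roughly every 'caching'
                // rows, so the time to get through one batch is what has to
                // stay below the lease period.
                if (rows % caching == 0) {
                    long now = System.currentTimeMillis();
                    worstBatchMs = Math.max(worstBatchMs, now - batchStart);
                    batchStart = now;
                }
            }
        } finally {
            scanner.close();
            table.close();
        }
        System.out.println("rows=" + rows + ", worst batch ~" + worstBatchMs
            + " ms, lease period " + leaseMs + " ms");
    }
}

That should at least show whether one batch ever takes anywhere near the lease
period.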