Can you tell who's doing it? You could enable IPC debug for a few secs
to see who's coming in with scans.
You could also try disabling pre-fetching by setting hbase.client.prefetch.limit to 0.
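For reference, that prefetch limit is a client-side setting; a sketch of what the client's hbase-site.xml entry would look like (assuming the property name above is the one in effect for this version):

```xml
<!-- client-side hbase-site.xml: disable region location pre-fetching -->
<property>
  <name>hbase.client.prefetch.limit</name>
  <value>0</value>
</property>
```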
Also, is it even causing a problem, or are you just worried it might
since it doesn't look normal?
J-D
On
To run a single test, you should use the following command:
mvn test -PrunAllTests -DfailIfNoTests=false -Dtest=xxx
I ran TestColumnRangeFilter using tip of 0.94 code base and it passed.
Did you use the tip of 0.94?
Cheers
On Mon, Jul 29, 2013 at 10:32 AM, Premal Shah
Hi folks,
We are seeing an issue with hbase 0.94.3 on CDH 4.2.0 with excessive .META.
reads...
In the steady state where there are no client crashes and there are no
region server crashes/region movement, the server holding .META. is serving
an incredibly large # of read requests on the .META.
Questions about unit tests:
1) I ran this to execute all tests in the filter package - mvn test
-Dtest=org.apache.hadoop.hbase.filter.*
The ColumnRangeFilter test fails with this error
---
Test set:
It could be HBASE-6870?
On Mon, Jul 29, 2013 at 7:37 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Can you tell who's doing it? You could enable IPC debug for a few secs
to see who's coming in with scans.
You could also try disabling pre-fetching by setting
hbase.client.prefetch.limit to 0.
I think Steve forgot to reply to user@hbase.
On Mon, Jul 29, 2013 at 10:11 AM, Steve Loughran
steve.lough...@gmail.com wrote:
I'm somewhere on safari in Tanzania right now, so all I have to add is
http://steveloughran.blogspot.com/2013/06/hoya-hbase-on-yarn.html
YARN lets you ask for
Looking into FilterList#filterKeyValue() and FilterList#getNextKeyHint(),
they both iterate through all the filters.
Suppose there are 3 or more filters in the FilterList which implement
getNextKeyHint(); how would the state be maintained?
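To make the question concrete, here is a minimal, self-contained sketch (not HBase's actual FilterList code; the interface and names are hypothetical) of one way hints could be combined for MUST_PASS_ALL semantics: since every filter must pass, the list can safely seek to the largest hint any filter returns.

```java
import java.util.Arrays;
import java.util.List;

public class FilterListHintSketch {
    // Hypothetical stand-in for a filter that can suggest the next key to seek to.
    interface HintingFilter {
        String getNextKeyHint(String currentKey); // null = no hint
    }

    // For MUST_PASS_ALL, jump to the furthest (largest) hint among all filters.
    static String combineHints(List<HintingFilter> filters, String currentKey) {
        String best = null;
        for (HintingFilter f : filters) {
            String hint = f.getNextKeyHint(currentKey);
            if (hint != null && (best == null || hint.compareTo(best) > 0)) {
                best = hint; // keep the furthest seek target seen so far
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<HintingFilter> filters = Arrays.asList(
            key -> "col-b",   // filter 1 wants to skip ahead to col-b
            key -> "col-f",   // filter 2 wants to skip further, to col-f
            key -> null       // filter 3 has no hint
        );
        System.out.println(combineHints(filters, "col-a")); // prints col-f
    }
}
```

With MUST_PASS_ONE the safe choice would flip to the smallest hint, which is exactly the kind of per-operator state the question is getting at.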
Cheers
On Sun, Jul 28, 2013 at 9:22 PM, Viral Bajaria
Attached are 2 patches: one of them is TestFail.patch where I show that the
behavior is not as expected. On the other hand, the second patch is with
the changes that I did to FilterList and the behavior is as expected.
I have tested the state maintenance on two filters that implement
Can you log a JIRA and attach the patches there ?
Your attachments did not go through.
On Mon, Jul 29, 2013 at 4:34 PM, Viral Bajaria viral.baja...@gmail.com wrote:
Attached are 2 patches: one of them is TestFail.patch where I show that
the behavior is not as expected. On the other hand, the
Attached the two test patches to this JIRA:
https://issues.apache.org/jira/browse/HBASE-9079
On Mon, Jul 29, 2013 at 4:36 PM, Ted Yu yuzhih...@gmail.com wrote:
Can you log a JIRA and attach the patches there ?
Your attachments did not go through.
Thanks for the quick action.
I ran through *Filter* tests along with the new test - they passed.
Let's continue discussion on the JIRA.
On Mon, Jul 29, 2013 at 4:43 PM, Viral Bajaria viral.baja...@gmail.com wrote:
Attached the two test patches to this JIRA:
Hi Shapoor,
Moving the conversation to the users list.
Have you solved your issue? Sorry you haven't gotten a response sooner -- I
think everyone is working overtime to get 0.96 released.
I'm assuming each put is independent of the others. You're not putting
100mm times to the same row, are
Hi all,
I'm having a lot of handlers (approx. 90-300) being blocked when reading
rows. They are blocked during changedReaderObserver registration.
Does anybody else run into the same issue?
Stack trace:
IPC Server handler 99 on 60020 daemon prio=10 tid=0x41c84000
nid=0x2244 waiting on
Hi Pablo,
What do you see in the logs around the time you saw that behavior? Is
this happening on a single Region Server? What version of HBase are
you running?
cheers,
Esteban.
Cloudera, Inc.
On Jul 29, 2013, at 20:21, Pablo Medina pablomedin...@gmail.com wrote:
Hi all,
I'm having a lot
how to get the 'stack trace'?
Thanks!
beatls
On Tue, Jul 30, 2013 at 11:20 AM, Pablo Medina pablomedin...@gmail.com wrote:
Hi all,
I'm having a lot of handlers (approx. 90-300) being blocked when reading
rows. They are blocked during changedReaderObserver registration.
Does anybody else
Hi Esteban,
I'm using HBase 0.94.7. I have a cluster of 5 RS + 1 master. This is
happening on the RS containing a region that is being accessed very
frequently. I'm not seeing any particular log during that time.
Thanks,
Pablo.
2013/7/30 Esteban Gutierrez este...@cloudera.com
Hi Pablo,
I got that stack trace by running jstack against the region server's pid (jstack pid).
Thanks,
Pablo.
2013/7/30 hua beatls bea...@gmail.com
how to get the 'stack trace'?
Thanks!
beatls
On Tue, Jul 30, 2013 at 11:20 AM, Pablo Medina pablomedin...@gmail.com
wrote:
Hi all,
I'm having a lot of handlers (approx. 90-300) being
CopyOnWriteArraySet seems a curious choice here; it performs badly when modified frequently.
ConcurrentHashMap seems like a better choice.
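A minimal sketch of the alternative (plain JDK, not HBase's actual observer-registration code): a concurrent set view over a ConcurrentHashMap, whose add/remove never copy the whole backing array the way CopyOnWriteArraySet's mutations do.

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ObserverSetSketch {
    public static void main(String[] args) {
        // Concurrent set backed by ConcurrentHashMap: add/remove are cheap,
        // unlike CopyOnWriteArraySet, where every mutation copies all elements
        // under a lock (hence the blocked handlers as the set grows).
        Set<String> observers =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
        observers.add("changedReaderObserver-1");
        observers.add("changedReaderObserver-2");
        observers.remove("changedReaderObserver-1");
        System.out.println(observers.size()); // prints 1
    }
}
```

The trade-off is that iteration over the ConcurrentHashMap-backed set is weakly consistent rather than a stable snapshot, which is usually acceptable for observer notification.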
Mind filing a jira, Pablo? Then we can discuss the issue there.
Thanks.
-- Lars
- Original Message -
From: Pablo Medina pablomedin...@gmail.com
To:
Yeah, I've seen this a couple of times lately. CopyOnWrite actually
takes non-linear time under a lock as the number of items increases.
It can be mitigated by making sure to close scanners.
ResultScanner res = null;
try {
  // open and read from the scanner here
} finally {
  if (res != null) {
    res.close();
  }
}