Hi Adrien,

As well, could you share the client code (number of threads, number of regions, whether it's a set of single Gets or multi-Gets, this kind of stuff)?
Cheers,
N.

On Thu, Aug 23, 2012 at 7:40 PM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
> Hi Adrien,
>
> I would love to see the region server side of the logs while those
> socket timeouts happen. Also check the GC log, but one thing people
> often hit while doing pure random-read workloads with tons of clients
> is running out of sockets because they are all stuck in CLOSE_WAIT.
> You can check that by using lsof. There are other discussions on this
> mailing list about it.
>
> J-D
>
> On Thu, Aug 23, 2012 at 10:24 AM, Adrien Mogenet
> <adrien.moge...@gmail.com> wrote:
>> Hi there,
>>
>> While I'm performing read-intensive benchmarks, I'm seeing a storm of
>> "CallerDisconnectedException" in certain RegionServers. As the
>> documentation says, my client received a SocketTimeoutException
>> (60000 ms, etc.) at the same time.
>> It happens constantly and I'm getting very poor read performance (from 10
>> to 5000 reads/sec) in a 10-node cluster.
>>
>> My benchmark consists of several iterations launching 10, 100 and 1000
>> Get requests on a given random rowkey with a single CF/qualifier.
>> I'm using HBase 0.94.1 (a few commits before the official stable
>> release) with Hadoop 1.0.3.
>> Bloom filters have been enabled (at the rowkey level).
>>
>> I can't find very clear information about these exceptions. From the
>> reference guide:
>> (...) you should consider digging in a bit more if you aren't doing
>> something to trigger them.
>>
>> Well... could you help me dig? :-)
>>
>> --
>> AM.
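For reference, the 60000 ms in the client-side SocketTimeoutException corresponds to the HBase RPC timeout (`hbase.rpc.timeout`, which defaults to 60000 ms). Raising it only masks whatever is stalling the RegionServers, but it's worth confirming which value your clients actually use; a minimal hbase-site.xml sketch, showing the default explicitly:

```xml
<!-- hbase-site.xml (client side): RPC timeout in milliseconds.
     60000 is the default; shown here only to make the setting explicit. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value>
</property>
```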
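A quick way to check J-D's CLOSE_WAIT theory, even on boxes without lsof, is to read the kernel's socket tables directly: state code 08 in /proc/net/tcp is CLOSE_WAIT. A Linux-only sketch (the proc paths and state code are standard; the pgrep pattern in the lsof variant is an assumption about how the RegionServer process is named):

```shell
#!/bin/sh
# Count every TCP socket currently in CLOSE_WAIT (kernel state 0x08).
# In /proc/net/tcp the 4th whitespace-separated field is the socket state.
cat /proc/net/tcp /proc/net/tcp6 2>/dev/null | awk '$4 == "08"' | wc -l

# With lsof, scoped to the RegionServer process (pgrep pattern is an assumption):
# lsof -a -p "$(pgrep -f HRegionServer | head -1)" -i TCP | grep -c CLOSE_WAIT
```

If that count climbs steadily while the benchmark runs, you're likely hitting the socket-exhaustion problem described above.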