I am using a single Connection (HBase 1.0.0) created using
ConnectionFactory.createConnection(config) that is kept alive virtually
forever and actual client requests call connection.getTable(...), do
their work, and finally call table.close() when done.
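In code, the pattern described above looks roughly like this (a sketch against the HBase 1.0 client API, not a runnable program — it needs hbase-client on the classpath and a live cluster, and the table/column names are placeholders):

```java
Configuration config = HBaseConfiguration.create();

// One long-lived Connection for the whole process; it is heavyweight
// and owns the client-side thread pools discussed in this thread.
Connection connection = ConnectionFactory.createConnection(config);

// Per request: Table is lightweight, so get one and close it each time.
try (Table table = connection.getTable(TableName.valueOf("my_table"))) {
    table.put(new Put(Bytes.toBytes("row-1"))
        .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
}
```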
Setting hbase.hconnection.threads.max should cap that pool.
There are different thread pools in the client, and some of them depend on
how you are constructing connection and table instances.
The first thread pool is the one owned by the connection. If you are using
ConnectionFactory.createConnection() (which you should), then this is the
pool whose size hbase.hconnection.threads.max controls.
I think you need to set that property before you make the
HBaseConfiguration object. Have you tried that?
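For reference, one way to read that suggestion is to put the property in an hbase-site.xml on the client classpath so HBaseConfiguration.create() picks it up; the other is to set it programmatically before the Connection is created, as in this sketch (the value 8 is an arbitrary example, and whether each property takes effect depends on the HBase version):

```java
Configuration config = HBaseConfiguration.create();
// Set the caps before the Connection is created, so its pools are
// constructed with these values rather than the defaults.
config.setInt("hbase.hconnection.threads.max", 8);
config.setInt("hbase.htable.threads.max", 8);
Connection connection = ConnectionFactory.createConnection(config);
```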
On Mon, Mar 13, 2017 at 10:24 AM, Henning Blohm wrote:
Unfortunately it doesn't seem to make a difference.
I see that the configuration has hbase.htable.threads.max=1 right before
setting up the Connection, but then I still get hundreds of
hconnection-*** threads. Is that actually ZooKeeper?
Thanks,
Henning
On 13.03.2017 17:28, Ted Yu wrote:
It's that simple...? Thanks so much! Will give it a try right away.
Thanks, Henning
On 13.03.2017 17:28, Ted Yu wrote:
Are you using the Java client?
See the following in HTable:
public static ThreadPoolExecutor getDefaultExecutor(Configuration conf) {
  int maxThreads = conf.getInt("hbase.htable.threads.max", Integer.MAX_VALUE);
FYI
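The snippet above explains the symptom: with the default of Integer.MAX_VALUE, every burst of concurrent work can spawn a new thread. As a plain-JDK illustration (no HBase dependency), here is a pool built the same way — core size 1, SynchronousQueue handoff, idle threads timing out — but with a real bound on maxThreads, plus a CallerRunsPolicy (my addition, not HBase's) so overflow work runs in the submitting thread instead of being rejected:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {

    // Same shape as the HTable default executor, but with an explicit cap
    // on maxThreads and a caller-runs overflow policy.
    static ThreadPoolExecutor boundedPool(int maxThreads) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, maxThreads, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        pool.allowCoreThreadTimeOut(true); // idle threads go away, as in HTable
        return pool;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = boundedPool(4);
        CountDownLatch done = new CountDownLatch(100);
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(5); // simulate a short RPC
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await();
        // The pool never grew past the configured bound of 4 threads.
        System.out.println("largest pool size = " + pool.getLargestPoolSize());
        pool.shutdown();
    }
}
```

With maxThreads at Integer.MAX_VALUE the same loop would happily create one thread per concurrent task, which is exactly the "hundreds of hconnection-*** threads" behavior on a numproc-limited box.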
On Mon, Mar 13, 2017 at 9:14 AM, Henning Blohm wrote:
Hi,
I am running an HBase client on a very resource limited machine. In
particular numproc is limited so that I frequently get "Cannot create
native thread" OOMs. I noticed that, in particular in write situations,
the hconnection pool grows into the hundreds of threads - even when at
most