Hi,
sorry I missed that :-(
I tried that parameter in my hbase-site.xml and restarted the hbase master and
all regionservers.
<property>
<name>dfs.client.socketcache.expiryMsec</name>
<value>900</value>
</property>
No change, the CLOSE_WAIT sockets still persist on the hbase master to the
regionservers' datanodes after taking snapshots.
Because it was not clear to me where the setting has to go,
I put it in our hdfs-site.xml too and restarted all datanodes.
I thought that settings starting with dfs.client might have to go there.
But this did not change the behavior either.
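In case it helps to narrow this down, here is a quick way I check whether the count keeps growing per snapshot. It is just an awk filter over `netstat -tn` output (column 5 is the foreign address, column 6 the TCP state); 50010 is the default DataNode transfer port, so adjust it for your cluster, and the script name is only a placeholder:

```shell
#!/bin/sh
# close_wait_count.sh -- hypothetical helper, not part of HBase.
# Counts CLOSE_WAIT sockets to the DataNode transfer port (50010 by
# default; adjust for your cluster). Reads `netstat -tn` style lines
# on stdin, so a saved capture can be checked the same way.
# Usage on the master host: netstat -tn | sh close_wait_count.sh
awk '$6 == "CLOSE_WAIT" && $5 ~ /:50010$/ {n++} END {print n+0}'
```

Running it before and after a snapshot shows whether each snapshot adds a fixed number of leaked sockets.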
Regards Hansi
> Sent: Tuesday, 29 April 2014 at 19:21
> From: Stack <[email protected]>
> To: Hbase-User <[email protected]>
> Subject: Re: Re: taking snapshots creates too many TCP CLOSE_WAIT handles on
> the hbase master server
>
> On Tue, Apr 29, 2014 at 8:15 AM, Hansi Klose <[email protected]> wrote:
>
> > Hi all,
> >
> > sorry for the late answer.
> >
> > I configured the hbase-site.conf like this
> >
> > <property>
> > <name>dfs.client.socketcache.capacity</name>
> > <value>0</value>
> > </property>
> > <property>
> > <name>dfs.datanode.socket.reuse.keepalive</name>
> > <value>0</value>
> > </property>
> >
> > and restarted the hbase master and all regionservers.
> > I still can see the same behavior. Each snapshot creates
> > new CLOSE_WAIT sockets which stay there until the hbase master is restarted.
> >
> > Is there any other setting I can try?
> >
>
> You saw my last suggestion about "...dfs.client.socketcache.expiryMsec to
> 900 in your HBase client configuration.."?
>
> St.Ack
>