Just an update for those reading this:

The issue turned out to be a WebSphere problem, and upgrading to the
fixpack that addresses it resolved the leak.  This was WebSphere 6.1 on
Linux; we needed Fix Pack 17 to resolve the issue.

Thanks for the assistance!

On Aug 27, 7:51 am, ccanish <[EMAIL PROTECTED]> wrote:
> Dan,
>
> memcached uses libevent, which is probably leaking pipes. Try googling
> "libevent" plus your OS.
> libevent uses pipes for asynchronous notification and should
> technically clean them up; this may be where the problem lies.
>
> /Anish
>
> On Aug 25, 7:15 pm, rodandar <[EMAIL PROTECTED]> wrote:
>
> > One other thing:  Does anyone know why these leaking connections would
> > show up as pipes and not sockets?  We searched the memcached code base
> > and didn't find any pipe references, so where are these pipes coming
> > from?
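
A quick way to see that split from a shell, as a sketch assuming Linux's
/proc (the PID here is the shell's own, purely for illustration; `lsof -p
<pid>` reports the same breakdown in its TYPE column, FIFO for pipes and
IPv4/IPv6/sock for sockets):

```shell
# Count how many of a process's open FDs point at pipes vs. sockets.
# On Linux, /proc/<pid>/fd holds one symlink per open descriptor, and
# the link target reads "pipe:[inode]" or "socket:[inode]".
pid=$$
pipes=$(ls -l /proc/"$pid"/fd | grep -c 'pipe:')
sockets=$(ls -l /proc/"$pid"/fd | grep -c 'socket:')
echo "pipes=$pipes sockets=$sockets"
```

If the leaked descriptors show up as `pipe:` targets rather than
`socket:` ones, they are not memcached's TCP connections themselves but
pipes created somewhere in the client-side stack.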
>
> > On Aug 25, 7:10 pm, rodandar <[EMAIL PROTECTED]> wrote:
>
> > > We're using the 2.0.1 client.  Yes, we are most interested in
> > > finding the source of the leaks, and we are trying a variety of
> > > approaches to do so.  Any particular thoughts on a good way to
> > > track this down?  We are seeing very little load on the memcached
> > > servers, so it doesn't appear to be a load issue on the server
> > > side.
>
> > > On Aug 23, 2:02 pm, "Boris Partensky" <[EMAIL PROTECTED]>
> > > wrote:
>
> > > > Which client/version are you using? I think the next step should
> > > > be to identify the source of the connection leaks, no?
>
> > > > On Sat, Aug 23, 2008 at 1:47 PM, rodandar <[EMAIL PROTECTED]> wrote:
>
> > > > > Thanks everyone.  A little more info: we do not believe it is
> > > > > our FD limit, as lsof shows an ever-growing number of pipes
> > > > > hanging around while we run.  The error occurs on the app
> > > > > server running the memcached client.  We have three separate
> > > > > apps accessing our memcached servers, two UI-based and one not.
> > > > > The non-UI app does NOT exhibit the problem.  That makes me
> > > > > suspect these pipes are being held by our UI requests, which go
> > > > > through the memcached client.  We have verified that when we
> > > > > run with memcached disabled, the lsof results stabilize over
> > > > > time as we run load.  So it looks like our memcached requests
> > > > > are not getting cleaned up.  This is on WebSphere 6.1 / Red Hat
> > > > > 5.1.  The next step is to run with keep-alive off.
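
The "stabilize over time" check above can be scripted as periodic FD
snapshots; a minimal sketch, again using Linux's /proc and this shell's
own PID for illustration (against WebSphere you would substitute the
JVM's PID, e.g. `pid=$(pgrep -f java | head -n 1)`, and a longer
interval):

```shell
# Sample a process's open-FD count a few times so leak growth under
# load becomes visible. A flat count with memcached disabled and a
# climbing count with it enabled matches the behaviour described above.
pid=$$
for i in 1 2 3; do
    count=$(ls /proc/"$pid"/fd | wc -l)
    echo "sample $i: open_fds=$count"
    sleep 1
done
```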
>
> > > > > On Aug 23, 11:19 am, "Boris Partensky" <[EMAIL PROTECTED]>
> > > > > wrote:
> > > > > > I agree. The first thing I would examine is the possibility
> > > > > > of the client leaking connections.
>
> > > > > > On Sat, Aug 23, 2008 at 10:46 AM, James Ranson <[EMAIL PROTECTED]> wrote:
>
> > > > > > > Personally, it seems a little premature to start modifying
> > > > > > > the file descriptor limits; that should be a last resort,
> > > > > > > as the default is already tuned adequately for the majority
> > > > > > > of scenarios.
>
> > > > > > > Dan, please give us some more details about your
> > > > > > > environment: how many memcached servers, how many
> > > > > > > application servers calling memcached, and a ballpark idea
> > > > > > > of the general load on your memcached servers (e.g.,
> > > > > > > queries per second). Are you seeing this error on your
> > > > > > > memcached server or on the application server that is
> > > > > > > querying memcached?
>
> > > > > > > It sounds like a deficient client library that is not
> > > > > > > correctly pooling connections, or a connection-pool maximum
> > > > > > > that is set way too high.
>
> > > > > > > On Aug 23, 2:31 am, "rohan bankar" <[EMAIL PROTECTED]> wrote:
> > > > > > > > Hi Dan,
>
> > > > > > > > I am not an expert in memcached, but just trying to help:
> > > > > > > > we hit something similar a few days back. On Linux a
> > > > > > > > process has a maximum limit on the number of open FDs. It
> > > > > > > > is generally 1024 by default, and whenever a process
> > > > > > > > crosses this limit it cannot open a new file (or anything
> > > > > > > > else that needs a file descriptor: network sockets,
> > > > > > > > pipes, and so on). Try raising the max-open-FD limit (you
> > > > > > > > can configure it in '/etc/security/limits.conf'), reboot
> > > > > > > > the machine, and verify with 'ulimit -n'. In addition,
> > > > > > > > you may also need to change the 'FD_SETSIZE' value in the
> > > > > > > > header files '/usr/include/bits/typesizes.h' and
> > > > > > > > '/usr/include/linux/posix_types.h' (it is 1024 by
> > > > > > > > default; increase it to 4096 or whatever you want) and
> > > > > > > > rebuild libmemcached.
> > > > > > > > Hope this helps.
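
For reference, the limits.conf entries described above look like this; a
sketch that writes them to a temp copy purely to show the format (the
real file is /etc/security/limits.conf, and 4096 is only an example
value):

```shell
# Sketch of the /etc/security/limits.conf entries for raising the
# per-process open-file limit. Written to a temporary copy here so the
# format can be shown without touching the real system file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# <domain>  <type>  <item>   <value>
*           soft    nofile   4096
*           hard    nofile   4096
EOF
cat "$conf"
# After editing the real file and logging back in (or rebooting),
# `ulimit -n` should report the new limit.
ulimit -n
```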
>
> > > > > > > > ~Rohan.
>
> > > > > > > > On Fri, Aug 22, 2008 at 6:31 PM, rodandar <[EMAIL PROTECTED]> wrote:
>
> > > > > > > > > We have just recently started using memcached, and we
> > > > > > > > > are seeing an issue in which we eventually get the
> > > > > > > > > error "too many open files".  It looks like we are
> > > > > > > > > generating thousands of pipes (seen via lsof) and they
> > > > > > > > > don't seem to go away; they just accumulate.  We have
> > > > > > > > > tracked the issue down to whether we have memcached
> > > > > > > > > enabled, and it appears to be an issue with our
> > > > > > > > > interaction with the memcached client, but we're at a
> > > > > > > > > loss so far for an explanation.  Has anyone seen
> > > > > > > > > something like this?
>
> > > > > > > > > Thanks.
>
> > > > > > > > > -Dan Richards
>
> > > > > > > > --
> > > > > > > > Rohan Bankar
> > > > > > > > Komli Media.
> > > > > > > > 9860404534
>
> > > > > > --
> > > > > > --Boris
>
> > > > --
> > > > --Boris
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/memcached?hl=en
-~----------~----~----~----~------~----~------~--~---
