On 2/2/07, joost baaij <[EMAIL PROTECTED]> wrote:
> *nix, Linux. No, not acts_as_ferret. I'm truncating log files
> manually every week or so; this has never been a problem anyway. CPU,
> disk and network are all available in abundance.
>

Are you using fastthread? It's mandatory, since Ruby leaks memory in its
threading/mutex/syncing functions.
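For reference, something along these lines is usually all it takes (a
rough sketch; where exactly you require it depends on your boot process):

    # shell: install the gem
    gem install fastthread

    # config/environment.rb (or wherever your app boots), as early as
    # possible, before anything spawns threads:
    require 'rubygems'
    require 'fastthread'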

> I couldn't say it also happens with a single instance, but definitely
> behind the load balancer. And then only one mongrel out of three.
> Always the one with the lowest port (9000). He/she isn't house-trained
> anymore and will leave a .pid file behind. The other two are fine.
>

If it only happens with one instance, could you switch that instance to
debug mode?
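Something like this should do it, running one Mongrel on the suspect
port in the foreground so its output stays on the console (flags from
memory, so double-check them):

    # no -d, so it does not daemonize and you can watch it die
    mongrel_rails start -e production -p 9000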

> I don't snore ;) and you shouldn't invest much effort tracing this one.
>

It's weird that the only one dying is the lowest-port Mongrel, and that
it's always that one.

Could you share your mongrel_cluster.yml?
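For comparison, a typical three-instance mongrel_cluster.yml looks
roughly like this (paths, user and group here are just placeholders,
not your actual values):

    # mongrel_cluster.yml -- illustrative only
    cwd: /var/www/yourapp/current
    port: "9000"          # first port; with servers: 3 you get 9000-9002
    environment: production
    pid_file: log/mongrel.pid
    servers: 3
    user: mongrel
    group: mongrel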

Also, which load balancer are you using, and what does its configuration
look like? (Often the load balancer isn't actually set to balance
anything; I've seen this before.)
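Just to illustrate what I mean (I know you're on LiteSpeed, so take this
nginx-style upstream purely as an analogy): a balancer that only lists
one backend sends everything to port 9000, which would match "only the
lowest-port Mongrel dies".

    upstream mongrel_cluster {
        server 127.0.0.1:9000;
        server 127.0.0.1:9001;   # these two have to be listed as well,
        server 127.0.0.1:9002;   # otherwise nothing is being balanced
    }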

> It might be a quirk in my setup, or specific for LiteSpeed or any of
> my plugins, or Ruby (1.8.4). It's no big problem since my app runs
> just fine with lsapi too. I just wanted to share. As I seem to be the
> only one, must be my setup?
>

Ok, which plugins are you using? memcached-client?

Also turn on debug logging (refer to the Super Debugging mode on the
Mongrel site: http://mongrel.rubyforge.org/docs/howto.html).
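If I remember right it's enabled per instance like this (treat the flag
as an assumption and check the howto page above):

    # -B turns on Mongrel's debugging tools; the traces end up under
    # log/mongrel_debug/ in the application directory
    mongrel_rails start -p 9000 -B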


>
> Thanks
> J.
>
> On 2 Feb 2007, at 11:42, Eden Li wrote:
>
> > We'll probably need a bit more information here.  Are you running
> > on Windows or *nix?  Are you using acts_as_ferret?  Are you letting
> > Rails/Ruby rotate log files?  Does this occur with a single
> > instance of Mongrel, or only behind your load balancer?  Do you
> > snore at night? etc ;-)
> >
> > On 2/2/07, joost baaij <[EMAIL PROTECTED]> wrote:
> >
> > I'm a bit
> > surprised I can't find anything about this in the mailing
> > list archives. Basically since Mongrel 1.0.1 I've had Mongrels fall
> > asleep without any real cause. A deep sleep, actually more like a
> > coma. The Mongrel in question (I'm using a cluster of three) cannot
> > be revived. A cluster::stop, then cluster::start is necessary.
> >
> > A ::restart would not help, even though no ruby processes were left running.
> > The comatose Mongrel's pidfile would still be there though.
> >
> > I can find no apparent reason in the logs, but this situation did
> > occur VERY frequently, most often overnight. I run a very agile site
> > with lots of restarts, so maybe it has to do with that (no restarts
> > during the night, a memory leak somewhere?).
> >
> > It started with Mongrel 1.0.1 but .... just a few days before I
> > upgraded to Mongrel 1.0.1, I upgraded to Rails 1.2.
> >
> > So I'm not sure who's the culprit. A few days ago I switched to the
> > lsapi RailsRunner and the problem has disappeared, so Mongrel
> > definitely is involved.
> >
> >
> > Just thought I'd give it a mention.
> >
> > And btw Zed: the .nl welcomes you, the land of cheese and clogs.
> > Actually it's nice here, you should visit us someday!
> >
> > Joost.
> >
>
> --
> www.gomagazine.nl +31643904460 pobox 51059 nl-1007eb amsterdam
>
>
>
>
>


-- 
Luis Lavena
Multimedia systems
-
Leaders are made, they are not born. They are made by hard effort,
which is the price which all of us must pay to achieve any goal that
is worthwhile.
Vince Lombardi
_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
