On 3/7/07, Alexey Verkhovsky [EMAIL PROTECTED] wrote:
It also doesn't leak any memory at all *when it is not overloaded*. E.g.,
under maximum non-concurrent load (single-threaded test client that fires the
next request immediately upon receiving a response to the previous one), it
stays up
Piet.
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alexey
Verkhovsky
Sent: donderdag 8 maart 2007 7:36
To: mongrel-users@rubyforge.org
Subject: Re: [Mongrel] Memory leaks in my site
snip
On Thu, Mar 08, 2007 at 10:24:40AM +0100, Piet Hadermann wrote:
Quick fix could be to use HAProxy for loadbalancing and setting the max
number of connections per mongrel to 1.
Added bonus here is that all requests get queued up at HAProxy (which is
very conservative on memory use) and
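That setup could be sketched in an haproxy.cfg fragment like this (the listen name, addresses, and ports are illustrative assumptions, not taken from the thread):

```
# Hedged sketch: HAProxy queues requests itself and forwards at most
# one concurrent request to each mongrel (maxconn 1 per server).
listen mongrel_cluster 0.0.0.0:80
    mode http
    balance roundrobin
    server mongrel1 127.0.0.1:8000 maxconn 1
    server mongrel2 127.0.0.1:8001 maxconn 1
```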
Probably off topic, but which version of ferret are you running? The
latest incarnation (0.10.14 i think?) was segfaulting about every 10th
query bringing down mongrel along with it.
On 3/8/07, Jens Kraemer [EMAIL PROTECTED] wrote:
I have a live app here running on Rails 1.2.1/MySQL with three
On Wed, 7 Mar 2007 23:35:36 -0700
Alexey Verkhovsky [EMAIL PROTECTED] wrote:
Some further findings:
Hello World Rails application on my rather humble rig (Dell D620 laptop
running Ubuntu 6.10, Ruby 1.8.4 and Rails 1.2.2) can handle over 500 hits
per second on the following action:
def
On Thu, Mar 08, 2007 at 09:58:03PM +0800, Eden Li wrote:
Probably off topic, but which version of ferret are you running? The
latest incarnation (0.10.14 i think?) was segfaulting about every 10th
query bringing down mongrel along with it.
that's a common problem with 0.10.x, however I
On Thu, 8 Mar 2007 09:52:09 -0700
Alexey Verkhovsky [EMAIL PROTECTED] wrote:
On 3/8/07, Zed A. Shaw [EMAIL PROTECTED] wrote:
C'mon, Java processes typically hit the 600M or even
2G ranges and that's just common place.
The Java runtime is bloated. But its processes are natively
On Wed, Mar 07, 2007 at 12:55:06AM -0600, Alexey Verkhovsky wrote:
This is exactly what Mongrel does when it cannot cope with the incoming
traffic. I've discovered the same effect today.
You are definitely overloading it with 80 requests per second. After all,
it's a single-threaded instance
Looks like we are on the same stage of the learning curve about this stuff.
So, let's share our discoveries with the rest of the world :)
httperf --server 192.168.1.1 --port 3000 --rate 80 --uri /my_test --num-call
1 --num-conn 1
The memory usage of the mongrel server grows from 20M to
On 3/6/07, Alexey Verkhovsky [EMAIL PROTECTED] wrote:
On 3/6/07, Ken Wei [EMAIL PROTECTED] wrote:
Looks like we are on the same stage of the learning curve about this stuff.
So, let's share our discoveries with the rest of the world :)
httperf --server 192.168.1.1 --port 3000 --rate 80
On Wed, 7 Mar 2007 04:14:57 -0700
Kirk Haines [EMAIL PROTECTED] wrote:
By the way, check the errors section of httperf report, and the
production.log. See if there are fd_unavailable socket errors in
the former, and probably some complaints about too many files
open in the latter. If
I used this command to run mongrel for the above test case:
mongrel_rails start -d -e production
___
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
Following is the newest test case I ran:
* create a fresh rails app
* then create a controller, a model and a view.
* the model is just a simple table containing id and name.
* use httperf to hit the URL /book/show/1, in the following steps:
1) httperf --server xxx --port 3000 --rate 50
Having RTFMed on the issue, Mongrel's max number of SOCKETS is 1024, due to
the use of select(). And in my case yesterday it was running out of file
descriptors way before it hit this limit.
As for threads and their associated context using up memory. This may well
be the case. Why does it stay
On Wed, 7 Mar 2007 04:14:57 -0700
Kirk Haines [EMAIL PROTECTED] wrote:
On 3/6/07, Alexey Verkhovsky [EMAIL PROTECTED] wrote:
On 3/6/07, Ken Wei [EMAIL PROTECTED] wrote:
This is exactly what Mongrel does when it cannot cope with the incoming
traffic. I've discovered the same effect today.
On Wed, 7 Mar 2007 13:06:17 +0800
Ken Wei [EMAIL PROTECTED] wrote:
I created a new rails app named 'test' which contains only a controller
and an action.
Here is the controller:
class MyTestController < ApplicationController
  def index
    render_text 'Hello!'
On 3/7/07, Ken Wei [EMAIL PROTECTED] wrote:
4) httperf --server xxx --port 3000 --rate 80 --uri /book/show/1
--num-call 1 --num-conn 2000
2000 calls at a rate of 80/sec is not enough to max it out completely and make
it run out of either file descriptors or sockets. Try 1 calls, and
On 3/7/07, Alexey Verkhovsky [EMAIL PROTECTED] wrote:
Having RTFMed on the issue, Mongrel's max number of SOCKETS is 1024, due to
the use of select(). And in my case yesterday it was running out of file
descriptors way before it hit this limit.
As for threads and their associated context
Zed,
Do you know what causes overloaded Mongrel to go to 150 Mb VSS and stay
there, and why it is supposed to behave this way?
If not, would you like somebody to figure it out and/or try to fix it? I
could try to give it a shot.
Alex
On 3/7/07, Zed A. Shaw [EMAIL PROTECTED] wrote:
And one
but, but, it did NOT run out of file descriptors! It works well except for
memory leaks: it grows up to 60M and never recovers.
The following is the full output from httperf, please take a look first:
---
httperf --client=0/1
It did not run out of file descriptors because you didn't give it enough
time to run out of them. It was, however, going to run out of something in
the next minute or so.
The evidence is in the fact that reported throughput (62.4 requests/sec) was
lower than the load you gave it (80
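The arithmetic behind that prediction, using the two rates quoted above, can be sketched as:

```ruby
# If requests arrive faster than they are served, the backlog of open
# connections grows linearly until some resource runs out.
offered_rate = 80.0    # requests/sec sent by httperf
served_rate  = 62.4    # requests/sec reported back
backlog_growth = offered_rate - served_rate   # ~17.6 connections/sec

fd_limit = 1024        # select()-bound socket ceiling
seconds_to_exhaust = fd_limit / backlog_growth
puts "backlog grows by %.1f conn/sec; ~%.0f sec to exhaust %d sockets" %
     [backlog_growth, seconds_to_exhaust, fd_limit]
```

At these rates the backlog grows by about 17.6 connections per second, exhausting 1024 sockets in under a minute, which matches the "next minute or so" estimate above.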
this is the only controller in the above test case:
class BookController < ApplicationController
  scaffold :book

  def list
    logger.debug 'List the books'
    @books = Book.find_all
  end

  def edit
    @book = Book.find(@params['id'])
    @categories = Category.find_all
  end
end
mongrel is 1.0.1, rails
OK, I probably know what is happening. So...
1. Mongrel with a Hello, World application has virtual segment size (VSS) of
~32 Mb. As long as it is not overloaded, it stays at that level.
2. Once you overload it, Mongrel starts spawning new threads, up to the
default limit of 1024, or {max # of
Alex,
I am quite interested in reading it.
Thanks,
~Wayne
On Mar 07, 2007, at 19:38 , Alexey Verkhovsky wrote:
I have a pretty lengthy record of configuration under test and what
I did to figure all this out. If anyone is interested in reading
it, let me know.
Alex
Ah. Thank you Alex, this is very handy information for me as I am in
charge of quite a few small VPS's. I have not had mem leaking issues
yet but this could could come in really handy for a much larger scale
rails project I am working on.
Kyle Kochis
On 3/7/07, Alexey Verkhovsky [EMAIL PROTECTED]
On Wed, 7 Mar 2007 17:38:19 -0700
Alexey Verkhovsky [EMAIL PROTECTED] wrote:
A more industrial solution would be to redesign Mongrel's internal
architecture a bit. Requests routed to Rails can be placed in a queue, and
the thread released, instead of being parked at the mutex. It would help
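A toy sketch of that queue-and-release idea (names and structure are illustrative, not Mongrel's actual internals):

```ruby
require 'thread'

# Hedged sketch: acceptor threads enqueue work and return immediately,
# instead of parking on a mutex; a single worker drains the queue.
requests = Queue.new
handled  = []

worker = Thread.new do
  while (req = requests.pop)        # blocks until a request arrives; nil stops
    handled << req                  # stand-in for dispatching req to Rails
  end
end

# An acceptor thread just enqueues and is free again, holding no lock:
requests << "GET /my_test"
requests << "GET /book/show/1"
requests << nil                     # sentinel: shut the worker down
worker.join
puts handled.inspect
```

The point of the design is that the acceptor threads' stacks are released right away; only the queued request objects (much smaller than parked threads) accumulate under overload.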
On 3/7/07, Alexey Verkhovsky [EMAIL PROTECTED] wrote:
OK, I probably know what is happening. So...
1. Mongrel with a Hello, World application has virtual segment size (VSS) of
~32 Mb. As long as it is not overloaded, it stays at that level.
2. Once you overload it, Mongrel starts spawning new
I'm going to go ahead and blame acts_as_ferret. I had an application
that used acts_as_ferret and each mongrel process reached up to 300MB.
I removed acts_as_ferret (as well as switched from mysql to
postgresql) and now the exact same application is staying steady at 70
MB a process. I also use a
Some further findings:
Hello World Rails application on my rather humble rig (Dell D620 laptop
running Ubuntu 6.10, Ruby 1.8.4 and Rails 1.2.2) can handle over 500 hits
per second on the following action:
def say_hi
  render :text => 'Hi!'
end
It also doesn't leak any memory at all *when it is
*** LOCAL GEMS ***
actionmailer (1.3.2, 1.2.5, 1.1.5)
Service layer for easy email delivery and testing.
actionpack (1.13.2, 1.12.5, 1.11.2)
Web-flow and rendering framework putting the VC in MVC.
actionwebservice (1.2.2, 1.1.6, 1.0.0)
Web service support for Action Pack.
If you are able, try doing a
gem cleanup
to remove old versions of gems.
I've seen one case where that fixed such an issue.
~Wayne
On Mar 06, 2007, at 02:50 , Ken Wei wrote:
*** LOCAL GEMS ***
actionmailer (1.3.2, 1.2.5, 1.1.5)
Service layer for easy email delivery and testing.
'gem cleanup': I did that, but still
I've got issues with my rails application leaking memory as well. I
can say it's not Mongrel's fault, as I was able to duplicate the
situation in Webbrick.
My problem happens because I'm using monit to make sure my site stays
up, but in doing so, monit hits each of my mongrels every minute. I
Can you do a dump of all the plugins you are using? There are some
that are well known to be problematic with memory leaks.
- R0b
On 3/6/07, Joey Geiger [EMAIL PROTECTED] wrote:
I've got issues with my rails application leaking memory as well. I
can say it's not Mongrel's fault, as I was able
Did you try adding GC.start in your application?
On 3/6/07, Joey Geiger [EMAIL PROTECTED] wrote:
I've got issues with my rails application leaking memory as well. I
can say it's not Mongrel's fault, as I was able to duplicate the
situation in Webbrick.
My problem happens because I'm using
Did you try adding GC.start in your application?
yep
So, a memory leak is the problem, and monit is just highlighting it. Is that
correct?
Alex
flex_image uses rmagick... which folks have lots of issues with...
I re-wrote the bits of flex_image that I use to use mini-magick and
it's working quite nicely.
I did that when the whole discussion of RMagick bad came up a couple
of months ago.
On 3/6/07, Alexey Verkhovsky [EMAIL PROTECTED]
On Mar 6, 2007, at 12:34 PM, Joey Geiger wrote:
flex_image uses rmagick... which folks have lots of issues with...
I re-wrote the bits of flex_image that I use to use mini-magick and
it's working quite nicely.
I did that when the whole discussion of RMagick bad came up a couple
of months
On 3/6/07, Ezra Zygmuntowicz [EMAIL PROTECTED] wrote:
On Mar 6, 2007, at 12:34 PM, Joey Geiger wrote:
flex_image uses rmagick... which folks have lots of issues with...
I re-wrote the bits of flex_image that I use to use mini-magick and
it's working quite nicely.
I did that when the
On Mar 6, 2007, at 1:23 PM, Rob Sanheim wrote:
On 3/6/07, Ezra Zygmuntowicz [EMAIL PROTECTED] wrote:
snip
Ezra,
What do you use for uptime monitoring? I haven't found anything out
of the box that can check often enough (ie at least every minute).
- Rob
Rob-
We use
I created a new rails app named 'test' which contains only a controller
and an action.
Here is the controller:
class MyTestController < ApplicationController
  def index
    render_text 'Hello!'
  end
end
I kept the setup the same as before. Then I ran a single
Is mongrel still far from production ready? Or did I do something wrong?
There has got to be something seriously wrong with your stack/install
although I am not knowledgeable enough to tell you where to start
looking.
Kyle Kochis
On 3/6/07, Ken Wei [EMAIL PROTECTED] wrote:
Is mongrel still far from production ready? Or did I do something wrong?
On 3/6/07, Ken Wei [EMAIL PROTECTED] wrote:
Hi all,
My environment is ruby-1.8.4, rails 1.2.2, mongrel 1.0.1, linux 2.6. Now, I
have a problem with memory leaks in mongrel. My site is running 5 mongrel
processes on a 2G RAM machine; the memory of each process grows from about
20M to about
On 3/5/07, Ken Wei [EMAIL PROTECTED] wrote:
My environment is ruby-1.8.4, rails 1.2.2, mongrel 1.0.1, linux 2.6. Now,
I have a problem with memory leaks in mongrel. My site is running 5 mongrel
processes on a 2G RAM machine; the memory of each process grows from about
20M to about 250M
Even testing only a single URL which reads a few records from MySQL and then
displays them, the memory usage still keeps growing and never recovers.
The test tool is httperf; this is the command I used:
httperf --server 192.168.1.1 --port 81 --rate 15 --uri /user/ken --num-call
1 --num-conn 1000
Do you use ferret at all? I found that ferret (and acts_as_ferret)
cause the memory usage to grow to ~200MB a process. Can you list all
your installed gems or at least all the gems that you are using?
-carl
On 3/5/07, Ken Wei [EMAIL PROTECTED] wrote:
Even testing only a single URL which reads a