Most of my machines have apache+mongrel running, but the mongrels are bound to
localhost.
In my production environment, I have 4 boxes. I have set up 2 HTTP servers
(Apache 2.2.4) and 2 app servers. The app servers are currently using
lighttpd+fastcgi, which I am changing this week.
I want to get
Since I have 2 app servers (separate boxes), I plan on running 10 mongrels
on each. I am assuming I just simply use:
Why 10? Have you tested that assumption? We found 4 mongrels to be our
sweet spot... http://mongrel.rubyforge.org/docs/how_many_mongrels.html
Well, someone else on the
I have Mongrel 1.0.1, Rails 1.2.2, and Ruby 1.8.5 running on CentOS 4.4.
When I execute mongrel_rails start -d, I see that 3 processes are spawned.
See below:
[EMAIL PROTECTED] aaa]# mongrel_rails start -d
[EMAIL PROTECTED] aaa]# ps -def |grep mong
root 2743 1 9 07:14 ?
I received this piece of code in a patch that turns on FreeBSD's HTTP accept
filtering. I completely missed that it calls /sbin/sysctl directly,
which means I'm slipping on my auditing.
[snip]
unless `/sbin/sysctl -nq net.inet.accf.http`.empty?
[snip]
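For comparison, a hedged sketch of a more defensive wrapper around that shell-out (the method name is mine, not from the patch; only the sysctl key comes from the quoted code):

```ruby
# Hedged sketch (my own wrapper, not from the patch): guard the shell-out
# so a missing /sbin/sysctl doesn't raise, and give the check a name.
# net.inet.accf.http is FreeBSD's HTTP accept filter, as in the snippet above.
def http_accept_filter_loaded?
  out = `/sbin/sysctl -nq net.inet.accf.http 2>/dev/null` rescue ''
  !out.empty?
end
```

On a non-FreeBSD box this simply returns false instead of blowing up.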
I'd like to know the following from
Hey all -
I've got a question that I haven't seen addressed anywhere and was
wondering if anyone has put any thought into it or not...
Here's my setup... I have several *small* sites running apache/mongrel.
Each has a single mongrel instance. Most don't get any traffic (no one
reads my blog
Philip Hallstrom wrote:
Then I could either kill off the mongrel later if no traffic was coming
in.
Or perhaps mongrel itself could stay running, but unload Rails (and free the
RAM) until a request came in, then load it back up for a while until traffic
stopped, and unload it again...
What
I've had a right fun few days at work trying to figure out why our Rails
app (which isn't under very heavy load) kept eating memory and bringing
our server to its knees. Eventually I traced it to send_file (which was
in a way a relief, as it wasn't down to my coding ;) -- every time a user
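The memory behaviour comes down to slurping versus chunking. A hedged, plain-Ruby sketch (not Rails' actual send_file internals): File.read holds the whole file in one string, while reading fixed-size chunks keeps per-request memory flat regardless of file size.

```ruby
require 'tempfile'

# Read a file in fixed-size chunks instead of slurping it whole;
# memory use stays bounded by chunk_size, not by the file's size.
def stream_file(path, chunk_size = 16_384)
  bytes = 0
  File.open(path, 'rb') do |f|
    while (chunk = f.read(chunk_size))
      bytes += chunk.size   # in a server you'd write the chunk to the socket
    end
  end
  bytes
end

Tempfile.open('demo') do |t|
  t.write('x' * 100_000)
  t.flush
  puts stream_file(t.path)   # prints 100000
end
```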
I know that Zed has made mention of the fact that using RMagick with
mongrel and rails is a bad thing. I'm currently using FlexImage (which
in turn uses RMagick) on my application and really haven't had too
many problems. We get a restart for memory usage every 8-10 hours on
one mongrel of
Hello,
I have set up mongrel successfully a few times now, but each time I have
used the apache 2.2 and mod_proxy setup described on the mongrel site.
However, I need to set up another app in a subdomain. example.com/docserver
instead of docserver.example.com.
I have tried just adding I have
I'm decently versed in sysadmin *nix stuff... apache and friends. I'm just
needing a nudge in the right direction. :-)
cheers,
-rjs-
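For what it's worth, a hedged sketch of the mod_proxy side for a sub-path rather than a subdomain (the 127.0.0.1:8000 backend is illustrative; the Rails app also needs to know its prefix, e.g. via relative_url_root, which varies by Rails version):

```apache
# Hedged sketch: proxy only /docserver to a mongrel, leaving the rest
# of example.com alone. The backend address and port are illustrative.
ProxyPass        /docserver http://127.0.0.1:8000/docserver
ProxyPassReverse /docserver http://127.0.0.1:8000/docserver
```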
On 11/28/06, Philip Hallstrom [EMAIL PROTECTED] wrote:
I have an Apache 2.2.3 (mod_proxy_balancer) frontend server that does
not have mongrel installed
If you use something like mongrel_upload_progress, the single-threadedness
of Rails is no problem because Mongrel will intercept the upload and not
hand it off to Rails until it's complete. That's what I'm doing at the
moment, and my app runs just fine on one Mongrel.
I haven't used this
On Tue, Nov 28, 2006 at 11:03:06AM -0600, Philip Hallstrom wrote:
What's a rule of thumb for guesstimating how many
Mongrels to use in a cluster for an app? I have an app
that gets about 5000 unique visitors per day. I
figured I'd give it plenty of Mongrels -- twenty to be
specific. After
I have an Apache 2.2.3 (mod_proxy_balancer) frontend server that does
not have mongrel installed. It does proxy requests to several other
mongrel-only servers (each running 2 mongrel processes). Each mongrel
node has the same rails code-base and it's working perfectly.
However, my question
On Tue, 28 Nov 2006 14:20:00 -0600 (CST)
Philip Hallstrom [EMAIL PROTECTED] wrote:
If you use something like mongrel_upload_progress, the single-threadedness
of Rails is no problem because Mongrel will intercept the upload and not
hand it off to Rails until it's complete. That's what I'm
I assume some of you have run into this error before when trying to run
mongrel on port 80 (or another port < 1024) in OS X:
$ mongrel_rails start -p 80
** Starting Mongrel listening at 0.0.0.0:80
/usr/local/lib/ruby/gems/1.8/gems/mongrel-0.3.14/lib/mongrel/tcphack.rb:12:in
Zed -
If I purchase the PDF version, are updates going to be available similar
to the PP books? I admit I'm not familiar with O'Reilly's setup, so if it's
blatantly obvious on their site just let me know.
Thanks -philip
On Tue, 24 Oct 2006, Zed A. Shaw wrote:
Time for some all time pimpage folks.
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} -d
RewriteRule ^(.+[^/])$ $1/ [R]
On 10/3/06, Philip Hallstrom [EMAIL PROTECTED] wrote:
I am using the 'default' Apache 2.2 mod_rewrite rules suggested from the Web
site and they are working.
One thing related to static (non cluster directed) resources:
If I
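For reference, a hedged sketch of the shape those suggested rules usually take (the balancer name is illustrative): serve a file straight from disk when it exists, otherwise proxy the request to the mongrel cluster.

```apache
# If the requested file exists on disk, Apache serves it directly;
# otherwise the request is proxied to the mongrel balancer.
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L]
```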
I'm working on the Mongrel book with Zed, and wanted to get some
feedback from the core users (this list) about how they use Mongrel.
That sounds a bit vague, but I'm interested in hearing things about
frustrating problems / workarounds, preferred configurations, if you
have a particular way
Hi,
I just realized that I can't run PHP code anymore once I installed my dear
mongrels (under an Apache proxy balancer).
I'm just trying to run the classic phpinfo() in the following file
/var/rubyapp/public/phpinfo.php ... but instead of being executed, the code
is being displayed as plain
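One hedged guess at the cause: the balancer is forwarding *.php requests to mongrel, which serves them as static text. Excluding PHP from the proxy should let Apache's own PHP handler run them (paths and rule placement are illustrative):

```apache
# Exclusions must appear before the catch-all ProxyPass / balancer rules.
ProxyPass /phpinfo.php !

# Or exclude every .php file via mod_rewrite, ahead of the balancer rules:
# RewriteCond %{REQUEST_URI} \.php$
# RewriteRule .* - [L]
```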
I wanted to show a Rails site to someone, so I set up Mongrel and it ran fine.
This is my mongrel_cluster.yml:
port: ?80
environment: production
pid_file: log/mongrel.pid
servers: 4
Now, after going back to Apache/Tomcat I can't get the Mongrel site up and
running again...?
Any suggestions?
I don't see why this is a huge security issue either. At the worst
someone can view your commit history by viewing the .svn/entries file.
The password auth files are stored in the repository itself, not in
the .svn directories in the working copy.
Maybe not huge, but that file gives you
First off, I've just switched to using mongrel_cluster and pound - it's
working a treat! It's far, far faster than FastCGI: operations that took a
few seconds on fcgi are now almost instant with mongrel. Excellent!
However, I've been trying to decide how best to utilise my equipment most
This is my first post, so I'm not sure if it's been asked before, but
I can't find an answer anywhere.
If I have one rails application running on one processor, and mongrel is
multi-threaded, why should I have more than one mongrel running?
Everyone seems to agree on 3-5
http://rubyforge.org/pipermail/mongrel-users/2006-August/000930.html
//jarkko
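The usual answer, sketched below as a toy model (my own illustration, not Mongrel's source): Mongrel itself is multi-threaded, but the Rails dispatcher isn't thread-safe, so each process wraps dispatch in a mutex. Requests within one mongrel are therefore served one at a time, which is why people run several processes.

```ruby
require 'thread'

# Toy model of one mongrel process: many request threads arrive, but a
# guard mutex serializes the (non-thread-safe) Rails dispatch step.
GUARD = Mutex.new

def dispatch(id)
  GUARD.synchronize do
    sleep 0.01          # stand-in for Rails doing real work
    "response #{id}"
  end
end

threads = (1..5).map { |i| Thread.new { dispatch(i) } }
results = threads.map(&:value)
puts results.size        # prints 5; the responses were produced serially
```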
So what happens when the mongrel gets more than one request at a time?
When I run httperf against it, the mongrel process can serve up a large
number of rails pages before it gets bogged down.
And when it does
...
From: Philip Hallstrom [EMAIL PROTECTED]
Reply-To: mongrel-users@rubyforge.org
Date: Thu, 10 Aug 2006 12:56:28 -0500 (CDT)
To: Mongrel mongrel-users@rubyforge.org
Subject: Re: [Mongrel] Telling Apache Not to Send a Sub-Directory to the
Cluster
Howdy,
I am using Apache 2.2 and Mongrel
Just curious: if you really need the performance for static
files, why not use an asset_host? I haven't personally used it, nor
do I fully know what it does, but on a cursory, haphazard glance, it
appears you can serve up content from another web server (lighttpd,
apache, etc.) here