Thanks for the quick reply; ironically, I was just reading your replies
in this thread:
http://rubyforge.org/forum/forum.php?thread_id=7056&forum_id=5450

A bunch of services it is then! :)
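For the archives, here's roughly what I have in mind, going by the win32 docs
Luis links below (the service names, app path and ports are just my own
placeholders; one service per port, so ten in my case):

  mongrel_rails service::install -N stream_8000 -c c:\ruby\apps\stream -p 8000 -e production
  mongrel_rails service::install -N stream_8001 -c c:\ruby\apps\stream -p 8001 -e production
  rem ...and so on for each port
  rem set each service to start automatically, then start it (or reboot)
  sc config stream_8000 start= auto
  net start stream_8000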

On 9 Feb, 16:31, "Luis Lavena" <[EMAIL PROTECTED]> wrote:
> On 2/9/07, David Backeus <[EMAIL PROTECTED]> wrote:
>
> > Thanks for all your input.
>
> > Going to attempt using Pen.
>
> > Right now having problems starting the mongrel cluster. Here's some
> > output...
>
> > C:\ruby\apps\STREAM~1>mongrel_rails cluster::start --verbose
> > Starting 10 Mongrel servers...
> > mongrel_rails start -d -e production -p 8000 -P log/mongrel.8000.pid
> > c:/ruby/lib/ruby/gems/1.8/gems/mongrel_cluster-0.2.1/lib/mongrel_cluster/init.rb:53:in
> > ``': Exec format error - mongrel_rails start -d -e production -p 8000 -P log/mongrel.8000.pid (Errno::ENOEXEC)
>
> The problem is that Win32 lacks fork (daemonizing), which mongrel uses
> with the -d parameter to detach from the console and keep running.
>
> No mongrel_cluster for win32 (yet).
>
> > Don't get what the problem is here. I can copy/paste the commandline
> > (mongrel_rails start -d -e production -p 8000 -P log/mongrel.8000.pid)
> > and run it just fine.
>
> The workaround is to create multiple mongrel_service definitions and set
> them to start automatically.
>
> Please refer to the Mongrel win32-specific docs at
> http://mongrel.rubyforge.org/docs/win32.html
>
> --
> Luis Lavena
> Multimedia systems
> -
> Leaders are made, they are not born. They are made by hard effort,
> which is the price which all of us must pay to achieve any goal that
> is worthwhile.
> Vince Lombardi

