Re: [Mongrel] Mongrel Cluster init.d problems
On 11/30/06, Kyle Kochis [EMAIL PROTECTED] wrote:

> I have installed mongrel and mongrel_cluster fine and have them running
> great with my app. I tried using the init.d script provided with
> mongrel_cluster 2.1 to start it up on boot, but it doesn't.
> mongrel_cluster_ctl works fine for me by hand, and so does
> /etc/init.d/mongrel_cluster start. I am on Debian, so I installed the
> service links with sudo update-rc.d mongrel_cluster defaults. And yes, I
> chmod'ed it. I checked and double-checked /etc/mongrel_cluster and my yml
> file and everything I can think of. Finally, out of frustration, I hand
> linked the system boot stuff like this:
>
>   sudo ln -s /etc/init.d/mongrel_cluster /etc/rc0.d/S84mongrel_cluster
>   sudo ln -s /etc/init.d/mongrel_cluster /etc/rc3.d/S84mongrel_cluster
>   sudo ln -s /etc/init.d/mongrel_cluster /etc/rc6.d/S84mongrel_cluster
>
> And still no luck. Again, if I start them up by hand, everything works
> like it should.
>
> Thanks for the help,
> Kyle

Without an appropriate error message, only god can help you.

--
There was only one Road; that it was like a great river: its springs were at every doorstep, and every path was its tributary.

___
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
Re: [Mongrel] Mongrel Cluster init.d problems
Make sure the paths needed are available at boot. I had a similar problem on FreeBSD and was helped out by Andrew Bennett. Here's what he had to say:

Hey Jamie,

Yeah, I found that out after using my own script for a while. It is caused by the way mongrel_cluster starts each of the mongrel servers. It actually calls the command line mongrel_rails start ... on each of the specified directories. However, when FreeBSD starts up, the PATH hasn't been set correctly yet, so calling mongrel_rails results in a "not found" error because /usr/local/bin hasn't been added to the PATH yet. I just cheated and added the following to /usr/local/etc/rc.d/mongrel_cluster:

  ...
  stop_cmd=stop_cmd
  status_cmd=status_cmd
  PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/usr/X11R6/bin
  restart_cmd() {
  ...

I added the surrounding lines so you can tell where I'm talking about. I'm not sure if there is a better way to do this, but it works for now.

On Nov 30, 2006, at 5:04 AM, hemant wrote:

> On 11/30/06, Kyle Kochis [EMAIL PROTECTED] wrote:
>> I have installed mongrel and mongrel_cluster fine and have them running
>> great with my app. I tried using the init.d script provided with
>> mongrel_cluster 2.1 to start it up on boot, but it doesn't. [...]
>> Again, if I start them up by hand, everything works like it should.
>> Thanks for the help, Kyle
>
> Without an appropriate error message, only god can help you.
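The FreeBSD fix above generalizes to Debian as well: init scripts run before login shells set PATH, so any init script that shells out to mongrel_rails should pin PATH itself. A minimal sketch of the relevant init-script excerpt; the directory list is illustrative, so adjust it to wherever your gem binaries actually live:

```shell
#!/bin/sh
# Illustrative init-script excerpt: pin PATH before calling mongrel_rails,
# since the boot environment may not include /usr/local/bin yet.
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
export PATH

# Fail loudly (into the boot log) if the binary still cannot be found.
command -v mongrel_rails >/dev/null 2>&1 || {
    echo "mongrel_cluster init: mongrel_rails not found on PATH: $PATH" >&2
    exit 1
}
```

Logging a clear error like this also addresses hemant's point: with an actual message in the boot log, the problem stops being guesswork.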
[Mongrel] [ANN] Mongrel Service 0.3.1, basic process monitoring
Hello Folks,

I tried to create a valid gem repository under Windows without luck (rubygems shows problems with CRLF/LF line endings between *nix and Windows). Anyway, I've uploaded a new gem to my webpage:

http://www.mmediasys.com/releases/mongrel_service-0.3.1-mswin32.gem

What's new? Besides the things added in the previous announcement [1], this version adds basic process monitoring. What does that mean? If for some reason the mongrel process (also a ruby process) suddenly dies (maybe a runtime error), the service will automatically create a new process. So, at intervals of 5 seconds, if your process dies, it will be recreated. That should reduce the uptime problems of your applications, right?

Later this weekend I will add the cluster capability, so Windows users will no longer envy users running *nix or OSX ;-)

Oh, BTW, if you don't want to add a service in Windows, you can simulate it using the console command:

  mongrel_service console single -c c:/path/to/my/rails/app -p 4000 -e production

Please, all the folks on Windows environments, try the latest build of the gem and report problems (if any are found).

Regards,
--
Luis Lavena
Multimedia systems
- Leaders are made, they are not born. They are made by hard effort, which is the price which all of us must pay to achieve any goal that is worthwhile. Vince Lombardi

[1] http://rubyforge.org/pipermail/mongrel-users/2006-November/002186.html
[Mongrel] stability
Hi,

Are there any recommendations as to what is currently the most stable setup for mongrel + apache? I read somewhere (probably here) that you should avoid using PStore for sessions. Are there any more such recommendations? Also, what is currently the safest version of mongrel to use in production?

Jeroen
[Mongrel] mongrel served from a subdirectory
Hello,

I have set up mongrel successfully a few times now, but each time I have used the apache 2.2 and mod_proxy setup described on the mongrel site. However, I need to set up another app in a subdirectory: example.com/docserver instead of docserver.example.com. I have tried just adding something like:

  ProxyPass /docserver/ http://example.com:3001/
  ProxyPassReverse /docserver/ http://example.com:3001
  ProxyPreserveHost on

The problem seems to be twofold. First, the css, image and javascript links are all broken. Second, all the generated links go to /controller/action instead of to /docserver/controller/action. I tried putting RAILS_RELATIVE_URL_ROOT=/docserver/ in environment.rb, but that didn't help. Any ideas?

Thanks,
Michael Fairchild
Re: [Mongrel] mongrel served from a subdirectory
> Hello, I have set up mongrel successfully a few times now, but each time
> I have used the apache 2.2 and mod_proxy setup described on the mongrel
> site. However, I need to set up another app in a subdirectory:
> example.com/docserver instead of docserver.example.com. I have tried just
> adding something like:
>
>   ProxyPass /docserver/ http://example.com:3001/
>   ProxyPassReverse /docserver/ http://example.com:3001
>   ProxyPreserveHost on
>
> The problem seems to be twofold. First, the css, image and javascript
> links are all broken. Second, all the generated links go to
> /controller/action instead of to /docserver/controller/action. I tried
> putting RAILS_RELATIVE_URL_ROOT=/docserver/ in environment.rb, but that
> didn't help.

I haven't done it, but my memory is that there is a --prefix option to mongrel that you need to set as well... Check the mongrel site to see if there are any docs on this...
Re: [Mongrel] Sharing Mongrel Log Files on SAN Storage
Ezra Zygmuntowicz wrote:

> On Nov 30, 2006, at 10:36 AM, Steven Hansen wrote:
>> Hi, my department has a SAN and I'm wondering if it is safe to have all
>> of my mongrels share log files stored on the SAN. More specifically, we
>> have 4 machines, each running a couple of mongrel processes. Right now
>> each machine has its own set of log files for mongrel. However, I am
>> thinking that it might be better to put the log files on the SAN and
>> let all the mongrels on all the different machines share 1 set of log
>> files. I'd be interested to get some feedback from other mongrelians on
>> whether this is a good idea or a bad idea and why.
>> Thanks, Steven
>
> Hey Steven-
>
> What filesystem are you using on the SAN? We have many Xen nodes running
> rails apps that share a GFS filesystem partition off the SAN. With GFS
> you can make hostname-specific symlinks so that even though the nodes of
> the cluster are all still writing their logs to logs/production.log,
> each node is actually writing to a symlink based on hostname.
>
> Let me illustrate. Say we have 2 Xen instances for one rails app. They
> both share a GFS partition mounted off the SAN at /data, so anything
> under /data is shared between these nodes. With a capistrano setup you
> make sure to deploy your apps to the /data partition. So to set up the
> hostname logging on both slices you would log into each one and run the
> following commands:
>
>   app is here:   /data/railsapp/current
>   logs are here: /data/railsapp/shared/log
>
> We want both nodes to have their own log files but still have them both
> writing to the same log file path. So we make hostname symlinks like
> this:
>
>   $ cd /data/railsapp/shared
>   $ rm -rf log
>   $ mkdir `hostname`
>   $ ln -s `hostname` log
>
> You do that on all nodes that share the same /data partition. This gives
> a very dramatic increase in performance over all nodes writing to the
> same log file. Getting a file lock on shared storage is more troublesome
> than on non-shared storage. So with multiple nodes all writing to the
> same log file, each node has to get a lock before it can write, and the
> other nodes wait their turn for the lock. Using this `hostname` trick,
> each node has its own log file that no other nodes write to, but you
> don't have to change the way rails or capistrano sets up the log files.
> Each node writes its logs to /data/railsapp/shared/log/production.log,
> but in reality they are all writing through a symlink to a directory
> with the same name as the node's hostname.
>
> We tried NFS and a few other filesystems and they all sucked. We ended
> up using the Red Hat clustering suite. Even though it's a Red Hat suite
> it runs on any linux; we use it with gentoo. It includes GFS and the
> cluster fencing stuff that allows nodes to dynamically fence off any
> other nodes that are misbehaving. GFS works really well for shared
> storage off a SAN.
>
> I would definitely recommend not having all your nodes write their logs
> to the same file on the SAN. See if you can find a way to do like GFS
> does, or as a last resort you may have to do some hacking to get rails
> to write its log files with the pid or some other unique identifier so
> that each node writes to its own log files. Doing this gave us a
> whopping 25% better performance when compared with all nodes sharing
> the same log file.
>
> Cheers-
> -- Ezra Zygmuntowicz
> -- Lead Rails Evangelist
> -- [EMAIL PROTECTED]
> -- Engine Yard, Serious Rails Hosting
> -- (866) 518-YARD (9273)

Ezra and Zed,

Thanks for responding. I feel a little foolish for even asking the question now; better safe than sorry though :-P I guess originally I was thinking that if I wanted to check logs for traffic or something, then having it all in one log would be easier. But as Zed pointed out, this would make tracking down problems with an offending mongrel instance much more difficult. Not to mention the issue of multiple mongrel processes fighting for the lock on the log files.

Anyway, I'm not sure what type of file system we are using, as I really don't have much contact with the guys who are in charge of our machines. Their setup seems similar to yours, Ezra. When I ssh to a machine, I end up on 1 of (n) machines. My entire account's file system--except a few directories--is stored on the SAN (at least this is how I understand it). The non-SAN directories are symbolic links to the specific machine that I'm logged on to. So in my situation the directory /apps/local/#{account_name} will always point to the directory of a specific machine. I think I'll be ok and have no problem giving each mongrel access to logs on its specific machine. In case you're interested, below is an overly detailed diagram of
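Ezra's per-host symlink trick is easy to rehearse in a scratch directory before touching shared storage. A sketch, using throwaway paths rather than the real /data partition:

```shell
# Rehearse the hostname-symlink trick in a temp dir: anything the app
# writes under log/ actually lands in a directory named after this host.
tmp=$(mktemp -d)
cd "$tmp"
mkdir "$(hostname)"
ln -s "$(hostname)" log

# Simulate rails writing its usual log path...
touch log/production.log

# ...and confirm the file really lives in the per-host directory.
ls "$(hostname)"    # prints: production.log
```

On a GFS mount, running the same four commands on each node gives every host its own production.log while the rails configuration keeps pointing at the one shared path.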
[Mongrel] Mongrel 0.3.18, rails 1.1.6 and cookies
I've run into an issue with my rails application being unable to properly set cookies on Mongrel 0.3.18. If I run the simplified code below in Mongrel 3.14.4, both cookies are properly sent to and saved by the browser. With the same code in 0.3.18, only the auth_token cookie is created (if I switch them, only the userid; it will only properly create the first cookie in the list). I'm at home right now, but when I get back into work tomorrow, I'll try to install 0.3.15 and see if the problem exists there as well. Thanks.

  def login
    cookies[:auth_token] = { :value => auth, :expires => 10.years.from_now.utc }
    cookies[:userid] = { :value => 100, :expires => 10.years.from_now.utc }
  end
Re: [Mongrel] mongrel served from a subdirectory
Thanks for that, Philip. I missed that option. I tried starting up with --prefix=/docserver and leaving the apache stuff the same. When I did that I couldn't even get any of it. I think there might be something I need to set in environment.rb, but I'm not sure. I'll keep trying.

~michael

On 11/30/06, Philip Hallstrom [EMAIL PROTECTED] wrote:
> > Hello, I have set up mongrel successfully a few times now, but each
> > time I have used the apache 2.2 and mod_proxy setup described on the
> > mongrel site. However, I need to set up another app in a subdirectory:
> > example.com/docserver instead of docserver.example.com. [...]
>
> I haven't done it, but my memory is that there is a --prefix option to
> mongrel that you need to set as well... Check the mongrel site to see if
> there are any docs on this...
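For the record, the pieces that have to agree are the mongrel prefix and the Apache proxy path: once mongrel itself serves the app under /docserver, the proxy should forward the prefix through instead of stripping it. A hedged sketch of how they might line up; the port, path, and host are illustrative, and I have not verified this exact combination:

```shell
# Start mongrel so it serves the app under /docserver itself
# (illustrative port and prefix).
mongrel_rails start -d -e production -p 3001 --prefix /docserver

# Then the Apache side should pass the prefix through unchanged, e.g.:
#   ProxyPass        /docserver http://127.0.0.1:3001/docserver
#   ProxyPassReverse /docserver http://127.0.0.1:3001/docserver
```

The earlier config mapped /docserver/ to the backend root (a trailing "/" on the target), which strips the prefix before mongrel ever sees it; that would explain the generated links coming out as /controller/action.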
Re: [Mongrel] Sharing Mongrel Log Files on SAN Storage
Are you suggesting that you have a separate mongrel.8001.log, mongrel.8002.log like the pid files, or is there a way to somehow log like production-8001.log, production-8002.log through a mongrel setting? Having separate log files for the mongrel.log seems like overkill to me, since when my app is running, it's not generating very many entries there... If you can create rails production logs named with the pid, could someone point me in a direction on how to set that up? Thanks.

> I should probably just make this the norm, but I advocate that people
> set up pid and log files separately for each port they have, and
> probably each machine:port combination. This makes it easier to isolate
> one mongrel's stuff and to track what is happening to a particular
> instance. If you do this it also means you can share it all you like.
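Zed's per-port advice in command form: a sketch assuming mongrels are started by hand, one per port. The file names are just a convention, not anything mongrel requires:

```shell
# One pid file and one mongrel log per port, so a misbehaving
# instance can be isolated by name (illustrative file names).
mongrel_rails start -d -e production -p 8001 \
    -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
mongrel_rails start -d -e production -p 8002 \
    -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log
```

This only separates mongrel's own log; the rails production.log is configured inside the app, so per-port rails logs would need a change in the environment config rather than a mongrel flag.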
[Mongrel] deploying mongrel with capistrano
I'm wondering if anyone has built a Capistrano task that will deploy mongrel, either standalone or along with a rails app. I've been looking at possibilities for doing this, but I'm still new to using both, having switched from FCGI + Lighty and deploying by hand.

Thanks,
Curtis
Re: [Mongrel] Mongrel 0.3.18, rails 1.1.6 and cookies
On Thu, 30 Nov 2006 19:14:17 -0600, Joey Geiger [EMAIL PROTECTED] wrote:

> I've run into an issue with my rails application being unable to
> properly set cookies on Mongrel 0.3.18. [...] With the same code in
> 0.3.18, only the auth_token cookie is created (if I switch them, only
> the userid; it will only properly create the first cookie in the list).
>
>   def login
>     cookies[:auth_token] = { :value => auth, :expires => 10.years.from_now.utc }
>     cookies[:userid] = { :value => 100, :expires => 10.years.from_now.utc }
>   end

Aha! You're the guy! No, that's because I created a change which eliminated duplicate headers, but we weren't quite sure which headers should be allowed to duplicate. I'll push a new release out that includes an exception list with the cookie headers.

--
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
http://www.lingr.com/room/3yXhqKbfPy8 -- Come get help.
Re: [Mongrel] stability
> Also, try to avoid RMagick processing inside rails. People love their
> file_column, but RMagick is a fat nasty pig that cripples many sites
> without warning. The optimal setup is to use something like BackgrounDRb
> or a plain DRb server and use a batch processing method.

I don't use file_column, but I do use RMagick and haven't had any problems (and its author has been helpful with support). But I also only allow admins to use it. 43Things resizes images on the fly -- not sure if they use RMagick, but with all their users and all those images, it's gotta bump up the cpu and memory requirements quite a bit.

Joe Ruby MUDCRAP-CE
Believe nothing, test everything!