Hi Fernando,

I've been using Xen for almost a year now and we've been really happy with
it. We have >50 virtual servers running on three physical servers.

Isolating the servers from one another provides numerous benefits:

# One server consuming too much CPU or RAM will not adversely affect the
others

We don't want an app that chews up RAM to affect other services (like the
database). We allocate a certain amount of RAM to each slice and can change
the allocation when required.
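For the curious, the per-slice allocation is just a line in the domU config
file. A rough sketch (the filename, slice name, and sizes are made-up
examples, not our actual setup):

```
# /etc/xen/app1.cfg -- sketch of a Xen 3.x domU config (names illustrative)
name   = "app1"
memory = 512        # MB reserved for this slice
vcpus  = 1

# A running slice can be resized without a reboot (up to its maxmem):
#   xm mem-set app1 768
```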

Nightly backups chew up CPU but our backup slice doesn't affect the other
slices running on the same server as CPU is shared fairly*.
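The fair sharing comes from Xen's credit scheduler, and you can skew it per
slice if you ever need to. A command sketch (domain name hypothetical; 256
is the default weight):

```shell
# Give the backup slice half the default weight so it yields CPU
# to busier slices when they're both runnable (weights are relative):
xm sched-credit -d backup -w 128

# Inspect the current weight and cap:
xm sched-credit -d backup
```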

# We can manage disk more easily

Isolation of disk between virtual servers is a benefit as it reduces the
damage a full disk would cause (our monitoring alerts us well before that
happens though!).

# Security

We expose nginx to the world, but our app servers, database, etc are on
private IPs. This reduces our exposure to attack. Also, putting different
services on different (virtual) servers means that if one service has a
vulnerability it doesn't necessarily mean that the others will be
vulnerable. If an exploit is found for your mailserver you don't want that
to mean your database gets owned.

There are a few things that I've still not achieved:

# Migrating a live server from one physical server to another

This requires a shared filesystem. I couldn't get GFS working on Ubuntu. I'm
wary of NFS as I hear it can be unstable under high load. I'd love to have
it but I'm still waiting for the HOWTO to be written.

# Sharing a filesystem between slices

Same as above. This would be needed to allow separate app servers to handle
file uploads.

In response to your drawbacks:

- Xen is an absolute pain to setup

Once you know what you're doing it's pretty easy but it takes some learning.
Possibly not worth the time if you only have one app and don't need it to
scale.

- Setting up each VM is tedious

Check out xen-tools for creating the VMs, and check out deprec for
automating your setup.
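If it helps, xen-tools boils VM creation down to a single command. Roughly
(hostname, IP, distribution, and sizes here are just example values):

```shell
# Create a new domU with xen-tools (all values illustrative):
xen-create-image --hostname=app2 \
  --size=4Gb --memory=512Mb --swap=512Mb \
  --ip=10.0.0.12 --dist=hardy
```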

- Nginx can't talk to Thin over Unix sockets, and moreover you need to
setup NFS to share static files between both

nginx talks to mongrel over HTTP. I don't know anything about Thin.
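For reference, the nginx-to-mongrel wiring is plain HTTP proxying. A
trimmed sketch (the IPs, port, and server_name are made-up examples):

```
# nginx vhost sketch -- only the nginx slice has a public IP
upstream mongrels {
    server 10.0.0.11:8000;   # app slice 1, private IP
    server 10.0.0.12:8000;   # app slice 2, private IP
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://mongrels;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```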

- you must setup the DB to be remotely accessible, and cannot use Unix
socket for communication

You can make it remotely accessible on a private subnet that nginx can
access. I can't see any problem with this.
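With PostgreSQL that's two small changes: listen on the private interface
and allow the app slices' subnet. A sketch (addresses, database, and user
names are examples):

```
# postgresql.conf -- bind to the private IP only (example address):
listen_addresses = '10.0.0.20'

# pg_hba.conf -- allow the app subnet with password auth:
# TYPE  DATABASE  USER   CIDR-ADDRESS   METHOD
host    myapp     rails  10.0.0.0/24    md5
```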

- you have to manage keeping each VM updated

When you automate your processes, updating 10 servers is no more work than
updating 1. Plus you have a record of what you've done and it's repeatable.
Putting the processes in place requires time and thought though.
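By "automate" I mean even something as simple as a loop over your slices.
A minimal sketch, assuming hostnames slice1..slice3 and passwordless SSH
(both assumptions; the actual ssh line is commented out here so the sketch
is harmless to run):

```shell
#!/bin/sh
# Run the same update on every slice. Host names are examples.
for host in slice1 slice2 slice3; do
  echo "updating $host"
  # ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade'
done
```

deprec and Capistrano give you the same idea with less typing.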

- actually upgrading the server is not easy as moving the VMs on another
box, there will still be a good downtime.

Yep, putting more RAM into the physical server requires me to schedule
downtime for all the slices. I was sold on the "live migration" but it's
never become a reality for me (yet). Still, it's no worse than without
virtualization. I've learnt to fill it with RAM when we buy it. :-)

Xen is cool. It can be useful. It may be that your needs are not currently
great enough to justify the investment of time getting to know it though.

I'd love to hear from anyone who's got live migration working with Xen on
Ubuntu or Debian.

- Mike

* Yes, you can use 'nice' on a single server to manage CPU usage.

On Wed, Oct 22, 2008 at 9:44 AM, Fernando Perez <
[EMAIL PROTECTED]> wrote:

>
> Hi,
>
> Following Ezra's advice in his book "rails deployment", I have decided
> to go through all the pain of setting up a server virtualized with Xen,
> and separate each layer of the application in its own VM.
>
> So I have 4 VMs: Nginx, Thin, PostgreSQL and one for email
>
> The advantages are:
> - if a VM crashes, the others are still alive
> - I can setup a testing VM, mess things up, compile, break stuff without
> any fear
> - I can upgrade the server quickly by moving the VMs on another box
>
> The drawbacks:
> - Xen is an absolute pain to setup
> - Setting up each VM is tedious
> - Nginx can't talk to Thin over Unix sockets, and moreover you need to
> setup NFS to share static files between both
> - you must setup the DB to be remotely accessible, and cannot use Unix
> socket for communication
> - you have to manage keeping each VM updated
> - actually upgrading the server is not easy as moving the VMs on another
> box, there will still be a good downtime.
>
> Each time you have more steps to do compared to a single server with all
> the software running directly on a single OS.
>
> Well separating each VM like I did is I find a bad idea, everything is
> just more painful.
>
> How do you setup your app? Single server VS multiple?
> --
> Posted via http://www.ruby-forum.com/.
>
> >
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Deploying Rails" group.
To post to this group, send email to rubyonrails-deployment@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/rubyonrails-deployment?hl=en
-~----------~----~----~----~------~----~------~--~---
