Hi,

Thanks for the reply.

Tony Howat wrote:
[email protected] asked :
In the final solution I want to be able to send the requests to a preferred server based on a header in the HTTP request, but from what I saw, that should be doable with pound.

The question I have is, can I do the "404 failover" with pound, or do I need to patch it? Is it easy to patch it for this use? Is this a good way to go about it, or are there better solutions for the above requirements?


The proper solution for this would be shared storage for your web nodes, or 
network partition mirroring using something like drbd.

Why is that the proper solution? There are a couple of issues I have with both of those approaches (and I did consider them before posting the question above):

1) The point of the exercise is redundancy at the disk level. That immediately disqualifies most shared file systems, because they're generally not redundant themselves (with some exceptions, of course, e.g., AFS, which is very complicated and management-intensive). Solutions such as hardware mirroring/RAID tend to live inside a single physical device, so if the device's mainboard fails your disks are gone (and if not dead, then at least inaccessible at that moment).

2) Disk replication solutions like drbd are not portable (i.e., I don't want to be tied to Linux), and they are also far more heavyweight, low-level, and complicated than I need. They support much richer semantics than I require: all I need is read-only access to complete images, i.e., HTTP GET. I don't need block-level replication, and I don't need advanced underlying filesystem/locking support.

An HTTP-level solution has several advantages that I can see:

1) The interface is standard HTTP, so I can use whatever operating system I want to implement it. I can put plain disks in the servers, use RAID, or hang racks of hard disks behind them; the interface to the application never changes.

2) It is simple. All I have to do is make sure files appear in the web space atomically (e.g., write to a temp file on the same partition, then rename). If a file is on either of the servers, it's available from the "cluster". After a crash or an offline period I can bring the servers back in sync with a bidirectional rsync. Normally I will write each file to both disks, but if one is offline a later rsync will catch it up, and in the meantime the "404 failover", combined with pound's regular failover, keeps the image available.

3) It is robust, and all the software involved is open source and well supported. Shared-disk solutions such as NFS aren't jokingly called "network failure system" for nothing; I've had servers hang because an NFS mount became unavailable, and that simply can't happen here. The rsync recovery is also simple and robust, unlike the filesystem inconsistencies you can get when recovering replicated block devices. Finally, this solution works "master-master", i.e., with both servers serving data at all times.
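To make point 2 concrete, here is a minimal sketch of the atomic publish step plus the resync (the web root, file names, and node names are made-up examples; the only real requirement is that the temp file and the final name live on the same filesystem):

```shell
#!/bin/sh
set -eu

# Hypothetical web root; in the real setup this would be the directory
# the pound backends serve images from.
WEBROOT=${WEBROOT:-/tmp/imagestore}
mkdir -p "$WEBROOT"

# 1) Write the full file to a temp name on the same partition...
TMP=$(mktemp "$WEBROOT/.incoming.XXXXXX")
printf 'image data' > "$TMP"

# 2) ...then rename it into place. rename(2) is atomic within one
#    filesystem, so clients either see the complete file or a 404,
#    never a half-written image.
mv "$TMP" "$WEBROOT/photo-0001.jpg"

# After a crash or offline period, resync the nodes in both directions
# (node name is a placeholder):
#   rsync -a --ignore-existing "$WEBROOT/" node2:"$WEBROOT/"
#   rsync -a --ignore-existing node2:"$WEBROOT/" "$WEBROOT/"
```

Because a half-uploaded file only ever exists under the dot-prefixed temp name, the web server never exposes a partial image.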

If there is a better way to achieve what I want, I'd certainly like to know. I did see something similar to what I describe (mogilefs), but its drawbacks (IMO) are that it needs a special client interface and is hard to back up (it's not a POSIX filesystem).
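For clarity, the "404 failover" I'm asking about looks roughly like this from a client's point of view (a sketch only, with hypothetical node names; the goal is to have pound do this retry server-side rather than the client). Pound already handles picking the preferred backend from a request header; the retry-on-404 step is the part in question:

```shell
#!/bin/sh

# Try the preferred node first; curl -f exits non-zero on HTTP errors
# such as 404, so a file missing on one node (e.g. not yet rsynced
# there) falls through to the next node. Node names are hypothetical.
fetch_image() {
    path=$1
    for node in node1.example.org node2.example.org; do
        if curl -sf "http://$node/$path"; then
            return 0
        fi
    done
    return 1   # missing on both nodes: a genuine 404
}
```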

Regards,
Sebastiaan

--

Tony


--
To unsubscribe send an email with subject unsubscribe to [email protected].
Please contact [email protected] for questions.


