Hi there,
I am currently working on an architecture for Archiva High Availability.
In our projects we have about 120 developers (and growing) and a number of
CI systems in operation.
What we need is to eliminate single points of failure as far as possible
(a server can crash, or be shut down on purpose for maintenance).
Load balancing would also be beneficial.
In this (very large) company it's a "political" matter to establish
open-source-based build-management tools, so we are under pressure to
guarantee availability.

My idea looks like this:
- 2 Linux-based servers, each running an Archiva instance
- a single Oracle RAC DB instance shared by both Archiva instances (already running)
- a single Apache httpd as load balancer (SPOF)
- a single NAS filesystem mounted by both Archiva instances via NFS

(I would like to avoid running two completely independent
Archiva infrastructures synchronized via rsync; there would always be
some time lag.)

The tricky part is configuring one of the Archiva instances to do
read operations ONLY, as I suppose various problems will arise if two
Archiva instances deploy artifacts concurrently.

So the idea is to configure the httpd balancer to direct any requests that
involve write operations (i.e. deploy goals) to only one of the Archiva
servers, while read requests are directed to either server.
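Just to make the routing idea concrete, here is a rough sketch of what such an httpd configuration could look like, using mod_proxy, mod_proxy_balancer and mod_rewrite. The host names (archiva1/archiva2), port, and context path are placeholders, and the set of "write" HTTP methods (deploys typically arrive as PUT, or as WebDAV methods) would need to be verified against the actual deploy traffic:

```apache
# Load-balanced pool for read requests (both Archiva instances).
<Proxy "balancer://archiva-read">
    BalancerMember "http://archiva1.example.com:8080"
    BalancerMember "http://archiva2.example.com:8080"
</Proxy>

# Send all write operations to archiva1 only.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^(PUT|POST|DELETE|MKCOL)$
RewriteRule ^/archiva/(.*)$ http://archiva1.example.com:8080/archiva/$1 [P,L]

# Everything else (GET/HEAD reads) is balanced across both members.
ProxyPass        "/archiva" "balancer://archiva-read"
ProxyPassReverse "/archiva" "http://archiva1.example.com:8080/archiva"
ProxyPassReverse "/archiva" "http://archiva2.example.com:8080/archiva"
```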

Can anybody give input regarding possible issues?
Alternative HA strategies?



-- 
View this message in context: 
http://www.nabble.com/Archiva-HA-tp22306970p22306970.html
Sent from the archiva-dev mailing list archive at Nabble.com.
