Your setup with the master behind an LB VIP looks right.
I don't think replication in Solr was meant to be bidirectional.

 Otis
--
Sematext -- http://sematext.com/ -- Solr - Lucene - Nutch



----- Original Message ----
> From: Matthew Inger <mattin...@yahoo.com>
> To: solr-user@lucene.apache.org; r...@intelcompute.com
> Sent: Thu, January 7, 2010 10:45:20 AM
> Subject: Re: High Availability
> 
> I've tried having two servers set up to replicate each other, and it is
> not a pretty thing.  It seems that Solr doesn't really check whether the
> version # on the master is > the version # on the slave before deciding
> to replicate.  It only looks to see whether it's different.  As a result,
> what ends up happening is this:
> 
> 1.  Both servers at same revision, say revision 100
> 2.  Update Master 1 to revision 101
> 3.  Master 2 starts pull of revision 101
> 4.  Master 1 sees Master 2 has a different revision and starts pull of
> revision 100
> 
> See where it's going?  Eventually, both servers seem to end up back at
> revision 100, and my updates get lost.  My sequencing might be a little
> out of whack here, but nonetheless, having two servers set up as slaves
> to each other does not work properly.  I would think, though, that a
> small code change to check that the revision # has increased before
> pulling the file would solve the issue.
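> 
> Roughly, the check I have in mind is something like this (a Java sketch
> only -- the helper names are invented, not Solr's actual internals):
> 
>     // Replicate only when the master is strictly ahead, not merely
>     // different, so two peers can never ping-pong older revisions.
>     void maybeReplicate() {
>         long masterGen = fetchMasterIndexVersion();  // hypothetical helper
>         long localGen = readLocalIndexVersion();     // hypothetical helper
>         if (masterGen > localGen) {
>             fetchIndexFromMaster();  // hypothetical: perform the pull
>         }
>         // equal or older: skip this cycle instead of pulling
>     }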
> 
> In the meantime, my plan is to:
> 1.  Set up two index update servers as masters behind an F5 load balancer
> with a VIP in an active/passive configuration.
> 2.  Set up N search servers as slaves behind an F5 load balancer with a
> VIP in a round-robin configuration.  Replication would be from the
> masters' VIP instead of from any one particular master (see the sketch
> below).
> 3.  Have the index update servers run a handler that does delta updates
> every so often to keep both servers in sync with the database (I'm only
> indexing a complex database here, which doesn't lend itself well to SQL
> querying on the fly).
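> 
> The slaves would normally just poll the VIP via the masterUrl in their
> replication config, but a pull can also be forced over HTTP.  A rough
> sketch, assuming the 1.4 Java replication handler ("search1" and
> "master-vip" are placeholder host names):
> 
>     import java.net.HttpURLConnection;
>     import java.net.URL;
> 
>     public class ForcePull {
>         public static void main(String[] args) throws Exception {
>             // Ask a slave to pull the index, naming the masters' VIP
>             // explicitly rather than any one particular master.
>             URL url = new URL("http://search1:8983/solr/replication"
>                     + "?command=fetchindex"
>                     + "&masterUrl=http://master-vip:8983/solr/replication");
>             HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>             // 200 means the fetch was kicked off, not that it finished
>             System.out.println("fetchindex: HTTP " + conn.getResponseCode());
>             conn.disconnect();
>         }
>     }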
> 
> Ideally, I'd love to be able to force the master servers to update if
> either one of them switches from passive to active state, but I'm not
> sure how to accomplish that.
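> 
> If the failover can run a hook on the passive-to-active transition (or a
> monitor script on the box can detect it), one idea -- just a sketch,
> assuming the delta handler is DataImportHandler mounted at /dataimport --
> would be the same kind of one-off GET as above, aimed at the newly-active
> master:
> 
>     // fired on the box that just became active; same imports as above
>     URL url = new URL("http://localhost:8983/solr/dataimport"
>             + "?command=delta-import");
>     ((HttpURLConnection) url.openConnection()).getResponseCode();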
> 
> 
> ----
> mattin...@yahoo.com
> "Once you start down the dark path, forever will it
> dominate your destiny.  Consume you it will " - Yoda
> 
> 
> 
> ----- Original Message ----
> From: "r...@intelcompute.com" 
> To: solr-user@lucene.apache.org
> Sent: Mon, January 4, 2010 11:37:22 AM
> Subject: Re: High Availability
> 
> 
> Even when Master 1 is alive again, it shouldn't get the floating IP until
> Master 2 actually fails.
> 
> So you'd ideally want them replicating to each other, but since only one
> will be updated/live at a time, it shouldn't cause an issue with
> clobbering data (?).
> 
> Just a suggestion, though; I haven't done it myself on Solr, only with DB
> servers.
> 
> 
> 
> 
> On Mon 04/01/10 16:28 , Matthew Inger wrote:
> 
> > So, when the masters switch back, does that mean we have to force a
> > full delta update?
> > ----
> > "Once you start down the dark path, forever will it
> > dominate your destiny.  Consume you it will " - Yoda
> > ----- Original Message ----
> > From: "" 
> > To: 
> > Sent: Mon, January 4, 2010 11:17:40 AM
> > Subject: Re: High Availability
> > Have you looked into a basic floating IP setup?
> > Have the master also replicate to another hot-spare master.
> > Any downtime during an outage of the 'live' master would be minimal
> > as the hot-spare takes up the floating IP.
> > On Mon 04/01/10 16:13 , Matthew Inger  wrote:
> > > I'm kind of stuck and looking for suggestions for high availability
> > > options.  I've figured out without much trouble how to get the
> > > master-slave replication working.  This eliminates any single point
> > > of failure in the application's searching capability.
> > > I would set up a master which would create the index, and several
> > > slaves to act as the search servers, and put them behind a load
> > > balancer to distribute the requests.  This would ensure that if a
> > > slave node goes down, requests would continue to get serviced by the
> > > other nodes that are still up.
> > > The problem I have is that my particular application also has the
> > > capability to trigger index updates from the user interface.  This
> > > means that the master now becomes a single point of failure for the
> > > user interface.
> > > The basic idea of the app is that there are multiple Oracle
> > > instances contributing to a single document.  The volume and
> > > organization of the data (database links, normalization, etc.)
> > > prevent any sort of fast SQL querying of the documents.  The
> > > solution is to build a Lucene index (via Solr) and use that for
> > > searching.  When updates are made in the UI, we also send them
> > > directly to the Solr server (we don't want to wait some arbitrary
> > > interval for a delta query to run).
> > > 
> > > So you can see the problem here: if the master is down, sending the
> > > updates to the master Solr server will fail, causing an application
> > > exception.
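> > > 
> > > The update path is plain SolrJ, something like this sketch (host and
> > > field names are placeholders):
> > > 
> > >     import org.apache.solr.client.solrj.SolrServer;
> > >     import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
> > >     import org.apache.solr.common.SolrInputDocument;
> > > 
> > >     // "solr-master" is a placeholder host name
> > >     SolrServer solr =
> > >         new CommonsHttpSolrServer("http://solr-master:8983/solr");
> > >     SolrInputDocument doc = new SolrInputDocument();
> > >     doc.addField("id", "12345");       // placeholder fields
> > >     doc.addField("title", "example");
> > >     solr.add(doc);    // this call throws if the master is down
> > >     solr.commit();
> > > 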
> > > I have tried configuring multiple Solr servers which are both set up
> > > as masters and slaves to each other, but they keep clobbering each
> > > other's index updates and rolling back each other's delta updates.
> > > It seems that the replication doesn't take the generation # into
> > > account and check that the generation it's fetching is > the
> > > generation it already has before it applies it.
> > > I thought of maybe introducing a JMS queue to send my updates to and
> > > having the JMS message listener manually acknowledge the messages
> > > only after a successful application of the SolrJ API calls, but that
> > > seems kind of contrived, and is only a band-aid.
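> > > 
> > > The band-aid would look roughly like this (a sketch only -- the
> > > session would be created with Session.CLIENT_ACKNOWLEDGE, and
> > > toSolrDoc is a hypothetical mapping from the JMS payload):
> > > 
> > >     import javax.jms.Message;
> > >     import javax.jms.MessageListener;
> > >     import org.apache.solr.client.solrj.SolrServer;
> > >     import org.apache.solr.common.SolrInputDocument;
> > > 
> > >     public class SolrUpdateListener implements MessageListener {
> > >         private final SolrServer solr;  // wired up elsewhere
> > > 
> > >         public SolrUpdateListener(SolrServer solr) { this.solr = solr; }
> > > 
> > >         public void onMessage(Message msg) {
> > >             try {
> > >                 solr.add(toSolrDoc(msg));  // may throw if master is down
> > >                 solr.commit();
> > >                 msg.acknowledge();  // ack only after Solr accepted it
> > >             } catch (Exception e) {
> > >                 // no ack: the broker redelivers once the master is back
> > >             }
> > >         }
> > > 
> > >         private SolrInputDocument toSolrDoc(Message msg) {
> > >             // hypothetical mapping from the JMS payload to a Solr doc
> > >             return new SolrInputDocument();
> > >         }
> > >     }
> > > 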
> > > Does anyone have any suggestions?
> > > ----
> > > "Once you start down the dark path, forever will it
> > > dominate your destiny.  Consume you it will " - Yoda
> > > 
> > > 
