Erick,

bq: We want the hits on solr servers to be distributed

True, this happens automatically in SolrCloud, but a simple load
balancer in front of master/slave does the same thing.

Midas: In the case of a SolrCloud architecture, do we not need a load balancer
at all?
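For context, my understanding is that a ZooKeeper-aware SolrJ client spreads
queries across the cluster on its own, which is why an external load balancer
is usually unnecessary for SolrJ traffic. A minimal sketch (untested; the
ZooKeeper hosts and the collection name below are placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class CloudQueryExample {
        public static void main(String[] args) throws Exception {
            // Placeholder ZooKeeper ensemble; use your own zkHost string.
            String zkHost = "zk1:2181,zk2:2181,zk3:2181";
            try (CloudSolrClient client = new CloudSolrClient(zkHost)) { // 5.x-era constructor
                client.setDefaultCollection("mycollection"); // placeholder collection name

                // The client watches live cluster state in ZooKeeper and
                // distributes requests across the available replicas itself.
                QueryResponse rsp = client.query(new SolrQuery("*:*"));
                System.out.println("hits: " + rsp.getResults().getNumFound());
            }
        }
    }

Clients that talk plain HTTP and are not ZooKeeper-aware would presumably still
need a load balancer or a list of nodes to round-robin over.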

On Thu, Feb 11, 2016 at 11:42 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> bq: We want the hits on solr servers to be distributed
>
> True, this happens automatically in SolrCloud, but a simple load
> balancer in front of master/slave does the same thing.
>
> bq: what if the master node fails? what should our failover strategy be?
>
> This is indeed one of the advantages of SolrCloud: you don't have
> to worry about this any more.
>
> Another benefit (and you haven't touched on whether this matters)
> is that in SolrCloud you do not have the latency of polling and
> replicating from master to slave; in other words, it supports Near Real
> Time (NRT) search.
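(For reference, the near-real-time visibility Erick mentions is controlled by
commit settings; a sketch of the relevant solrconfig.xml block, with
placeholder intervals, is below.)

    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- Hard commit: flush to stable storage without opening a new searcher. -->
      <autoCommit>
        <maxTime>60000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
      <!-- Soft commit: make newly indexed docs searchable quickly (NRT). -->
      <autoSoftCommit>
        <maxTime>2000</maxTime>
      </autoSoftCommit>
    </updateHandler>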
>
> This comes with some additional complexity, however. If you have
> your master node failing often enough to be a problem, you have
> other issues ;)...
>
> And the recovery strategy if the master fails is straightforward:
> 1> pick one of the slaves to be the new master.
> 2> update the other nodes to point to the new master.
> 3> re-index to the new master any docs that were sent to the old master but
> had not yet been replicated when it failed.
>
> You can use system properties so you don't even have to manually edit all of
> the solrconfig files; just supply different -D parameters on startup (see
> the sketch below).
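A sketch of what that could look like (the property name masterUrl and the URLs
below are placeholders, not anything from this thread). Each slave's
solrconfig.xml reads the master location from a system property:

    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="slave">
        <!-- Resolved from -DmasterUrl=... at startup; the value after the
             colon is only a fallback used when the property is not set. -->
        <str name="masterUrl">${masterUrl:http://oldmaster:8983/solr/core1}</str>
        <str name="pollInterval">00:00:60</str>
      </lst>
    </requestHandler>

After promoting a slave, the remaining slaves can then be restarted to point at
the new master without editing any files, e.g.:

    bin/solr restart -p 8983 -DmasterUrl=http://newmaster:8983/solr/core1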
>
> Best,
> Erick
>
> On Wed, Feb 10, 2016 at 10:39 PM, kshitij tyagi
> <kshitij.shopcl...@gmail.com> wrote:
> > @Jack
> >
> > Currently we have around 5,500,000 docs.
> >
> > It's not about load on one node; we see load on different nodes at
> > different times, as our traffic is huge: around 60k users at any given
> > point in time.
> >
> > We want the hits on the Solr servers to be distributed, so we are planning
> > to move to SolrCloud, as it would also be fault tolerant.
> >
> >
> >
> > On Thu, Feb 11, 2016 at 11:10 AM, Midas A <test.mi...@gmail.com> wrote:
> >
> >> Hi,
> >> What if the master node fails? What should our failover strategy be?
> >>
> >> On Wed, Feb 10, 2016 at 9:12 PM, Jack Krupansky <
> jack.krupan...@gmail.com>
> >> wrote:
> >>
> >> > What exactly is your motivation? I mean, the primary benefit of
> SolrCloud
> >> > is better support for sharding, and you have only a single shard. If
> you
> >> > have no need for sharding and your master-slave replicated Solr has
> been
> >> > working fine, then stick with it. If only one machine is having a load
> >> > problem, then that one node should be replaced. There are indeed
> plenty
> >> of
> >> > good reasons to prefer SolrCloud over traditional master-slave
> >> replication,
> >> > but so far you haven't touched on any of them.
> >> >
> >> > How much data (number of documents) do you have?
> >> >
> >> > What is your typical query latency?
> >> >
> >> >
> >> > -- Jack Krupansky
> >> >
> >> > On Wed, Feb 10, 2016 at 2:15 AM, kshitij tyagi <
> >> > kshitij.shopcl...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > We are currently using Solr 5.2 and I need to move to a SolrCloud
> >> > > architecture.
> >> > >
> >> > > As of now we are using 5 machines:
> >> > >
> >> > > 1. I am using 1 master where we are indexing our data.
> >> > > 2. I replicate my data to the other machines.
> >> > >
> >> > > One machine or another keeps showing high load, so I am planning to
> >> > > move to SolrCloud.
> >> > >
> >> > > I need help on the following:
> >> > >
> >> > > 1. What should my architecture be with 5 machines (ZooKeeper,
> >> > > shards, cores)?
> >> > >
> >> > > 2. How do I add a node?
> >> > >
> >> > > 3. What are the exact steps/process I need to follow in order to
> >> > > change to SolrCloud?
> >> > >
> >> > > 4. How will indexing work in SolrCloud? As of now I am using a MySQL
> >> > > query to get the data on the master and then index it (how do I need
> >> > > to change this in the case of SolrCloud?).
> >> > >
> >> > > Regards,
> >> > > Kshitij
> >> > >
> >> >
> >>
>
