Erick,

yes, currently I have 6 shards, which accept both writes and reads. Sometimes I
delete data from all 6 and rebalance them, filling them up so that each holds
approximately the same amount of data. So all 6 are 'in motion', so to speak.
I would like writes to happen more often than they do now, but after a write
the querying slows down, so I have reduced writing to every n hours.
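
For reference, the static warming I mentioned in my first mail (quoted below)
is roughly this kind of listener setup in the <query> section of
solrconfig.xml; the facet field name here is only a placeholder:

    <!-- warm each newly opened searcher with one facet query -->
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">*:*</str>
          <str name="facet">true</str>
          <str name="facet.field">category</str> <!-- placeholder field -->
        </lst>
      </arr>
    </listener>
    <!-- same query for the very first searcher after startup -->
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">*:*</str>
          <str name="facet">true</str>
          <str name="facet.field">category</str> <!-- placeholder field -->
        </lst>
      </arr>
    </listener>

As I understand it, after each commit the newSearcher queries run before the
new searcher is registered, and that warm-up is where the slowdown around
writes comes from.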

So I've been thinking it might make sense to add 6 slave shards. What I don't
know is whether the slave shards would also suffer after a replication, with
querying taking some time there as well. I had a master/slave setup before, but
without sharding: just one big master and one slave. After a replication it
took a couple of minutes to get back to proper performance.
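
To make that concrete, what I have in mind per shard is the standard
ReplicationHandler setup, roughly like this sketch (host name, port and poll
interval are only placeholders):

    <!-- on the shard's master, in solrconfig.xml -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="master">
        <str name="replicateAfter">commit</str>
        <str name="confFiles">schema.xml,stopwords.txt</str>
      </lst>
    </requestHandler>

    <!-- on the corresponding slave -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="slave">
        <str name="masterUrl">http://shard1-master:8983/solr/replication</str>
        <str name="pollInterval">00:00:60</str>
      </lst>
    </requestHandler>

My concern is whether each slave, right after it has pulled a new index through
this handler, goes through the same warm-up phase I saw on my old single
master/slave pair.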

Daniel

On Fri, Jan 20, 2012 at 3:05 AM, Erick Erickson <erickerick...@gmail.com> wrote:

> It's generally recommended that you do the indexing on the master
> and searches on the slaves. In that case, firstSearcher and
> newSearcher sections are irrelevant on the master and shouldn't
> be there.
>
> I don't understand why you would need 5 more machines; are you
> sharding?
>
> Best
> Erick
>
> On Thu, Jan 19, 2012 at 7:25 AM, Daniel Brügge
> <daniel.brue...@googlemail.com> wrote:
> > Hi,
> >
> > I am currently running multiple Solr instances and often write data to
> > them. I also query them. Both work fine right now, because I don't have
> > that many search requests. For querying I noticed that the firstSearcher
> > and newSearcher static warming with one facet query really brings a
> > performance boost. But the downside is that writing is now really slow.
> >
> > Does it make sense at all to place firstSearcher and newSearcher on a
> > Solr server which gets lots of writes? Or is the best strategy to
> > introduce some slave servers where these event listeners are configured,
> > and to keep them away from the master?
> >
> > The thing is that I would need 6 additional Solr slaves if I picked
> > this approach. :)
> >
> > What do you think?
> >
> > Thanks.
> > Daniel
>
