Thanks Dave. Setting up a proxy is a much better solution.

On Fri, Oct 8, 2021 at 6:35 PM Dave <hastings.recurs...@gmail.com> wrote:

> Yes. Put a proxy in front of the Solr instances on your server, and simply
> point SolrJ at that proxy. The proxy has auto-failover built in, so requests
> will instantly drop down the server list if one node fails to respond.
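
For reference, a proxy set up this way could look roughly like the following
HAProxy sketch (HAProxy is just one option, and the host names, port, and
health-check path are placeholders rather than a tested configuration):

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend solr_in
    bind *:8983
    default_backend solr_nodes

backend solr_nodes
    # /solr/admin/info/system is used only as a reachable endpoint for the
    # health check; adjust the path to whatever your deployment exposes.
    option httpchk GET /solr/admin/info/system
    server solr1 solr1.example.com:8983 check
    server solr2 solr2.example.com:8983 check

SolrJ then only ever sees the single proxy URL, and failover happens in the
proxy instead of in client code.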
>
> > On Oct 8, 2021, at 2:44 AM, HU Dong <itechb...@gmail.com> wrote:
> >
> > Hi,
> >
> > We're facing a similar situation. If "run multiple fully independent
> > clusters" is the recommended solution, is there a recommended way to do
> > disaster recovery at query time?
> >
> > In our production environment, we use SolrJ as the client. What comes to
> > mind is declaring two client instances, each configured against one
> > SolrCloud cluster. During search, try the first client and fall back to
> > the second if it fails. Is there a better way?
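
In case it helps anyone else, a minimal sketch of that two-client fallback,
assuming one CloudSolrClient per cluster (the ZooKeeper host names are
placeholders, and error handling is reduced to the bare minimum):

import java.io.IOException;
import java.util.Arrays;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FailoverSearch {

    // One client per fully independent SolrCloud cluster.
    private final CloudSolrClient primary = new CloudSolrClient.Builder(
            Arrays.asList("zk1-dc1:2181", "zk2-dc1:2181", "zk3-dc1:2181"),
            Optional.empty()).build();
    private final CloudSolrClient secondary = new CloudSolrClient.Builder(
            Arrays.asList("zk1-dc2:2181", "zk2-dc2:2181", "zk3-dc2:2181"),
            Optional.empty()).build();

    public QueryResponse search(String collection, SolrQuery query)
            throws SolrServerException, IOException {
        try {
            return primary.query(collection, query);   // normal path
        } catch (SolrServerException | IOException e) {
            return secondary.query(collection, query); // fall back to the other cluster
        }
    }
}

A real implementation would probably add a health check or circuit breaker so
that every request does not sit through the primary cluster's timeout while
that data center is down.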
> >
> >> On Wed, Sep 15, 2021 at 7:25 PM Eric Pugh <ep...@opensourceconnections.com>
> >> wrote:
> >>
> >> I don’t think that Solr really provides (yet!) a great solution across
> >> data centers. I lean towards: if you want to run in multiple data centers,
> >> just run multiple fully independent clusters and feed them from a common
> >> queue. That way, if something bad happens in one area, you don’t have the
> >> other area trying to connect/communicate across a barrier.
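
A rough sketch of that pattern, with a java.util.concurrent.BlockingQueue
standing in for whatever queue (Kafka, SQS, ...) actually feeds the clusters;
the ZooKeeper hosts and collection name are placeholders:

import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.BlockingQueue;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class DualClusterIndexer implements Runnable {

    private final BlockingQueue<SolrInputDocument> queue; // stand-in for the real queue
    private final String collection;

    // One client per fully independent cluster, one cluster per data center.
    private final List<CloudSolrClient> clusters = Arrays.asList(
            new CloudSolrClient.Builder(Arrays.asList("zk1-dc1:2181"), Optional.empty()).build(),
            new CloudSolrClient.Builder(Arrays.asList("zk1-dc2:2181"), Optional.empty()).build());

    public DualClusterIndexer(BlockingQueue<SolrInputDocument> queue, String collection) {
        this.queue = queue;
        this.collection = collection;
    }

    @Override
    public void run() {
        try {
            while (true) {
                SolrInputDocument doc = queue.take();
                for (CloudSolrClient cluster : clusters) {
                    try {
                        cluster.add(collection, doc); // each cluster indexed independently
                    } catch (Exception e) {
                        // In practice: retry or dead-letter, so the failed
                        // cluster can be caught up later.
                    }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

In a real setup each cluster would more likely run its own consumer against the
same topic, so a slow data center never delays the other one; the single worker
above just keeps the sketch short.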
> >>
> >>
> >>
> >>> On Sep 15, 2021, at 6:02 AM, Christian Pfarr <z0lt...@pm.me.INVALID>
> >>> wrote:
> >>>
> >>> You can use the autoscaling feature to achieve this. We currently use it
> >>> for balancing nodes within a data center, but it should also work with
> >>> different AZs by providing a system property.
> >>>
> >>>
> >>>
> >>> https://solr.apache.org/guide/8_7/solrcloud-autoscaling-policy-preferences.html#place-replicas-based-on-a-system-property
> >>>
> >>>
> >>> Combine 3 AZ rules with 33% each.
> >>>
> >>>
> >>>
> >>> https://solr.apache.org/guide/8_7/solrcloud-autoscaling-policy-preferences.html#multiple-percentage-rules
> >>>
> >>>
> >>> Not totally sure if it works that way, but you can give it a try.
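
For reference, a cluster policy along those lines might look roughly like the
snippet below. It is untested, it assumes each node is started with a system
property such as -Davailability_zone=az1/az2/az3, and the exact rule syntax
changed across 8.x releases, so the pages linked above are authoritative:

curl -X POST -H 'Content-type:application/json' \
  http://localhost:8983/solr/admin/autoscaling -d '{
  "set-cluster-policy": [
    {"replica": "33%", "shard": "#EACH", "sysprop.availability_zone": "az1"},
    {"replica": "33%", "shard": "#EACH", "sysprop.availability_zone": "az2"},
    {"replica": "33%", "shard": "#EACH", "sysprop.availability_zone": "az3"}
  ]
}'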
> >>>
> >>>
> >>> Regards,
> >>>
> >>> Christian
> >>>
> >>>
> >>>
> >>>
> >>> -------- Original Message --------
> >>> On Sept 15, 2021, 07:44, Walter Underwood wrote:
> >>>
> >>> You need three data centers. We split our ZooKeeper ensemble across
> >>> three AWS availability zones.
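
For example, the five ZooKeeper nodes mentioned below could be laid out 2/2/1
across three availability zones in zoo.cfg (host names are placeholders).
Losing any single zone then still leaves at least three of the five servers,
so quorum survives:

server.1=zk1-az1.example.com:2888:3888
server.2=zk2-az1.example.com:2888:3888
server.3=zk1-az2.example.com:2888:3888
server.4=zk2-az2.example.com:2888:3888
server.5=zk1-az3.example.com:2888:3888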
> >>>
> >>> wunder
> >>>
> >>> Sent from my iPad
> >>>
> >>>> On Sep 14, 2021, at 10:28 PM, HariBabu kuruva <hari2708.kur...@gmail.com>
> >>>> wrote:
> >>>>
> >>>> Hi All,
> >>>>
> >>>> We have SolrCloud running on 10 nodes, with ZooKeeper running on 5 nodes.
> >>>> They are running across 2 data centers.
> >>>>
> >>>> For high availability, we would like to run Solr so that it keeps serving
> >>>> even if one data center goes down.
> >>>>
> >>>> How can we achieve this, given that we need to maintain the quorum rule
> >>>> for ZooKeeper?
> >>>>
> >>>> Please advise
> >>>>
> >>>> --
> >>>>
> >>>> Thanks and Regards,
> >>>> Hari
> >>>> Mobile:9790756568
> >>>
> >>
> >> _______________________
> >> Eric Pugh | Founder & CEO | OpenSource Connections, LLC | 434.466.1467 |
> >> http://www.opensourceconnections.com | My Free/Busy: http://tinyurl.com/eric-cal
> >> Co-Author: Apache Solr Enterprise Search Server, 3rd Ed:
> >> https://www.packtpub.com/big-data-and-business-intelligence/apache-solr-enterprise-search-server-third-edition-raw
> >>
> >
> > --
> > Regards,
> > Dong
>


-- 
Regards,
Dong
