> On May 15, 2021, at 4:20 PM, matthew sporleder <msporle...@gmail.com> wrote:
>
> Ensure your new cluster has a unique zookeeper chroot.
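> For example, both clusters can share one ensemble as long as each points at
> its own chroot in solr.in.sh (hostnames here are hypothetical):
>
> ```shell
> # solr.in.sh on the old 6.x nodes
> ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr6"
>
> # solr.in.sh on the new 8.x nodes -- a different chroot keeps its
> # cluster state completely separate from the old cluster's
> ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr8"
> ```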
>
>> On May 15, 2021, at 3:10 PM, jerome moliere <jer...@javaxpert.com> wrote:
>>
>> Hi guys (Shawn, Dwane & others),
>> thanks for your support...
>> My problem is tricky, so I will try to summarize the constraints:
>> - existing Solr cluster with a ZooKeeper ensemble; I cannot migrate the
>> ZooKeeper ensemble separately because it is used by other applications
>> (Kafka, ...), or I would have to change their current config to switch to
>> the new version
>> - running version 6.x on JDK 8 on RHEL 6.2 (?); target config is OpenJDK 11
>> on RHEL 8.x
>> - 2 indexes with different shard counts (the number depends on the target
>> environment/volume)
>> - we have 4 environments to migrate: dev/staging (no important data here)
>> and production plus another (less important but customer-facing)
>> - production cluster, so because of the SLA the maximum downtime is 4
>> hours, and I am sure that is not sufficient to reindex more than 1
>> billion documents
>> - for the moment I don't know the schema used, but I am sure I will have
>> access to the old version soon
>> - SolrJ clients to be migrated too (customers would prefer not to upgrade
>> these clients, but it seems to be mandatory judging by the docs & your
>> feedback)
>>
>> On my todo list I have already written some points:
>> - build the new cluster independently from the existing one
>> - try to use the same ZooKeeper ensemble for both clusters (is there any
>> problem with a recent ZooKeeper on JDK 11 serving the old Solr 6 running
>> on JDK 8?)
>> - migrate the config files
>> - migrate the JVM config: as mentioned by Dwane, of course we'll try to
>> use the best options offered by the new JVM (ZGC or G1; we'll need some
>> benchmarks to choose the best option)
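>>
>> As a sketch of that last point, the GC choice is a one-line change in
>> solr.in.sh (exact flags to be settled by our benchmarks):
>>
>> ```shell
>> # G1, the Solr 8 default on JDK 11
>> GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
>>
>> # or ZGC, still experimental on JDK 11, hence the unlock flag
>> # GC_TUNE="-XX:+UnlockExperimentalVMOptions -XX:+UseZGC"
>> ```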
>>
>> then reindexing the full list of documents...
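>>
>> If reindexing from the original source turns out to be impossible, and
>> only if every needed field is stored, something like the following could
>> page documents out of the old cluster into the new one (hosts, collection
>> name and the jq-based plumbing are all assumptions, not a tested recipe):
>>
>> ```shell
>> #!/bin/sh
>> # Page through the old index with cursorMark and feed each page to the
>> # new cluster. Hosts and collection names are hypothetical; this only
>> # works if all needed fields are stored in the old index.
>> OLD="http://old-solr:8983/solr/mycoll"
>> NEW="http://new-solr:8983/solr/mycoll"
>> CURSOR='*'
>> while :; do
>>   PAGE=$(curl -s "$OLD/select?q=*:*&sort=id+asc&rows=1000&wt=json&cursorMark=$CURSOR")
>>   # Post the bare document array; drop _version_ to avoid version conflicts
>>   echo "$PAGE" | jq '[.response.docs[] | del(._version_)]' |
>>     curl -s -H 'Content-Type: application/json' --data-binary @- "$NEW/update" > /dev/null
>>   NEXT=$(echo "$PAGE" | jq -r '.nextCursorMark')
>>   [ "$NEXT" = "$CURSOR" ] && break   # cursor stops advancing when done
>>   CURSOR=$(printf %s "$NEXT" | jq -sRr @uri)   # URL-encode the next cursor
>> done
>> curl -s "$NEW/update?commit=true" > /dev/null
>> ```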
>>
>> So the switch from one cluster to the other will be only a network trick,
>> routing traffic from the old cluster to the new one...
>>
>> Your expert advice strengthens my idea, but I fear forgetting an important
>> step...
>> You have already helped me sketch an approximate procedure...
>>
>> Thanks again for your support.
>>
>>
>>
>>> On Sat, May 15, 2021 at 5:27 PM Shawn Heisey <apa...@elyograg.org> wrote:
>>>
>>>> On 5/14/2021 3:12 PM, jerome moliere wrote:
>>>> I would like to know the best practices to migrate a mission-critical
>>>> Solr cluster from version 6.x to 8.x without any service shutdown.
>>>> Is the double upgrade required (mandatory) ?
>>>> We can introduce another temporary group of machines to join the cluster,
>>>> then stop the existing nodes one by one once the upgrade is done...
>>>> In this case we have different indexes and 2 replicas per shard...
>>>
>>> If you have an index that has ever been touched by a 6.x version, you
>>> can't use it in 8.x, even if you first upgrade it with a 7.x version.
>>> It will fail. Reindexing is required. Jumping more than one major
>>> version was iffy and not recommended before 6.x, now it is explicitly
>>> enforced as not possible. The enforcement is done by Lucene.
>>>
>>>> Is it possible to have two versions of Solr running concurrently in the
>>>> same cluster?
>>>
>>> It would not be recommended at all to run two different major versions
>>> in one cluster. I am assuming SolrCloud here, where the nodes are
>>> always talking to each other.
>>>
>>>> Of course a full reindex seems to be required; that is no problem, we
>>>> have enough time (the upgrade procedure is not required to finish in a
>>>> short time).
>>>> I would like to have a summary of the options available, pros & cons for
>>>> each one, and if possible a few tips & tricks to avoid some pitfalls...
>>>
>>> Build the new cluster separately from the old one. They could share the
>>> same zookeeper ensemble by setting up the new cluster with a different
>>> chroot.
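>>>
>>> For example (ensemble hosts and paths hypothetical), create the chroot
>>> and upload the new cluster's configset into it:
>>>
>>> ```shell
>>> # Create the chroot znode for the new cluster
>>> server/scripts/cloud-scripts/zkcli.sh \
>>>   -zkhost zk1:2181,zk2:2181,zk3:2181 -cmd makepath /solr8
>>>
>>> # Upload a configset into the new chroot
>>> bin/solr zk upconfig -z zk1:2181,zk2:2181,zk3:2181/solr8 \
>>>   -n myconf -d /path/to/new/conf
>>> ```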
>>>
>>> Create new config files for the indexes with the new version examples as
>>> starting points.
>>>
>>> Build the indexes from scratch on the new cluster.
>>>
>>> Switch the URLs in your application to point to the new cluster.
>>>
>>> Thanks,
>>> Shawn
>>>
>>
>>
>> --
>> J.MOLIERE - Mentor/J