Thank you! I’ve looked at the Solr Operator and it seems like a good way to
go.
Regarding the upload from a script inside the image: did you implement some
kind of locking mechanism, or “assign” one of the instances to do the job?
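To make the question concrete, here is a minimal sketch of the kind of
"first instance wins" lock I mean. It uses an atomic mkdir as a stand-in for
something like an ephemeral ZooKeeper node; all paths and names here are
hypothetical:

```shell
#!/usr/bin/env bash
# Sketch of a "first instance wins" lock: exactly one instance acquires
# the lock and performs the configset upload; the others skip it.
# An atomic mkdir stands in here for an ephemeral ZooKeeper node;
# all paths and names are hypothetical.
set -euo pipefail

LOCK_DIR="${LOCK_DIR:-/tmp/configset-upload.lock}"

# mkdir either creates the directory atomically or fails if it already
# exists, so only one instance can win the race.
if mkdir "$LOCK_DIR" 2>/dev/null; then
  echo "lock acquired, this instance uploads the configsets"
  # ... configset upload / collection creation would go here ...
else
  echo "another instance holds the lock, skipping upload"
fi
```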

Best regards,
Dmitrii Fitisov


On Tue, 30 Jan 2024 at 11:54, ufuk yılmaz <uyil...@vivaldi.net.invalid>
wrote:

> Hi! There is an official Solr project for managing Solr on Kubernetes,
> called “Solr Operator”. You can also check it out, instead of rolling your
> own implementation.
>
> Alternatively, I used to build a custom Docker image based on the Solr
> image, put the configsets in it, and upload them on Solr startup via
> scripts in the ‘docker-entrypoint-initdb.d’ folder. That way I could keep
> both the image and the configsets in version control.
>
> —ufuk yilmaz
> —
>
> > On 30 Jan 2024, at 13:24, Дмитрий Фитисов <fitisovd...@gmail.com> wrote:
> >
> > Hello there! I've been researching a move from standalone Solr to a k8s
> > SolrCloud setup, and I'm wondering how you manage configsets and
> > collections when operating in k8s. It seems to me that the easiest
> > solution is to introduce some kind of admin process (in the form of a
> > bash script) that uploads configsets to ZooKeeper via a call to the
> > ZooKeeper service and reloads/creates collections via a call to the
> > Solr service. The process itself should be spawned after all the Solr
> > pods (and ZooKeeper?) are live and ready.
> >
> > Can you share your way of dealing with this? Maybe I'm missing something?
> >
> > Best regards,
> > Dmitrii Fitisov
>
>
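For reference, here is a rough sketch of the startup-script approach
described above, as I understand it. The configset layout, ZK_HOST, and env
vars are hypothetical placeholders; `solr zk upconfig` is the stock Solr CLI
command for uploading a configset to ZooKeeper. With DRY_RUN=1 (the default
here) it only prints the commands, so the sketch can be tried without a live
cluster:

```shell
#!/usr/bin/env bash
# Sketch of a startup script (e.g. dropped into docker-entrypoint-initdb.d)
# that uploads every configset baked into the image to ZooKeeper.
# ZK_HOST, CONFIGSETS_DIR, and the configset names are hypothetical;
# `solr zk upconfig` is the stock Solr CLI command for this task.
set -euo pipefail

run() {
  # DRY_RUN=1 (the default here) only prints the command, so the
  # sketch can be exercised without a live cluster.
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "DRY RUN: $*"
  else
    "$@"
  fi
}

upload_configsets() {
  local dir name
  for dir in "${CONFIGSETS_DIR:-/opt/configsets}"/*/; do
    [ -d "$dir" ] || continue          # skip if no configsets are present
    name="$(basename "$dir")"
    run solr zk upconfig -z "${ZK_HOST:-zookeeper:2181}" -n "$name" -d "$dir"
  done
}

upload_configsets
```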
