Thanks. If we write such a process, I'll see if I can get permission to
release it. It might be a moot point, though, because I found out we're stuck
on 4.10.3 for the time being. I haven't used that version in a while and
forgot it doesn't even have the collection backup API.
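
In case it helps the discussion, here is a rough sketch of the kind of script
I had in mind: look up each shard's leader via CLUSTERSTATUS and then hit the
per-core replication handler with numberToKeep, i.e. the same backup call from
my earlier mail. The node URL, collection name, and backup window are
placeholders, and the error handling / HDFS copy we'd need is left out.

#!/usr/bin/env python3
# Rough sketch only: rolling per-shard backups via the core-level replication
# handler (the route available on 4.10.x, which has no Collections API backup).
# SOLR_NODE, COLLECTION and NUM_TO_KEEP are placeholder values, not real config.

import json
from urllib.request import urlopen

SOLR_NODE = "http://localhost:8983/solr"  # any node in the cluster (placeholder)
COLLECTION = "collection"                 # collection to back up (placeholder)
NUM_TO_KEEP = 7                           # rolling window, per the earlier mail


def shard_leaders(solr_node, collection):
    # CLUSTERSTATUS (Collections API, available since 4.8) tells us the
    # leader core and base URL for each shard of the collection.
    url = ("{}/admin/collections?action=CLUSTERSTATUS"
           "&collection={}&wt=json".format(solr_node, collection))
    with urlopen(url) as resp:
        state = json.loads(resp.read().decode("utf-8"))
    shards = state["cluster"]["collections"][collection]["shards"]
    leaders = {}
    for shard_name, shard in shards.items():
        for replica in shard["replicas"].values():
            if replica.get("leader") == "true":
                leaders[shard_name] = (replica["base_url"], replica["core"])
    return leaders


def backup_core(base_url, core, name, num_to_keep):
    # Same replication-handler call quoted earlier in the thread, with
    # numberToKeep handling the rotation on the Solr side.
    url = ("{}/{}/replication?command=backup&name={}"
           "&numberToKeep={}&wt=json".format(base_url, core, name, num_to_keep))
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    for shard, (base_url, core) in shard_leaders(SOLR_NODE, COLLECTION).items():
        print("Backing up shard {} (core {})".format(shard, core))
        backup_core(base_url, core, name=shard, num_to_keep=NUM_TO_KEEP)

Running something like that from cron once a day would give the seven-day
rotation, with numberToKeep pruning the older snapshots on each core.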

On Wed, Nov 9, 2016 at 2:18 PM, Hrishikesh Gadre <gadre.s...@gmail.com>
wrote:

> Hi Mike,
>
> I filed SOLR-9744 <https://issues.apache.org/jira/browse/SOLR-9744> to
> track this work. Please comment on this jira if you have any suggestions.
>
> Thanks
> Hrishikesh
>
>
> On Wed, Nov 9, 2016 at 11:07 AM, Hrishikesh Gadre <gadre.s...@gmail.com>
> wrote:
>
> > Hi Mike,
> >
> > Currently we don't have the capability to take rolling backups of Solr
> > collections. I think it should be fairly straightforward to write a
> > script that implements this functionality outside of Solr. If you post
> > that script, maybe we can even ship it as part of Solr itself (for the
> > benefit of the community).
> >
> > Thanks
> > Hrishikesh
> >
> >
> >
> > On Wed, Nov 9, 2016 at 9:17 AM, Mike Thomsen <mikerthom...@gmail.com>
> > wrote:
> >
> >> I read over the docs (https://cwiki.apache.org/confluence/display/solr/Making+and+Restoring+Backups)
> >> and am not quite sure what route to take. My team is looking for a way to
> >> back up the entire index of a SolrCloud collection with regular rotation,
> >> similar to the backup option available in a single-node deployment.
> >>
> >> We have plenty of space in our HDFS cluster, so resources are not an issue
> >> for keeping a rolling backup of, say, the last seven days. Is there a good
> >> way to implement this sort of rolling backup with the APIs, or will we
> >> have to roll some of the functionality ourselves?
> >>
> >> I'm not averse to using the API to dump a copy of each shard to HDFS.
> >> Something like this:
> >>
> >> /solr/collection/replication?command=backup&name=shard_1_1&numberToKeep=7
> >>
> >> Is that a viable route to achieve this, or do we need to do something
> >> else?
> >>
> >> Thanks,
> >>
> >> Mike
> >>
> >
> >
>
