Hi Stephen,

Thanks for the update. I filed SOLR-9527
<https://issues.apache.org/jira/browse/SOLR-9527> for tracking purposes. I
will take a look and get back to you.
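
In the meantime, a possible workaround (an untested sketch; ADDREPLICA and
DELETEREPLICA with these parameters are standard Collections API calls) is
to spread the replicas manually after the restore: add a replica of each
misplaced shard pinned to an empty node, then delete the copy on the
overloaded node. For shard1 in your repro below, that would look roughly
like:

http://solr.test:8983/solr/admin/collections?action=ADDREPLICA&collection=foo&shard=shard1&node=IP5:8983_solr
http://solr.test:8983/solr/admin/collections?action=DELETEREPLICA&collection=foo&shard=shard1&replica=core_node3

Deleting core_node3 (the shard1 leader on IP1) should then trigger a leader
election among the remaining replicas.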

Thanks
Hrishikesh

On Fri, Sep 16, 2016 at 2:56 PM, Stephen Lewis <sle...@panopto.com> wrote:

> Hello,
>
> I've tried this on both Solr 6.1 and 6.2, with the same result. You are
> right that the collections API offering collection-level backup/restore
> from a remote server is a new feature.
>
> After some more experimentation, I am fairly sure that this is a bug
> specific to leader placement during backup restore. After I ran a command
> to restore a backup of the collection "foo" (which also has
> maxShardsPerNode set to 1) with a replication factor of 2, I see
> consistently that the followers (replica > 1) are correctly distributed,
> but all of the leaders are brought up on one node (a quick check from
> CLUSTERSTATUS is sketched after the repro below).
>
> *Repro*
>
> *create*
> http://solr.test:8983/solr/admin/collections?action=CREATE&name=foo&numShards=3&maxShardsPerNode=1&collection.configName=test-one
> (after creation, all shards are on different nodes as expected)
>
> *backup*
> http://solr.test:8983/solr/admin/collections?action=BACKUP&name=foo-2&collection=foo&async=foo-2
>
> *delete*
> http://solr.test:8983/solr/admin/collections?action=DELETE&name=foo
>
> *restore*
> Result: all leaders are hosted on one node; followers are spread across
> the other nodes.
>
> {
>   "responseHeader" : { "status" : 0,"QTime" : 7},
>   "cluster" : {
>     "collections" : {
>       "foo" : {
>         "replicationFactor" : "2",
>         "shards" : {
>           "shard2" : {
>             "range" : "d5550000-2aa9ffff",
>             "state" : "active",
>             "replicas" : {
>               "core_node1" : {
>                 "core" : "foo_shard2_replica0",
>                 "base_url" : "http://IP1:8983/solr",
>                 "node_name" : "IP1:8983_solr",
>                 "state" : "active",
>                 "leader" : "true"
>               },
>               "core_node4" : {
>                 "core" : "foo_shard2_replica1",
>                 "base_url" : "http://IP2:8983/solr",
>                 "node_name" : "IP2:8983_solr",
>                 "state" : "recovering"
>               }
>             }
>           },
>           "shard3" : {
>             "range" : "2aaa0000-7fffffff",
>             "state" : "active",
>             "replicas" : {
>               "core_node2" : {
>                 "core" : "foo_shard3_replica0",
>                 "base_url" : "http://IP1:8983/solr",
>                 "node_name" : "IP1:8983_solr",
>                 "state" : "active",
>                 "leader" : "true"
>               },
>               "core_node5" : {
>                 "core" : "foo_shard3_replica1",
>                 "base_url" : "http://IP3:8983/solr",
>                 "node_name" : "IP3:8983_solr",
>                 "state" : "recovering"
>               }
>             }
>           },
>           "shard1" : {
>             "range" : "80000000-d554ffff",
>             "state" : "active",
>             "replicas" : {
>               "core_node3" : {
>                 "core" : "foo_shard1_replica0",
>                 "base_url" : "http://IP1:8983/solr",
>                 "node_name" : "IP1:8983_solr",
>                 "state" : "active",
>                 "leader" : "true"
>               },
>               "core_node6" : {
>                 "core" : "foo_shard1_replica1",
>                 "base_url" : "http://IP4:8983/solr",
>                 "node_name" : "IP4:8983_solr",
>                 "state" : "recovering"
>               }
>             }
>           }
>         },
>         "router" : {
>           "name" : "compositeId"
>         },
>         "maxShardsPerNode" : "1",
>         "autoAddReplicas" : "false",
>         "znodeVersion" : 204,
>         "configName" : "test-one"
>       }
>     },
>     "properties" : {
>       "location" : "/mnt/solr_backups"
>     },
>     "live_nodes" : [
>       "IP5:8983_solr",
>       "IP3:8983_solr",
>       "IP6:8983_solr",
>       "IP4:8983_solr",
>       "IP7:8983_solr",
>       "IP1:8983_solr",
>       "IP8:8983_solr",
>       "IP9:8983_solr",
>       "IP2:8983_solr"]
>   }
> }
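>
> As a quick way to see this, a short Python 3 sketch (host and collection
> name taken from the repro above) tallies leaders per node from the
> CLUSTERSTATUS response; after the restore it reports all three leaders
> on IP1:
>
> import json
> from collections import Counter
> from urllib.request import urlopen
>
> # Hypothetical endpoint; same host/port as the repro above.
> url = ("http://solr.test:8983/solr/admin/collections"
>        "?action=CLUSTERSTATUS&wt=json")
> status = json.load(urlopen(url))
> shards = status["cluster"]["collections"]["foo"]["shards"]
> # Count how many shard leaders each node hosts.
> leaders = Counter(
>     replica["node_name"]
>     for shard in shards.values()
>     for replica in shard["replicas"].values()
>     if replica.get("leader") == "true")
> print(leaders)  # e.g. Counter({'IP1:8983_solr': 3})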
>
>
> On Fri, Sep 16, 2016 at 2:07 PM, Reth RM <reth.ik...@gmail.com> wrote:
>
> > Which version of Solr? AFAIK, until 6.1, the Solr backup and restore
> > APIs required a separate backup for each shard, and then a similar
> > per-shard restore. 6.1 appears to have a new feature for backing up an
> > entire collection and then restoring it into a new collection (I have
> > not tried it yet).
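> >
> > For reference, the pre-6.1 per-shard approach goes through the
> > replication handler on each shard's core, roughly like this (host, core
> > name, location, and snapshot name here are placeholders):
> >
> > http://host:8983/solr/foo_shard1_replica1/replication?command=backup&location=/mnt/backups&name=shard1-snap
> > http://host:8983/solr/foo_shard1_replica1/replication?command=restore&location=/mnt/backups&name=shard1-snap
> >
> > repeated for every shard, which is what the collection-level
> > BACKUP/RESTORE commands introduced in 6.1 are meant to replace.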
> >
> >
> > On Thu, Sep 15, 2016 at 1:45 PM, Stephen Lewis <sle...@panopto.com> wrote:
> >
> > > Hello,
> > >
> > > I have a SolrCloud cluster in a test environment running 6.1, where I
> > > am looking at using the collections API BACKUP and RESTORE commands to
> > > manage data integrity.
> > >
> > > When restoring from a backup, I'm finding the same behavior every
> > > time: after the restore command, all shards are hosted on one node.
> > > What's especially surprising about this is that there are 6 live nodes
> > > beforehand, the collection has maxShardsPerNode set to 1, and this
> > > occurs even if I pass the parameter maxShardsPerNode=1 to the API
> > > call. Is there perhaps somewhere else I need to configure something,
> > > or another step I am missing? If I'm misunderstanding the intention of
> > > these parameters, could you clarify for me and let me know how to get
> > > different shards restored onto different nodes?
> > >
> > > Full repro below.
> > >
> > > Thanks!
> > >
> > >
> > > *Repro*
> > >
> > > *Cluster state before*
> > >
> > > http://54.85.30.39:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json
> > >
> > > {
> > >   "responseHeader" : {"status" : 0,"QTime" : 4},
> > >   "cluster" : {
> > >     "collections" : {},
> > >     "live_nodes" : [
> > >       "172.18.7.153:8983_solr",
> > >       "172.18.2.20:8983_solr",
> > >       "172.18.10.88:8983_solr",
> > >       "172.18.6.224:8983_solr",
> > >       "172.18.8.255:8983_solr",
> > >       "172.18.2.21:8983_solr"]
> > >   }
> > > }
> > >
> > >
> > > *Restore Command (formatted for ease of reading)*
> > >
> > > http://54.85.30.39:8983/solr/admin/collections?action=RESTORE
> > >
> > > &collection=panopto
> > > &async=backup-4
> > >
> > > &location=/mnt/beta_solr_backups
> > > &name=2016-09-02
> > >
> > > &maxShardsPerNode=1
> > >
> > > <response>
> > > <lst name="responseHeader">
> > > <int name="status">0</int>
> > > <int name="QTime">16</int>
> > > </lst>
> > > <str name="requestid">backup-4</str>
> > > </response>
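> > >
> > > Since the restore was submitted with async=backup-4, its completion
> > > can be checked with the standard REQUESTSTATUS call:
> > >
> > > http://54.85.30.39:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=backup-4&wt=json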
> > >
> > >
> > > *Cluster state after*
> > >
> > > http://54.85.30.39:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json
> > >
> > > {
> > >   "responseHeader" : {"status" : 0,"QTime" : 8},
> > >   "cluster" : {
> > >     "collections" : {
> > >       "panopto" : {
> > >         "replicationFactor" : "1",
> > >         "shards" : {
> > >           "shard2" : {
> > >             "range" : "0-7fffffff",
> > >             "state" : "construction",
> > >             "replicas" : {
> > >               "core_node1" : {
> > >                 "core" : "panopto_shard2_replica0",
> > >                 "base_url" : "http://172.18.2.21:8983/solr",
> > >                 "node_name" : "172.18.2.21:8983_solr",
> > >                 "state" : "active",
> > >                 "leader" : "true"
> > >               }
> > >             }
> > >           },
> > >           "shard1" : {
> > >             "range" : "80000000-ffffffff",
> > >             "state" : "construction",
> > >             "replicas" : {
> > >               "core_node2" : {
> > >                 "core" : "panopto_shard1_replica0",
> > >                 "base_url" : "http://172.18.2.21:8983/solr",
> > >                 "node_name" : "172.18.2.21:8983_solr",
> > >                 "state" : "active",
> > >                 "leader" : "true"
> > >               }
> > >             }
> > >           }
> > >         },
> > >         "router" : {
> > >           "name" : "compositeId"
> > >         },
> > >         "maxShardsPerNode" : "1",
> > >         "autoAddReplicas" : "false",
> > >         "znodeVersion" : 44,
> > >         "configName" : "panopto"
> > >       }
> > >     },
> > >     "live_nodes" : [
> > >       "172.18.7.153:8983_solr",
> > >       "172.18.2.20:8983_solr",
> > >       "172.18.10.88:8983_solr",
> > >       "172.18.6.224:8983_solr",
> > >       "172.18.8.255:8983_solr",
> > >       "172.18.2.21:8983_solr"]
> > >   }
> > > }
> > >
> > >
> > >
> > >
> > > --
> > > Stephen
> > >
> > > (206)753-9320
> > > stephen-lewis.net
> > >
> >
>
>
>
> --
> Stephen
>
> (206)753-9320
> stephen-lewis.net
>
