And having looked at this more closely, shouldn't the down node stop
being marked as active when I stop that Solr instance?
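To illustrate the behavior in question: a minimal sketch (plain Python, hypothetical helper name and data shapes) of how a reader of the cluster state would typically combine a replica's published state with the ephemeral live-nodes list, assuming the convention that a replica whose node is absent from /live_nodes is treated as down regardless of what the published state says:

```python
# Sketch only: in SolrCloud the published state comes from the cluster
# state in ZooKeeper, and liveness from the ephemeral /live_nodes
# children. The dict shapes and helper name here are hypothetical.

def effective_state(replica, live_nodes):
    """Treat a replica as down when its node is not live,
    even if its last published state is still 'active'."""
    if replica["node_name"] not in live_nodes:
        return "down"
    return replica["state"]

replica = {
    "node_name": "JamiesMac.local:8502_solr",
    "state": "active",  # published state can lag reality
}

# With the 8502 instance stopped, only 8501 remains live:
live_nodes = {"JamiesMac.local:8501_solr"}
print(effective_state(replica, live_nodes))  # "down"
```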

On Fri, Feb 17, 2012 at 10:04 AM, Jamie Johnson <jej2...@gmail.com> wrote:
> Thanks Mark.  I'm still seeing some issues while indexing, though.  I
> have the same setup described in my previous email.  I do some indexing
> against the cluster with everything up, and everything looks good.  I
> then take down one instance, which is running two cores (slice1 shard2
> and slice2 shard1), and do some more inserts.  I then bring this second
> instance back up, expecting the system to recover the missing
> documents from the other instance, but this isn't happening.  I see
> the following log message:
>
> Feb 17, 2012 9:53:11 AM org.apache.solr.cloud.RecoveryStrategy run
> INFO: Sync Recovery was succesful - registering as Active
>
> which leads me to believe things should be in sync, but they are not.
> I've made no changes to the default solrconfig.xml; I'm not sure
> whether I need to, but it looks like everything should work out of the
> box.  Am I missing a configuration somewhere?
>
> Initial state
>
> {"collection1":{
>    "slice1":{
>      "JamiesMac.local:8501_solr_slice1_shard1":{
>        "shard_id":"slice1",
>        "leader":"true",
>        "state":"active",
>        "core":"slice1_shard1",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8501_solr",
>        "base_url":"http://JamiesMac.local:8501/solr"},
>      "JamiesMac.local:8502_solr_slice1_shard2":{
>        "shard_id":"slice1",
>        "state":"active",
>        "core":"slice1_shard2",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8502_solr",
>        "base_url":"http://JamiesMac.local:8502/solr"}},
>    "slice2":{
>      "JamiesMac.local:8501_solr_slice2_shard2":{
>        "shard_id":"slice2",
>        "leader":"true",
>        "state":"active",
>        "core":"slice2_shard2",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8501_solr",
>        "base_url":"http://JamiesMac.local:8501/solr"},
>      "JamiesMac.local:8502_solr_slice2_shard1":{
>        "shard_id":"slice2",
>        "state":"active",
>        "core":"slice2_shard1",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8502_solr",
>        "base_url":"http://JamiesMac.local:8502/solr"}}}}
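The state above can also be read programmatically. A small sketch (plain Python, with the JSON abridged to only the fields it uses) extracting the leader of each slice; note that both leaders live on the 8501 instance, so stopping 8502 only takes down non-leader replicas:

```python
import json

# Abridged copy of the initial cluster state quoted above.
state = json.loads("""
{"collection1": {
  "slice1": {
    "JamiesMac.local:8501_solr_slice1_shard1":
      {"leader": "true", "core": "slice1_shard1",
       "node_name": "JamiesMac.local:8501_solr"},
    "JamiesMac.local:8502_solr_slice1_shard2":
      {"core": "slice1_shard2",
       "node_name": "JamiesMac.local:8502_solr"}},
  "slice2": {
    "JamiesMac.local:8501_solr_slice2_shard2":
      {"leader": "true", "core": "slice2_shard2",
       "node_name": "JamiesMac.local:8501_solr"},
    "JamiesMac.local:8502_solr_slice2_shard1":
      {"core": "slice2_shard1",
       "node_name": "JamiesMac.local:8502_solr"}}}}
""")

# Map each slice to the core marked as its leader.
leaders = {
    slice_name: props["core"]
    for slice_name, replicas in state["collection1"].items()
    for props in replicas.values()
    if props.get("leader") == "true"
}
print(leaders)  # {'slice1': 'slice1_shard1', 'slice2': 'slice2_shard2'}
```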
>
>
> State with one Solr instance down
>
> {"collection1":{
>    "slice1":{
>      "JamiesMac.local:8501_solr_slice1_shard1":{
>        "shard_id":"slice1",
>        "leader":"true",
>        "state":"active",
>        "core":"slice1_shard1",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8501_solr",
>        "base_url":"http://JamiesMac.local:8501/solr"},
>      "JamiesMac.local:8502_solr_slice1_shard2":{
>        "shard_id":"slice1",
>        "state":"active",
>        "core":"slice1_shard2",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8502_solr",
>        "base_url":"http://JamiesMac.local:8502/solr"}},
>    "slice2":{
>      "JamiesMac.local:8501_solr_slice2_shard2":{
>        "shard_id":"slice2",
>        "leader":"true",
>        "state":"active",
>        "core":"slice2_shard2",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8501_solr",
>        "base_url":"http://JamiesMac.local:8501/solr"},
>      "JamiesMac.local:8502_solr_slice2_shard1":{
>        "shard_id":"slice2",
>        "state":"active",
>        "core":"slice2_shard1",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8502_solr",
>        "base_url":"http://JamiesMac.local:8502/solr"}}}}
>
> State when everything comes back up after adding documents
>
> {"collection1":{
>    "slice1":{
>      "JamiesMac.local:8501_solr_slice1_shard1":{
>        "shard_id":"slice1",
>        "leader":"true",
>        "state":"active",
>        "core":"slice1_shard1",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8501_solr",
>        "base_url":"http://JamiesMac.local:8501/solr"},
>      "JamiesMac.local:8502_solr_slice1_shard2":{
>        "shard_id":"slice1",
>        "state":"active",
>        "core":"slice1_shard2",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8502_solr",
>        "base_url":"http://JamiesMac.local:8502/solr"}},
>    "slice2":{
>      "JamiesMac.local:8501_solr_slice2_shard2":{
>        "shard_id":"slice2",
>        "leader":"true",
>        "state":"active",
>        "core":"slice2_shard2",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8501_solr",
>        "base_url":"http://JamiesMac.local:8501/solr"},
>      "JamiesMac.local:8502_solr_slice2_shard1":{
>        "shard_id":"slice2",
>        "state":"active",
>        "core":"slice2_shard1",
>        "collection":"collection1",
>        "node_name":"JamiesMac.local:8502_solr",
>        "base_url":"http://JamiesMac.local:8502/solr"}}}}
>
>
> On Thu, Feb 16, 2012 at 10:24 PM, Mark Miller <markrmil...@gmail.com> wrote:
>> Yup - deletes are fine.
>>
>>
>> On Thu, Feb 16, 2012 at 8:56 PM, Jamie Johnson <jej2...@gmail.com> wrote:
>>
>>> With solr-2358 being committed to trunk do deletes and updates get
>>> distributed/routed like adds do? Also when a down shard comes back up are
>>> the deletes/updates forwarded as well? Reading the jira I believe the
>>> answer is yes, I just want to verify before bringing the latest into my
>>> environment.
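The routing behavior being asked about can be sketched as follows. This is an assumed hash-based scheme for illustration, not Solr's exact hash function: because an add and a later delete-by-id hash the same unique key, both operations land on the same slice, which is why deletes can be distributed the same way as adds:

```python
import hashlib

# Hypothetical two-slice layout matching the thread's setup.
SLICES = ["slice1", "slice2"]

def route(doc_id):
    """Pick a slice from a stable hash of the document's unique key.
    (Illustrative scheme; Solr's actual hash differs.)"""
    h = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
    return SLICES[h % len(SLICES)]

# An add and a later delete for the same id pick the same slice:
assert route("doc-42") == route("doc-42")
print(route("doc-42"))
```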
>>>
>>
>>
>>
>> --
>> - Mark
>>
>> http://www.lucidimagination.com
