The core_node name is largely irrelevant; the more descriptive names,
like collection1_shard1_replica1, are in the state.json file. You
happen to see 19 because you have only one replica per shard.
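
For reference, a state.json entry typically maps the internal core_node
key to the descriptive core name, roughly like this (the host,
collection, and shard names here are made up):

  {"collection1":{
    "shards":{
      "shard1":{
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1_shard1_replica1",
            "base_url":"http://10.0.0.1:8983/solr",
            "node_name":"10.0.0.1:8983_solr",
            "state":"active",
            "leader":"true"}}}}}}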

Exactly how are you creating the replica? What version of Solr? If
you're using the "core admin" UI, it's tricky to get right. I'd
strongly recommend using the Collections API ADDREPLICA command; see:
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api_addreplica
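
As a rough sketch (the collection, shard, and node names below are
placeholders for your own), the equivalent HTTP call can be scripted,
e.g. with Python's requests library:

  import requests

  # Ask the Collections API to add a replica of shard1 of collection1.
  resp = requests.get(
      "http://localhost:8983/solr/admin/collections",
      params={
          "action": "ADDREPLICA",
          "collection": "collection1",
          "shard": "shard1",
          # Optionally pin the new replica to a specific node:
          # "node": "10.0.0.5:8983_solr",
          "wt": "json",
      },
  )
  print(resp.json())

Normally the new replica then recovers its index from the shard leader,
assuming the leader is up.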

Best,
Erick

On Tue, Sep 13, 2016 at 7:11 PM, Chetas Joshi <chetas.jo...@gmail.com> wrote:
> Is this happening because I have set replicationFactor=1?
> So even if I manually add a replica for the shard that's down, will it
> just create a dataDir without copying any of the data into it?
>
> On Tue, Sep 13, 2016 at 6:07 PM, Chetas Joshi <chetas.jo...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I just started experimenting with solr cloud.
>>
>> I have a SolrCloud cluster of 20 nodes. I have one collection with 18
>> shards running on 18 different nodes with replicationFactor=1.
>>
>> When one of my shards goes down, I create a replica using the Solr UI. On
>> HDFS I see a core getting added, but the data (index and tlog) does not
>> get copied over to that directory. For example, on
>> HDFS I have
>>
>> /solr/collection/core_node_1/data/index
>> /solr/collection/core_node_1/data/tlog
>>
>> when I create a replica of a shard, it creates
>>
>> /solr/collection/core_node_19/data/index
>> /solr/collection/core_node_19/data/tlog
>>
>> (core_node_19 because I already have 18 shards in the collection). The
>> issue is that both folders core_node_19/data/index and
>> core_node_19/data/tlog are empty. Data does not get copied over from
>> core_node_1/data/index and core_node_1/data/tlog.
>>
>> I need to remove core_node_1 and just keep core_node_19 (the replica).
>> Why is the data not getting copied over? Do I need to manually move all
>> the data from one folder to the other?
>>
>> Thank you,
>> Chetas.
>>
>>
