On the other hand, I'm aware that if I go with the Lucene approach,
failover is something I will have to handle manually, which is a
nightmare!

On Fri, Mar 9, 2012 at 2:13 PM, Alireza Salimi <alireza.sal...@gmail.com> wrote:

> This solution makes sense, but I still don't know whether I can use
> SolrCloud with this configuration or not.
>
> On Fri, Mar 9, 2012 at 2:06 PM, Robert Stewart <bstewart...@gmail.com> wrote:
>
>> Split up the index into, say, 100 cores, and then route each search to a
>> specific core using a mod operator on the user id:
>>
>> core_number = userid % num_cores
>>
>> core_name = "core"+core_number
>>
>> That way each index core is relatively small (maybe 100 million docs or
>> less).
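
A minimal sketch of the mod-based routing described above, in Java. The core count (100) and the "core" + number naming convention are taken from the example; both are assumptions you would adapt to your own deployment:

```java
// Sketch: route a user id to one of NUM_CORES Solr cores by modulo.
// NUM_CORES and the "core" prefix are illustrative assumptions.
public class CoreRouter {
    private static final int NUM_CORES = 100;

    // e.g. userId 12345 -> "core45"
    static String coreFor(long userId) {
        long coreNumber = userId % NUM_CORES;
        return "core" + coreNumber;
    }

    public static void main(String[] args) {
        System.out.println(coreFor(12345L)); // prints core45
    }
}
```

The returned core name would then be used to build the request URL for that user's searches (e.g. http://host:8983/solr/core45/select?...), so every user's documents live in, and are queried from, exactly one core.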
>>
>>
>> On Mar 9, 2012, at 2:02 PM, Glen Newton wrote:
>>
>> > millions of cores will not work...
>> > ...yet.
>> >
>> > -glen
>> >
>> > On Fri, Mar 9, 2012 at 1:46 PM, Lan <dung....@gmail.com> wrote:
>> >> Solr has no limitation on the number of cores. It's limited by your
>> >> hardware, inodes, and how many files you can keep open.
>> >>
>> >> I think even if you went the Lucene route you would run into the same
>> >> hardware limits.
>> >>
>> >> --
>> >> View this message in context:
>> http://lucene.472066.n3.nabble.com/Lucene-vs-Solr-design-decision-tp3813457p3813511.html
>> >> Sent from the Solr - User mailing list archive at Nabble.com.
>> >
>> >
>> >
>> > --
>> > -
>> > http://zzzoot.blogspot.com/
>> > -
>>
>>
>
>
> --
> Alireza Salimi
> Java EE Developer
>
>
>


-- 
Alireza Salimi
Java EE Developer
