Not yet - we don't plan on working on this until a lot of other stuff is
working solidly. But someone else could jump in!

There are a couple of ways to go about it that I know of:

A longer-term solution may be to start using micro shards - each index
starts out as multiple indexes. This makes it pretty fast to move micro
shards around as you decide to change partitions. It's also less flexible,
as you are limited by the number of micro shards you start with.
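
Just to sketch the micro shard idea (made-up code, not anything that exists
in Solr - all the names here are hypothetical): documents hash into a fixed
set of micro shards, and rebalancing just reassigns whole micro shards to
nodes, never splitting an index:

import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration only - not Solr code. Docs hash into a fixed
// number of micro shards; each micro shard is assigned to a node. Moving
// data between nodes means moving whole micro shards, never splitting one.
public class MicroShardRouter {
    private final int numMicroShards; // fixed when the collection is created
    private final Map<Integer, String> shardToNode = new HashMap<Integer, String>();

    public MicroShardRouter(int numMicroShards, String[] nodes) {
        this.numMicroShards = numMicroShards;
        for (int s = 0; s < numMicroShards; s++) {
            // initial round-robin assignment of micro shards to nodes
            shardToNode.put(s, nodes[s % nodes.length]);
        }
    }

    // Which micro shard a document belongs to - this never changes.
    public int microShardFor(String docId) {
        return (docId.hashCode() & Integer.MAX_VALUE) % numMicroShards;
    }

    // Which node currently hosts the micro shard for this document.
    public String nodeFor(String docId) {
        return shardToNode.get(microShardFor(docId));
    }

    // "Repartitioning" is just pointing a micro shard at a different node.
    public void moveMicroShard(int microShard, String newNode) {
        shardToNode.put(microShard, newNode);
    }
}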

A simpler and likely first step is to use an index splitter. We already
have one in Lucene contrib - we would just need to modify it so that it
splits based on the hash of the document id. This is super flexible, but
splitting will obviously take a little while on a huge index. The current
index splitter is a multi-pass splitter - good enough to start with, but
with most files under codec control these days, we may be able to make a
single-pass splitter soon as well.
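
As a rough sketch of what the hash-based split would look like (this is not
the actual contrib splitter API - just the routing decision, with made-up
names):

// Hypothetical sketch - not the Lucene contrib splitter. It only shows the
// routing decision: each document goes to one of the target shards based on
// the hash of its id. The real splitter would do the Lucene-level copying.
public class HashSplitSketch {

    // Decide which target shard a document id lands in.
    static int targetShard(String docId, int numTargetShards) {
        return (docId.hashCode() & Integer.MAX_VALUE) % numTargetShards;
    }

    public static void main(String[] args) {
        String[] docIds = {"doc-1", "doc-2", "doc-3", "doc-4"};
        int numTargetShards = 2;
        for (String id : docIds) {
            System.out.println(id + " -> target shard " + targetShard(id, numTargetShards));
        }
    }
}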

Eventually you could imagine using both options - micro shards that could
also be split as needed. Though I still wonder whether micro shards will be
worth the extra complication myself...

Right now, though, the idea is that you should pick a good number of
partitions to start with, given your expected data ;) Adding more replicas
is trivial.
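
To make that concrete - with a simple hash(id) % numShards scheme
(hypothetical, just to illustrate), changing the shard count reassigns most
documents, which is why you want to pick the partition count up front:

// Hypothetical demo of why adding a shard forces a repartition:
// most documents map to a different shard once the count changes.
public class ShardCountDemo {
    static int shardFor(String docId, int numShards) {
        return (docId.hashCode() & Integer.MAX_VALUE) % numShards;
    }

    public static void main(String[] args) {
        String[] ids = {"doc-a", "doc-b", "doc-c", "doc-d", "doc-e"};
        for (String id : ids) {
            System.out.println(id + ": 3 shards -> " + shardFor(id, 3)
                    + ", 4 shards -> " + shardFor(id, 4));
        }
    }
}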

- Mark

On Thu, Dec 1, 2011 at 6:35 PM, Jamie Johnson <jej2...@gmail.com> wrote:

> Another question, is there any support for repartitioning of the index
> if a new shard is added?  What is the recommended approach for
> handling this?  It seemed that the hashing algorithm (and probably
> any) would require the index to be repartitioned should a new shard be
> added.
>
> On Thu, Dec 1, 2011 at 6:32 PM, Jamie Johnson <jej2...@gmail.com> wrote:
> > Thanks I will try this first thing in the morning.
> >
> > On Thu, Dec 1, 2011 at 3:39 PM, Mark Miller <markrmil...@gmail.com>
> wrote:
> >> On Thu, Dec 1, 2011 at 10:08 AM, Jamie Johnson <jej2...@gmail.com>
> wrote:
> >>
> >>> I am currently looking at the latest solrcloud branch and was
> >>> wondering if there was any documentation on configuring the
> >>> DistributedUpdateProcessor?  What specifically in solrconfig.xml needs
> >>> to be added/modified to make distributed indexing work?
> >>>
> >>
> >>
> >> Hi Jamie - take a look at solrconfig-distrib-update.xml in
> >> solr/core/src/test-files
> >>
> >> You need to enable the update log, add an empty replication handler def,
> >> and an update chain with solr.DistributedUpdateProcessFactory in it.
> >>
> >> --
> >> - Mark
> >>
> >> http://www.lucidimagination.com
> >>
> >
>



-- 
- Mark

http://www.lucidimagination.com
