Why not use some form of RAID for your index store? You'd get the performance
benefit of multiple disks without the complexity of managing them via Solr.
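
For example, here's a minimal sketch of striping two data disks with Linux
software RAID 0 and mounting the array as the index location (the device names
and mount point below are just placeholders, and RAID 0 trades redundancy for
throughput):

    # stripe two disks into a single array (RAID 0 = no redundancy)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    # put a filesystem on it and mount it where Solr keeps its index data
    mkfs.ext4 /dev/md0
    mount /dev/md0 /var/solr/data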

Thanks,
Greg



-----Original Message-----
From: Deepak Konidena [mailto:deepakk...@gmail.com] 
Sent: Wednesday, September 11, 2013 2:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Distributing lucene segments across multiple disks.

Are you suggesting a multi-core setup, where all the cores share the same 
schema, and the cores lie on different disks?

Basically, I'd like to know if I can distribute shards/segments across multiple
disks on a single machine without the use of ZooKeeper.
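
For instance (the host, core names, and paths below are just placeholders),
would creating one core per disk through the CoreAdmin API, with each core's
dataDir pointing at a different mount, be a reasonable way to do that?

    # one core per disk, each with its dataDir on a separate mount
    curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=shard1&instanceDir=shard1&dataDir=/mnt/disk1/solr/data'
    curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=shard2&instanceDir=shard2&dataDir=/mnt/disk2/solr/data'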





-Deepak



On Wed, Sep 11, 2013 at 11:55 AM, Upayavira <u...@odoko.co.uk> wrote:

> I think you'll find it hard to distribute different segments between 
> disks, as they are typically stored in the same directory.
>
> However, instantiating separate cores on different disks should be 
> straightforward enough, and would give you a performance benefit.
>
> I've certainly heard of that done at Amazon, with a separate EBS 
> volume per core giving some performance improvement.
>
> Upayavira
>
> On Wed, Sep 11, 2013, at 07:35 PM, Deepak Konidena wrote:
> > Hi,
> >
> > I know that SolrCloud allows you to have multiple shards on 
> > different machines (or a single machine). But it requires a 
> > ZooKeeper installation for doing things like leader election, leader 
> > availability, etc.
> >
> > While SolrCloud may be the ideal solution for my use case eventually, 
> > I'd like to know if there's a way I can point my Solr instance to 
> > read Lucene segments distributed across different disks attached to the 
> > same machine.
> >
> > Thanks!
> >
> > -Deepak
>
