Hello Hoss, Steve,

thank you very much for your feedback; it has been very helpful, and I now
feel much more confident about this architecture.

In fact, I decided to go with a single shared schema while keeping multiple
indexes (multicore), because the two indexes are very different: one is
huge and updated infrequently (a delta once a day, a full rebuild once a
week), while the other is much smaller and updated frequently (hourly and
daily deltas, a full rebuild once a week).

My boss is happy...thus I am happy too :-)

Now I am struggling a bit with SolrJ... but that is already in another post
of mine :-)

Cheers,
Giovanni


On 3/26/09, Stephen Weiss <swe...@stylesight.com> wrote:
>
>
> I have a very similar setup and that's precisely what we do - except with
> JSON.
>
> 1) Request comes into PHP
> 2) PHP runs the search against several different cores (in a multicore
> setup) - ours are a little more than "slightly" different
> 3) PHP constructs a new object with the responseHeader and response objects
> joined together (basically add the record counts together in the header and
> then concatenate the arrays of documents)
> 4) PHP encodes the combined data into JSON and returns it
>
> It sounds clunky but it all manages to happen very quickly (< 200 ms round
> trip).  The only problem you might hit is with paging, but from the way you
> describe your situation it doesn't sound like that will be a problem.  It's
> more of an issue if you're trying to make them seamlessly flow into each
> other, but it sounds like you plan on presenting them separately (as we do).
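
Steps 1-4 above could be sketched roughly as follows. This is a Python
sketch for illustration (the original is PHP); the response shape follows
Solr's standard JSON response writer output (`wt=json`), but the sample
documents, field values, and the choice to report the slowest core's QTime
in the combined header are assumptions for the example:

```python
import json

# Hypothetical responses from two cores, shaped like Solr's JSON
# response writer output: a responseHeader plus a response object
# carrying numFound and the array of matching docs.
resp_a = {
    "responseHeader": {"status": 0, "QTime": 12},
    "response": {"numFound": 2, "start": 0,
                 "docs": [{"id": "a1"}, {"id": "a2"}]},
}
resp_b = {
    "responseHeader": {"status": 0, "QTime": 7},
    "response": {"numFound": 1, "start": 0,
                 "docs": [{"id": "b1"}]},
}

def merge_solr_responses(*responses):
    """Combine several Solr JSON responses into one: add the record
    counts together in the header section and concatenate the arrays
    of documents, as described in steps 3-4."""
    return {
        "responseHeader": {
            # Propagate any non-zero status from the individual cores.
            "status": max(r["responseHeader"]["status"] for r in responses),
            # Report the slowest core's query time (an assumption; any
            # aggregation would do here).
            "QTime": max(r["responseHeader"]["QTime"] for r in responses),
        },
        "response": {
            "numFound": sum(r["response"]["numFound"] for r in responses),
            "start": 0,
            "docs": [d for r in responses for d in r["response"]["docs"]],
        },
    }

combined = merge_solr_responses(resp_a, resp_b)
print(json.dumps(combined))  # step 4: encode the combined data as JSON
```

As Steve notes, the clunky part is paging: `numFound` sums cleanly, but a
`start`/`rows` window no longer maps onto either core's result set, which
is why this works best when the two result lists are presented separately.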
>
> --
> Steve
>
>
>> it could be a custom request handler, but it doesn't have to be -- you
>> could implement it in whatever way is easiest for you (there's no reason
>> why it has to run in the same JVM or on the same physical machine as Solr
>> ... it could be a PHP script on another server if you want)
>>
>>
>>
>>
>> -Hoss
>>
>>
>
