Hi Olav,

Am I correct in assuming that you are running a distributed deployment? If so, 
make sure that on the admin node you are running matterhorn-search-remote 
instead of matterhorn-search-impl. The remote implementation will forward the 
data to the actual impl running on the Engage server.
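
A quick way to verify which variant a node is actually running is to look at 
the deployed bundles. The lib path and jar naming below are assumptions for a 
typical Matterhorn install, so adjust them to match your deployment:

```shell
# Hedged sketch: report which search bundle a node has deployed.
# Default lib path and jar names are assumptions; adjust to your install.
check_search_bundle() {
  local libdir="${1:-/opt/matterhorn/lib/matterhorn}"
  if ls "$libdir"/matterhorn-search-remote-*.jar >/dev/null 2>&1; then
    echo remote   # what you want on the admin node of a distributed setup
  elif ls "$libdir"/matterhorn-search-impl-*.jar >/dev/null 2>&1; then
    echo impl     # the impl belongs on the Engage node only
  else
    echo none
  fi
}

check_search_bundle
```

On the admin node this should report "remote"; "impl" should only show up on 
the Engage node.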

Tobias

On 11.12.2012, at 11:26, Olav Bringedal <[email protected]> wrote:

> On 2012-06-07 15:25, Tobias Wunden wrote:
>> Ruben,
>> 
>> it's usually not recommended to be running the SOLR search indexes off 
>> network (NFS especially) shares, see Google for details.
>> 
>> Tobias
>> 
> 
> Hi
> 
> Sorry for somewhat repeating myself and reviving an old thread.
> 
> I have been banging my head against this one for quite a few months now. I 
> have to choose between restarting Matterhorn on the engage server every time 
> a new recording is published, or copying the Solr indexes from the core 
> server to the engage player on every publish. Until now we have chosen the 
> former, as it is more convenient.
> 
> In my mind, Matterhorn's service-driven architecture should provide a way for 
> the publishing method to make the engage server rebuild its indexes.
> I have not seen any indication that the core server in any way communicates 
> with the engage server's Solr service to say "here is a new recording, please 
> update your index".
> 
> Digging a bit deeper: sharing the index over NFS confuses the engage server's 
> Solr service, as the index files have been given new names.
> 
> My third attempt was to mount the share read-only ('ro') on the engage 
> player. That broke the entire server, so it is not a good solution either.
> 
> So my questions are:
> 
> - How is Solr supposed to be updated on the engage server, if not via a 
> shared device?
> 
> - Is it possible to force the engage Solr service to reload the index files?
> 
> - How are other adopters solving a multi-server setup?
> 
> Mark: Solr here means the embedded Solr services, not a separate Solr server 
> (which some of the documentation refers to).
> 
> 
> I'd be grateful for any advice!
> 
>>> 
>>> 
>>> On Jun 1, 2012, at 3:08 PM, Tobias Wunden wrote:
>>> 
>>>> Hi Frank,
>>>> 
>>>>> from a thread at the second of May, I read that the best way to
>>>>> minimize the duplication of data in a cluster is to put the entire
>>>>> org.opencastproject.storage.dir on a shared NFS drive. Which I did.
>>>> 
>>>> only two selected directories should go onto a shared drive: the 
>>>> file.repo.path and the workspace.dir (both from config.properties). All 
>>>> the other directories *must not* go into shared locations; they are to be 
>>>> kept per host, otherwise you will experience nodes overwriting each 
>>>> other's data.
>>>> 
>>>> Tobias
>>>> 
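
(For reference, those two entries might look roughly like this in 
etc/config.properties. The exact property keys and mount points below are 
assumptions, so verify them against the config.properties shipped with your 
version:)

```properties
# Shared across all nodes (e.g. an NFS mount) -- assumed keys/paths, verify locally
org.opencastproject.file.repo.path=/mnt/matterhorn-shared/files
org.opencastproject.workspace.rootdir=/mnt/matterhorn-shared/workspace

# Everything else, including the Solr indexes under the storage dir, stays per host
org.opencastproject.storage.dir=/opt/matterhorn/storage
```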
> 
> 
> -- 
> 
> Olav Bringedal
> 
> Seksjon for integrasjon og applikasjonsutvikling
> IT-Avdelingen UIB
> _______________________________________________
> Matterhorn-users mailing list
> [email protected]
> http://lists.opencastproject.org/mailman/listinfo/matterhorn-users
