Re: deploying nightly updates to slaves
Lukas Kahwe Smith wrote:
> On 07.04.2010, at 14:24, Lukas Kahwe Smith wrote: For Solr the idea is also to just copy the index files into a new directory and then use http://wiki.apache.org/solr/CoreAdmin#RELOAD after updating the config file (I assume it's not possible to hot swap like with MySQL).
>
> Since I want to keep a local backup of the index, I guess it might be better to first call UNLOAD and then CREATE, after having moved the current index data to a backup dir and the new index data into position. Now UNLOAD has the feature of continuing to serve existing requests. In my case I actually lock the slaves, so there shouldn't be any requests, and if there are, they do not matter anyway. I do not want to shut down the Solr server, so as not to accidentally set off the monitoring. But I also want to make sure I do not corrupt the index (then again, I am only reading anyway). I am worried that if for some reason some request is still open and I do not poll via the STATUS action to make sure the core has been unloaded, I might corrupt the index.
>
> regards, Lukas Kahwe Smith m...@pooteeweet.org

Hello Lukas,

it sounds as if you could just use Solr replication out of the box. Replication only happens when a commit happens on the master, or on some other trigger, so you don't waste time on unnecessary replications during the day.

Is there by any chance the possibility that you'd rather want to store your data in HBase than in MySQL? I'm working on a project right now to store Solr/Lucene indices directly in HBase too. I'll be at the webtuesday tomorrow. Maybe I could give an introduction to Hadoop/HBase at a next webtuesday?

Best regards,
Thomas Koch, http://www.koch.ro
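A minimal sketch of what "replication out of the box" could look like, assuming the stock Solr 1.4 ReplicationHandler is mapped to /replication on master and slaves (this is not part of the setup described in the thread; host names are placeholders):

    # Ask each slave's ReplicationHandler to pull the latest committed
    # index from its configured master (the "fetchindex" command).
    # Host names and the handler path are assumptions, not project values.
    import urllib.request

    SLAVES = ["http://slave1:8983/solr", "http://slave2:8983/solr"]

    for slave in SLAVES:
        url = slave + "/replication?command=fetchindex"
        with urllib.request.urlopen(url) as resp:
            print(slave, resp.status)

With the replication handler the slaves can also poll the master on a schedule, so an explicit trigger like this is only needed if polling is disabled and the rollout is supposed to be timed by hand.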
deploying nightly updates to slaves
Hi,

For a project I am running a LAMP cluster (master and multiple slaves). Solr is running inside Jetty. To make things easy in terms of server management, all servers are configured the same way, and one server just acts as the MySQL master.

As for Solr, the only data changes happen overnight. So the idea is that we will update the MySQL database on a single server. Then we will update the Solr index on that server. Once this is completed and some sanity checks pass, we will update the slaves one by one. With MySQL we can just copy the data files and hot swap them. For Solr the idea is also to just copy the index files into a new directory and then use http://wiki.apache.org/solr/CoreAdmin#RELOAD after updating the config file (I assume it's not possible to hot swap like with MySQL).

With this approach we can time the deployment of the new data to each slave perfectly. Plus, if we run into any issues, we can also easily roll back by just swapping the data around again.

I would appreciate any comments you guys might have on this concept.

regards, Lukas Kahwe Smith m...@pooteeweet.org
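A minimal sketch of the copy-then-RELOAD step described above, assuming the CoreAdmin handler from the linked wiki page; core name and paths are placeholders, not actual project values:

    # Swap the freshly built index into place, keep the old one for rollback,
    # then ask Solr to RELOAD the core via the CoreAdmin API.
    import shutil
    import urllib.request

    SOLR = "http://localhost:8983/solr"
    CORE = "core0"                           # placeholder core name
    DATA_DIR = "/var/solr/data/core0"        # assumed dataDir of the core
    NEW_INDEX = "/var/solr/staging/index"    # index copied over from the master

    # keep the old index around so a rollback is just another swap
    shutil.move(DATA_DIR + "/index", DATA_DIR + "/index.bak")
    shutil.move(NEW_INDEX, DATA_DIR + "/index")

    # RELOAD makes the core re-open its config and index files
    url = SOLR + "/admin/cores?action=RELOAD&core=" + CORE
    with urllib.request.urlopen(url) as resp:
        print(resp.status)

Rolling back would move index.bak back into place and issue another RELOAD.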
Re: deploying nightly updates to slaves
On 07.04.2010, at 14:24, Lukas Kahwe Smith wrote:
> For Solr the idea is also to just copy the index files into a new directory and then use http://wiki.apache.org/solr/CoreAdmin#RELOAD after updating the config file (I assume it's not possible to hot swap like with MySQL).

Since I want to keep a local backup of the index, I guess it might be better to first call UNLOAD and then CREATE, after having moved the current index data to a backup dir and the new index data into position.

Now UNLOAD has the feature of continuing to serve existing requests. In my case I actually lock the slaves, so there shouldn't be any requests, and if there are, they do not matter anyway. I do not want to shut down the Solr server, so as not to accidentally set off the monitoring. But I also want to make sure I do not corrupt the index (then again, I am only reading anyway). I am worried that if for some reason some request is still open and I do not poll via the STATUS action to make sure the core has been unloaded, I might corrupt the index.

regards, Lukas Kahwe Smith m...@pooteeweet.org
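A minimal sketch of that UNLOAD / swap / CREATE sequence, including the STATUS polling mentioned above; the core name, directories and the exact shape of the STATUS response are assumptions, not something verified against a running installation:

    # Unload the core, wait until STATUS no longer reports it, swap the
    # index directories (keeping a local backup), then re-create the core.
    import json
    import shutil
    import time
    import urllib.request

    CORES_URL = "http://localhost:8983/solr/admin/cores"
    CORE = "core0"                        # placeholder core name
    INSTANCE_DIR = "/var/solr/core0"      # assumed instanceDir
    DATA_DIR = INSTANCE_DIR + "/data"
    NEW_INDEX = "/var/solr/staging/index"

    def core_admin(params):
        with urllib.request.urlopen(CORES_URL + "?wt=json&" + params) as resp:
            return json.load(resp)

    def core_listed():
        # STATUS lists the registered cores; an unloaded core should drop out
        return bool(core_admin("action=STATUS").get("status", {}).get(CORE))

    core_admin("action=UNLOAD&core=" + CORE)
    while core_listed():
        time.sleep(1)                     # poll until the core really is gone

    # swap the index: the old data stays around as the local backup
    shutil.move(DATA_DIR + "/index", DATA_DIR + "/index.bak")
    shutil.move(NEW_INDEX, DATA_DIR + "/index")

    # register the core again against the same instanceDir/dataDir
    core_admin("action=CREATE&name=%s&instanceDir=%s&dataDir=%s"
               % (CORE, INSTANCE_DIR, DATA_DIR))

Since the slaves are locked and only serve reads, the main thing the STATUS loop buys is some certainty that no searcher still has the old files open before they are moved aside.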