On Fri, Aug 14, 2009 at 8:39 AM, KaktuChakarabati <jimmoe...@gmail.com> wrote:

>
> In the old replication, I could run snappull on multiple slaves
> asynchronously but perform the snapinstall on each at the same time
> (+/- epsilon seconds), so that production load-balanced query serving
> would always remain consistent.
>
> With the new system it seems that I have no control over syncing them:
> each slave polls every few minutes and decides its next cycle based on the
> time it last *finished* updating, so I lose control over synchronizing the
> snap installation across multiple slaves.
>

That is true. How did you synchronize them with the script-based solution?
Assuming network bandwidth is distributed equally and all slaves have the
same hardware/configuration, the time difference between new searcher
registration on any two slaves should not be more than the pollInterval, no?
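
For reference, the poll interval is configured per slave in the
ReplicationHandler section of solrconfig.xml; the master host name below is
just a placeholder:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- URL of the master's replication handler (example host) -->
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <!-- how often the slave checks the master for changes (HH:mm:ss) -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>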


>
> Also, I noticed the default poll interval is 60 seconds. It would seem that
> for such a rapid interval, what I mentioned above is a non-issue. However,
> I am not clear how this works vis-a-vis the new searcher warmup: for a
> considerable index size (20 million+ docs) the warmup itself is an expensive
> and somewhat lengthy process, and if a new searcher opens and warms up every
> minute, I am not at all sure I'll be able to serve queries with reasonable
> QTimes.
>

If the pollInterval is 60 seconds, it does not mean that a new index is
fetched every 60 seconds. A new index is downloaded and installed on the
slave only if a commit happened on the master (i.e. the index was actually
changed on the master).
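
Under the hood, each poll is only a lightweight version check against the
master's replication handler, along the lines of (host name illustrative):

http://master-host:8983/solr/replication?command=indexversion

The index files are downloaded and a new searcher is opened only when the
version returned by the master differs from the one the slave already has,
so with an unchanged master the 60-second poll costs next to nothing.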

-- 
Regards,
Shalin Shekhar Mangar.
