I'm doing an installation of Sphinx in my data center and wanted to
get some opinions on the best way to set it up in a multi-webhost
environment.  From what I've gleaned, the application connects to a
daemon (searchd) when a query runs, and that daemon is usually set up
and running on the same box as the webhost itself.  I also see that
cache files are kept locally as well.

My very general question is this: without getting into a distributed
setup (sharding, etc.), what is the optimal way to set this up?

For example, let's assume I have 10 webservers, all running Rails
instances of the same site, load balanced behind a BigIP.  Logically I
would assume I could change the host configuration for Thinking Sphinx
to point to a remote server running searchd.  That makes sense, but
then how do you deal with the cache files?
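(For the record, the remote-searchd part seems straightforward; something like this in config/sphinx.yml looks like it would do it -- the hostname and port here are placeholders I made up, not defaults:)

```yaml
# config/sphinx.yml -- hostname and port are hypothetical placeholders
production:
  address: searchbox.internal   # remote machine running searchd
  port: 9312                    # port searchd listens on
```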

Would I be OK allowing each host to manage its own cache
independently?  I've seen a few articles online about distributing
these cache files with methods like NFS or even rsync, but that
doesn't sound optimal to me.
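(If it helps frame the question, the rsync pattern I've seen described looks roughly like the sketch below -- paths and hostname are hypothetical, and it assumes indexes are built on one box and pushed out to each search host:)

```shell
# Hypothetical layout: indexes built on build-host, synced to this host.
# Assumes the usual rotation scheme where searchd reloads on SIGHUP.
rsync -az build-host:/var/sphinx/data/ /var/sphinx/data/
kill -HUP "$(cat /var/run/searchd.pid)"   # ask the local searchd to pick up new files
```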

Is using remote agents the only real way to support this
configuration, meaning I should spend the time investigating them, or
is there a simpler, more generally accepted way to accomplish what I'm
describing here?
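(As I understand it, the remote-agent route would mean a distributed index stanza in sphinx.conf along these lines -- hostnames, port, and index names below are invented for illustration:)

```
index site_dist
{
    type  = distributed
    agent = search1.internal:9312:site_core
    agent = search2.internal:9312:site_core
}
```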

Thanks!

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Thinking Sphinx" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/thinking-sphinx?hl=en
-~----------~----~----~----~------~----~------~--~---
