Re: Rsyncd start and stop for multiple instances

2009-02-09 Thread Chris Hostetter

: How can I hack the existing script to support multiple rsync module

you might want to just consult some rsyncd resources to answer this 
question; i believe adding a new [modname] block is how you add a 
module, with the path/comment keys listed underneath it, however...
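
purely for illustration, a hand-written rsyncd.conf with one module per 
index might look something like this (module names and paths here are 
made up, and snappuller/snapinstaller would also need to be changed to 
use the right module name instead of the fixed "solr"):

    [core1]
        path = /opt/solr/core1/data
        comment = snapshots for core1
        read only = yes
    [core2]
        path = /opt/solr/core2/data
        comment = snapshots for core2
        read only = yes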

1) i don't believe it's recommended to do this with the solr replication 
scripts ... having a separate rsyncd port per index is considered the best 
approach (last time i checked anyway)
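
concretely that just means one rsync daemon per index, each on its own 
port, something like this (paths and port numbers are made up, and i'm 
assuming the -p port option of the stock scripts ... double check the 
options in your version):

    # master: one rsync daemon per solr instance, each on its own port
    /opt/solr/core1/solr/bin/rsyncd-enable
    /opt/solr/core1/solr/bin/rsyncd-start -d /opt/solr/core1/solr/data -p 18983
    /opt/solr/core2/solr/bin/rsyncd-enable
    /opt/solr/core2/solr/bin/rsyncd-start -d /opt/solr/core2/solr/data -p 18984
    # and on the slaves, configure each instance's snappuller to pull
    # from the matching master port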

2) you should probably consider using the java replication if you're 
dealing with multiple indexes ... it's definitely moving to replace the 
script based replication.  if script based replication was working 
perfectly for you, i wouldn't really recommend that you switch (especially 
since i haven't even had a chance to test out the java replication) but 
since it sounds like script based replication doesn't currently meet your 
needs, it would be worth investigating.
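
for reference, the java replication is configured per core in 
solrconfig.xml, roughly along these lines (host, port, and core names 
here are made up ... check the SolrReplication wiki page for the exact 
syntax in your version):

    <!-- on the master core -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="master">
        <str name="replicateAfter">commit</str>
      </lst>
    </requestHandler>

    <!-- on the slave core, polling the matching master core -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="slave">
        <str name="masterUrl">http://master-host:8983/solr/core1/replication</str>
        <str name="pollInterval">00:00:60</str>
      </lst>
    </requestHandler>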


-Hoss



Re: Rsyncd start and stop for multiple instances

2008-07-01 Thread Jacob Singh
Hi Bill and Others:


Bill Au wrote:
 The rsyncd-start script gets the data_dir path from the command line and
 creates an rsyncd.conf on the fly, exporting that path as the rsync module named
 solr.  The slaves need the data_dir path on the master to look for the
 latest snapshot, but the rsync command used by the slaves relies on the
 rsync module name solr to do the file transfer using rsyncd.

So is the answer that replication simply won't work for multiple
instances unless I have a dedicated port for each one?

Or is the answer that I have to hack the existing scripts?

I'm a little confused when you say that the slave needs to know the master's
data dir, but that, no matter what the slave sends, it needs to match the one
the master knew when it started rsyncd...

Sorry if my questions are newbie, I've not actually used rsyncd, but
I've read up quite a bit now.

Thanks,
Jacob

 
 Bill
 
 On Tue, Jun 10, 2008 at 4:24 AM, Jacob Singh [EMAIL PROTECTED] wrote:
 
 Hey folks,

 I'm messing around with running multiple indexes on the same server
 using Jetty contexts.  I've got them running groovy thanks to the
 tutorial on the wiki; however, I'm a little confused about how the collection
 distribution stuff will work for replication.

 The rsyncd-enable command is simple enough, but the rsyncd-start command
 takes a -d (data dir) as an argument... Since I'm hosting 4 different
 instances, all with their own data dirs, how do I do this?

 Also, you have to specify the master data dir when you are connecting
 from the slave anyway, so why does it need to be specified when I start
 the daemon?  If I just start it with any old data dir will it work for
 anything the user running it has perms on?

 Thanks,
 Jacob

 



Re: Rsyncd start and stop for multiple instances

2008-07-01 Thread Bill Au
You can either use a dedicated rsync port for each instance or hack the
existing scripts to support multiple rsync modules.  Both ways should work.

Bill

On Tue, Jul 1, 2008 at 3:49 AM, Jacob Singh [EMAIL PROTECTED] wrote:

 Hi Bill and Others:


 Bill Au wrote:
  The rsyncd-start script gets the data_dir path from the command line and
  creates an rsyncd.conf on the fly, exporting that path as the rsync module
  named solr.  The slaves need the data_dir path on the master to look for the
  latest snapshot, but the rsync command used by the slaves relies on the
  rsync module name solr to do the file transfer using rsyncd.

 So is the answer that replication simply won't work for multiple
 instances unless I have a dedicated port for each one?

 Or is the answer that I have to hack the existing scripts?

 I'm a little confused when you say that the slave needs to know the master's
 data dir, but that, no matter what the slave sends, it needs to match the one
 the master knew when it started rsyncd...

 Sorry if my questions are newbie, I've not actually used rsyncd, but
 I've read up quite a bit now.

 Thanks,
 Jacob

 
  Bill
 
  On Tue, Jun 10, 2008 at 4:24 AM, Jacob Singh [EMAIL PROTECTED]
 wrote:
 
  Hey folks,
 
  I'm messing around with running multiple indexes on the same server
  using Jetty contexts.  I've got them running groovy thanks to the
  tutorial on the wiki; however, I'm a little confused about how the collection
  distribution stuff will work for replication.
 
  The rsyncd-enable command is simple enough, but the rsyncd-start command
  takes a -d (data dir) as an argument... Since I'm hosting 4 different
  instances, all with their own data dirs, how do I do this?
 
  Also, you have to specify the master data dir when you are connecting
  from the slave anyway, so why does it need to be specified when I start
  the daemon?  If I just start it with any old data dir will it work for
  anything the user running it has perms on?
 
  Thanks,
  Jacob
 
 




Re: Rsyncd start and stop for multiple instances

2008-06-13 Thread Bill Au
The rsyncd-start script gets the data_dir path from the command line and
creates an rsyncd.conf on the fly, exporting that path as the rsync module named
solr.  The slaves need the data_dir path on the master to look for the
latest snapshot, but the rsync command used by the slaves relies on the
rsync module name solr to do the file transfer using rsyncd.
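
To make that concrete, the generated file and the slave-side pull look
roughly like this (the exact contents and flags vary by version, and the
host, port, and snapshot name below are just placeholders):

    # rsyncd.conf written by rsyncd-start on the master (data_dir comes from -d):
    #   [solr]
    #       path = /path/to/master/data_dir
    #       comment = Solr
    #
    # what snappuller on a slave effectively runs; note the fixed module name "solr":
    rsync -Wa --delete \
        rsync://master-host:18983/solr/snapshot.20080610120000/ \
        /path/to/slave/data_dir/snapshot.20080610120000-wip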

Bill

On Tue, Jun 10, 2008 at 4:24 AM, Jacob Singh [EMAIL PROTECTED] wrote:

 Hey folks,

 I'm messing around with running multiple indexes on the same server
 using Jetty contexts.  I've got them running groovy thanks to the
 tutorial on the wiki; however, I'm a little confused about how the collection
 distribution stuff will work for replication.

 The rsyncd-enable command is simple enough, but the rsyncd-start command
 takes a -d (data dir) as an argument... Since I'm hosting 4 different
 instances, all with their own data dirs, how do I do this?

 Also, you have to specify the master data dir when you are connecting
 from the slave anyway, so why does it need to be specified when I start
 the daemon?  If I just start it with any old data dir will it work for
 anything the user running it has perms on?

 Thanks,
 Jacob



Rsyncd start and stop for multiple instances

2008-06-10 Thread Jacob Singh
Hey folks,

I'm messing around with running multiple indexes on the same server
using Jetty contexts.  I've got them running groovy thanks to the
tutorial on the wiki; however, I'm a little confused about how the collection
distribution stuff will work for replication.

The rsyncd-enable command is simple enough, but the rsyncd-start command
takes a -d (data dir) as an argument... Since I'm hosting 4 different
instances, all with their own data dirs, how do I do this?

Also, you have to specify the master data dir when you are connecting
from the slave anyway, so why does it need to be specified when I start
the daemon?  If I just start it with any old data dir will it work for
anything the user running it has perms on?

Thanks,
Jacob