Hi,
In my experience, you can just migrate to 1.4. We are using this in
production without any problems, and the Java replication (
http://wiki.apache.org/solr/SolrReplication) works excellently.
Bye,
Jaco.
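For anyone following along: the Java-based replication in 1.4 is configured entirely in solrconfig.xml via the ReplicationHandler described on that wiki page. A minimal sketch along those lines (the hostname, port, and confFiles list are placeholders to adapt):

```xml
<!-- On the master's solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- take a replicable snapshot after every commit -->
    <str name="replicateAfter">commit</str>
    <!-- config files to ship alongside the index -->
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On each slave's solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master_host:8983/solr/replication</str>
    <!-- poll the master every 60 seconds -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```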
2009/6/17 vaibhav joshi callvaib...@hotmail.com
Hi,
I am using Solr 1.3 release and
On Sun, May 24, 2009 at 12:37 AM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Yes. Although it might work under Cygwin, too.
cygwin wouldn't work:
http://www.lucidimagination.com/search/document/32471da18a69b169/replication_in_1_3
-Yonik
http://www.lucidimagination.com
OK. And the replication available with Solr 1.3 is only for Unix, right?
Thanks,
Ashish
Noble Paul നോബിള് नोब्ळ्-2 wrote:
On Fri, May 22, 2009 at 3:12 PM, Ashish P ashish.ping...@gmail.com
wrote:
I want to add master slave configuration for solr. I have following solr
configuration:
I
Yes. Although it might work under Cygwin, too.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Ashish P ashish.ping...@gmail.com
To: solr-user@lucene.apache.org
Sent: Saturday, May 23, 2009 10:33:38 PM
Subject: Re: solr replication 1.3
On Fri, May 22, 2009 at 3:12 PM, Ashish P ashish.ping...@gmail.com wrote:
I want to add master slave configuration for solr. I have following solr
configuration:
I am using solr 1.3 on windows. I am also using EmbeddedSolrServer.
In this case is it possible to perform master slave
Hi Noble,
Great stuff, no problem, I really think the Solr development team is
excellent and takes pride in delivering high quality software!
And we're going into production with a brand new Solr based system in a few
weeks as well, so I'm really happy that this is fixed now.
Bye,
Jaco.
Hi,
I applied the patch and did some more tests - also adding some LOG.info()
calls in delTree to see if it actually gets invoked (LOG.info("START:
delTree: " + dir.getName()); at the start of that method). I don't see any
entries of this showing up in the log file at all, so it looks like delTree
On Fri, Jan 23, 2009 at 2:12 PM, Jaco jdevr...@gmail.com wrote:
Hi,
I applied the patch and did some more tests - also adding some LOG.info()
calls in delTree to see if it actually gets invoked (LOG.info("START:
delTree: " + dir.getName()); at the start of that method). I don't see any
entries
I tested with the patch; it has solved both issues.
On Fri, Jan 23, 2009 at 5:00 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Fri, Jan 23, 2009 at 2:12 PM, Jaco jdevr...@gmail.com wrote:
Hi,
I applied the patch and did some more tests - also adding some LOG.info()
calls in
I have opened an issue to track this
https://issues.apache.org/jira/browse/SOLR-978
On Fri, Jan 23, 2009 at 5:22 PM, Noble Paul നോബിള് नोब्ळ्
noble.p...@gmail.com wrote:
I tested with the patch; it has solved both issues.
On Fri, Jan 23, 2009 at 5:00 PM, Shalin Shekhar Mangar
Hi,
I have tested this as well, looking fine! Both issues are indeed fixed, and
the index directory of the slaves gets cleaned up nicely. I will apply the
changes to all systems I've got running and report back in this thread in
case any issues are found.
Thanks for the very fast help! I usually
Hm, I don't know what to do anymore. I tried this:
- Run Tomcat service as local administrator to overcome any permissioning
issues
- Installed the latest nightly build (I noticed that the item I mentioned before
(http://markmail.org/message/yq2ram4f3jblermd) had been committed, which is
good)
- Build a
On Thu, Jan 22, 2009 at 10:18 PM, Jaco jdevr...@gmail.com wrote:
Hm, I don't know what to do anymore. I tried this:
- Run Tomcat service as local administrator to overcome any permissioning
issues
- Installed latest nightly build (I noticed that item I mentioned before (
We are seeing something very similar. Ours is intermittent and usually
happens a great deal on random days. Often it seems to occur during large
index updates on the master.
On 1/22/09 8:58 AM, Shalin Shekhar Mangar shalinman...@gmail.com wrote:
On Thu, Jan 22, 2009 at 10:18 PM, Jaco
On Thu, Jan 22, 2009 at 10:37 PM, Jeff Newburn jnewb...@zappos.com wrote:
We are seeing something very similar. Ours is intermittent and usually
happens a great deal on random days. Often it seems to occur during large
index updates on the master.
Jeff, is this also on a Windows box?
--
My apologies. No, we are using a Linux/Tomcat setup.
On 1/22/09 9:15 AM, Shalin Shekhar Mangar shalinman...@gmail.com wrote:
On Thu, Jan 22, 2009 at 10:37 PM, Jeff Newburn jnewb...@zappos.com wrote:
We are seeing something very similar. Ours is intermittent and usually
happens a great deal
Jeff ,
Do you see both the empty index. dirs as well as the extra files
in the index?
--Noble
On Thu, Jan 22, 2009 at 10:37 PM, Jeff Newburn jnewb...@zappos.com wrote:
We are seeing something very similar. Ours is intermittent and usually
happens a great deal on random days. Often it seems
We have both. A majority of them are just empty but others have almost a
full index worth of files. I have also noticed that during a lengthy index
update the system will throw errors about how it cannot move one of the
index files. Essentially on reindex the system does not replicate until an
This was reported by another user and was fixed recently. Are you using
a recent version?
--Noble
On Fri, Jan 23, 2009 at 12:00 AM, Jeff Newburn jnewb...@zappos.com wrote:
We have both. A majority of them are just empty but others have almost a
full index worth of files. I have also noticed
Our version is from a few weeks ago. Does this contribute to the directory
issues and the extra files that are left?
On 1/22/09 10:33 AM, Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com
wrote:
This was reported by another user and was fixed recently. Are you using
a recent version?
--Noble
On Fri,
I am not sure if it was completely fixed (this was related to a Lucene bug),
but you can try with a recent build and confirm it for us.
I have never encountered these during our tests in windows XP/Linux
I have attached a patch which logs the names of the files which could
not get deleted (which
On Fri, Jan 23, 2009 at 12:15 AM, Noble Paul നോബിള് नोब्ळ्
noble.p...@gmail.com wrote:
I have attached a patch which logs the names of the files which could
not get deleted (which may help us diagnose the problem). If you are
comfortable applying a patch you may try it out.
I've committed
Hello,
Hi,
I'm running the Solr nightly build of 20.12.2008, with the patch as discussed on
http://markmail.org/message/yq2ram4f3jblermd, using Solr replication.
On various systems running, I see that the disk space consumed on the slave
is much higher than on the master. One example:
- Master:
The index.xxx directories are supposed to be deleted (automatically).
You can safely delete them.
But, I am wondering why the index files in the slave did not get
deleted. By default the deletionPolicy is KeepOnlyLastCommit.
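For reference, KeepOnlyLastCommit corresponds to the default deletion policy block in solrconfig.xml; a sketch of the relevant (default) settings:

```xml
<deletionPolicy class="solr.SolrDeletionPolicy">
  <!-- keep only the most recent commit point -->
  <str name="maxCommitsToKeep">1</str>
  <!-- keep no additional optimized commit points -->
  <str name="maxOptimizedCommitsToKeep">0</str>
</deletionPolicy>
```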
On Wed, Jan 21, 2009 at 2:15 PM, Jaco jdevr...@gmail.com wrote:
Hi,
Hi,
There shouldn't be so many files on the slave. Since the empty index.x
folders are not getting deleted, is it possible that the Solr process user
does not have enough privileges to delete files/folders?
Also, have you made any changes to the IndexDeletionPolicy configuration?
On Wed, Jan 21,
Thanks for the fast replies!
It appears that I made a (probably classical) error... I didn't make the
change to solrconfig.xml to include the deletionPolicy when applying the
upgrade. I include this now, but the slave is not cleaning up. Will this be
done at some point automatically? Can I
On Wed, Jan 21, 2009 at 3:42 PM, Jaco jdevr...@gmail.com wrote:
Thanks for the fast replies!
It appears that I made a (probably classical) error... I didn't make the
change to solrconfig.xml to include the deletionPolicy when applying the
upgrade. I include this now, but the slave is not
Hm, this is becoming a FAQ :)
Have you checked recent discussions about this via markmail.org?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: David Giffin da...@giffin.org
To: solr-user@lucene.apache.org
Sent: Thursday, January 8, 2009
The current scripts use rsync to minimize the amount of data actually
being copied.
I've had a brief look and found only one implementation, which is GPL and
abandoned:
http://sourceforge.net/projects/jarsync.
Personally I still think the size of the transfer is important (as for
most use cases
Hi Ian,
I assume that a sizeable number of people do replication after an optimize,
which causes almost the whole index to be transferred by rsync. We can do a
checksum based modification check on individual segment files and pull only
those from the master. Although that's not a true diff copy,
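The checksum-based modification check suggested above could be sketched as follows. This is a hypothetical illustration in Python, not Solr's actual implementation; the function names are made up:

```python
import hashlib
import os

def file_checksum(path, chunk_size=65536):
    """MD5 of a file, read in chunks so large segment files don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_to_pull(master_dir, slave_dir):
    """Names of index files that are missing on the slave or differ from the master."""
    changed = []
    for name in sorted(os.listdir(master_dir)):
        master_file = os.path.join(master_dir, name)
        slave_file = os.path.join(slave_dir, name)
        if not os.path.isfile(master_file):
            continue
        if (not os.path.exists(slave_file)
                or file_checksum(master_file) != file_checksum(slave_file)):
            changed.append(name)
    return changed
```

Only the names returned would then be fetched from the master, which avoids re-copying unchanged segment files but, as noted, is not a true diff copy.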
In the future, don't post the same idea in solr-user and solr-dev...
most people on solr-dev read solr-user, and cross-posting splits up
where the discussion ends up.
On Apr 29, 2008, at 5:01 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
hi ,
The current replication strategy in solr involves shell
We are not doing away with the current replication strategy. It's just that
we're proposing an alternative.
I'm all for adding a replication strategy that works on windows and is
controlled/managed from the webapp. The existing hardlink rsync
methods may have better performance...
Sent: Monday, January 14, 2008 9:40 PM
To: [EMAIL PROTECTED]
Subject: Re: Solr replication
Yes, you need the same changes in scripts.conf on the slave server but you
don't need the post commit hook enabled on the slave server.
The post commit hook is used to create snapshots. You will see
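For context, the post-commit hook being discussed is the RunExecutableListener entry in the master's solrconfig.xml; a sketch along the lines of the Collection Distribution docs (the exe path is an assumption about your layout):

```xml
<listener event="postCommit" class="solr.RunExecutableListener">
  <!-- run snapshooter after each commit to create a snapshot -->
  <str name="exe">solr/bin/snapshooter</str>
  <str name="dir">.</str>
  <bool name="wait">true</bool>
</listener>
```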
[mailto:[EMAIL PROTECTED] ]
Sent: Saturday, December 15, 2007 1:08 AM
To: solr-user@lucene.apache.org; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Solr replication
On Dec 14, 2007 7:00 AM, Dilip.TS [EMAIL PROTECTED] wrote:
Hi,
I have
: RE: Solr replication
Hi,
I understand that rsync is a Unix/Linux daemon which needs to be
enabled/run to achieve Solr Collection Distribution.
Do we have any similar support for Solr Collection Distribution in
the Windows environment, or do we need to write equivalent commands
Hi,
I have the following requirement for SOLR Collection Distribution using
Embedded Solr with the Jetty server:
I have different data folders for multiple instances of SOLR within the same
application.
I'm using the same SOLR_HOME with a single bin and conf folder.
My query is:
1) Is it possible
On Dec 14, 2007 7:00 AM, Dilip.TS [EMAIL PROTECTED] wrote:
Hi,
I have the following requirement for SOLR Collection Distribution using
Embedded Solr with the Jetty server:
I have different data folders for multiple instances of SOLR within the same
application.
I'm using the same SOLR_HOME
1) On solr.master:
+ Edit scripts.conf:
solr_hostname=localhost
solr_port=8983
rsyncd_port=18983
+ Enable and start rsyncd:
rsyncd-enable; rsyncd-start
+ Run snapshooter:
snapshooter
After running this, you should be able to see a new folder named snapshot.*
in the data/index folder.
You can
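On the slave side, the script-based setup is typically completed with the pull/install scripts from Solr's bin directory; a sketch, assuming scripts.conf on the slave has already been edited to point at the master (this requires a Solr 1.x install, so it won't run standalone):

```shell
# Pull the newest snapshot from the master via rsync
snappuller
# Install the snapshot into the slave's index and trigger a commit
snapinstaller
```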
Works like a charm. Thanks very much.
cheers
Y.
Original Message
Date: Mon, 1 Oct 2007 21:55:30 +1000
From: climbingrose
To: solr-user@lucene.apache.org
Subject: Re: Solr replication
1) On solr.master:
+ Edit scripts.conf:
solr_hostname
to tell Solr (on slave node) to sync itself with
disk ?
cheers
Y.
Original Message
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: Re: Re: Solr replication
Date: Mon, 1 Oct 2007 15:00:46 +0200
Works like a charm. Thanks very much.
cheers
Y.
Original Message
Date
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: Re: Re: Solr replication
Date: Mon, 1 Oct 2007 15:00:46 +0200
Works like a charm. Thanks very much.
cheers
Y.
Original Message
Date: Mon, 1 Oct 2007 21:55:30 +1000
From: climbingrose
To: solr-user
Perfect. Thanks to all of you.
cheers
Y.
Original Message
Date: Tue, 2 Oct 2007 01:01:37 +1000
From: climbingrose
To: solr-user@lucene.apache.org
Subject: Re: Re: Re: Solr replication
sh /bin/commit should trigger a refresh. However
On 12/27/06, Biggy [EMAIL PROTECTED] wrote:
Does anyone know how/if Solr can handle, say, 30 servers with 3 requests/sec?
We've only gone to 10 replicated search servers at CNET (and that
number turned out to be way overkill). For most types of requests, a
single server can handle way more than