You can't. But index restoration should be a very rare thing,
or you have some lurking problem in your process.
Or this is an XY problem, what problem are you trying to
solve? see: http://people.apache.org/~hossman/#xyproblem
Best
Erick
On Wed, Dec 21, 2011 at 12:21 PM, Dean Pullen wrote:
> I can't understand, then, how we could ever restore and get replication to
> work without manual intervention!
I can't understand, then, how we could ever restore and get replication to work
without manual intervention!
Dean
On 21 Dec 2011, at 16:37, Dean Pullen wrote:
> I can't see a way, if the slave is on another server.
>
> We're going to upgrade solr - as you can delete the index after unloading a
Be careful deleting the index manually. Delete the entire index directory,
i.e. so that the data dir has no index directory under it.
About copying the index from the slave to the master: just shut down
the master, delete all the files from the index, and use scp or something
to copy the files in the index directory over.
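A sketch of that copy; everything here (paths, hostname, service name) is an assumption for illustration, and the scp/service lines are left commented:

```shell
# Copying a slave's index back onto a stopped master.
MASTER_DATA=/tmp/solr-master-data
mkdir -p "$MASTER_DATA/index"             # simulate the old master index
# 1. stop the master:  service solr stop
rm -rf "$MASTER_DATA/index"               # 2. delete the old index entirely
# 3. pull the slave's index files across:
#    scp -r slave.example.com:/var/solr/data/index "$MASTER_DATA/"
# 4. restart the master: service solr start
```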
I can't see a way, if the slave is on another server.
We're going to upgrade solr - as you can delete the index after unloading a
core in this way:
cores?action=UNLOAD&core=liveCore&deleteIndex=true
From v3.3 (I think)
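Against a live node that call would look roughly like this (host, port and core-admin path are assumptions):

```shell
# Build the core-admin UNLOAD URL with deleteIndex=true (Solr 3.3+).
CORE=liveCore
URL="http://localhost:8983/solr/admin/cores?action=UNLOAD&core=${CORE}&deleteIndex=true"
echo "$URL"
# curl "$URL"   # uncomment against a live Solr
```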
On 21 Dec 2011, at 16:11, Dean Pullen wrote:
> Thought as much, thanks for the reply.
Thought as much, thanks for the reply.
Is there an easy way of dropping the index on the slave, or do I have to
manually delete the index files?
Regards,
Dean.
On 21 Dec 2011, at 15:54, Erick Erickson wrote:
> You've probably hit it on the head. The slave version is greater than the
> master version, so replication isn't "necessary".
You've probably hit it on the head. The slave version is greater than the master
version, so replication isn't "necessary". BTW, the version starts
life as a timestamp,
but then is simply incremented on successive commits, which accounts for
what you are seeing.
You should be able to blow the index away.
E.g. I see this in the slave logs:
2011-12-21 15:45:27,635 INFO handler.SnapPuller:265 - Master's version:
1271406570655, generation: 376
2011-12-21 15:45:27,635 INFO handler.SnapPuller:266 - Slave's version:
1271406571565, generation: 1286
2011-12-21 15:45:27,636 INFO handler.SnapPuller:267
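Those version/generation numbers can also be fetched directly from each node's replication handler; a sketch (hosts and ports are assumptions):

```shell
# Build the indexversion URLs for both nodes; hostnames/ports are
# assumptions. Uncomment the curl lines against live servers.
MASTER_URL="http://master.example.com:8983/solr/replication?command=indexversion"
SLAVE_URL="http://slave.example.com:8983/solr/replication?command=indexversion"
echo "$MASTER_URL"
echo "$SLAVE_URL"
# curl "$MASTER_URL"
# curl "$SLAVE_URL"
```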
That did it. I was running 3.3 on one and 3.4 on another.
Thanks!
Eric
On Mon, Dec 19, 2011 at 11:49 PM, Michael Ryan wrote:
> According to http://lucene.apache.org/java/3_4_0/fileformats.html, the
> FNMVersion changed from -2 to -3 in Lucene 3.4. Is it possible that the new
> master is actually running 3.4, and the new slave is running 3.2?
According to http://lucene.apache.org/java/3_4_0/fileformats.html, the
FNMVersion changed from -2 to -3 in Lucene 3.4. Is it possible that the new
master is actually running 3.4, and the new slave is running 3.2? (This is just
a wild guess.)
-Michael
Hi,
Hm, I don't know what this could be caused by. But if you want to get rid of
it, remove that Linux server out of the load balancer pool, stop Solr, remove
the index, and restart Solr. Then force replication and put the server back in
the load balancer pool.
If you use SPM (see link in my
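The steps above, sketched as a script; paths, service commands and the fetchindex URL are assumptions, and the mkdir only simulates an existing index for illustration:

```shell
# Hedged sketch of the wipe-and-refetch procedure for a bad slave index.
DATA=/tmp/solr-slave-data                 # assumption: slave dataDir
mkdir -p "$DATA/index"                    # simulate an existing index
# 1. take the slave out of the load balancer pool (site-specific)
# 2. stop Solr:    service solr stop      # service name is an assumption
rm -rf "$DATA/index"                      # 3. remove the index
# 4. restart Solr: service solr start
# 5. force replication:
#    curl 'http://localhost:8983/solr/replication?command=fetchindex'
# 6. put the slave back into the pool
```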
Thanks Erick,
It's good to hear the slave doesn't notice anything.
Roy
Replication is basically a background file transfer, your slave shouldn't
notice.
But your slave will notice two things:
1> after replication if your first few queries are slow, you need to
autowarm your caches.
2> you will see some memory footprint increase while autowarming
is
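Autowarming is configured per cache in solrconfig.xml; a fragment with purely illustrative values (sizes and counts here are assumptions, not recommendations):

```xml
<!-- autowarmCount entries are copied from the old cache into the new
     searcher's cache after replication, at the cost of warm-up CPU -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>
```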
On 05.12.2011 14:28, Per Steffensen wrote:
Hi
Reading http://wiki.apache.org/solr/SolrReplication I notice the
"pollInterval" (guess it should have been "pullInterval") on the slaves.
That indicates to me that indexed information is not really "pushed" from
master to slave(s) on events defined
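For reference, the pull is configured on the slave side roughly like this (masterUrl and the interval are placeholders, not values from this thread):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master.example.com:8983/solr/replication</str>
    <!-- hh:mm:ss between polls; the slave pulls, the master never pushes -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```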
@Prakash: Can you please format the body a bit for readability?
@Solr-Users: Is anybody else having any problems when running Zookeeper
from the latest code in the trunk(4.x)?
On Mon, Nov 7, 2011 at 4:44 PM, prakash chandrasekaran <
prakashchandraseka...@live.com> wrote:
>
> hi all, i followed
Sent: Tuesday, October 25, 2011 3:15 PM
To: solr-user@lucene.apache.org
Subject: Re: Replication issues with multiple Slaves
> 1) Hmm, maybe, didn't notice that... but I'd be very confused why it works
> occasionally, and manual replication (through Solr Admin) always works ok
> in t
Sent: 25 October 2011 20:51
To: solr-user@lucene.apache.org
Cc: Jaeger, Jay - DOT
Subject: Re: Replication issues with multiple Slaves
Are you frequently adding and deleting documents and committing those
mutations? Then it might try to download a file that doesn't exist anymore.
If that is the case
> From: Jaeger, Jay - DOT [mailto:jay.jae...@dot.wi.gov]
> Sent: 25 October 2011 20:48
> To: solr-user@lucene.apache.org
> Subject: RE: Replication issues with multiple Slaves
>
> I noted that in these messages the left hand side is lower case collection,
> but the right ha
..@dot.wi.gov]
Sent: 25 October 2011 20:48
To: solr-user@lucene.apache.org
Subject: RE: Replication issues with multiple Slaves
I noted that in these messages the left hand side is lower case collection,
but the right hand side is upper case Collection. Assuming you did a
cut/paste, could you hav
Are you frequently adding and deleting documents and committing those
mutations? Then it might try to download a file that doesn't exist anymore. If
that is the case try increasing :
> I noted that in these messages the left hand side is lower case collection,
> but the right hand side is upper
I noted that in these messages the left hand side is lower case collection, but
the right hand side is upper case Collection. Assuming you did a cut/paste,
could you have a core name mismatch between a master and a slave somehow?
Otherwise (shudder): could you be doing a commit while the repli
).
>>
>> In such a case the old master might have uncommitted updates.
>>
>> JRJ
>>
>> -Original Message-
>> From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
>> Sent: Tuesday, October 11, 2011 3:17 PM
>> To: solr-user@lucene.apache.org
>
rmal network).
>
> In such a case the old master might have uncommitted updates.
>
> JRJ
>
> -Original Message-
> From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
> Sent: Tuesday, October 11, 2011 3:17 PM
> To: solr-user@lucene.apache.org
> Subject: Re
uncommitted updates.
JRJ
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: Tuesday, October 11, 2011 3:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Replication with an HA master
Hello,
- Original Message -
> From: Robert Stewart
> To: sol
On Tue, Oct 11, 2011 at 8:17 PM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> > In the case of using a shared (SAN) index between 2 masters, what happens
> if the
> > live master fails in such a way that the index remains "locked" (such
> > as if some hardware failure and it did not unl
On Tue, Oct 11, 2011 at 6:55 PM, Brandon Ramirez <
brandon_rami...@elementk.com> wrote:
> Using a shared volume crossed my mind too, but I discarded the idea because
> of literature I have read about Lucene performing poorly against remote file
> systems. But then I suppose a SAN wouldn't be a re
Hello,
- Original Message -
> From: Robert Stewart
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Tuesday, October 11, 2011 3:37 PM
> Subject: Re: Replication with an HA master
>
> In the case of using a shared (SAN) index between 2 masters, what happens if
>
From: Brandon Ramirez
>> To: "solr-user@lucene.apache.org"
>> Sent: Tuesday, October 11, 2011 2:55 PM
>> Subject: RE: Replication with an HA master
>>
>> Using a shared volume crossed my mind too, but I discarded the idea because
>> of literature
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
>
>From: Brandon Ramirez
>To: "solr-user@lucene.apache.org"
>Sent: Tuesday, October 11, 2011 2:55 PM
>Subject: RE: Replicatio
Brandon Ramirez | Office: 585.214.5413 | Fax: 585.295.4848
Software Engineer II | Element K | www.elementk.com
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: Tuesday, October 11, 2011 2:28 PM
To: solr-user@lucene.apache.org
Subject: Re: Replication with an HA master
A few al
http://search-lucene.com/
>
>From: Robert Stewart
>To: solr-user@lucene.apache.org
>Sent: Friday, October 7, 2011 10:22 AM
>Subject: Re: Replication with an HA master
>
>Your idea sounds like the correct path. Setup 2 masters, one running in
>
Your idea sounds like the correct path. Set up 2 masters, one running in
"slave" mode which pulls replicas from the live master. When/if the live master
goes down, you just reconfigure and restart the backup master to be the live
master. You'd also need to then start data import on the backup master.
To: solr-user@lucene.apache.org
Subject: RE: Replication and ExternalFileField
Probably would have worked on *nix but unfortunately running Windows.
Best regards,
Per
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: den 15 september 2011 14:07
To: solr-user
Probably would have worked on *nix but unfortunately running Windows.
Best regards,
Per
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: den 15 september 2011 14:07
To: solr-user@lucene.apache.org
Subject: Re: Replication and ExternalFileField
Perhaps a
Perhaps a symlink will do the trick.
On Thursday 15 September 2011 14:04:47 Per Osbeck wrote:
> Hi all,
>
> I'm trying to find some good information regarding replication, especially
> for the ExternalFileField.
>
> As I understand it;
> - the external files must be in data dir.
> - replicatio
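The symlink idea above can be sketched as follows; all paths and the file name are assumptions for illustration (ExternalFileField file naming depends on the field):

```shell
# Hedged sketch of the symlink approach: keep the external scores file
# in a shared location and link it into the Solr data dir.
mkdir -p /tmp/shared /tmp/solr-data
echo "doc1=1.5" > /tmp/shared/external_rank
ln -sf /tmp/shared/external_rank /tmp/solr-data/external_rank
cat /tmp/solr-data/external_rank
```

As the thread notes, this only helps on *nix; it is exactly the trick that fell through for the Windows setup discussed above.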
On 9/10/2011 3:54 PM, Pulkit Singhal wrote:
> Hi Yury,
>
> How do you manage to start the instances without any issues? The way I see
> it, no matter which instance is started first, the slave will complain about
> not being able to find its respective master because that instance hasn't been
> started
Sorry, stupid question, now I see that the core still starts and the polling
process simply logs an error:
SEVERE: Master at: http://localhost:7574/solr/master2/replication is not
available.
Index fetch failed. Exception: Connection refused
I was able to setup the instructions in-detail with this
Hi Yury,
How do you manage to start the instances without any issues? The way I see
it, no matter which instance is started first, the slave will complain about
not being able to find its respective master because that instance hasn't been
started yet ... no?
Thanks,
- Pulkit
2011/5/17 Yury Kats
>
Hi,
I found out the problem by myself.
The reason was a bad deployment of Solr on Tomcat. Two instances of
Solr were instantiated instead of one. The two instances were managing
the same indexes, and therefore were trying to write at the same time.
My apologies for the noise created on the
Hm, anyone?
On Sat, May 14, 2011 at 7:11 PM, Stefan Matheis
wrote:
> Hi Guys,
>
> while working on the UI for Replication, i've got confused sometimes because
> of the following response (from /replication?command=details):
>
>
> Sat May 14 16:25:53 UTC 2011
Alexander, sorry for the delay in replying. I wanted to test out a few
hunches that I had before I get back to you.
Hurray!!! I was able to resolve the issue. The problem was with the
cache settings in the solrconfig.xml. It was taking almost 15-20
minutes to warm up the caches on each commit, as
On 5/17/2011 10:17 AM, Stefan Matheis wrote:
> Yury,
>
> perhaps Java-Params (like used for this sample:
> http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node)
> can help you?
Ah, thanks! It does seem to work!
Cluster's solrconfig.xml (shared between al
Yury,
perhaps Java-Params (like used for this sample:
http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node)
can help you?
Regards
Stefan
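The wiki trick boils down to one shared solrconfig.xml whose master/slave sections are gated on system properties; a sketch (the property names follow the wiki page, but the jetty start lines and masterUrl are assumptions):

```shell
# One config for all nodes; the role is decided by -D flags at startup.
cat <<'EOF' > /tmp/solrconfig-replication.xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://master.example.com:8983/solr/replication</str>
  </lst>
</requestHandler>
EOF
# start a node as master:  java -Denable.master=true -jar start.jar
# start a node as slave:   java -Denable.slave=true  -jar start.jar
grep -c 'enable\.' /tmp/solrconfig-replication.xml
```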
2011/5/17 Yury Kats :
> Hi,
>
> I have two Solr nodes, each managing two cores -- a master core and a slave
> core.
Ravi,
what is the replication configuration on both master and slave?
Also could you list of files in the index folder on master and slave
before and after the replication?
-Alexander
On Fri, 2011-05-13 at 18:34 -0400, Ravi Solr wrote:
> Sorry guys spoke too soon I guess. The replication stil
Sorry guys spoke too soon I guess. The replication still remains very
slow even after upgrading to 3.1 and setting the compression off. Now
I am totally clueless. I have tried everything that I know of to
increase the speed of replication but failed. if anybody faced the
same issue, can you please t
Thank you Mr. Bell and Mr. Kanarsky, as per your advice we have moved
from 1.4.1 to 3.1 and have made several changes to configuration. The
configuration changes have worked nicely till now and the replication
is finishing within the interval and not backing up. The changes we
made are as follows
Ravi,
if you have what looks like a full replication each time even if the
master generation is greater than the slave's, try to watch the index on
both master and slave at the same time to see what files are getting
replicated. You may need to adjust your merge factor, as Bill
mentioned.
-A
Mr. Bell,
Thank you for your help. Yes, the full index replicated every
1000, 1, 10 etc, if mergeFactor is 10 as per its definition.
We do index every 5 minutes and replicate every 3 minutes just to make
sure consumers have immediate access to the indexed docs.
Thanks,
Ravi Kiran B
OK let me rephrase.
In solrconfig.xml there is a setting called mergeFactor. The default is
usually 10.
Practically it means there are 10 segments. If you are doing fast delta
indexing (adding a couple documents, then committing),
you will cycle through all 10 segments pretty fast.
It appears tha
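In solrconfig.xml the knob being discussed looks like this (10 is the usual default, shown for orientation rather than as a recommendation):

```xml
<mainIndex>
  <!-- higher values = more segments on disk before a merge kicks in -->
  <mergeFactor>10</mergeFactor>
</mainIndex>
```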
Hello Mr. Kanarsky,
Thank you very much for the detailed explanation,
probably the best explanation I found regarding replication. Just to
be sure, I wanted to test solr 3.1 to see if it alleviates the
problems... I don't think it helped. The master index version and
generation are gr
Ravi,
as far as I remember, this is how the replication logic works (see
SnapPuller class, fetchLatestIndex method):
> 1. Does the Slave get the whole index every time during replication or
> just the delta since the last replication happened ?
It looks at the index version AND the index generat
Hello Mr. Bell,
Thank you very much for patiently responding to my
questions. We optimize once in every 2 days. Can you kindly rephrase
your answer, I could not understand - "if the amount of time if > 10
segments, I believe that might also trigger a whole index, since you
cycled
I did not see answers... I am not an authority, but will tell you what I
think
Did you get some answers?
On 5/6/11 2:52 PM, "Ravi Solr" wrote:
>Hello,
>Pardon me if this has been already answered somewhere and I
>apologize for a lengthy post. I was wondering if anybody could help m
Sorry to re-open an old thread, but this just happened to me again,
even with a 30 second sleep between taking the snapshot and starting
to tar it up. Then, even more strangely, the snapshot was removed
again before tar completed.
Archiving snapshot.20110320113401 into
/var/www/mesh/backups/weekly
Hi Bill,
> You could always rsync the index dir and reload (old scripts).
I used them previously but was getting problems with them. The
application querying the Solr doesn't cause enough load on it to
trigger the issue. Yet.
> But this is still something we should investigate.
Indeed :-)
> Se
You could always rsync the index dir and reload (old scripts). But this is
still something we should investigate. I had this same issue on high load and
never really found a solution. Did you try another NIC card? See if the NIC is
configured right? Routing? Speed of transfer?
Bill Bell
Sent fr
On Mar 17, 2011, at 3:19 PM, Shawn Heisey wrote:
On 3/17/2011 3:43 AM, Vadim Kisselmann wrote:
Unfortunately, this doesn't seem to be the problem. The queries
themselves are running fine. The problem is that the replication is
crawling when there are many queries going on and that the replication
On 3/17/2011 3:43 AM, Vadim Kisselmann wrote:
Unfortunately, this doesn't seem to be the problem. The queries themselves are
running fine. The problem is that the replication is crawling when there are
many queries going on and that the replication speed stays low even after the
load is gone.
Hello Shawn,
Primary assumption: You have a 64-bit OS and a 64-bit JVM.
>Jepp, it's running 64-bit Linux with 64-bit JVM
It sounds to me like you're I/O bound, because your machine cannot
keep enough of your index in RAM. Relative to your 100GB index, you
only have a maximum of 14G
On 3/16/2011 6:09 PM, Shawn Heisey wrote:
du -hc *x
I was looking over the files in an index and I think it needs to include
more of the files for a true picture of RAM needs. I get 5.9GB running
the following command against a 16GB index. It excludes *.fdt (stored
field data) and *.tvf (t
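A toy version of that filtering; the index file names are made up and only the exclusion pattern matters (against a real index you would pipe the survivors into `du -ch` to get the size):

```shell
# Simulate an index dir and count the files that stay relevant for RAM
# sizing after excluding stored fields (*.fdt) and term vectors (*.tvf).
mkdir -p /tmp/idx-demo && cd /tmp/idx-demo
touch _0.tis _0.frq _0.prx _0.fdt _0.tvf
ls | grep -v -e '\.fdt$' -e '\.tvf$' | wc -l   # counts 3 of the 5 files
```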
On 3/16/2011 7:56 AM, Vadim Kisselmann wrote:
If the load is low, both slaves replicate with around 100MB/s from master.
But when I use Solrmeter (100-400 queries/min) for load tests (over
the load balancer), the replication slows down to an unacceptable
speed, around 100KB/s (at least that's wh
If you set maxWarmingSearchers to 1 then you cannot issue an overlapping
commit. Slaves won't poll for a new index version while replication is in
progress.
It works well in my environment where there is a high update/commit frequency,
about a thousand documents per minute. The system even beha
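The setting in question, in solrconfig.xml (1 as described above; shipped example configs of this era commonly used 2):

```xml
<!-- with 1, a commit that arrives while a searcher is still warming
     fails instead of stacking warming searchers -->
<maxWarmingSearchers>1</maxWarmingSearchers>
```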
Hi,
Keeping the thread alive, any thought on only doing replication if
there is no warming currently going on?
Cheers,
Dan
On Thu, Feb 10, 2011 at 11:09 AM, dan sutton wrote:
> Hi,
>
> If the replication window is too small to allow a new searcher to warm
> and close the current searcher before
Issue created:
https://issues.apache.org/jira/browse/SOLR-2323
On Tuesday 04 January 2011 20:08:40 Markus Jelsma wrote:
> Hi,
>
> It seems abort-fetch nicely removes the index directory which I'm
> replicating to, which is fine. Restarting, however, does not trigger
> the same feature as the a
PS one other point I didn't mention is that this server has a very
fast autocommit limit (2 seconds max time).
But I don't know if this is relevant -- I thought the files in the
snapshot wouldn't be committed to again. Please correct me if this is
a huge misunderstanding.
On 16 January 2011 12:30
Any thoughts on this one? Should i add a ticket?
On Tuesday 04 January 2011 20:08:40 Markus Jelsma wrote:
> Hi,
>
> It seems abort-fetch nicely removes the index directory which I'm
> replicating to, which is fine. Restarting, however, does not trigger
> the same feature as the abort-fetch com
I have no Windows.
On Tuesday 04 January 2011 23:20:00 Lance Norskog wrote:
> Is this on Windows or Unix? Windows will not delete a file that is still
> open.
>
> On Tue, Jan 4, 2011 at 10:07 AM, Markus Jelsma
>
> wrote:
> > Is it possible this problem has something to do with my old index file
I don't have Windows :)
> Is this on Windows or Unix? Windows will not delete a file that is still
> open.
>
> On Tue, Jan 4, 2011 at 10:07 AM, Markus Jelsma
>
> wrote:
> > Is it possible this problem has something to do with my old index files
> > not being removed? This problem only surfaces
Is this on Windows or Unix? Windows will not delete a file that is still open.
On Tue, Jan 4, 2011 at 10:07 AM, Markus Jelsma
wrote:
> Is it possible this problem has something to do with my old index files not
> being removed? This problem only surfaces in my setup when i restart with
> replicat
Is it possible this problem has something to do with my old index files not
being removed? This problem only surfaces in my setup when I restart with
replication on the slave. I can confirm that for some reason my replicated
indexes get messed up only when I restart Tomcat several times
On Tue, Jan 4, 2011 at 9:34 AM, Robert Muir wrote:
> [junit] WARNING: test class left thread running:
> Thread[MultiThreadedHttpConnectionManager cleanup,5,main]
I suppose we should move MultiThreadedHttpConnectionManager to CoreContainer.
-Yonik
http://www.lucidimagination.com
On Tue, Jan 4, 2011 at 9:23 AM, Markus Jelsma
wrote:
> Hi,
>
> Anyone seen this before when stopping of restarting Solr 1.4.1 running as
> slave under Tomcat 6?
>
> SEVERE: The web application [/solr] appears to have started a thread named
> [MultiThreadedHttpConnectionManager cleanup] but has fai
On Wed, Nov 10, 2010 at 8:29 PM, Chris Hostetter
wrote:
>
> (forwarded on behalf of robo ... trying to figured out odd spam blocking
> issue he's having)
>
> -- Forwarded message --
>
> Good day,
>
> We are new to Solr and trying to setup a HA configuration in the
> cloud. We hav
Cool, thanks for the clarification, Shalin.
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
On 9. nov. 2010, at 15.12, Shalin Shekhar Mangar wrote:
> On Tue, Nov 9, 2010 at 12:33 AM, Jan Høydahl / Cominvent
> wrote:
>> Not sure about that. I have read that the replica
On Tue, Nov 9, 2010 at 12:33 AM, Jan Høydahl / Cominvent
wrote:
> Not sure about that. I have read that the replication handler actually issues
> a commit() on itself once the index is downloaded.
That was true with the old replication scripts. The Java based
replication just re-opens the IndexReader.
Not sure about that. I have read that the replication handler actually issues a
commit() on itself once the index is downloaded.
But probably a better way for Markus' case is to hook the prune job on the
master, writing to another core (myIndexPruned). Then you replicate from that
core instead,
On Fri, Nov 5, 2010 at 2:30 PM, Jan Høydahl / Cominvent
wrote:
>
> How about hooking in Andrzej's pruning tool at the postCommit event,
> literally removing unused fields. I believe a "commit" is fired on the slave
> by itself after every successful replication, to put
> the index live. You cou
Thanks for the pointer!
> How about hooking in Andrzej's pruning tool at the postCommit event,
> literally removing unused fields. I believe a "commit" is fired on the
> slave by itself after every successful replication, to put the index live.
> You could execute a script which prunes away the
How about hooking in Andrzej's pruning tool at the postCommit event, literally
removing unused fields. I believe a "commit" is fired on the slave by itself
after every successful replication, to put the index live. You could execute a
script which prunes away the dead meat and then call a new c
On 10/29/2010 4:33 PM, Shawn Heisey wrote:
The recommended method of safely upgrading Solr that I've read about
is to upgrade slave servers, keeping your production application
pointed either at another set of slave servers or your master
servers. Then you test it with a dev copy of your appli
On 10/27/2010 8:34 PM, Shawn Heisey wrote:
I started to upgrade my slave servers from 1.4.1 to 3.1-dev checked
out this morning. Because of SOLR-2034 (new javabin version) the
replication fails.
Asking about it in comments on SOLR-2034 brought up the suggestion of
switching to XML instead of
Hi Olivier,
the index size is relatively big and you enabled replication after startup:
startup
This could explain why the slave is replicating from the very beginning.
Are the index versions/generations the same? (via command or
admin/replication)
If not, the slave tries to replicate and if that
Hello Peter,
On the slave server http://slave/solr/core0/admin/replication/index.jsp
Poll Interval: 00:30:00
Local Index: Index Version: 1284026488242, Generation: 13102
Location: /solr/multicore/core0/data/index
Size: 26.9 GB
Times Replicated Since Startup: 289
Previous Replication
Hi Olivier,
maybe the slave replicates after startup? check replication status here:
http://localhost/solr/admin/replication/index.jsp
what is your poll frequency (could you paste the replication part)?
Regards,
Peter.
> Hello,
>
> I setup a server for the replication of Solr. I used 2 cores an
You don't need multi-core. Solr already does this automatically. It creates a
new Searcher and auto-warms the cache.
But, it will still be slow. If you use auto-warming, it uses most of one CPU,
which slows down queries during warming. Also, warming isn't perfect, so
queries will be slower afte
Hi Marcin,
This is because when you do the replication, all the caches are rebuilt
because the index has changed, so search performance decreases. You can
change your architecture to a multicore one to reduce the impact of the
replication. Using two cores, one to do the replication, and the other to
A 5-second connection timeout is not going to work trans-globally. The
replication engine is generally tested in local sites.
If it is possible to set defaults for the Apache Commons http classes
via system properties, that might let this work. This doc does not
seem promising:
http://www.jdocs.com/httpc
the slave without any deletion happening on the master.
Therefore I didn't see the SolrException in the slave log files and the
replication worked
Thank you
--- On Tue, 3/2/10, Matthieu Labour wrote:
From: Matthieu Labour
Subject: Re: replication issue
To: solr-user@lucene.apache.org
What could have possibly happened?
--- On Tue, 3/2/10, Otis Gospodnetic wrote:
From: Otis Gospodnetic
Subject: Re: replication issue
To: solr-user@lucene.apache.org
Date: Tuesday, March 2, 2010, 4:40 PM
Hi Matthieu,
Does this happen over and over?
Is this with Solr 1.4 or some other version?
search :: http://search-hadoop.com/
- Original Message
> From: Matthieu Labour
> To: solr-user@lucene.apache.org
> Sent: Tue, March 2, 2010 4:35:46 PM
> Subject: Re: replication issue
>
> The replication does not work for me
>
>
> I have a big master solr and
wrote:
From: Matthieu Labour
Subject: Re: replication issue
To: solr-user@lucene.apache.org
Date: Tuesday, March 2, 2010, 3:35 PM
The replication does not work for me
I have a big master solr and I want to start replicating it. I can see that the
slave is downloading data from the master
The replication does not work for me
I have a big master solr and I want to start replicating it. I can see that the
slave is downloading data from the master... I see a directory
index.20100302093000 gets created in data/ next to index... I can see its size
growing but then the directory gets
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
at java.lang.Thread.run(Thread.java:595)
--- On Tue, 3/2/10, Matthieu Labour wrote:
From: Matthieu Labour
Subject: Re: replication issue
To: solr
matt
--- On Mon, 3/1/10, Noble Paul നോബിള് नोब्ळ् wrote:
From: Noble Paul നോബിള് नोब्ळ्
Subject: Re: replication issue
To: solr-user@lucene.apache.org
Date: Monday, March 1, 2010, 10:30 PM
The data/index.20100226063400 dir is a temporary dir and is created in
the same dir where the index d
The data/index.20100226063400 dir is a temporary dir and is created in
the same dir where the index dir is located.
I'm wondering if the symlink is causing the problem. Why don't you set
the data dir as /raid/data instead of /solr/data
On Sat, Feb 27, 2010 at 12:13 AM, Matthieu Labour
wrote:
> H
-1736 and see what tmpIdxDir gets picked up...
What would be cool is the ability to set up the solr temp dir via config file so
that it can live in the same partition as the data directory
Thank you
--- On Fri, 2/26/10, Shalin Shekhar Mangar wrote:
From: Shalin Shekhar Mangar
Subject: Re
On Fri, Feb 26, 2010 at 8:32 PM, Matthieu Labour
wrote:
> Hi
> I have 2 solr machine. 1 master, 1 slave replicating the index from the
> master
> The machine on which the slave is running went down while the replication
> was running
> I suppose the index must be corrupted. Can I safely remove the
matt
--- On Fri, 2/26/10, Shalin Shekhar Mangar wrote:
From: Shalin Shekhar Mangar
Subject: Re: replication issue
To: solr-user@lucene.apache.org
Date: Friday, February 26, 2010, 2:06 PM
On Sat, Feb 27, 2010 at 12:13 AM, Matthieu Labour wrote:
> Hi
>
> I am still having issues
On Sat, Feb 27, 2010 at 12:13 AM, Matthieu Labour wrote:
> Hi
>
> I am still having issues with the replication and wonder if things are
> working properly
>
> So I have 1 master and 1 slave
>
> On the slave, I deleted the data/index directory and
> data/replication.properties file and restarted
Hi again,
I would still keep all fields in the original schema of the global Solr, just
for the sake of simplicity.
For custom sort order, you can look at ExternalFileField which is a text file
that you can add to your local Solr index independently of the pre-built index.
However, this only s
Hi,
it would be possible to add that to the main solr but the problem is:
Let's face it (example):
We have kind of 1.5 million documents in the solr master. These Documents are
books.
These books have fields like title, ids, numbers and authors and more.
This solr is global.
Now: The slave solr