So it's generally a bad idea to optimize, I gather?
- In older versions it might have done them all at once, but I believe
that newer versions only do one core at a time.
On Tue, Mar 25, 2014 at 11:16 AM, Shawn Heisey wrote:
On 3/25/2014 11:59 AM, Software Dev wrote:
Ehh.. found out the hard way. I optimized the collection on 1 machine
and when it was completed it replicated to the others and took my
cluster down. Shitty
It doesn't get replicated -- each core in the collection will be
optimized. In older versions
On Tue, Mar 25, 2014 at 10:46 AM, Software Dev
wrote:
One other question. If I optimize a collection on one node, does this
get replicated to all others when finished?
On Tue, Mar 25, 2014 at 10:13 AM, Software Dev
wrote:
> Thanks for the reply. I'll make sure NOT to disable it.
No, don't disable replication!
The way shards ordinarily keep up with updates is by sending every document
to each member of the shard. However, if a shard goes offline for a period
of time and comes back, replication is used to "catch up" that shard. So
you really need it on.
If you created your
On 3/25/2014 10:42 AM, Software Dev wrote:
I see that by default in SolrCloud that my collections are
replicating. Should this be disabled in SolrCloud as this is already
handled by it?
From the documentation:
"The Replication screen shows you the current replication state for
the named core y
Hi,
i am running into the exact same problem:
27534 [qtp989080272-12] INFO org.apache.solr.core.SolrCore – [collection1]
webapp=/solr path=/replication
params={command=details&_=1394164320017&wt=json} status=0 QTime=12
28906 [qtp989080272-12] INFO org.apache.solr.core.SolrCore – [collection1
Thanks Daniel.
So, if I understand correctly, the below exception is almost always
caused by merging segments? Though I see different file names
(e.g. download_av3.fdt in this case) in the exception messages
[explicit-fetchindex-cmd] ERROR
org.apache.solr.handler.ReplicationHand
On 1/3/2014 10:34 AM, Daniel Collins wrote:
We see this a lot as well. My understanding is that recovery asks the
leader for a list of the files it should download, then it downloads
them. But if the leader has been merging segments whilst this is going on
(recovery is taking a reasonable period of time and you have an NRT system
where
On 11/5/2013 10:45 PM, Luis Cappa wrote:
Hello, Shawn!
I have seen that when disabling replication and executing queries the
response times are good. Interesting... I can't see the solution, then, because
slow replication times are needed to almost always get 'fresh' documents in
slaves to search by, but this apparently slows down f
On 11/5/2013 10:16 AM, Luis Cappa Banda wrote:
I have a master-slave replication (Solr 4.1 version) with a 30 seconds
polling interval and continuously new documents are indexed, so after 30
seconds always new data must be replicated. My test index is not huge: just
5M documents.
I have experime
Against --> again, :-)
The whole point of SolrCloud is to automatically take care of all
the ugly details of synching etc. You should be able to add a node
and, assuming it has been assigned to a shard, do nothing.
The node will start up, synch with the leader, get registered and
start handling queries without you having
it got 100% of
it.
Thanks
Robi
-Original Message-
From: Rohit Harchandani [mailto:rhar...@gmail.com]
Sent: Thursday, August 01, 2013 1:55 PM
To: solr-user@lucene.apache.org
Subject: Re: replication getting stuck on a file
I am facing this problem in solr 4.0 too. It's definitely not related to
autowarming. It just gets stuck while downloading a file and there is no
way to abort the replication except restarting solr.
On Wed, Jul 10, 2013 at 6:10 PM, adityab wrote:
> I have seen this in 4.2.1 too.
> Once replicati
We ran replication at ten minute intervals. One master, five slaves, and
replication on the hour on the first slave, ten minutes after the hour on the
second, twenty minutes after on the third, and so on.
You could do this with a single crontab on the master. Send requests to each
slave to repl
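Walter's staggered schedule could be sketched as a generated crontab. The hostnames (slave1..slave5), port, and core name below are placeholders, and fetchindex is the standard replication HTTP command for pulling the index from the master:

```shell
# Emit one crontab line per slave, each firing ten minutes after the previous,
# so the slaves never all replicate (and re-warm caches) at the same moment.
gen_stagger() {
  i=0
  for host in "$@"; do
    echo "$((i * 10)) * * * * curl -s \"http://$host:8983/solr/core1/replication?command=fetchindex\" >/dev/null"
    i=$((i + 1))
  done
}

gen_stagger slave1 slave2 slave3 slave4 slave5
```

With this scheme each slave would also have polling disabled in its own config, so the cron schedule is the only replication trigger.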
Walter,
Could you provide some more details about your staggered replication
approach?
We are currently running into similar issues and looks like staggered
replication is a better approach to address the performance issues on
Slaves.
thanks
Aditya
I have seen this in 4.2.1 too.
Once replication is finished, on the Admin UI we see 100%, and the time and
dlspeed information goes out of whack. The same is reflected in mbeans. But
what's actually happening in the background is auto-warmup of caches (in my case).
Maybe some minor stats bug
Hmmm, that is kind of funny. I know this is ugly, but what happens if you
1> stop the slave
2> completely delete the data/index directory (directory too, not just contents)
3> fire it back up?
inelegant at best, but if it cures your problem
Erick
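Erick's recipe could be sketched like this. Steps 1> and 3> (stopping and starting the slave) depend on how your Solr is installed, so only step 2> is shown as real code; the data-dir path here is a placeholder:

```shell
# Delete the index directory itself (not just its contents) so the slave
# pulls a completely clean copy from the master when it comes back up.
wipe_index() {
  rm -rf "$1/index"
}

# Demo against a throwaway directory rather than a live Solr data dir:
tmp=$(mktemp -d)
mkdir -p "$tmp/index"
touch "$tmp/index/segments_1"
wipe_index "$tmp"
```

On a real slave you would run the equivalent of `wipe_index /var/solr/data` only while the process is stopped.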
On Tue, Jul 9, 2013 at 5:57 PM, Petersen, Rob
Look at the speed and time remaining on this one, pretty funny:
Master http://ssbuyma01:8983/solr/1/replication
Latest Index Version:null, Generation: null
Replicatable Index Version:1276893670202, Generation: 127213
Poll Interval00:05:00
Local Index Index Version: 1276893670108, G
Sent: Tuesday, June 11, 2013 2:24 PM
To: solr-user@lucene.apache.org
Subject: Re: Replication not working
I mean the log from the slave when polling happens, not when you issue a
command.
On Tue, Jun 11, 2013 at 5:28 PM, wrote:
> Log on slave:
>
> 2013-06-11 13:19:
> Closing out SolrRequest: {command=indexversion}
>
> -Original Message-
> From: Noble Paul നോബിള് नोब्ळ् [mailto:noble.p...@gmail.com]
> Sent: Tuesday, June 11, 2013 13:41
> To: solr-user@lucene.apache.org
> Subject: Re: Replication not working
>
> Y
Sent: Tuesday, June 11, 2013 13:16
> To: solr-user@lucene.apache.org
> Subject: Re: Replication not working
Can you check with the indexversion command on both master and slave?
pollInterval is set to 2 minutes, which is fairly long, so you may need to
wait for 2 mins for the replication to kick in
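A quick way to compare the two is to hit the indexversion command on each side. The port and the core name "core1" below are assumptions:

```shell
# Build the indexversion URL for a host. If the master and slave report the
# same indexversion value, the slave has caught up.
indexversion_url() {
  echo "http://$1:8983/solr/core1/replication?command=indexversion&wt=json"
}

# On a live setup:
#   curl -s "$(indexversion_url master)"
#   curl -s "$(indexversion_url slave)"
indexversion_url localhost
```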
On Tue, Jun 11, 2013 at 3:21 PM, wrote:
> Hi all,
>
>
>
> we have a setup with multiple cores, loaded vi
: In Solr 1.4, on slave, I supplied a masterUrl, but did NOT supply any
: pollInterval at all on slave. I did NOT supply an "enable"
: "false" in slave, because I think that would have prevented even manual
: replication.
that exact same config should still work with solr 4.3
: This seemed to
You can disable polling so that the slave never polls the master (in Solr
4.3 you can disable it from the Admin interface). And you can trigger a
replication using the HTTP API
http://wiki.apache.org/solr/SolrReplication#HTTP_API or, again, use the
Admin interface to trigger a manual replication.
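For the HTTP API route, the commands look like this; host, port, and core name are placeholders, while disablepoll, fetchindex, and enablepoll are the command names from the SolrReplication wiki page:

```shell
# Helper that builds replication HTTP API URLs for a given host and command.
repl_cmd() {
  echo "http://$1:8983/solr/core1/replication?command=$2"
}

# Stop the slave polling:      curl -s "$(repl_cmd slave disablepoll)"
# Trigger one pull on demand:  curl -s "$(repl_cmd slave fetchindex)"
# Resume polling afterwards:   curl -s "$(repl_cmd slave enablepoll)"
repl_cmd slave fetchindex
```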
Hi Arkadi,
If the update delta between the shard leader and replica is more than 100
docs, then Solr punts and replicates the entire index. Last I heard, the 100
was hard-coded in 4.0 so is not configurable. This makes sense because the
replica shouldn't be out of sync with the leader unless it has been offline
I may be missing something but let me go back to your original statements:
1) You build the index once per week from scratch
2) You replicate this from master to slave.
My understanding of the way replication works is that it's meant to only
send along files that are new and if any files named the
OK, then index generation and index version are of no use when it comes
to verifying that master and slave index are in sync.
What else is possible?
The strange thing is that if the master is 2 or more generations ahead of the
slave then it works!
With your logic the slave must _always_ be one generation ahea
Okay so then that should explain the generation difference of 1 between the
master and slave
On Wed, Feb 13, 2013 at 10:26 AM, Mark Miller wrote:
On Feb 13, 2013, at 1:17 PM, Amit Nithian wrote:
> doesn't it do a commit to force solr to recognize the changes?
yes.
- Mark
So just a hunch... but when the slave downloads the data from the master,
doesn't it do a commit to force solr to recognize the changes? In so doing,
wouldn't that increase the generation number? In theory it shouldn't matter
because the replication looks for files that are different to determine
w
Now this is strange, the index generation and index version
is changing with replication.
e.g. master has index generation 118, index version 136059533234,
and slave has index generation 118, index version 136059533234.
Both are the same.
Now add one doc to master with commit.
master has index generat
Well, it seems to have resolved itself. I fully wiped the configuration
directories and recreated the cores and it seems to be fixed.
On Fri, Jan 25, 2013 at 1:11 PM, Sean Siefert wrote:
Unfortunately no such luck. If anyone thinks of anything to try I would
appreciate it.
On Fri, Jan 25, 2013 at 12:40 PM, Sean Siefert wrote:
Thanks for the response. That was my last resort attempt. I saw some
replication related fixes. I will reply if it works.
On Fri, Jan 25, 2013 at 12:10 PM, Mark Miller wrote:
I don't have any targeted advice at the moment, but just for kicks, you might
try using Solr 4.1.
- Mark
On Jan 25, 2013, at 2:47 PM, Sean Siefert wrote:
> So I have quite a few cores already where this exact (as far as replication
> is concerned) solrconfig.xml works. The other cores all repl
Hi Erick,
Thanks for replying. On the subject of commit vs optimize: for the
moment I'm actually replacing the entire index each time beginning
with a delete *:*, so I think doing an optimize is actually ok, as it
is essentially a new index anyway. Ultimately, I think I'll want to
be doing smal
This log seems to be from when you start Solr, is it? This is master's log,
right? It would be more useful to see the log when the replication actually
happens. You should see something like:
Master's generation:
Slave's generation:
Starting replication process
...
On Fri, Dec 7, 2012 at 2:16 P
No exceptions...
INFO: Opening Searcher@21fb3211 main
Dec 6, 2012 5:09:55 PM
org.apache.solr.update.DirectUpdateHandler2$CommitTracker
INFO: AutoCommit: disabled
Dec 6, 2012 5:09:55 PM org.apache.solr.handler.component.SearchHandler
inform
INFO: Adding
component:org.apache.solr.handler.component
hmm then I'm not sure what can be happening. Do you see anything in the
logs? any exception? Maybe you can share a piece of log that includes the
replication.
On Fri, Dec 7, 2012 at 12:09 PM, André Maldonado
wrote:
> Yes, I'm sure. Files were changed yesterday, this morning we had a full
> reind
Yes, I'm sure. Files were changed yesterday, this morning we had a full
reindexation...
Thanks
--
"And ye shall know the truth, and the truth shall make you free." (João 8:32)
andre.maldonado@gmail.com
Have you committed the changes on the master? Are you sure that the
replication didn't happen before you changed the configuration files?
On Fri, Dec 7, 2012 at 11:56 AM, André Maldonado
wrote:
> The index (documents), was also updated. But the two servers have the same
> index version.
>
> Thank
The index (documents), was also updated. But the two servers have the same
index version.
Thanks
If I remember correctly, updated files in the master only get replicated if
there is a change in the index (if the index version from the master and
the slave are the same, nothing gets replicated, not even the configuration
files). Are you currently updating the index or just the configuration
fil
Hey Annette,
Are you using Solr 4.0 final? A version of 4x or 5x?
Do you have the logs for when the replica tried to catch up to the leader?
Stopping and starting the node is actually a fine thing to do. Perhaps you can
try it again and capture the logs.
If a node is not listed as live but is
Never mind, I think I found it.
There must be some documents in each shard so they have a version
number. Then everything seems to work...
On 11/30/2012 04:57 PM, Mark Miller wrote:
Thanks for the explanation. It's clear now...
I expanded the setup to:
4 hosts with 2 shards and 1 replica for each shard. When I
shut down Tomcat on solr01-dcg, which is the master of shard 1 for
both collections, the replica (solr01-gs) seems NOT to
First comment: You probably don't need to optimize. Despite its name, it
rarely makes a difference and has several downsides, particularly it'll make
replication replicate the entire index rather than just the changed
segments.
Optimize purges leftover data from docs that have been deleted, which w
On Nov 30, 2012, at 11:01 AM, yayati wrote:
> We have created some custom search component, where this error occur in
> inform method at line
> .getResourceLoader().getConfigDir()));
Does your custom component try and get the config dir? What for?
- Mark
Hi Mark,
Please find the detailed stack trace:
2012-11-30 19:32:58,260 [pool-2-thread-1] ERROR
apache.solr.core.CoreContainer - null:org.apache.solr.common.SolrException:
ZkSolrResourceLoader does not support getConfigDir() - likely, what you are
trying to do is not supported in ZooKeeper mode
Thanks for all the detailed info!
Yes, that is confusing. One of the sore points we have while supporting both
std Solr and SolrCloud mode.
In SolrCloud, every node is a Master when thinking about std Solr replication.
However, as you see on the cloud page, only one of them is a *leader*. A lea
Need more information about your setup and config.
Longer stack traces would be helpful as well.
- Mark
On Nov 30, 2012, at 12:35 AM, yayati wrote:
> Hi All,
>
> I also got similar error while moving my solr 3.6 based application on solr
> cloud. While setting solrcloud i got this error :
> S
On Nov 30, 2012, at 5:08 AM, Arkadi Colson wrote:
> Hi
>
> I've set up a simple 2-machine cloud with 1 shard, one replica and 2
> collections. Everything went fine. However when I look at the interface:
> http://localhost:8983/solr/#/coll1/replication is reporting that both machines
> are m
There doesn't seem to be a lock file created by the snapshooter; it news up
a lock file but never obtains the lock.
So there is no indication of when it has finished backing up the files.
On Sun, Nov 25, 2012 at 5:32 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi Eva,
>
> You j
Hi Eva,
You just need a script that:
* calls the master's /replication?command=backup endpoint
* copies the backup off of master and stores it somewhere
* removes that backup from the master if you don't have enough disk for it
there
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/i
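Otis's three steps might be sketched like this. The master URL, backup destination, and snapshot paths are all assumptions to adjust to your layout; the backup command creates snapshot.&lt;timestamp&gt; directories next to the index:

```shell
# Build the backup-trigger URL for a given core URL.
backup_url() {
  echo "$1/replication?command=backup"
}

MASTER="http://master:8983/solr/core1"   # placeholder

# 1. Ask the master to snapshot its index:
#    curl -s "$(backup_url "$MASTER")"
# 2. Copy the newest snapshot off the master:
#    rsync -a "master:/var/solr/data/snapshot.*" /backups/solr/
# 3. Prune old snapshots on the master if disk is tight:
#    ssh master 'ls -dt /var/solr/data/snapshot.* | tail -n +2 | xargs rm -rf'
backup_url "$MASTER"
```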
Hi Otis,
It seems to me that I'm going to have to write a script anyway that
handles the retention of the backups.
Plus it doesn't seem optimal that I would run a solr instance on that
server, taking up memory when I could probably
write a script that would pull all the data directly using t
Hi Eva,
I think you just need to configure the Solr instance on your Windows and
point it to your Solr master. It will then copy the index from the master
periodically.
Please see http://search-lucene.com/?q=solr+replication+backup for some
more info about doing backups - you don't need rsync. O
This may be related to SOLR-3939.
I'll try and get to testing it out without that fix.
- Mark
> On Thu, Oct 18, 2012 at 12:52 AM, Minoru Osuka wrote:
>> Hi,
>>
>> I am facing replication problem.
>> I had added a shard replica after the leader's core had been reloaded. I
>> had expected to star
First, why are you reloading the leader? Just as an experiment? I
know there's been some JIRA issues with reloading cores and
SolrCloud...
Second, what's your evidence that replication didn't happen?
For just a few documents the slave index might be updated from
the transaction log and you wouldn'
If I understand you right, replication of data has 0 downtime, it
just works and the data flows through from master to slaves. If you
want, you can configure the replication to replicate configuration
files across the cluster (although for me, my deploy script does this).
I'd recommend tweaking the
Thanks for all the information.
> I'm not sure how exactly you are measuring/defining "replication lag" but
> if you mean "lag in how long until the newly replicated documents are
> visible in searches"
That is exactly what I wanted to say.
I've attached the cache statistics.
If you are inter
: However, with these modifications we noticed an important replication
I'm not sure how exactly you are measuring/defining "replication lag" but
if you mean "lag in how long until the newly replicated documents are
visible in searches" then that _may_ be fairly easy to explain...
: My previo
Thanks for your answer Erick.
For the polling interval, we use 1 second for the small index and 1 minute
for the big one. I'll try to increase it up to 5 minutes and see if it
solves the problem. The issue doesn't occur with the default cache settings
(i.e. cache size=512).
Indeed, we strive for ne
Your polling interval is much too short. 1 second
is probably getting you into resource contention
issues.
A more reasonable interval is on the order of several
minutes. If you really need near real time
searching, consider 4.0 which supports NRT
Best
Erick
On Fri, Aug 31, 2012 at 10:02 AM, Damie
Look at how the older rsync-based snapshooter works: it uses the Unix
rsync program to very efficiently spot and copy updated files in the
master index. It runs from each query slave, just like Java
replication. Unlike Java replication, it just uses the SSH copy
protocol, and does not talk to the m
Ugh, after a mess of additional flailing around, it appears I just
discovered that the Replicate Now form on the Replication Admin page
does not work in the text-based browser 'links'. :(
Running "/replication?command=fetchindex" with curl did the trick. Now
everything is synced up.
Thanks for you
Clocks on the separate machines are irrelevant, so don't worry about that bit.
The index version _starts out_ as a timestamp as I understand it, but
from there on when
you change the index and commit it should just bump up NOT get a new timestamp.
1> it's strange that the version on the master wh
Nevermind, I realized that my master index was not tickling the index
version number when a commit or optimize happened. I gave in and nuked
and paved it, and now it seems fine.
Is there any known reason why this would happen, so I can avoid this
in the future?
Thanks,
Michael Della Bitta
-
SOLR-1855 has a script that checks replication details:
/solr/${CORE}/replication?command=details
# Get the last time the core replicated correctly.
# Get the last time the core failed to replicate.
# Is this core replicating (aka pulling index from master) right now?
See:
https://issues.apache
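A minimal version of such a check might look like this; the host and core are placeholders, and the grep patterns are illustrative, since field names in the details response vary by Solr version:

```shell
# Build the replication details URL for a host and core; wt=json asks for
# JSON output so the fields are easy to grep out of the response.
details_url() {
  echo "http://$1:8983/solr/$2/replication?command=details&wt=json"
}

# On a live slave, e.g.:
#   curl -s "$(details_url localhost core1)" | grep -o '"isReplicating":"[^"]*"'
details_url localhost core1
```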
A couple of things to check.
1> Are you optimizing all the time? An optimization will merge all the
segments into a single segment, which will cause the whole
index to be replicated after each optimization.
Best
Erick
On Wed, Jun 6, 2012 at 1:33 AM, William Bell wrote:
> We are using S
Sorry hit send too fast. The shards were listed as active. Also the solr
instances were still running but the file system they wrote to had become
read only. I thought that would make replication fail and when the issue
was fixed and solr restarted replication would then succeed. Am I hitting
some
I have not tried to reproduce as of yet but hope to do so Monday. The
machine that had the issue was a vm out of my control so I'm not certain
how it was restored. I am using a fairly recent nightly build within the
last few weeks
On Friday, May 11, 2012, Mark Miller wrote:
> So it's easy to repr
So it's easy to reproduce? What do you mean restored from a prior state?
What snapshot are you on these days for future ref?
You have double checked to make sure that shard is listed as ACTIVE right?
On May 11, 2012, at 4:55 PM, Jamie Johnson wrote:
> I've had a few instances where a machine ha
My setup includes asynchronous replication.
This means both are master AND slave at the same time, so I can easily switch
master and slave on the fly without restarting any server, with a mass of
scripts... I trigger a replication via cronjob and check every time whether the
server is master or slave. Only the slave
Why would you replicate data import properties? The master does the importing,
not the slave...
Sent from my Mobile device
720-256-8076
On May 9, 2012, at 7:23 AM, stockii wrote:
> Hello.
>
>
> i running a solr replication. works well, but i need to replicate my
> dataimport-properties.
>
>
Before this problem I got this problem:
https://issues.apache.org/jira/browse/SOLR-1781
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 8 Cores,
1 Core with 45 Million Documents other Cores < 200.000
- Solr1 for Searc
OK, I was thrown off by your use of "schema", I thought
you were talking about schema.xml
Anyway, assuming you have some kind of loop that pages
through the documents via Solr, gets the results and then
sends them to another Solr server... yeah, that'll be slow.
You have the "deep paging" prob
Thanks.
I need to index data from one Solr to another Solr with a different analyser.
Now I am able to do this by querying one Solr, whose results are indexed into
another Solr.
NOTE: The field I need to reindex is stored, so this is easy, but as my
index has 31 lakh records it is taking a lot of time.
Why would you want to? This seems like an
XY problem, see:
http://people.apache.org/~hossman/#xyproblem
See the "confFiles" section here:
http://wiki.apache.org/solr/SolrReplication
although it mentions solrconfig.xml, it
might work with schema.xml.
BUT: This strikes me as really, really
dangerou
If by that you mean in a master/slave setup just replicate a single
field in the index, no you can't. Nor can you just replicate only the
changed fields in an index; Lucene isn't structured that way.
Otherwise, can you provide more background on what you're hoping for
here? Your question was rath
Hello!
Thanks for the answer Shawn.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> On 2/6/2012 3:04 AM, Rafał Kuć wrote:
>> Hello!
>>
>> We have Solr running on Windows. Once in a while we see a problem with
>> replication failing. While slave server replic
On 2/6/2012 3:04 AM, Rafał Kuć wrote:
Hello!
We have Solr running on Windows. Once in a while we see a problem with
replication failing. While slave server replicates the index, it throws
exception like the following:
SEVERE: Unable to copy index file from:
D:\web\solr\collection\data\index.20
-Original Message-
> From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
> Sent: Thursday, January 19, 2012 11:43 AM
> To: solr-user@lucene.apache.org
> Cc: Dyer, James
> Subject: Re: replication, disk space
>
> Okay, I do have an index.properties file too, and THAT one does contain
> the
January 19, 2012 11:43 AM
To: solr-user@lucene.apache.org
Cc: Dyer, James
Subject: Re: replication, disk space
Okay, I do have an index.properties file too, and THAT one does contain
the name of an index directory.
But it's got the name of the timestamped index directory! Not sure how
E-Commerce Systems
Ingram Content Group
(615) 213-4311
-Original Message-
From: Artem Lokotosh [mailto:arco...@gmail.com]
Sent: Wednesday, January 18, 2012 12:24 PM
To: solr-user@lucene.apache.org
Subject: Re: replication, disk space
Which OS are you using?
Maybe related to this Solr bug
htt
On 1/18/2012 1:53 PM, Tomás Fernández Löbbe wrote:
As far as I know, the replication is supposed to delete the old directory
index. However, the initial question is "why is this new index directory
being created". Are you adding/updating documents in the slave? what about
optimizing it? Are you r
even if that means deleting "index".
James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311
-Original Message-
From: Artem Lokotosh [mailto:arco...@gmail.com]
Sent: Wednesday, January 18, 2012 12:24 PM
To: solr-user@lucene.apache.org
Subject: Re: replication, disk space
Which OS
Thanks for the response. I am using Linux (RedHat).
It sounds like it may possibly be related to that bug.
But the thing is, the timestamped index directory is looking to me like
it's the _current_ one, with the non-timestamped one being an old out of
date one. So that does not seem to be qui
Which OS are you using?
Maybe related to this Solr bug
https://issues.apache.org/jira/browse/SOLR-1781
On Wed, Jan 18, 2012 at 6:32 PM, Jonathan Rochkind wrote:
> So Solr 1.4. I have a solr master/slave, where it actually doesn't poll for
> replication, it only replicates irregularly when I issue
Hi Herman,
Try adding this to your replication handler config on the master:
<str name="commitReserveDuration">00:00:10</str>
See also http://search-lucene.com/?q=commitReserveDuration&fc_project=Solr
Otis
Performance Monitoring SaaS for Solr -
http://sematext.com/spm/solr-performance-monitoring/index.html
- Original Message -
> From: H
On 12/22/2011 4:39 AM, Dean Pullen wrote:
> Yeh the drop index via the URL command doesn't help anyway - when rebuilding
> the index the timestamp is obviously ahead of master (as the slave is being
> created now) so the replication will still not happen.
If you deleted the index and create the
We're simply restoring the master via a backed up snapshot (created using the
ReplicationHandler) and then trying to get the slave to replicate it.
On 21 Dec 2011, at 18:09, Erick Erickson wrote:
> You can't. But index restoration should be a very rare thing,
> or you have some lurking problem i
Yeh the drop index via the URL command doesn't help anyway - when rebuilding
the index the timestamp is obviously ahead of master (as the slave is being
created now) so the replication will still not happen.
On 21 Dec 2011, at 16:37, Dean Pullen wrote:
> I can't see a way, if the slave is on