On 8/2/2017 8:56 AM, Michael B. Klein wrote:
> SCALE DOWN
> 1) Call admin/collections?action=BACKUP for each collection to a
> shared NFS volume
> 2) Shut down all the nodes
>
> SCALE UP
> 1) Spin up 2 Zookeeper nodes and wait for them to stabilize
> 2) Spin up 3 Solr nodes and wait for them to sho
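As an illustration of step 1 above, here is a minimal sketch of the
Collections API calls involved (host, collection name, backup name, and the
NFS path are placeholders; this assumes a Solr release with Collections API
BACKUP/RESTORE support, i.e. 6.1 or later):

# back up one collection to a location every node can reach
curl 'http://solr1:8983/solr/admin/collections?action=BACKUP&name=coll1-backup&collection=coll1&location=/mnt/nfs/solr-backups'
# on scale-up, the matching restore would look like
curl 'http://solr1:8983/solr/admin/collections?action=RESTORE&name=coll1-backup&collection=coll1&location=/mnt/nfs/solr-backups'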
And the one that isn't getting the updates is the one marked in the cloud
diagram as the leader.
/me bangs head on desk
On Wed, Aug 2, 2017 at 10:31 AM, Michael B. Klein wrote:
Another observation: After bringing the cluster back up just now, the
"1-in-3 nodes don't get the updates" issue persists, even with the cloud
diagram showing 3 nodes, all green.
On Wed, Aug 2, 2017 at 9:56 AM, Michael B. Klein wrote:
Thanks for your responses, Shawn and Erick.
Some clarification questions, but first a description of my (non-standard)
use case:
My Zookeeper/SolrCloud cluster is running on Amazon AWS. Things are working
well so far on the production cluster (knock wood); it's the staging cluster
that's giving me
And please do not use optimize unless your index is
totally static. I only recommend it when the pattern is
to update the index periodically, like every day or
something and not update any docs in between times.
Implied in Shawn's e-mail was that you should undo
anything you've done in terms of co
On 8/1/2017 12:09 PM, Michael B. Klein wrote:
I have a 3-node solrcloud cluster orchestrated by zookeeper. Most stuff
seems to be working OK, except that one of the nodes never seems to get its
replica updated.
Queries take place through a non-caching, round-robin load balancer. The
collection looks fine, with one shard and a replicationFactor of 3.
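One way to confirm which replica is stale is to query each node directly,
bypassing both the load balancer and distributed search; a rough sketch,
with hostnames and the collection name as placeholders:

# compare per-replica document counts; distrib=false keeps the query on the
# node that receives it
for host in solr1 solr2 solr3; do
  curl -s "http://$host:8983/solr/mycollection/select?q=*:*&rows=0&distrib=false&wt=json" | grep -o '"numFound":[0-9]*'
done

If one node consistently reports a lower numFound, that is the replica that
is not receiving updates.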
> I am currently using Solr 4.4 but am not planning to use SolrCloud in the
> very near future.
> I have a 3 master / 3 slave setup. Each master is linked to its
> corresponding slave. I have disabled auto polling.
> We do both push (using MQ) and pull indexing using a SolrJ indexing
> program.
> I have en
store that data anywhere except in
index).
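Since auto polling is disabled in the setup described above, a slave only
picks up a new index when it is told to; a hedged sketch using the
ReplicationHandler commands available in Solr 4.4 (host, port, and core name
are placeholders):

# ask the slave to pull the latest index from its master on demand
curl 'http://slave-host:8983/solr/core1/replication?command=fetchindex'
# inspect replication state (index version, last replication, any errors)
curl 'http://slave-host:8983/solr/core1/replication?command=details'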
Okay one last note... just for closure... looks like it was addressed in
solr 4.1+ (I was looking at 4.0).
On Thu, Jan 24, 2013 at 11:14 PM, Amit Nithian wrote:
Okay so after some debugging I found the problem. While the replication
piece will download the index from the master server and move the files to
the index directory, during the commit phase these "older" generation
files are deleted and the index is essentially left intact.
I noticed that a
As also observed at
http://carsabi.com/car-news/2012/03/23/optimizing-solr-7x-your-search-speed/
I am now benchmarking my workload to compare replication vs. sharding
performance on a single machine.
master/slave configuration). Will I still get roughly an
n-fold increase in query throughput with n replicas? And if so, why would
one do master/slave replication with multiple copies of the index at all?
That's great information.
Thanks for all the help and guidance, it's been invaluable.
Thanks
Ben
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 26 March 2012 12:21
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication Question
then work across the whole index?
Thanks
Ben
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.com]
Sent: 23 March 2012 15:10
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication Question
Also, what happens if, instead of adding the 40K docs yo
index and then it copies the full 5gb
index.
Thanks
Ben
-Original Message-
From: Tomás Fernández Löbbe [mailto:tomasflo...@gmail.com]
Sent: 23 March 2012 14:29
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication Question
Hi Ben, only new segments are replicated from master to slave. In a situation
where all the segments are new, this will cause the index to be fully
replicated, but this rarely happens with incremental updates. It can also happen
if the slave Solr assumes it has an "invalid" index
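A hedged way to see what is actually transferred on each replication pass is
the ReplicationHandler's inspection commands (host, port, core name, and the
generation number are placeholders):

# compare index version/generation on master and slave
curl 'http://master-host:8983/solr/core1/replication?command=indexversion'
curl 'http://slave-host:8983/solr/core1/replication?command=indexversion'
# list the files that belong to a given generation on the master
curl 'http://master-host:8983/solr/core1/replication?command=filelist&generation=123'

If only a handful of new segment files appear per generation, an incremental
update should not re-copy the whole index.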
So do you just simply address this with big NICs and network pipes?
-Original Message-
From: Martin Koch [mailto:m...@issuu.com]
Sent: 23 March 2012 14:07
To: solr-user@lucene.apache.org
Subject: Re: Simple Slave Replication Question
I guess this would depend on network bandwidth, but we move around
150G/hour when hooking up a new slave to the master.
/Martin
On Fri, Mar 23, 2012 at 12:33 PM, Ben McCarthy <
ben.mccar...@tradermedia.co.uk> wrote:
Hello,
I'm looking at the replication from a master to a number of slaves. I have
configured it and it appears to be working. When updating 40K records on the
master, is it standard to always copy over the full index, currently 5gb in
size? If this is standard, what do people do who have massive
Ok, great. Just wanted to make sure someone was aware. Thanks for
looking into this.
On Thu, Feb 16, 2012 at 8:26 AM, Mark Miller wrote:
On Feb 14, 2012, at 10:57 PM, Jamie Johnson wrote:
> Not sure if this is
> expected or not.
Nope - should be already resolved or will be today though.
- Mark Miller
lucidimagination.com
All of the nodes now show as being Active. When starting the replicas
I did receive the following message though. Not sure if this is
expected or not.
INFO: Attempting to replicate from
http://JamiesMac.local:8501/solr/slice2_shard2/
Feb 14, 2012 10:53:34 PM org.apache.solr.common.SolrException
Doing so now, will let you know if I continue to see the same issues
On Tue, Feb 14, 2012 at 4:59 PM, Mark Miller wrote:
Doh - looks like I was just seeing a test issue. Do you mind updating and
trying the latest rev? At the least there should be some better logging around
the recovery.
I'll keep working on tests in the meantime.
- Mark
On Feb 14, 2012, at 3:15 PM, Jamie Johnson wrote:
Sounds good, if I pull the latest from trunk and rerun will that be
useful or were you able to duplicate my issue now?
On Tue, Feb 14, 2012 at 3:00 PM, Mark Miller wrote:
Okay Jamie, I think I have a handle on this. It looks like an issue with what
config files are being used by cores created with the admin core handler - I
think it's just picking up default config and not the correct config for the
collection. This means they end up using config that has no Upda
Thanks Mark, not a huge rush, just me trying to get to use the latest
stuff on our project.
On Tue, Feb 14, 2012 at 10:53 AM, Mark Miller wrote:
Sorry, have not gotten it yet, but will be back trying later today - monday,
tuesday tend to be slow for me (meetings and crap).
- Mark
On Feb 14, 2012, at 9:10 AM, Jamie Johnson wrote:
Has there been any success in replicating this? I'm wondering if it
could be something with my setup that is causing the issue...
On Mon, Feb 13, 2012 at 8:55 AM, Jamie Johnson wrote:
Yes, I have the following layout on the FS:
./bootstrap.sh
./example (standard example directory from distro containing jetty
jars, solr confs, solr war, etc)
./slice1
  - start.sh
  - solr.xml
  - slice1_shard1
    - data
  - slice2_shard2
    - data
./slice2
  - start.sh
  - solr.xml
  - slice2_shard1
Do you have unique dataDir for each instance?
On 13.2.2012 14.30, "Jamie Johnson" wrote:
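For reference, a per-core data directory can also be passed explicitly on the
CoreAdmin CREATE call; a hedged sketch along the lines of the commands below
(the dataDir path is a placeholder):

curl 'http://localhost:8501/solr/admin/cores?action=CREATE&name=slice1_shard1&collection=collection1&shard=slice1&collection.configName=config1&dataDir=/path/to/slice1/slice1_shard1/data'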
I don't see any errors in the log. Here are the scripts I'm running, and to
create the cores I run the following commands:
curl 'http://localhost:8501/solr/admin/cores?action=CREATE&name=slice1_shard1&collection=collection1&shard=slice1&collection.configName=config1'
curl 'http://local
Yeah, that is what I would expect - for a node to be marked as down, it either
didn't finish starting, or it gave up recovering...either case should be
logged. You might try searching for the recover keyword and see if there are
any interesting bits around that.
Meanwhile, I have dug up a coupl
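A quick, hedged way to do that search (log file locations depend on how the
instances are started and how logging is configured):

# look for recovery-related lines in each instance's log
grep -in "recover" slice1/logs/*.log slice2/logs/*.log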
I didn't see anything in the logs, would it be an error?
On Sat, Feb 11, 2012 at 3:58 PM, Mark Miller wrote:
On Feb 11, 2012, at 3:08 PM, Jamie Johnson wrote:
> I wiped the zk and started over (when I switch networks I get
> different host names and honestly haven't dug into why). That being
> said the latest state shows all in sync, why would the cores show up
> as down?
If recovery fails X times (s
I wiped the zk and started over (when I switch networks I get
different host names and honestly haven't dug into why). That being
said the latest state shows all in sync, why would the cores show up
as down?
On Sat, Feb 11, 2012 at 11:08 AM, Mark Miller wrote:
On Feb 10, 2012, at 9:40 PM, Jamie Johnson wrote:
>
>
> how'd you resolve this issue?
>
I was basing my guess on seeing "JamiesMac.local" and "jamiesmac" in your first
cluster state dump - your latest doesn't seem to mismatch like that though.
- Mark Miller
lucidimagination.com
hmm... perhaps I'm seeing the issue you're speaking of. I have
everything running right now and my state is as follows:
{"collection1":{
  "slice1":{
    "JamiesMac.local:8501_solr_slice1_shard1":{
      "shard_id":"slice1",
      "leader":"true",
      "state":"active",
      "core":
On Feb 10, 2012, at 9:33 AM, Jamie Johnson wrote:
> jamiesmac
Another note:
Have no idea if this is involved, but when I do tests with my linux box and mac
I run into the following:
My linux box auto finds the address of halfmetal and my macbook mbpro.local. If
I accept those defaults, my ma
Thanks.
If the given ZK snapshot was the end state, then two nodes are marked as
down. Generally that happens because replication failed - if you have not,
I'd check the logs for those two nodes.
- Mark
On Fri, Feb 10, 2012 at 7:35 PM, Jamie Johnson wrote:
nothing seems that different. In regards to the states of each I'll
try to verify tonight.
This was using a version I pulled from SVN trunk yesterday morning
On Fri, Feb 10, 2012 at 6:22 PM, Mark Miller wrote:
Also, it will help if you can mention the exact version of solrcloud you are
talking about in each issue - I know you have one from the old branch, and I
assume a version off trunk you are playing with - so a heads up on which and if
trunk, what rev or day will help in the case that I'm trying t
I'm trying, but so far I don't see anything. I'll have to try and mimic your
setup closer it seems.
I tried starting up 6 solr instances on different ports as 2 shards, each with
a replication factor of 3.
Then I indexed 20k documents to the cluster and verified doc counts.
Then I shut down all
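For context, a rough sketch of that kind of test setup using the old
SolrCloud example instructions (exact flags varied by trunk revision; ports,
paths, and the config name are placeholders):

# first node: run embedded ZooKeeper, bootstrap the config, request 2 shards
java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
# additional nodes: different port, point at the embedded ZooKeeper
java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
# after indexing, verify per-core counts directly
curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=0&distrib=false'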
Sorry for pinging this again, is more information needed on this? I
can provide more details but am not sure what to provide.
On Fri, Feb 10, 2012 at 10:26 AM, Jamie Johnson wrote:
Sorry, I shut down the full solr instance.
On Fri, Feb 10, 2012 at 9:42 AM, Mark Miller wrote:
Can you explain a little more how you're doing this? How are you bringing the
cores down and then back up? Shutting down a full solr instance, unloading the
core?
On Feb 10, 2012, at 9:33 AM, Jamie Johnson wrote:
I know that the latest Solr Cloud doesn't use standard replication but
I have a question about how it appears to be working. I currently
have the following cluster state
{"collection1":{
"slice1":{
"JamiesMac.local:8501_solr_slice1_shard1":{
"shard_id":"slice1",
"state":
sync with master.
May 6, 2011 1:35:05 PM org.apache.solr.handler.SnapPuller fetchLatestIndex
INFO: Slave in sync with master.
Hi,
I am fairly new to Solr, and have set up two servers, one as the master, the
other as a slave.
I have a load balancer in front with 2 different VIPs, one to do
gets/reads distributed evenly on the master and slave, and another VIP
to do posts/updates just to the master. If the master fails I have
th
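A hedged sketch of the kind of health-check URL each VIP could poll, assuming
the standard ping handler is configured in solrconfig.xml (hosts and core
name are placeholders):

# returns OK while the core is healthy; a load balancer can poll this
curl 'http://solr-master:8983/solr/core1/admin/ping'
curl 'http://solr-slave:8983/solr/core1/admin/ping'

With the ping handler's healthcheck file enabled, a node can also be pulled
out of the read VIP without shutting Solr down.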