Shawn Heisey-2 wrote
> On 9/22/2015 11:54 AM, vsilgalis wrote:
>> I've actually read that article a few times.
>>
>> Yeah I know we aren't perfect in opening searchers. Yes we are committing
>> from the client, this is something that is changing in our next code
Erick Erickson wrote
> Things shouldn't be going into recovery that often.
>
> Exceeding the maxWarmingSearchers limit indicates that you're committing
> very often, and that your autowarming interval exceeds the interval between
> commits (either hard commits with openSearcher set to true or soft
> commits).
We have a collection with 2 shards, 3 nodes per shard, running Solr 4.10.2.
Our issue is that cores that go into recovery never recover; they stay in a
constant state of recovery unless we restart the node and then reload the
core on the leader. Updates seem to get to the server fine as the

I fixed this issue by reloading the core on the leader for the shard.
Still curious how this happened; any help would be greatly appreciated.
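Something like the following CoreAdmin RELOAD call is what I mean by reloading
the core on the leader (host and core name are placeholders; the exact core
name shows up in the admin UI's core list):

    curl "http://leader-host:8983/solr/admin/cores?action=RELOAD&core=classic_bt_shard1_replica1"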
We are getting this on a couple of nodes and are wondering if there is a way
to recover the node:
Setting up to try to start recovery on replica http://ip/solr/classic_bt/
after: org.apache.solr.common.SolrException: Conflict
Thanks
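In case it's relevant, one thing I'm aware of but haven't confirmed as the
right fix is the CoreAdmin REQUESTRECOVERY action, which asks a specific core
to go through recovery again (host and core name are placeholders):

    curl "http://replica-host:8983/solr/admin/cores?action=REQUESTRECOVERY&core=classic_bt_shard1_replica2"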
The leader in the cluster is what is throwing the error.
One of the stack traces:
http://lucene.472066.n3.nabble.com/file/n4201019/SolrError-Conflict.png
However, I didn't notice this one before, which has a bit more info:
org.apache.solr.common.SolrException: Conflict
request:
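For context on the "Conflict" part: that's the HTTP 409 status Solr uses for
optimistic-concurrency failures, i.e. an update whose _version_ value doesn't
match what's stored. A minimal way to reproduce one by hand, with a made-up
doc id and version value:

    curl "http://host:8983/solr/classic_bt/update?commit=true" \
      -H "Content-Type: application/json" \
      -d '[{"id":"doc1","_version_":999999}]'

If doc1's stored _version_ isn't 999999, the response is a 409 "version
conflict" error.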
I haven't had time to really take a look at this, but I read a couple of
articles regarding hard commits and it actually makes sense. We were
seeing tlogs in the multiple GBs during ingest. I will have some time in a
couple of weeks to come back to testing indexing. Thanks for the help.
Vy
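For anyone who lands on this thread later, the advice in those articles
boils down to letting solrconfig.xml do the committing instead of the client,
roughly like the sketch below (the maxTime values are only illustrative):

    <updateHandler class="solr.DirectUpdateHandler2">
      <updateLog>
        <str name="dir">${solr.ulog.dir:}</str>
      </updateLog>
      <!-- hard commit frequently to truncate the tlog, without opening a searcher -->
      <autoCommit>
        <maxTime>60000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
      <!-- soft commit controls visibility; keep it longer than autowarming takes -->
      <autoSoftCommit>
        <maxTime>300000</maxTime>
      </autoSoftCommit>
    </updateHandler>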
Right now index size is about 10GB on each shard (yes, I could use more RAM),
but I'm looking more for a step-up than a step-down approach. I will try
adding more RAM to these machines as my next step.
1. ZooKeeper is external to these boxes, in a three-node cluster with more
than enough RAM to keep
We upgraded recently to Solr 4.10.2 from 4.2.1 and have been seeing errors
regarding the dreaded broken pipe when doing our reindexing of all our
content.
Specifically:
ERROR - 2015-04-13 17:09:12.310; org.apache.solr.update.StreamingSolrServers$1; error
java.net.SocketException: Broken pipe
Just a couple of notes:
this is a 2-shard setup with 2 nodes per shard.
Currently these are on VMs with 8 cores and 8GB of RAM each (Java max heap
is ~5588MB, but we usually never even get that high), backed by an NFS file
store which we store the indexes on (NetApp SAN with NFS exports on SAS
disk).
So we actually had 3 of the 6 machines automatically restart the Solr service
as memory pressure was too high; 2 were by SIGABRT and one was by the Java OOM
killer. I dropped a pmap on one of the Solr services before it died.
Basically I need to figure out what the other direct memory references are
outside
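In case it helps, this is roughly how I've been slicing the pmap output to
spot the big non-heap mappings (assumes Solr was started via start.jar, so the
pgrep pattern is an assumption):

    # largest resident mappings last; big "[ anon ]" regions outside the
    # Java heap are candidates for direct/NIO memory
    pid=$(pgrep -f start.jar)
    pmap -x "$pid" | sort -n -k3 | tail -20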
We have a 2-shard SolrCloud implementation with 6 servers in production. We
have allocated 24GB to each server and are using a JVM max memory setting of
-Xmx14336m on each of the servers. We are using the same embedded Jetty that
Solr comes with. The JVM side of things looks like what I'd expect
Thanks for the quick reply.
I made some changes to rule out what I could around how Linux is handling
this stuff. Yes, I'm using the default swappiness setting of 60, but at this
point it looks like the machine is swapping now because of low memory.
Here are the vmstat and free -m results:
dash:
http://lucene.472066.n3.nabble.com/file/n4086902/solr_dash1.png
JVM section:
http://lucene.472066.n3.nabble.com/file/n4086902/solr_dash2.png
ps output:
http://lucene.472066.n3.nabble.com/file/n4086902/solr_ps_out.png
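One OS-side knob I may still try, since the default swappiness of 60 is
fairly eager to swap on a box whose spare RAM is mostly doing page-cache duty
for the index (the value 10 is just the commonly suggested starting point,
not something I've validated here):

    # take effect immediately
    sudo sysctl -w vm.swappiness=10
    # persist across reboots
    echo "vm.swappiness = 10" | sudo tee -a /etc/sysctl.conf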
Erick, that may be one of the ways I approach this; I just want to
http://lucene.472066.n3.nabble.com/file/n4086923/huge.png
That doesn't seem to be a problem.
Markus, are you saying that I should plan on resident memory being at least
double my heap size? I haven't run into issues around this before, but then
again I don't know everything.
Is this a rule of thumb?
I didn't change it and haven't seen any issues.
As an example, I have 9 Solr nodes (3 clusters of 3) using different versions
of Solr (4.1, 4.1, and 4.2.1), utilizing the same ZooKeeper ensemble (3
servers), using chroot for the different configs across clusters.
My ZooKeeper servers are just VMs, dual-core with 1GB of RAM, and are only
used
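For reference, the chroot is just a path suffix on the zkHost string, so each
cluster points at its own subtree of the same ensemble. Hosts and chroot names
below are examples, not my actual ones, and the chroot znode has to exist
before the first start (zkcli.sh's makepath command can create it):

    # cluster 1
    -DzkHost=zk1:2181,zk2:2181,zk3:2181/solr-cluster1
    # cluster 2
    -DzkHost=zk1:2181,zk2:2181,zk3:2181/solr-cluster2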
So we have 3 servers in a SolrCloud cluster.
http://lucene.472066.n3.nabble.com/file/n4053506/Cloud1.png
We have 2 shards for our collection (classic_bt) with a shard on each of the
first two servers, as the picture shows. The third server has replicas of the
first 2 shards just for high availability.
Michael Della Bitta-2 wrote
> Hello Vytenis,
> What exactly do you mean by "aren't distributing across the shards"?
> Do you mean that POSTs against the server for shard 1 never end up
> resulting in documents saved in shard 2?
So we indexed a set of 33010 documents on server01, which are now in
Chris Hostetter-3 wrote
> I'm not familiar with the details, but I've seen Miller respond to a
> similar question with reference to the issue of not explicitly specifying
> numShards when creating your collections...
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201303.mbox/%
Michael Della Bitta-2 wrote
> With earlier versions of SolrCloud, if there was any error or warning
> when you made a collection, you likely were set up for implicit
> routing which means that documents only go to the shard you're talking
> to. What you want is compositeId routing, which works how
Michael Della Bitta-2 wrote
> If you can work with a clean state, I'd turn off all your shards,
> clear out the Solr directories in Zookeeper, reset solr.xml for each
> of your shards, upgrade to the latest version of Solr, and turn
> everything back on again. Then upload config, recreate your
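To make the numShards point concrete, recreating the collection through the
Collections API with numShards set explicitly is what gets you compositeId
routing; the names and counts below just mirror this thread's setup and are
examples rather than a recommendation:

    curl "http://host:8983/solr/admin/collections?action=CREATE&name=classic_bt&numShards=2&replicationFactor=2&maxShardsPerNode=1&collection.configName=classic_bt_conf"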