Re: Upgrade 6.2.1 to 7.5.0 - "Connection evictor" Threads not closed

2018-11-26 Thread Jason Gerlowski
Hey Sebastian,

As for how Solr/SolrJ compatibility is handled, the story for SolrJ
looks a lot like the story for Solr itself - major version changes can
introduce breaking changes, so it is best to avoid using SolrJ 6.x
with Solr 7.x.  In practice, changes that break Solr/SolrJ
compatibility are relatively rare, so mixing versions might work if
your hand is forced.

As for the behavior you described...I think I understand what you're
describing, but to make sure:  Are the "connection-evictor" threads
accumulating in your client application, on the Solr server itself, or
both?

I suspect you're seeing this in your client code.  If so, it'd really
help us to help you if you could provide some more details on how
you're using SolrJ.  Can you share a small snippet (JUnit test?) that
reproduces the problem?  How are you creating the SolrClient you're
using to send requests?  Which SolrClient implementation(s) are you
using?  Are you providing your own HttpClient, or letting SolrClient
create its own?  It'll be much easier for others to help with a little
more detail there.
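For context, the most common cause of this symptom is a SolrClient being created per request and never closed: each HttpSolrClient that builds its own internal HttpClient also starts a background "Connection evictor" thread, and that thread only stops when the client is close()d.  A minimal plain-Java sketch of that lifecycle pattern (no SolrJ involved; the class and method names below are illustrative, not SolrJ API):

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a client that owns a background eviction
// thread, the way HttpSolrClient does when it builds its own HttpClient.
class LeakyClient implements Closeable {
    private final Thread evictor;

    LeakyClient() {
        // The evictor runs until the owning client is closed.
        evictor = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    return; // close() interrupts us; exit the loop
                }
            }
        }, "Connection evictor");
        evictor.setDaemon(true);
        evictor.start();
    }

    boolean isEvictorAlive() {
        return evictor.isAlive();
    }

    @Override
    public void close() {
        evictor.interrupt();
    }
}

public class EvictorLeakDemo {
    public static void main(String[] args) throws Exception {
        // Anti-pattern: a new client per request, never closed -> one
        // extra "Connection evictor" thread accumulates per request.
        List<LeakyClient> leaked = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            leaked.add(new LeakyClient());
        }
        Thread.sleep(200);
        long alive = leaked.stream().filter(LeakyClient::isEvictorAlive).count();
        System.out.println("evictor threads still running: " + alive);

        // Correct pattern: reuse one client for the application's
        // lifetime, or close it deterministically when done.
        try (LeakyClient c = new LeakyClient()) {
            // ... send requests ...
        }

        // Clean up the demo's leaked clients.
        for (LeakyClient c : leaked) {
            c.close();
        }
    }
}
```

With SolrJ itself, the equivalent fix is usually to create one SolrClient for the life of the application and call close() on it at shutdown, rather than building a new client per query.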

Best,

Jason

On Fri, Nov 23, 2018 at 10:38 AM Sebastian Riemer  wrote:
>
> Hi,
>
> we've recently changed our Solr version from 6.2.1 to 7.5.0, and since then, 
> whenever we execute a query on Solr, a new thread is created and never 
> closed.
>
> These threads are all labelled "Connection evictor", and they gather until a 
> critical mass is reached and either the OS cannot create any more threads, 
> or an out-of-memory error is produced.
>
> At first I thought the cause might be that we were using a higher 
> SolrJ version than our Solr server (by mistakenly forgetting to upgrade the 
> server version too):
>
> So we had for SolrJ: 7.4.0
>
> <dependency>
>     <groupId>org.apache.solr</groupId>
>     <artifactId>solr-solrj</artifactId>
>     <version>7.4.0</version>
> </dependency>
>
> And for Solr-Server:  6.2.1
>
> But now I have installed the newest Solr server version, 7.5.0, and I still 
> see an additional thread created with each Solr search, and it is never 
> released.
>
> When downgrading SolrJ to 6.2.1, I can verify that no new threads are created 
> when doing a Solr search.
>
> What do you think about this? Are there any known pitfalls? Maybe I missed 
> some crucial changes necessary when upgrading to 7.5.0?
>
> What about differing versions of SolrJ and the Solr server? As far as I recall 
> from the docs, a difference of one major version in either direction should be OK.
>
> Thanks for all your feedback,
>
> Yours sincerely
>
> Sebastian Riemer


Is reload necessary for updates to files referenced in schema, like synonyms, protwords, etc?

2018-11-26 Thread Shawn Heisey
I know that changes to the schema require a reload.  But do changes to 
files referenced by a schema also require a reload?  So if for instance 
I were to change the contents of a synonym file, would I need to reload 
the core before Solr would use the new file?  Synonyms in this case are 
at query time, but other files like protwords are used at index time.


I *THINK* that a reload is required, but I can't be sure without 
checking the code, and it would probably take me more than a couple of 
hours to unravel the code enough to answer the question myself.


It is not SolrCloud, so there's no ZK to worry about.

Thanks,
Shawn



Re: Is reload necessary for updates to files referenced in schema, like synonyms, protwords, etc?

2018-11-26 Thread Walter Underwood
Should be easy to check with the analysis UI. Add a synonym and see if it is 
used.
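If a reload does turn out to be needed, a standalone (non-cloud) core can be reloaded via the CoreAdmin API; "mycore" below is a placeholder for the actual core name:

```shell
# Reload a single core on a standalone Solr instance
# ("mycore" is a placeholder for the real core name)
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore"
```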

I seem to remember some work on reloading synonyms on the fly without a core 
reload. These seem related...

https://issues.apache.org/jira/browse/SOLR-5200
https://issues.apache.org/jira/browse/SOLR-5234

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Nov 26, 2018, at 11:43 AM, Shawn Heisey  wrote:
> 
> I know that changes to the schema require a reload.  But do changes to files 
> referenced by a schema also require a reload?  So if for instance I were to 
> change the contents of a synonym file, would I need to reload the core before 
> Solr would use the new file?  Synonyms in this case are at query time, but 
> other files like protwords are used at index time.
> 
> I *THINK* that a reload is required, but I can't be sure without checking the 
> code, and it would probably take me more than a couple of hours to unravel 
> the code enough to answer the question myself.
> 
> It is not SolrCloud, so there's no ZK to worry about.
> 
> Thanks,
> Shawn
> 



Autoscaling using triggers to create new replicas

2018-11-26 Thread Daniel Carrasco
Hello,

I'm trying to create an autoscaling cluster with node_added_trigger
and node_lost_trigger triggers to allow it to grow and shrink depending on
load, but I haven't found much info about these options, and all
I've achieved so far is one collection that creates a lot of replicas on the
same node when a node starts, and another collection that just keeps the
same replica number.

I've created three ZK nodes, and I've joined a Solr node to that ZK
cluster.  After that, I've created two collections with these simple
commands:
curl "http://127.0.0.1:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&replicationFactor=1&maxShardsPerNode=1"
curl "http://127.0.0.1:8983/solr/admin/collections?action=CREATE&name=test2&numShards=1&replicationFactor=1&maxShardsPerNode=1"

I've added the two triggers using the example from the Solr manual:
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/admin/autoscaling' --data-binary '{
  "set-trigger": {"name": "node_added_trigger", "event": "nodeAdded",
                  "waitFor": "5s", "preferredOperation": "ADDREPLICA"}}'
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/admin/autoscaling' --data-binary '{
  "set-trigger": {"name": "node_lost_trigger", "event": "nodeLost",
                  "waitFor": "120s", "preferredOperation": "DELETENODE"}}'


And after all this, I've added a node to test.
When that node starts, the test collection starts to add replicas without
control (it added as many as 12 replicas on the new node), while the test2
collection keeps just one replica. If I delete the test collection and
repeat the process of adding a new node, then test2 shows the same
behavior and creates a lot of replicas on the new node.

The delete trigger works just fine: when a node is down for about 120s it
is removed from the collection without problems.

Is there any way to create just one replica when a node joins the cluster?
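One possible approach, sketched here under the assumption that the Solr 7.x autoscaling policy syntax applies to this setup: rather than relying on the trigger's preferredOperation alone, a cluster policy can cap how many replicas of a shard may land on any one node, so a nodeAdded event cannot stack many replicas on the new node:

```shell
# Sketch: cluster-wide policy limiting each shard to fewer than 2
# replicas per node (Solr 7.x autoscaling policy syntax)
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/admin/autoscaling' --data-binary '{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
  ]}'
```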

Thanks!
-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_