Re: Solr crashing / slowing down the performance

2017-07-03 Thread Walter Underwood
With 8 GB of RAM and 5.5 GB of Java heap, there is zero room for caching 
indexes. The OS will eat a gigabyte or so, and then there are other processes 
running. So either the index accesses are pounding on the disk or the Java heap 
is getting swapped out.

This machine is too small. The smallest Solr machine we deploy, even in test 
and dev, has 15 GB of RAM, SSD disks, and an 8 GB Java heap. In prod, we 
run with enough RAM that the entire index can live in RAM file buffers.

We don’t do a lot of faceting or other memory-intensive queries. We mostly just 
search.
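
For reference, heap size is set when Solr starts; a minimal sketch (the -m
flag sets both -Xms and -Xmx, and 8g is an illustrative value, not a
recommendation):

# size the heap to your own hardware, leaving room for OS file buffers
bin/solr start -m 8g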

wunder 
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Jul 3, 2017, at 5:53 PM, Erick Erickson  wrote:
> 
> The physical memory is something of a red herring, see:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> 
> The JVM being near its limit is cause for concern. At a glance you are
> simply running too close to the edge of your heap. One thing I've seen
> when running this close is that garbage collection kicks in and recovers
> only a bit of memory, just enough to keep going, then the JVM immediately
> goes into another GC cycle, spending lots of CPU cycles doing nothing
> but GC.
> 
> Try looking at your GC logs with something like gcviewer
> (https://sourceforge.net/projects/gcviewer/) or gceasy
> (http://gceasy.io/) to get a sense of what's really going on.
> 
> One common mistake is to issue commits from the client after every
> batch of docs indexed to Solr. You may be wasting a lot of cycles
> if you're doing that; I'd let the autocommit interval handle it all.
> See: 
> https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
> 
> Best,
> Erick
> 
> On Mon, Jul 3, 2017 at 12:55 PM, Venkateswarlu Bommineni
>  wrote:
>> Hi Eric,
>> 
>> Thanks for reply.
>> 
>> Please find the image at the URL below.
>> 
>> https://drive.google.com/open?id=0B9BkzwYA2P-VelJIRXhybFhpLUk
>> 
>> In my view, the physical memory and the JVM heap are not proportionate;
>> please correct me if I am wrong.
>> 
>> And in the Solr logs I am not seeing any OOM errors.
>> 
>> I have gone through Shawn's post partially; the problem here is that I
>> can't change much of the configuration, but I will try.
>> 
>> 
>> Thanks,
>> Venkat.
>> 
>> 
>> On Mon, Jul 3, 2017 at 11:12 PM, Erick Erickson 
>> wrote:
>> 
>>> Images don't come through; you'll have to put it somewhere and post a link.
>>> Have you seen Shawn's page?
>>> 
>>> https://wiki.apache.org/solr/ShawnHeisey
>>> 
>>> And what does your Solr log say happens? OOM? Other? Could you throttle
>>> your indexing client to spare some CPU cycles for querying?
>>> 
>>> Best,
>>> Erick
>>> 
>>> On Mon, Jul 3, 2017 at 8:32 AM, Venkateswarlu Bommineni 
>>> wrote:
>>> 
 Hi Team,
 
 Background:
 
 We have a Solr 6.2 instance with multiple cores (six cores) in it. We have
 another system (a CMS) that pushes data to Solr.
 
 Issue:
 
 Whenever we do a full index from the other system (installed on a
 different box), sometimes the Solr JVM crashes. Even when it doesn't crash,
 the query performance is very slow, especially the AutoSuggestion query
 (using the spellCheck component).
 
 Please find the memory settings below:
 [image: Inline image 1]
 
 Please help me set optimal memory settings.
 
 
 Thanks,
 Venkat.
 
>>> 



Re: Allow Join over two sharded collection

2017-07-03 Thread mganeshs
Hi Susheel,

To make use of Joins, the only option is to go for manual routing. If I go
for manual routing based on time, we lose the power of distributing the load
while indexing: all indexing ends up happening in the newly created shard.
We feel this is not an efficient approach and degrades indexing performance,
since we have a lot of JVMs running but all indexing still goes to one
single shard, and we are also expecting 1M+ docs per month in the coming
days.

For your question on whether we will query old, aged documents: mostly we
won't. Given the querying pattern, it's clear we should go for manual
routing and create aliases. But when it comes to indexing, in order to
distribute the load, we felt default routing is the best option, but then
Join will not work. And that's the reason for asking when this feature will
be in place.
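
For context, a sketch of the two routing styles being compared here
(collection, config, and shard names are placeholders):

# manual (implicit) routing: one time-based collection/shard, so all new indexing hits one shard
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=docs_2017_07&router.name=implicit&shards=shard1&collection.configName=myconf'

# default compositeId routing: docs are spread across shards by id hash
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=docs&router.name=compositeId&numShards=4&collection.configName=myconf'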

Regards,



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Allow-Join-over-two-sharded-collection-tp4343443p4344098.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 5.2+ using SSL and non-SSL ports

2017-07-03 Thread Shalin Shekhar Mangar
No, Solr cannot use both SSL and non-SSL at the same time. You must choose
one.
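
For reference, when SSL is enabled cluster-wide, the urlScheme cluster
property is set once in ZooKeeper; a sketch (the zkhost value is a
placeholder):

server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
  -cmd clusterprop -name urlScheme -val https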

On Mon, Jul 3, 2017 at 10:29 PM, sputul  wrote:

> I have SSL enabled in Solr 5, but ZooKeeper needs to be started with the
> proper urlScheme. Does this imply SolrCloud cannot use SSL and non-SSL at
> the same time, as ZooKeeper itself needs separate ports?
>
> Thanks.
>
>
>
> --
> View this message in context: http://lucene.472066.n3.nabble.com/Solr-5-2-using-SSL-and-non-SSL-ports-tp4312859p4343997.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
Regards,
Shalin Shekhar Mangar.


Re: Solr dynamic "on the fly fields"

2017-07-03 Thread Erick Erickson
I don't know how one would do this. But I would ask what the use-case
is. Creating such fields at index time just seems like it would be
inviting abuse by creating a zillion fields, as you have no control
over what gets created. I'm assuming your tenants don't talk to each
other...

Have you thought about using function queries to pull this data out as
needed at _query_ time? See:
https://cwiki.apache.org/confluence/display/solr/Function+Queries
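
For illustration, a sketch of such a query-time computation as a
pseudo-field in fl (collection and field names are placeholders; log() is
base-10 in Solr, and the source fields generally need to be single-valued
numeric fields that are indexed or have docValues, not necessarily stored):

curl 'http://localhost:8983/solr/mycollection/select?q=clientId:43&fl=id,custom_field:sum(log(field39),field25)'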

Best,
Erick

On Mon, Jul 3, 2017 at 12:06 PM, Pablo Anzorena  wrote:
> Thanks Erick,
>
> For my use case, none of those solutions is possible. I have a
> multitenancy scheme at the most basic level; that is, I have a single
> collection with fields (clientId, field1, field2, ..., field50) serving
> many clients.
>
> Clients can create custom fields based on arithmetic operations of any
> other field.
>
> So, is it possible to update, let's say, field49 with the following operation:
> log(field39) + field25 on clientId=43?
>
> Do field39 and field25 need to be stored to accomplish this? Is there any
> other way to avoid storing them?
>
> Thanks!
>
>
> 2017-07-03 15:00 GMT-03:00 Erick Erickson :
>
>> There are two ways:
>> 1> define a dynamic field pattern in your schema, i.e.
>>
>> <dynamicField name="*_sum" type="..." indexed="true" stored="true"/>
>>
>> Now just add any field you want in the doc. If it ends in "_sum" and
>> no other explicit field matches, you have a new field.
>>
>> 2> Use the managed schema to add these on the fly. I don't recommend
>> this from what I know of your use case; it is primarily intended for
>> front-ends to be able to modify the schema and/or for "field guessing".
>>
>> I do caution you, though, that either way you shouldn't go over the top.
>> If you're thinking of thousands of different fields, that can lead to
>> performance issues.
>>
>> You can either populate the field in your indexing client or
>> create a custom update component; perhaps the simplest would be a
>> StatelessScriptUpdateProcessorFactory.
>>
>> see: https://cwiki.apache.org/confluence/display/solr/Update+Request+Processors#UpdateRequestProcessors-UpdateRequestProcessorFactories
>>
>> Best,
>> Erick
>>
>> On Mon, Jul 3, 2017 at 10:52 AM, Pablo Anzorena 
>> wrote:
>> > Hey,
>> >
>> > I was wondering if there is some way to add fields "on the fly" based on
>> > arithmetic operations on other fields. For example, add a new field
>> > "custom_field" = log(field1) + field2 - 5.
>> >
>> > Thanks.
>>


Re: Solr crashing / slowing down the performance

2017-07-03 Thread Erick Erickson
The physical memory is something of a red herring, see:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

The JVM being near its limit is cause for concern. At a glance you are
simply running too close to the edge of your heap. One thing I've seen
when running this close is that garbage collection kicks in and recovers
only a bit of memory, just enough to keep going, then the JVM immediately
goes into another GC cycle, spending lots of CPU cycles doing nothing
but GC.

Try looking at your GC logs with something like gcviewer
(https://sourceforge.net/projects/gcviewer/) or gceasy
(http://gceasy.io/) to get a sense of what's really going on.

One common mistake is to issue commits from the client after every
batch of docs indexed to Solr. You may be wasting a lot of cycles
if you're doing that; I'd let the autocommit interval handle it all.
See: 
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
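
For illustration, a minimal sketch of letting autocommit handle it via the
Config API (the collection name and all values are placeholders, not
recommendations):

# hard commit every 60s without opening a searcher; soft commit controls visibility
curl http://localhost:8983/solr/mycollection/config -H 'Content-type:application/json' -d '{
  "set-property": {
    "updateHandler.autoCommit.maxTime": 60000,
    "updateHandler.autoCommit.openSearcher": false,
    "updateHandler.autoSoftCommit.maxTime": 120000
  }
}'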

Best,
Erick

On Mon, Jul 3, 2017 at 12:55 PM, Venkateswarlu Bommineni
 wrote:
> Hi Eric,
>
> Thanks for reply.
>
> Please find the image at the URL below.
>
> https://drive.google.com/open?id=0B9BkzwYA2P-VelJIRXhybFhpLUk
>
> In my view, the physical memory and the JVM heap are not proportionate;
> please correct me if I am wrong.
>
> And in the Solr logs I am not seeing any OOM errors.
>
> I have gone through Shawn's post partially; the problem here is that I
> can't change much of the configuration, but I will try.
>
>
> Thanks,
> Venkat.
>
>
> On Mon, Jul 3, 2017 at 11:12 PM, Erick Erickson 
> wrote:
>
>> Images don't come through; you'll have to put it somewhere and post a link.
>> Have you seen Shawn's page?
>>
>> https://wiki.apache.org/solr/ShawnHeisey
>>
>> And what does your Solr log say happens? OOM? Other? Could you throttle
>> your indexing client to spare some CPU cycles for querying?
>>
>> Best,
>> Erick
>>
>> On Mon, Jul 3, 2017 at 8:32 AM, Venkateswarlu Bommineni 
>> wrote:
>>
>> > Hi Team,
>> >
>> > Background:
>> >
>> > We have a Solr 6.2 instance with multiple cores (six cores) in it. We have
>> > another system (a CMS) that pushes data to Solr.
>> >
>> > Issue:
>> >
>> > Whenever we do a full index from the other system (installed on a
>> > different box), sometimes the Solr JVM crashes. Even when it doesn't
>> > crash, the query performance is very slow, especially the AutoSuggestion
>> > query (using the spellCheck component).
>> >
>> > Please find the memory settings below:
>> > [image: Inline image 1]
>> >
>> > Please help me set optimal memory settings.
>> >
>> >
>> > Thanks,
>> > Venkat.
>> >
>>


Solr Prod Issue | KeeperErrorCode = ConnectionLoss for /overseer_elect/leader

2017-07-03 Thread Bhalla, Rahat

Hi Solr Users,

I hope this email finds you all in the best of spirits and in a mood where 
you'd be willing to help a young developer (me :) ) with issues that I'm 
facing with Solr Cloud.

At my organization, we are running a Solr Cloud with 5 nodes for Solr 
instances, 13 collections spread across the 5 nodes, and an ensemble of 3 
ZooKeeper instances spread across three different nodes.

Over the last week, our leader node seems to be going down every other day, 
and although we restart the Solr instances, they still go down within the 
next 24 hours or so.

We have tried rebooting the nodes that host the Solr instances, and that 
hasn't helped. We plan to clear out the ZooKeeper logs and data folders 
before restarting the ZooKeeper instances.
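
As a quick health check on each ensemble member, ZooKeeper's
four-letter-word commands can be used; a sketch (host names are
placeholders):

# 'ruok' answers 'imok' if the server is up; 'stat' shows mode (leader/follower) and client connections
echo ruok | nc zk-node01 2181
echo stat | nc zk-node01 2181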

As of now, I'm the only one supporting Solr in my organization, and any 
insight from you could help me a great deal in fixing the issue. I'm copying 
the exception stack trace from this morning. Any recommendations that you 
might have will be greatly appreciated.

Below is a snapshot of one of the zoo nodes:

[cid:image001.png@01D2F43C.30CA38E0]

Exception Stacktrace

138127149 
[OverseerCollectionConfigSetProcessor-98234161688412161-prod-solr-node01:9080_solr-n_000140]
 [ERROR] 2017-07-03 05:02:55 (OverseerTaskProcessor.java:amILeader:392) -
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = 
ConnectionLoss for /overseer_elect/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:348)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:345)
at 
org.apache.solr.cloud.OverseerTaskProcessor.amILeader(OverseerTaskProcessor.java:384)
at 
org.apache.solr.cloud.OverseerTaskProcessor.run(OverseerTaskProcessor.java:191)
at java.lang.Thread.run(Unknown Source)
138133409 [qtp778720569-10329] [ERROR] 2017-07-03 05:03:01 
(SolrException.java:log:148) - org.apache.solr.common.SolrException: Could not 
load collection from ZK: feedsOutBoundToExchange
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1047)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:610)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionsMap(ClusterState.java:248)
at 
org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation$20.call(CollectionsHandler.java:674)
at 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:195)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:663)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)

Re: Work-around for "indexed without position data"

2017-07-03 Thread Solr User
Not sure if it helps beyond the steps to reproduce that I supplied above,
but I also see that "Omit Term Frequencies & Positions" is still set on the
field according to the LukeRequestHandler:

ITS--OF--
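
For anyone who wants to run the same check, a sketch of querying the Luke
handler for a single field's flags (core and field names follow the
techproducts example below):

curl 'http://localhost:8983/solr/techproducts/admin/luke?fl=cat&numTerms=0'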



On Mon, Jun 5, 2017 at 1:18 PM, Solr User  wrote:

> Sorry for the delay.  I was able to reproduce this easily with my setup,
> but reproducing this on a Solr example proved challenging.  Hopefully the
> work that I did to find the situation in which this is produced will help
> in resolving the problem.  The driving factor for this appears to be how
> updates are sent to Solr.  When sending batches of updates with commits,
> the problem is reproduced.  If the commit is held until after all updates
> are sent, then no problem is produced.  This leads me to believe that this
> issue has something to do with overlapping commits or index merges.  This
> was reproducible regardless of running classic or managed schema and
> regardless of running Solr core or SolrCloud.
>
> There are not many steps to reproduce this, but you will need a way to
> send these updates.  I have included inline create.sh and create.pl
> scripts to generate the data and send the updates.  You can index a
> lastModified field or something to convince yourself that everything has
> been re-indexed.  I left that out to keep the steps lean.  Also, this test
> is using commit statements from the client sending the updates for
> simplicity even though it is not a good practice.  My normal setup is using
> Solrj with commitWithin to allow Solr to manage when the commits take
> place, but the same error is produced either way.
>
>
> *STEPS TO REPRODUCE*
>
>    1. Install Solr 5.5.3 and change to that working directory
>    2. bin/solr -e techproducts
>    3. bin/solr stop  [Why these next 3 steps? They start the index
>    completely fresh, without the 32 example documents, as opposed to using
>    a delete query. The documents are not posted after the core is detected
>    the second time.]
>    4. rm -rf ./example/techproducts/solr/techproducts/data/
>    5. bin/solr -e techproducts
>    6. ./create.sh
>    7. curl -X POST -H 'Content-type:application/json' --data-binary '{
>    "replace-field":{ "name":"cat", "type":"text_en_splitting", "indexed":true,
>    "multiValued":true, "stored":true } }' http://localhost:8983/solr/techproducts/schema
>    8. http://localhost:8983/solr/techproducts/select?q=cat:%22hard%20drive%22  [error]
>    9. ./create.sh
>    10. http://localhost:8983/solr/techproducts/select?q=cat:%22hard%20drive%22
>    [error even though all documents have been re-indexed]
>
> *create.sh*
> #!/bin/bash
> for i in {1..100}; do
> echo "$i"
> ./create.pl $i > ./create.xml$i
> curl http://localhost:8983/solr/techproducts/update?commit=true -H
> "Content-Type: text/xml" --data-binary @./create.xml$i
> done
>
> *create.pl *
> #!/usr/bin/perl
> my $S = $ARGV[0];
> my $I = 100;
> my $N = $S*$I + $I;
> my $i;
> print "\n";
> for($i=$S*$I; $i<$N; $i++) {
>print "SP${i}cat
> hard drive ${i}\n";
> }
> print "\n";
>
>
> On Fri, May 26, 2017 at 2:14 AM, Rick Leir  wrote:
>
>> Can you reproduce this error? What are the steps you take to reproduce
>> it? ( simple is better).
>>
>> cheers -- Rick
>>
>>
>>
>> On 2017-05-25 05:46 PM, Solr User wrote:
>>
>>> This is in regards to changing a field type from string to
>>> text_en_splitting, re-indexing all documents, even optimizing to give the
>>> index a chance to merge segments and rewrite itself entirely, and then
>>> getting this error when running a phrase query:
>>> java.lang.IllegalStateException: field "blah" was indexed without
>>> position
>>> data; cannot run PhraseQuery
>>>
>>> I have encountered this issue before and have always done one of the
>>> following as a work-around:
>>> 1.  Instead of changing the field type on an existing field just create a
>>> new field and retire the old one.
>>> 2.  Delete the index directory and start from scratch.
>>>
>>> These work-arounds are not always ideal.  Does anyone know what is
>>> holding
>>> onto that old field type definition?  What thinks it is still a string?
>>> Every document has been re-indexed and I am sure of this because I have a
>>> time stamp indexed.  Is there any other way to get this to work?
>>>
>>> For what it is worth, I am running this in SolrCloud mode but I remember
>>> seeing this issue before SolrCloud was released as well.
>>>
>>>
>>
>


Re: Solr crashing / slowing down the performance

2017-07-03 Thread Venkateswarlu Bommineni
Hi Eric,

Thanks for reply.

Please find the image at the URL below.

https://drive.google.com/open?id=0B9BkzwYA2P-VelJIRXhybFhpLUk

In my view, the physical memory and the JVM heap are not proportionate;
please correct me if I am wrong.

And in the Solr logs I am not seeing any OOM errors.

I have gone through Shawn's post partially; the problem here is that I
can't change much of the configuration, but I will try.


Thanks,
Venkat.


On Mon, Jul 3, 2017 at 11:12 PM, Erick Erickson 
wrote:

> Images don't come through; you'll have to put it somewhere and post a link.
> Have you seen Shawn's page?
>
> https://wiki.apache.org/solr/ShawnHeisey
>
> And what does your Solr log say happens? OOM? Other? Could you throttle
> your indexing client to spare some CPU cycles for querying?
>
> Best,
> Erick
>
> On Mon, Jul 3, 2017 at 8:32 AM, Venkateswarlu Bommineni 
> wrote:
>
> > Hi Team,
> >
> > Background:
> >
> > We have a Solr 6.2 instance with multiple cores (six cores) in it. We have
> > another system (a CMS) that pushes data to Solr.
> >
> > Issue:
> >
> > Whenever we do a full index from the other system (installed on a
> > different box), sometimes the Solr JVM crashes. Even when it doesn't
> > crash, the query performance is very slow, especially the AutoSuggestion
> > query (using the spellCheck component).
> >
> > Please find the memory settings below:
> > [image: Inline image 1]
> >
> > Please help me set optimal memory settings.
> >
> >
> > Thanks,
> > Venkat.
> >
>


Re: Solr dynamic "on the fly fields"

2017-07-03 Thread Pablo Anzorena
Thanks Erick,

For my use case, none of those solutions is possible. I have a
multitenancy scheme at the most basic level; that is, I have a single
collection with fields (clientId, field1, field2, ..., field50) serving
many clients.

Clients can create custom fields based on arithmetic operations of any
other field.

So, is it possible to update, let's say, field49 with the following operation:
log(field39) + field25 on clientId=43?

Do field39 and field25 need to be stored to accomplish this? Is there any
other way to avoid storing them?

Thanks!


2017-07-03 15:00 GMT-03:00 Erick Erickson :

> There are two ways:
> 1> define a dynamic field pattern in your schema, i.e.
>
> <dynamicField name="*_sum" type="..." indexed="true" stored="true"/>
>
> Now just add any field you want in the doc. If it ends in "_sum" and
> no other explicit field matches, you have a new field.
>
> 2> Use the managed schema to add these on the fly. I don't recommend
> this from what I know of your use case; it is primarily intended for
> front-ends to be able to modify the schema and/or for "field guessing".
>
> I do caution you, though, that either way you shouldn't go over the top.
> If you're thinking of thousands of different fields, that can lead to
> performance issues.
>
> You can either populate the field in your indexing client or
> create a custom update component; perhaps the simplest would be a
> StatelessScriptUpdateProcessorFactory.
>
> see: https://cwiki.apache.org/confluence/display/solr/Update+Request+Processors#UpdateRequestProcessors-UpdateRequestProcessorFactories
>
> Best,
> Erick
>
> On Mon, Jul 3, 2017 at 10:52 AM, Pablo Anzorena 
> wrote:
> > Hey,
> >
> > I was wondering if there is some way to add fields "on the fly" based on
> > arithmetic operations on other fields. For example, add a new field
> > "custom_field" = log(field1) + field2 - 5.
> >
> > Thanks.
>


Re: Solr dynamic "on the fly fields"

2017-07-03 Thread Erick Erickson
There are two ways:
1> define a dynamic field pattern in your schema, i.e.

<dynamicField name="*_sum" type="..." indexed="true" stored="true"/>

Now just add any field you want in the doc. If it ends in "_sum" and
no other explicit field matches, you have a new field.
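
Equivalently, a sketch using the Schema API (the collection name and field
type are placeholders; the type must already exist in your schema):

curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-dynamic-field": { "name":"*_sum", "type":"tdouble", "indexed":true, "stored":true }
}' http://localhost:8983/solr/mycollection/schema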

2> Use the managed schema to add these on the fly. I don't recommend
this from what I know of your use case; it is primarily intended for
front-ends to be able to modify the schema and/or for "field guessing".

I do caution you, though, that either way you shouldn't go over the top.
If you're thinking of thousands of different fields, that can lead to
performance issues.

You can either populate the field in your indexing client or
create a custom update component; perhaps the simplest would be a
StatelessScriptUpdateProcessorFactory.

see: https://cwiki.apache.org/confluence/display/solr/Update+Request+Processors#UpdateRequestProcessors-UpdateRequestProcessorFactories

Best,
Erick

On Mon, Jul 3, 2017 at 10:52 AM, Pablo Anzorena  wrote:
> Hey,
>
> I was wondering if there is some way to add fields "on the fly" based on
> > arithmetic operations on other fields. For example, add a new field
> > "custom_field" = log(field1) + field2 - 5.
>
> Thanks.


Solr dynamic "on the fly fields"

2017-07-03 Thread Pablo Anzorena
Hey,

I was wondering if there is some way to add fields "on the fly" based on
arithmetic operations on other fields. For example, add a new field
"custom_field" = log(field1) + field2 - 5.

Thanks.


Re: Solr crashing / slowing down the performance

2017-07-03 Thread Erick Erickson
Images don't come through; you'll have to put it somewhere and post a link.
Have you seen Shawn's page?

https://wiki.apache.org/solr/ShawnHeisey

And what does your Solr log say happens? OOM? Other? Could you throttle
your indexing client to spare some CPU cycles for querying?

Best,
Erick

On Mon, Jul 3, 2017 at 8:32 AM, Venkateswarlu Bommineni 
wrote:

> Hi Team,
>
> Background:
>
> We have a Solr 6.2 instance with multiple cores (six cores) in it. We have
> another system (a CMS) that pushes data to Solr.
>
> Issue:
>
> Whenever we do a full index from the other system (installed on a
> different box), sometimes the Solr JVM crashes. Even when it doesn't
> crash, the query performance is very slow, especially the AutoSuggestion
> query (using the spellCheck component).
>
> Please find the memory settings below:
> [image: Inline image 1]
>
> Please help me set optimal memory settings.
>
>
> Thanks,
> Venkat.
>


Re: Using ASCIIFoldingFilterFactory

2017-07-03 Thread Erick Erickson
The best thing to do is go to the admin/analysis page and see if you
get exactly what you expect. You'll see the transformations that each
step in your chain does.

I mean, your usage looks OK, but all that says is that the syntax looks
fine. Only you can see whether the actual chain you've defined does what
you expect, and the admin/analysis page is the go-to for that.
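
The same check can be scripted against the field analysis handler; a sketch
(the core name, field type, and sample text are placeholders):

curl 'http://localhost:8983/solr/collection1/analysis/field?analysis.fieldtype=text_general&analysis.fieldvalue=Caf%C3%A9'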

Best,
Erick

On Mon, Jul 3, 2017 at 10:06 AM, SOLR4189  wrote:
> Hey all,
> I need to convert alphabetic, numeric, and symbolic Unicode characters to
> their ASCII equivalents. The solr.ASCIIFoldingFilterFactory is the solution
> for my request. I'm wondering if my usage of the filter is correct and if
> anyone has encountered any problems using the specified filter (I'm using
> Solr 4.10.3).
> The image included shows my usage of the filter. Thanks in advance!
>
> 
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Using-ASCIIFoldingFilterFactory-tp4343999.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Using ASCIIFoldingFilterFactory

2017-07-03 Thread SOLR4189
Hey all,
I need to convert alphabetic, numeric, and symbolic Unicode characters to
their ASCII equivalents. The solr.ASCIIFoldingFilterFactory is the solution
for my request. I'm wondering if my usage of the filter is correct and if
anyone has encountered any problems using the specified filter (I'm using
Solr 4.10.3).
The image included shows my usage of the filter. Thanks in advance!


 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Using-ASCIIFoldingFilterFactory-tp4343999.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr 5.2+ using SSL and non-SSL ports

2017-07-03 Thread sputul
I have SSL enabled in Solr 5, but ZooKeeper needs to be started with the
proper urlScheme. Does this imply SolrCloud cannot use SSL and non-SSL at the
same time, as ZooKeeper itself needs separate ports?

Thanks.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-5-2-using-SSL-and-non-SSL-ports-tp4312859p4343997.html
Sent from the Solr - User mailing list archive at Nabble.com.


Replication - Unable to download tlog

2017-07-03 Thread Rénald Koch
When I activate the replication of a shard via the web interface, the data
replicates fine onto the new shard, but once all the data has been copied,
it is erased and the synchronization starts over again, indefinitely.

When I look in the logs, I have this error:
2017-06-29 10:51:39.768 ERROR
(recoveryExecutor-3-thread-1-processing-n:X.X.X.X:8983_solr
x:collection_shard2_replica2 s:shard2 c:collection r:core_node4) [c:collection
s:shard2 r:core_node4 x:collection_shard2_replica2]
o.a.s.h.ReplicationHandler Index fetch failed
:org.apache.solr.common.SolrException: Unable to download
tlog.2131263.1571535118797897728 completely. Downloaded 0!=871
at
org.apache.solr.handler.IndexFetcher$FileFetcher.cleanup(IndexFetcher.java:1591)
at
org.apache.solr.handler.IndexFetcher$FileFetcher.fetch(IndexFetcher.java:1474)
at
org.apache.solr.handler.IndexFetcher$FileFetcher.fetchFile(IndexFetcher.java:1449)
at
org.apache.solr.handler.IndexFetcher.downloadTlogFiles(IndexFetcher.java:893)
at
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:494)
at
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:301)
at
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:400)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:219)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:471)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:284)
at
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

I tried to extend the tlog retention time (especially with the
commitReserveDuration option), but it does not work.
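
For anyone reproducing this, the replication state being described can be
pulled straight from the handler; a sketch (host and core names are
placeholders):

curl 'http://localhost:8983/solr/collection_shard2_replica2/replication?command=details'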


Solr crashing / slowing down the performance

2017-07-03 Thread Venkateswarlu Bommineni
Hi Team,

Background:

We have a Solr 6.2 instance with multiple cores (six cores) in it. We have
another system (a CMS) that pushes data to Solr.

Issue:

Whenever we do a full index from the other system (installed on a different
box), sometimes the Solr JVM crashes. Even when it doesn't crash, the query
performance is very slow, especially the AutoSuggestion query (using the
spellCheck component).

Please find the memory settings below:
[image: Inline image 1]

Please help me set optimal memory settings.


Thanks,
Venkat.


RE: Solr 6.4. Can't index MS Visio vsdx files

2017-07-03 Thread Allison, Timothy B.
Sorry. Yes, you'll have to update commons-compress to 1.14.
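
A sketch of swapping the jar in a standard install (paths and the shipped
version are assumptions; check your own contrib/extraction/lib):

cd /opt/solr-6.6.0/contrib/extraction/lib
rm commons-compress-1.11.jar   # or whatever version shipped with your Solr
curl -O https://repo1.maven.org/maven2/org/apache/commons/commons-compress/1.14/commons-compress-1.14.jar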

-Original Message-
From: Gytis Mikuciunas [mailto:gyt...@gmail.com] 
Sent: Monday, July 3, 2017 9:15 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 6.4. Can't index MS Visio vsdx files

hi,

So I'm back from my long vacation :)

I'm trying to bring up a fresh Solr 6.6 standalone instance on a Windows
2012R2 server.

Replaced:

poi-*3.15-beta1 ---> poi-*3.16
tika-*1.13 ---> tika-*1.15


I tried to index one txt file and got the error below (with the POI and Tika 
jars that come out of the box, it indexes this txt file without errors):


SimplePostTool: WARNING: Response:   
Error 500 Server Error

HTTP ERROR 500
Problem accessing /solr/v20170703xxx/update/extract. Reason:
Server Error
Caused by: java.lang.NoClassDefFoundError:
org/apache/commons/compress/archivers/ArchiveStreamProvider
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(Unknown Source)
at java.security.SecureClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.access$100(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.net.FactoryURLClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at
org.apache.tika.parser.pkg.ZipContainerDetector.detectArchiveFormat(ZipContainerDetector.java:112)
at
org.apache.tika.parser.pkg.ZipContainerDetector.detect(ZipContainerDetector.java:83)
at
org.apache.tika.detect.CompositeDetector.detect(CompositeDetector.java:77)
at
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:115)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:228)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
at
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at

Re: Automatically Restart Solr

2017-07-03 Thread Susheel Kumar
Got it, but unless you know what caused the OOM, you can run into this
again; a restart may not really help.

You should try to find out from the GC logs whether it was a sudden increase
that caused the OOM (some culprit query or heavy ingestion) or whether it
built up over a period of time (due to cache utilization, etc.).

Based on that, you may want to proceed further.
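
For reference, GC logging is normally on by default; a sketch of the
relevant solr.in.sh setting for a pre-Java-9 JVM (the flags shown are the
usual defaults, trim to taste):

GC_LOG_OPTS="-verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps"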

On Mon, Jul 3, 2017 at 7:56 AM, rojerick luna 
wrote:

> Thanks Furkan.
>
> Hi Susheel - our Solr had been running for a long time until it ran out of
> memory. We already increased the virtual memory, and so far so good. We
> just want an automated restart to refresh Solr, as another proactive
> measure.
>
> Best Regards,
> Jeck
>
> > On 3 Jul 2017, at 7:02 PM, Susheel Kumar  wrote:
> >
> > I am curious why you need to restart Solr every week. Our prod Solr
> > instance (6.0) has been running since Nov '16 with no restart.
> >
> > On Sun, Jul 2, 2017 at 12:55 PM, Furkan KAMACI 
> > wrote:
> >
> >> Hi Jeck,
> >>
> >> Here is the documentation about how you can run Solr as a service:
> >> https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html
> >>
> >> However, as far as I can see, you use Windows as your operating system.
> >> There is currently an open issue for creating scripts to run Solr as a
> >> Windows Service: https://issues.apache.org/jira/browse/SOLR-7105, but it
> >> is not yet completed.
> >>
> >> Could you check this:
> >> http://coding-art.blogspot.com.tr/2016/07/running-solr-61-as-windows-service.html
> >>
> >> Kind Regards,
> >> Furkan KAMACI
> >>
> >>
> >> On Sun, Jul 2, 2017 at 6:12 PM, rojerick luna  >
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> Anyone who successfully set this up? Thanks
> >>>
> >>> Best Regards,
> >>> Jeck
> >>>
>  On 20 Jun 2017, at 7:10 PM, rojerick luna 
> >>> wrote:
> 
>  Hi,
> 
>  I'm trying to automate Solr restart every week.
> 
> >>> I created a stop.bat and updated the start.bat, which I found in an
> >>> article online. Using stop.bat and start.bat works fine. However, when
> >>> I created a Task Scheduler (Windows Scheduler) task and set up the
> >>> frequency to stop and start (using the bat files), it didn't work; the
> >>> Solr app didn't restart.
> 
> >>> Please let me know if you have successfully tried it, and send me the
> >>> steps for how you've set up the Task Scheduler.
> 
>  Best Regards,
>  Jeck Luna
> >>>
> >>>
> >>
>
>


Re: Solr 6.4. Can't index MS Visio vsdx files

2017-07-03 Thread Gytis Mikuciunas
hi,

So I'm back from my long vacation :)

I'm trying to bring up a fresh Solr 6.6 standalone instance on a Windows
2012R2 server.

Replaced:

poi-*3.15-beta1 ---> poi-*3.16
tika-*1.13 ---> tika-*1.15


I tried to index one txt file and got the error below (with the POI and Tika
jars that come out of the box, it indexes this txt file without errors):


SimplePostTool: WARNING: Response: 


Error 500 Server Error

HTTP ERROR 500
Problem accessing /solr/v20170703xxx/update/extract. Reason:
Server Error
Caused by: java.lang.NoClassDefFoundError:
org/apache/commons/compress/archivers/ArchiveStreamProvider
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(Unknown Source)
at java.security.SecureClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.access$100(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.net.FactoryURLClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at
org.apache.tika.parser.pkg.ZipContainerDetector.detectArchiveFormat(ZipContainerDetector.java:112)
at
org.apache.tika.parser.pkg.ZipContainerDetector.detect(ZipContainerDetector.java:83)
at
org.apache.tika.detect.CompositeDetector.detect(CompositeDetector.java:77)
at
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:115)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:228)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
at
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException:

Re: Automatically Restart Solr

2017-07-03 Thread rojerick luna
Thanks Furkan.

Hi Susheel - our Solr had been running for a long time until it ran out of 
memory. We already increased the virtual memory, and so far so good. We just 
want an automated restart to refresh Solr, as another proactive measure.

Best Regards,
Jeck

> On 3 Jul 2017, at 7:02 PM, Susheel Kumar  wrote:
> 
> I am curious why you need to restart Solr every week. Our prod Solr
> instance (6.0) has been running since Nov '16 with no restart.
> 
> On Sun, Jul 2, 2017 at 12:55 PM, Furkan KAMACI 
> wrote:
> 
>> Hi Jeck,
>> 
>> Here is the documentation about how you can run Solr as a service:
>> https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html
>> 
>> However, as far as I can see, you use Windows as your operating system.
>> There is currently an open issue for creating scripts to run Solr as a
>> Windows Service: https://issues.apache.org/jira/browse/SOLR-7105, but it
>> is not yet completed.
>> 
>> Could you check this:
>> http://coding-art.blogspot.com.tr/2016/07/running-solr-61-as-windows-service.html
>> 
>> Kind Regards,
>> Furkan KAMACI
>> 
>> 
>> On Sun, Jul 2, 2017 at 6:12 PM, rojerick luna 
>> wrote:
>> 
>>> Hi,
>>> 
>>> Anyone who successfully set this up? Thanks
>>> 
>>> Best Regards,
>>> Jeck
>>> 
 On 20 Jun 2017, at 7:10 PM, rojerick luna 
>>> wrote:
 
 Hi,
 
 I'm trying to automate Solr restart every week.
 
>>> I created a stop.bat and updated the start.bat, which I found in an
>>> article online. Using stop.bat and start.bat works fine. However, when I
>>> created a Task Scheduler (Windows Scheduler) task and set up the
>>> frequency to stop and start (using the bat files), it didn't work; the
>>> Solr app didn't restart.
 
>>> Please let me know if you have successfully tried it, and send me the
>>> steps for how you've set up the Task Scheduler.
 
 Best Regards,
 Jeck Luna
>>> 
>>> 
>> 



Re: Same score for different length matches

2017-07-03 Thread alessandro.benedetti
In addition to what Chris has correctly suggested, I would like to focus on
this sentence:
"I am decently certain that at one point in time it worked in a way
that a higher match length would rank higher"

You mean a match in a longer field would rank higher than a match in a
shorter field? Is that what you want (because it is counterintuitive)?

Furthermore, I see that some stemming is applied at query time; is that
what you want?




-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Same-score-for-different-length-matches-tp4343660p4343917.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Automatically Restart Solr

2017-07-03 Thread Susheel Kumar
I am curious why you need to restart Solr every week. Our prod Solr
instance (6.0) has been running since Nov '16 with no restart.

On Sun, Jul 2, 2017 at 12:55 PM, Furkan KAMACI 
wrote:

> Hi Jeck,
>
> Here is the documentation about how you can run Solr as a service:
> https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html
>
> However, as far as I can see, you use Windows as your operating system.
> There is currently an open issue for creating scripts to run Solr as a
> Windows Service: https://issues.apache.org/jira/browse/SOLR-7105, but it is
> not yet completed.
>
> Could you check this:
> http://coding-art.blogspot.com.tr/2016/07/running-solr-61-as-windows-service.html
>
> Kind Regards,
> Furkan KAMACI
>
>
> On Sun, Jul 2, 2017 at 6:12 PM, rojerick luna 
> wrote:
>
> > Hi,
> >
> > Anyone who successfully set this up? Thanks
> >
> > Best Regards,
> > Jeck
> >
> > > On 20 Jun 2017, at 7:10 PM, rojerick luna 
> > wrote:
> > >
> > > Hi,
> > >
> > > I'm trying to automate Solr restart every week.
> > >
> > > I created a stop.bat and updated the start.bat, which I found in an
> > > article online. Using stop.bat and start.bat works fine. However, when I
> > > created a Task Scheduler (Windows Scheduler) task and set up the
> > > frequency to stop and start (using the bat files), it didn't work; the
> > > Solr app didn't restart.
> > >
> > > Please let me know if you have successfully tried it, and send me the
> > > steps for how you've set up the Task Scheduler.
> > >
> > > Best Regards,
> > > Jeck Luna
> >
> >
>


Re: Solr 6.5.1 crashing when too many queries with error or high memory usage are queried

2017-07-03 Thread Toke Eskildsen
On Sun, 2017-07-02 at 15:00 +0800, Zheng Lin Edwin Yeo wrote:
> I'm currently facing an issue whereby Solr crashes when I have
> issued too many queries with errors, or ones with high memory usage,
> like JSON facets or Streaming expressions.
> 
> What could be the issue here?

Solr does not have any auto-limiting of the number of concurrent
requests. You will have to build that yourself (quite hard) or impose a
hard limit in your request layer that is low enough to guarantee that
you don't run out of memory in Solr.

You could raise the amount of memory allocated for Solr, but even then
you might want to have a hard limit, just to avoid the occasional "cat
steps on F5 and the browser issues a gazillion requests"-scenario.
-- 
Toke Eskildsen, Royal Danish Library