Unable to set preferred leader

2020-06-23 Thread Karl Stoney
Hey,
We have a SolrCloud collection with 8 replicas, and one of those replicas has 
`property.preferredleader: true` set. However, when we perform a 
`REBALANCELEADERS` we get:

```
{
  "responseHeader": {
    "status": 0,
    "QTime": 62268
  },
  "Summary": {
    "Failure": "Not all active replicas with preferredLeader property are leaders"
  },
  "failures": {
    "shard1": {
      "status": "failed",
      "msg": "Could not change leder for slice shard1 to core_node9"
    }
  }
}
```

There is nothing in the solr logs on any of the nodes to indicate the reason 
for the failure.
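When scripting this check, the failure can be pulled out of the REBALANCELEADERS response mechanically. A minimal sketch (the helper and variable names below are ours, not Solr's or SolrJ's):

```python
def failed_shards(response: dict) -> dict:
    """Map shard name -> failure message from a REBALANCELEADERS response body."""
    failures = response.get("failures", {})
    return {shard: info.get("msg", "") for shard, info in failures.items()}

# The response body from the email above, as parsed JSON:
resp = {
    "responseHeader": {"status": 0, "QTime": 62268},
    "Summary": {"Failure": "Not all active replicas with preferredLeader property are leaders"},
    "failures": {
        "shard1": {"status": "failed", "msg": "Could not change leder for slice shard1 to core_node9"}
    },
}
print(failed_shards(resp))
```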

What I have noticed is that 4 of the nodes briefly go orange in the GUI (i.e. 
"down"), and for a moment 9 of them go yellow (i.e. "recovering"), before 
all becoming active again with the same (incorrect) leader.

We use the same model on 4 other collections to set the preferred leader to a 
particular replica and they all work fine.

Does anyone have any ideas?

Thanks
Karl
Unless expressly stated otherwise in this email, this e-mail is sent on behalf 
of Auto Trader Limited Registered Office: 1 Tony Wilson Place, Manchester, 
Lancashire, M15 4FN (Registered in England No. 03909628). Auto Trader Limited 
is part of the Auto Trader Group Plc group. This email and any files 
transmitted with it are confidential and may be legally privileged, and 
intended solely for the use of the individual or entity to whom they are 
addressed. If you have received this email in error please notify the sender. 
This email message has been swept for the presence of computer viruses.


Re: collection reload causes a big latency spike

2020-03-03 Thread Karl Stoney
Hey Erick.
Our CI process will do the following whenever it detects either a schema or 
solrconfig change:

  *   Upload the new configuration to zookeeper
  *   Link it to the existing collection
  *   Reload collection

This might be for something as simple as tweaking the caches.
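For context, the three CI steps above map roughly onto the Configsets and Collections APIs. A sketch of the calls only (host, collection, and configset names are hypothetical, and this is not our actual CI code):

```python
# Base URL of a hypothetical Solr node's admin API.
BASE = "http://solr.example:8983/solr/admin"

def ci_calls(collection: str, configset: str) -> list:
    """Return the (method, url) sequence for: upload config, link it, reload."""
    return [
        # 1. upload the new configuration (Configsets API UPLOAD)
        ("POST", f"{BASE}/configs?action=UPLOAD&name={configset}"),
        # 2. link it to the existing collection
        ("GET", f"{BASE}/collections?action=MODIFYCOLLECTION"
                f"&collection={collection}&collection.configName={configset}"),
        # 3. reload the collection so the new config takes effect
        ("GET", f"{BASE}/collections?action=RELOAD&name={collection}"),
    ]

for method, url in ci_calls("at-uk", "at-uk-v2"):
    print(method, url)
```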

I'd say on average, this happens a couple of times per week - however it always 
causes that spike.  As we're a 24/7 business I am trying to mitigate it as much 
as possible.  We've invested quite a bit of time in newSearcher queries but 
aren't really getting what we need from it.

Does reloading a collection throw away the current, warm searcher before the 
new searcher is ready? If so, with useColdSearcher=false, requests would be 
blocked until the collection warming is complete (rather than still being 
served from the old searcher).

Karl

From: Erick Erickson 
Sent: 03 March 2020 15:28
To: solr-user 
Subject: Re: collection reload causes a big latency spike

Reload throws most everything away; there's no provision for autowarming
from the old caches. Consider that the schema and solrconfig may have changed,
so there are lots of places things could go wrong.

I have to ask why you're reloading often enough to see this? This is a
heavyweight action, really intended to be used very rarely. If it's because
you're using schemaless mode I recommend you don't do that.

If you insist on frequently reloading, though, you can configure the
firstSearcher event with static warming queries; that event is intended exactly
to autowarm cold searchers.

But that just means the replica won't even start serving queries until
autowarming is complete, so if you're reloading often enough that your
users are noticing spikes, I don't think that will help.
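For reference, a static firstSearcher warming listener in solrconfig.xml looks roughly like this (the queries themselves are illustrative placeholders, not anyone's real config):

```xml
<!-- In solrconfig.xml, inside <query>: run these static queries to warm
     a brand-new (cold) searcher before it serves traffic. -->
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="sort">price asc</str>
    </lst>
    <lst>
      <str name="q">make:ford</str>
    </lst>
  </arr>
</listener>
```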

So my question returns: why are you reloading so often? Sounds like an XY
problem...

Best,
Erick

On Tue, Mar 3, 2020, 07:38 Karl Stoney 
wrote:

> Hi Everyone,
> When we use the solr collections API to reload a collection, we get a
> large latency spike in requests.  I'm surprised by this because when we do
> new soft commits, our warming means they're near enough undetectable.
>
> Could anyone confirm whether a collection reload does not use filterCache
> and queryResultCache autowarming, and instead gives you a cold
> "newSearcher"? As that's how it feels.
>
> And if that's the case, surely it should do?
>
> Thanks
> Karl
>


collection reload causes a big latency spike

2020-03-03 Thread Karl Stoney
Hi Everyone,
When we use the solr collections API to reload a collection, we get a large 
latency spike in requests.  I'm surprised by this because when we do new soft 
commits, our warming means they're near enough undetectable.

Could anyone confirm whether a collection reload does not use filterCache and 
queryResultCache autowarming, and instead gives you a cold "newSearcher"? As 
that's how it feels.

And if that's the case, surely it should do?

Thanks
Karl


Re: Async RELOADCOLLECTION never completes

2020-02-15 Thread Karl Stoney
I actually narrowed this down to changing the schema version from 1.5 to 1.6 
and then doing a RELOADCOLLECTION; it consistently hangs. Several of our nodes 
briefly go into a recovering state too.

From: Karl Stoney 
Sent: 13 February 2020 09:49
To: solr-user@lucene.apache.org 
Subject: Re: Async RELOADCOLLECTION never completes

When performing a rolling restart we see:

09:43:31.890 
[OverseerThreadFactory-42-thread-5-processing-n:solr-5.search-solr.prod.k8.atcloud.io:80_solr]
 ERROR org.apache.solr.cloud.OverseerTaskProcessor - 
:org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
= Session expired for /overseer/collection-map-failure

Which I find interesting, as everything (resource-wise) is very healthy.

On 13/02/2020, 09:34, "Karl Stoney"  
wrote:

Hi,
We’re periodically seeing an ASYNC task to RELOADCOLLECTION never complete, 
it’s just permanently “running”:

❯ curl -s 
http://solr.search-solr.prod.k8.atcloud.io/solr/admin/collections\?action\=REQUESTSTATUS\\=1581585716
 | jq .
{
  "responseHeader": {
    "status": 0,
    "QTime": 2
  },
  "status": {
    "state": "running",
    "msg": "found [1581585716] in running tasks"
  }
}

The collection appears to have been reloaded fine (from the gui, it’s using 
the right config), so we’re a bit baffled.

The only way I’ve found to clear this up is to rolling restart solr.

Solr 8.4.1

Any ideas?




Timeout occured while waiting response from server

2020-02-15 Thread Karl Stoney
Hi Folks,
Solr 8.4.1 - we've started doing an expungeDeletes daily at 5am for various 
reasons, but I've also started seeing these log messages appear on our master 
from some of the followers.

There's no disk I/O or CPU contention and everything's running fast, so I'm not 
sure:


  1.  Why we're getting this
  2.  If we need to be worried

Anyone got any ideas?
Karl

05:10:14.457 [updateExecutor-5-thread-3217-processing-n:solr-0.search-solr.prod.k8.atcloud.io:80_solr x:at-uk-002_shard1_replica_n6 c:at-uk-002 s:shard1 r:core_node9] ERROR org.apache.solr.update.SolrCmdDistributor - org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://solr-5.search-solr.prod.k8.atcloud.io/solr/at-uk-002_shard1_replica_n1/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-0.search-solr.prod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk-002_shard1_replica_n6%2F
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:407)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:753)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.request(ConcurrentUpdateHttp2SolrClient.java:369)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1290)
at 
org.apache.solr.update.SolrCmdDistributor.doRequest(SolrCmdDistributor.java:344)
at 
org.apache.solr.update.SolrCmdDistributor.lambda$submit$0(SolrCmdDistributor.java:333)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.TimeoutException
at 
org.eclipse.jetty.client.util.InputStreamResponseListener.get(InputStreamResponseListener.java:216)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:398)
... 13 more

05:10:14.457 [updateExecutor-5-thread-3216-processing-n:solr-0.search-solr.prod.k8.atcloud.io:80_solr x:at-uk-002_shard1_replica_n6 c:at-uk-002 s:shard1 r:core_node9] ERROR org.apache.solr.update.SolrCmdDistributor - org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://solr-2.search-solr.prod.k8.atcloud.io/solr/at-uk-002_shard1_replica_n10/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-0.search-solr.prod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk-002_shard1_replica_n6%2F
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:407)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:753)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.request(ConcurrentUpdateHttp2SolrClient.java:369)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1290)
at 
org.apache.solr.update.SolrCmdDistributor.doRequest(SolrCmdDistributor.java:344)
at 
org.apache.solr.update.SolrCmdDistributor.lambda$submit$0(SolrCmdDistributor.java:333)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.TimeoutException
at 
org.eclipse.jetty.client.util.InputStreamResponseListener.get(InputStreamResponseListener.java:216)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:398)
... 13 more

05:10:14.457 [updateExecutor-5-thread-3208-processing-n:solr-0.search-solr.prod.k8.atcloud.io:80_solr x:at-uk-002_shard1_replica_n6 c:at-uk-002 s:shard1 r:core_node9] ERROR org.apache.solr.update.SolrCmdDistributor - org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 

Re: Async RELOADCOLLECTION never completes

2020-02-13 Thread Karl Stoney
When performing a rolling restart we see:

09:43:31.890 
[OverseerThreadFactory-42-thread-5-processing-n:solr-5.search-solr.prod.k8.atcloud.io:80_solr]
 ERROR org.apache.solr.cloud.OverseerTaskProcessor - 
:org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
= Session expired for /overseer/collection-map-failure

Which I find interesting, as everything (resource-wise) is very healthy.

On 13/02/2020, 09:34, "Karl Stoney"  
wrote:

Hi,
We’re periodically seeing an ASYNC task to RELOADCOLLECTION never complete, 
it’s just permanently “running”:

❯ curl -s 
http://solr.search-solr.prod.k8.atcloud.io/solr/admin/collections\?action\=REQUESTSTATUS\\=1581585716
 | jq .
{
  "responseHeader": {
    "status": 0,
    "QTime": 2
  },
  "status": {
    "state": "running",
    "msg": "found [1581585716] in running tasks"
  }
}

The collection appears to have been reloaded fine (from the gui, it’s using 
the right config), so we’re a bit baffled.

The only way I’ve found to clear this up is to rolling restart solr.

Solr 8.4.1

Any ideas?




Async RELOADCOLLECTION never completes

2020-02-13 Thread Karl Stoney
Hi,
We’re periodically seeing an async RELOADCOLLECTION task that never completes; 
it’s just permanently “running”:

❯ curl -s 
http://solr.search-solr.prod.k8.atcloud.io/solr/admin/collections\?action\=REQUESTSTATUS\\=1581585716
 | jq .
{
  "responseHeader": {
    "status": 0,
    "QTime": 2
  },
  "status": {
    "state": "running",
    "msg": "found [1581585716] in running tasks"
  }
}
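When polling REQUESTSTATUS from a script, the task state can be extracted mechanically from the response body. A minimal sketch (the helper name is ours, not SolrJ's):

```python
def request_state(response: dict) -> str:
    """Return the async task state ('running', 'completed', 'failed',
    'notfound') from a REQUESTSTATUS response body."""
    return response.get("status", {}).get("state", "notfound")

# The stuck response from the email above, as parsed JSON:
resp = {
    "responseHeader": {"status": 0, "QTime": 2},
    "status": {"state": "running", "msg": "found [1581585716] in running tasks"},
}
print(request_state(resp))  # -> running
```

A watchdog would call this in a loop and alert if the state stays "running" past some deadline.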

The collection appears to have been reloaded fine (from the gui, it’s using the 
right config), so we’re a bit baffled.

The only way I’ve found to clear this up is to rolling restart solr.

Solr 8.4.1

Any ideas?


Would changing the schema version from 1.5 to 1.6 require a reindex

2020-02-13 Thread Karl Stoney
Hey,
I’m going to bump our schema version from 1.5 to 1.6 to get the implicit 
useDocValuesAsStored=true, would this require a reindex?
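For context, the version is an attribute on the schema's root element; since useDocValuesAsStored only changes query-time behaviour (returning stored-like values from docValues) rather than how data is indexed, the bump on its own should not change the index format — though that is worth verifying against the ref guide for your field types. Illustrative fragment (the schema name is a placeholder):

```xml
<!-- The schema version is an attribute on the root element; 1.6 makes
     useDocValuesAsStored default to true for fields with docValues. -->
<schema name="example" version="1.6">
  <!-- field and type definitions unchanged -->
</schema>
```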

Thanks
Karl


Re: REINDEXCOLLECTION fatal error in DaemonStream

2020-02-12 Thread Karl Stoney
Hmm, interestingly this happened when I set an `fq` 
(*,old_version:_version_,old_lmake:L_MAKE,old_lmodel:L_MODEL) which I pulled 
from our old DataImportHandler. Removing that, it worked fine.

From: Karl Stoney 
Sent: 12 February 2020 17:20
To: solr-user@lucene.apache.org 
Subject: REINDEXCOLLECTION fatal error in DaemonStream

Hey folks,
Trying out the REINDEXCOLLECTION but getting the following error:

Anyone seen it before?


17:14:09.610 
[DaemonStream-at-uk-002-88-thread-1-processing-n:solr-0.search-solr.dev.k8.atcloud.io:80_solr
 x:at-uk-001_shard1_replica_n1 c:at-uk-001 s:shard1 r:core_node2] ERROR 
org.apache.solr.client.solrj.io.stream.DaemonStream - Fatal Error in 
DaemonStream:at-uk-002
java.lang.NullPointerException: null
at 
org.apache.solr.client.solrj.io.stream.TopicStream.read(TopicStream.java:380) 
~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.read(PushBackStream.java:88)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.UpdateStream.read(UpdateStream.java:111) 
~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.CommitStream.read(CommitStream.java:116) 
~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.stream(DaemonStream.java:338)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.run(DaemonStream.java:319)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]


REINDEXCOLLECTION fatal error in DaemonStream

2020-02-12 Thread Karl Stoney
Hey folks,
Trying out the REINDEXCOLLECTION but getting the following error:

Anyone seen it before?


17:14:09.610 
[DaemonStream-at-uk-002-88-thread-1-processing-n:solr-0.search-solr.dev.k8.atcloud.io:80_solr
 x:at-uk-001_shard1_replica_n1 c:at-uk-001 s:shard1 r:core_node2] ERROR 
org.apache.solr.client.solrj.io.stream.DaemonStream - Fatal Error in 
DaemonStream:at-uk-002
java.lang.NullPointerException: null
at 
org.apache.solr.client.solrj.io.stream.TopicStream.read(TopicStream.java:380) 
~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.PushBackStream.read(PushBackStream.java:88)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.UpdateStream.read(UpdateStream.java:111) 
~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.CommitStream.read(CommitStream.java:116) 
~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.stream(DaemonStream.java:338)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
org.apache.solr.client.solrj.io.stream.DaemonStream$StreamRunner.run(DaemonStream.java:319)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
 ~[solr-solrj-8.4.2-SNAPSHOT.jar:8.4.2-SNAPSHOT 
7d3ac7c284b26ce62f41d3b8686f70c7d6bd758d - root - 2020-02-11 19:56:06]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]


Re: Storage/Volume type for Kubernetes Solr POD?

2020-02-11 Thread Karl Stoney
Yes, we scale with pd-ssd or local-ssd just fine.

From: Susheel Kumar 
Sent: 11 February 2020 17:15
To: solr-user@lucene.apache.org 
Subject: Re: Storage/Volume type for Kubernetes Solr POD?

Thanks, Karl, for sharing. With local SSDs you'd still be able to auto-scale.
Is that correct?

On Fri, Feb 7, 2020 at 5:22 AM Nicolas PARIS 
wrote:

> hi all
>
> what about cephfs or lustre distrubuted filesystem for such purpose ?
>
>
> Karl Stoney  writes:
>
> > we personally run solr on google cloud kubernetes engine and each node
> has a 512Gb persistent ssd (network attached) storage which gives roughly
> this performance (read/write):
> >
> > Sustained random IOPS limit 15,360.00 15,360.00
> > Sustained throughput limit (MB/s) 245.76  245.76
> >
> > and we get very good performance.
> >
> > ultimately though it's going to depend on your workload
> > 
> > From: Susheel Kumar 
> > Sent: 06 February 2020 13:43
> > To: solr-user@lucene.apache.org 
> > Subject: Storage/Volume type for Kubernetes Solr POD?
> >
> > Hello,
> >
> > What type of storage/volume is recommended to run Solr on a Kubernetes
> > POD? I know in the past Solr had issues with NFS storing its indexes and
> > it was not recommended.
> >
> >
> https://kubernetes.io/docs/concepts/storage/volumes/
> >
> > Thanks,
> > Susheel
>
>
> --
> nicolas paris
>


Re: Storage/Volume type for Kubernetes Solr POD?

2020-02-07 Thread Karl Stoney
We personally run Solr on Google Kubernetes Engine and each node has 512 GB of 
persistent SSD (network-attached) storage, which gives roughly this performance 
(read/write):

Sustained random IOPS limit: 15,360.00 / 15,360.00
Sustained throughput limit (MB/s): 245.76 / 245.76

and we get very good performance.

Ultimately, though, it's going to depend on your workload.

From: Susheel Kumar 
Sent: 06 February 2020 13:43
To: solr-user@lucene.apache.org 
Subject: Storage/Volume type for Kubernetes Solr POD?

Hello,

What type of storage/volume is recommended to run Solr on a Kubernetes POD?
I know in the past Solr had issues with NFS storing its indexes and it was not
recommended.

https://kubernetes.io/docs/concepts/storage/volumes/

Thanks,
Susheel


Re: DataImportHandler SolrEntityProcessor configuration for local copy

2020-02-06 Thread Karl Stoney
Spoke too soon; it looks like it leaks memory. After about 1.3m docs the old 
GC times went through the roof and Solr was almost unresponsive, so I had to 
abort. We're going to write our own implementation, running outside of Solr, 
to copy data from one core to another.
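Such an external copy can be sketched as a cursorMark-style deep-paging loop. In this sketch, `fetch` stands in for an HTTP query against the source core (with a stable sort including the uniqueKey, as cursorMark requires) and `index` for posting a page of documents to the target; the in-memory stand-ins below exist only so the loop is runnable:

```python
def copy_all(fetch, index, rows=100):
    """Deep-page through the source with a cursor, pushing each page to
    the target. `fetch(cursor, rows)` returns (docs, next_cursor); per
    cursorMark semantics, we are done when the cursor stops advancing."""
    cursor, copied = "*", 0
    while True:
        docs, next_cursor = fetch(cursor, rows)
        index(docs)
        copied += len(docs)
        if next_cursor == cursor:  # cursor unchanged: no more pages
            return copied
        cursor = next_cursor

# Tiny in-memory stand-ins for the two cores:
source = [{"id": str(i)} for i in range(5)]
target = []

def fake_fetch(cursor, rows):
    start = 0 if cursor == "*" else int(cursor)
    page = source[start:start + rows]
    nxt = str(start + len(page)) if page else cursor
    return page, nxt

copied = copy_all(fake_fetch, target.extend, rows=2)
print(copied)  # -> 5
```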

On 06/02/2020, 09:57, "Karl Stoney"  wrote:

I cannot believe how much of a difference that cursorMark and sort order 
made.
Previously it died about 800k docs, now we're at 1.2m without any slowdown.

Thank you so much

On 06/02/2020, 08:14, "Mikhail Khludnev"  wrote:

Hello, Karl.
Please check these:

https://lucene.apache.org/solr/guide/6_6/pagination-of-results.html#constraints-when-using-cursors


https://lucene.apache.org/solr/guide/6_6/uploading-structured-data-store-data-with-the-data-import-handler.html#solrentityprocessor
cursorMark="true"
Good luck.


On Wed, Feb 5, 2020 at 10:06 PM Karl Stoney
 wrote:

> Hey All,
> I'm trying to implement a simplistic reindex strategy to copy all of the
> data out of one collection, into another, on a single node (no distributed
> queries).
>
> It's approx 4 million documents, with an index size of 26gig.  Based on
> your experience, I'm wondering what people feel sensible values for the
> SolrEntityProcessor are (to give me a sensible starting point, to save me
> iterating over loads of them).
>
> This is where I'm at right now.  I know `rows` would increase memory
> pressure but speed up the copy, I can't really find anywhere online where
> people have benchmarked different values for rows and the default (50)
> seems quite low.
>
> <dataConfig>
>   <document>
>     <entity processor="SolrEntityProcessor"
>             query="*:*"
>             rows="100"
>             fl="*,old_version:_version_"
>             wt="javabin"
>             url="http://127.0.0.1/solr/at-uk"/>
>   </document>
> </dataConfig>
>
> Any suggestions are welcome.
> Thanks
>


--
Sincerely yours
Mikhail Khludnev




This e-mail is sent on behalf of Auto Trader Group Plc, Registered Office: 1 
Tony Wilson Place, Manchester, Lancashire, M15 4FN (Registered in England No. 
9439967). This email and any files transmitted with it are confidential and may 
be legally privileged, and intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this email in error 
please notify the sender. This email message has been swept for the presence of 
computer viruses.


Re: DataImportHandler SolrEntityProcessor configuration for local copy

2020-02-06 Thread Karl Stoney
I cannot believe how much of a difference that cursorMark and sort order made.
Previously it died at about 800k docs; now we're at 1.2m without any slowdown.

Thank you so much
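For anyone finding this thread later, the fix boils down to adding cursorMark (and the sort it requires) to the entity. A sketch, assuming the same core URL as in the quoted config below and that `id` is the schema's uniqueKey:

```xml
<dataConfig>
  <document>
    <!-- cursorMark paging requires a sort that includes the uniqueKey -->
    <entity processor="SolrEntityProcessor"
            query="*:*"
            sort="id asc"
            cursorMark="true"
            rows="100"
            fl="*,old_version:_version_"
            wt="javabin"
            url="http://127.0.0.1/solr/at-uk"/>
  </document>
</dataConfig>
```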

On 06/02/2020, 08:14, "Mikhail Khludnev"  wrote:

Hello, Karl.
Please check these:

https://lucene.apache.org/solr/guide/6_6/pagination-of-results.html#constraints-when-using-cursors


https://lucene.apache.org/solr/guide/6_6/uploading-structured-data-store-data-with-the-data-import-handler.html#solrentityprocessor
 cursorMark="true"
Good luck.


On Wed, Feb 5, 2020 at 10:06 PM Karl Stoney
 wrote:

> Hey All,
> I'm trying to implement a simplistic reindex strategy to copy all of the
> data out of one collection, into another, on a single node (no distributed
> queries).
>
> It's approx 4 million documents, with an index size of 26gig.  Based on
> your experience, I'm wondering what people feel sensible values for the
> SolrEntityProcessor are (to give me a sensible starting point, to save me
> iterating over loads of them).
>
> This is where I'm at right now.  I know `rows` would increase memory
> pressure but speed up the copy, I can't really find anywhere online where
> people have benchmarked different values for rows and the default (50)
> seems quite low.
>
> <dataConfig>
>   <document>
>     <entity processor="SolrEntityProcessor"
>             query="*:*"
>             rows="100"
>             fl="*,old_version:_version_"
>             wt="javabin"
>             url="http://127.0.0.1/solr/at-uk"/>
>   </document>
> </dataConfig>
>
> Any suggestions are welcome.
> Thanks
> This e-mail is sent on behalf of Auto Trader Group Plc, Registered Office:
> 1 Tony Wilson Place, Manchester, Lancashire, M15 4FN (Registered in 
England
> No. 9439967). This email and any files transmitted with it are 
confidential
> and may be legally privileged, and intended solely for the use of the
> individual or entity to whom they are addressed. If you have received this
> email in error please notify the sender. This email message has been swept
> for the presence of computer viruses.
>


--
Sincerely yours
Mikhail Khludnev


This e-mail is sent on behalf of Auto Trader Group Plc, Registered Office: 1 
Tony Wilson Place, Manchester, Lancashire, M15 4FN (Registered in England No. 
9439967). This email and any files transmitted with it are confidential and may 
be legally privileged, and intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this email in error 
please notify the sender. This email message has been swept for the presence of 
computer viruses.


DataImportHandler SolrEntityProcessor configuration for local copy

2020-02-05 Thread Karl Stoney
Hey All,
I'm trying to implement a simplistic reindex strategy to copy all of the data 
out of one collection, into another, on a single node (no distributed queries).

It's approx 4 million documents, with an index size of 26gig.  Based on your 
experience, I'm wondering what people feel sensible values for the 
SolrEntityProcessor are (to give me a sensible starting point, to save me 
iterating over loads of them).

This is where I'm at right now.  I know `rows` would increase memory pressure 
but speed up the copy; I can't really find anywhere online where people have 
benchmarked different values for rows, and the default (50) seems quite low.
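As a rough back-of-envelope (just counting round trips, not timing them), the rows value mostly trades request count against per-batch memory:

```shell
# Hypothetical sizing sketch: how many SolrEntityProcessor page requests
# a full copy of ~4M docs needs at different rows settings.
total_docs=4000000
for rows in 50 100 500 1000; do
  # ceiling division: last partial page still costs a request
  requests=$(( (total_docs + rows - 1) / rows ))
  echo "rows=$rows -> $requests requests"
done
```

So the default of 50 means 80,000 round trips for this collection, while rows=1000 cuts that to 4,000 at the cost of holding 20x as many documents in memory per batch.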



<dataConfig>
  <document>
    <entity processor="SolrEntityProcessor"
            query="*:*"
            rows="100"
            fl="*,old_version:_version_"
            wt="javabin"
            url="http://127.0.0.1/solr/at-uk"/>
  </document>
</dataConfig>
   



Any suggestions are welcome.
Thanks


Re: Solr Cloud on Docker?

2020-02-05 Thread Karl Stoney
Nothing much to add to the below apart from we also successfully run solr on 
kubernetes.  It took some implementation effort but we're now at a point where 
we can do `kubectl scale --replicas=x statefulset/solr` and increase capacity 
in minutes with solr's autoscaling taking care of the new shard creation.

Very happy.

From: Dominique Bejean 
Sent: 05 February 2020 17:53
To: Dwane Hall 
Cc: Scott Stults ; 
solr-user@lucene.apache.org 
Subject: Re: Solr Cloud on Docker?

Thank you Dwane. Great info :)


On Wed, 5 Feb 2020 at 11:49, Dwane Hall  wrote:

> Hey Dominique,
>
> From a memory management perspective I don't do any container resource
> limiting specifically in Docker (although as you mention you certainly
> can).  In our circumstances these hosts are used specifically for Solr so I
> planned and tested my capacity beforehand. We have ~768G of RAM on each of
> these 5 hosts so with 20x16G heaps we had ~320G of heap being used by Solr,
> some overhead for Docker and the other OS services leaving ~400G for the OS
> cache and whatever wants to grab it on each host. Not everyone will have
> servers this large which is why we really had to take advantage of multiple
> Solr instances/host and Docker became important for our cluster operation
> management.  Our disk's are not SSD's either and all instances write to the
> same raid 5 spinner which is bind mounted to the containers.  With this
> configuration we've been able to achieve consistent median response times
> of under 500ms across the largest collection but obviously query type
> varies this (no terms, leading wildcards etc.).  Our QPS is not huge
> ranging from 2-20/sec but if we need to scale further or speed up response
> times there's certainly wins that can be made at a disk level.  For our
> current circumstances we're very content with the deployment.
>
> I'm not sure if you've read Toke's blog on his experiences at the Royal
> Danish Library but I found it really useful when capacity planning and
> recommend reading it (
> https://sbdevel.wordpress.com/2016/11/30/70tb-16b-docs-4-machines-1-solrcloud/
> ).
>
> As always it's recommend to test for your own conditions and best of luck
> with your deployment!
>
> Dwane
>
> --
> *From:* Scott Stults 
> *Sent:* Thursday, 30 January 2020 1:45 AM
> *To:* solr-user@lucene.apache.org 
> *Subject:* Re: Solr Cloud on Docker?
>
> One of our clients has been running a big Solr Cloud (100-ish nodes, TB
> index, billions of docs) in kubernetes for over a year and it's been
> wonderful. I think during that time the biggest scares we got were when we
> ran out of disk space. Performance and reliability has been solid
> otherwise. Like Dwane alluded to, a lot of operations pitfalls can be
> avoided if you do your Docker orchestration through kubernetes.
>
>
> k/r,
> Scott
>
> On Tue, Jan 28, 2020 at 3:34 AM Dominique Bejean <
> dominique.bej...@eolya.fr>
> wrote:
>
> > Hi  Dwane,
> >
> > Thank you for sharing this great solr/docker user story.
> >
> > According to your Solr/JVM memory requirements (Heap size + MetaSpace +
> > OffHeap size) are you specifying specific settings in docker-compose
> files
> > (mem_limit, mem_reservation, mem_swappiness, ...) ?
> > I suppose you are limiting total memory used by all dockerised Solr in
> > order to keep free memory on host for MMAPDirectory ?
> >
> > In short can you explain the memory management ?
> >
> > Regards
> >
> > Dominique
> >
> >
> >
> >
> > On Mon, 23 Dec 2019 at 00:17, Dwane Hall  wrote:
> >
> > > Hey Walter,
> > >
> > > I recently migrated our Solr cluster to Docker and am very pleased I
> did
> > > so. We run relatively large servers and run multiple Solr instances per
> > > physical host and having managed Solr upgrades on bare metal installs
> > since
> > > Solr 5, containerisation has been a blessing (currently Solr 7.7.2). In
> > our
> > > case we run 20 Solr nodes per host over 5 hosts totalling 100 Solr
> > > instances. Here I host 3 collections of varying size. The first
> contains
> > > 60m docs (8 shards), the second 360m (12 shards) , and the third 1.3b
> (30
> > > shards) all with 2 NRT replicas. The docs are primarily database
> sourced
> > > but are not tiny by any means.
> > >
> > > Here are some of my comments from our migration journey:
> > > - Running Solr on Docker should be no different to bare metal. You
> still
> > > need to test for your environment and conditions and follow the guides
> > and
> > > best practices outlined in the excellent Lucidworks blog post
> > >
> >
> 

ID is a required field in SolrSchema . But not found in DataConfig

2020-02-04 Thread Karl Stoney
Hey all,
I'm trying to use the DIH to copy from one collection to another; it appears to 
work (data gets copied), however I've noticed this in the logs:

17:39:58.167 [qtp1472216456-87] INFO  
org.apache.solr.handler.dataimport.config.DIHConfiguration - ID is a required 
field in SolrSchema . But not found in DataConfig

I can't find the appropriate configuration to get rid of it.  Do I need to care?

My config looks like this:



<dataConfig>
  <document>
    <entity processor="SolrEntityProcessor"
            query="*:*"
            fl="*,old_version:_version_"
            wt="javabin"
            url="http://127.0.0.1/solr/at-uk"/>
  </document>
</dataConfig>
   



Cheers
Karl
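For what it's worth, the log line appears to be informational only: DIHConfiguration seems to log it when the entity declares no explicit field mapping for the schema's uniqueKey, even though the wildcard fl still copies the id through. A hedged sketch that should silence it — assuming `id` is your uniqueKey and the same core URL — is to declare the mapping explicitly:

```xml
<dataConfig>
  <document>
    <entity processor="SolrEntityProcessor"
            query="*:*"
            wt="javabin"
            url="http://127.0.0.1/solr/at-uk">
      <!-- explicit uniqueKey mapping; the only addition to the config above -->
      <field column="id" name="id"/>
    </entity>
  </document>
</dataConfig>
```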


Re: NRT Real time Get with documentCache

2020-02-03 Thread Karl Stoney
Great stuff thank you Erick

On 04/02/2020, 00:17, "Erick Erickson"  wrote:

The documentCache shouldn’t matter at all. RTG should return the latest doc 
by maintaining a pointer into the tlogs and returning that version.

> On Feb 3, 2020, at 6:43 PM, Karl Stoney 
 wrote:
>
> Hi,
> Could anyone let me know if a real time get would return a cached, up to 
date version of a document if we enabled documentCache?
>
> Thanks
> Karl
> This e-mail is sent on behalf of Auto Trader Group Plc, Registered 
Office: 1 Tony Wilson Place, Manchester, Lancashire, M15 4FN (Registered in 
England No. 9439967). This email and any files transmitted with it are 
confidential and may be legally privileged, and intended solely for the use of 
the individual or entity to whom they are addressed. If you have received this 
email in error please notify the sender. This email message has been swept for 
the presence of computer viruses.





NRT Real time Get with documentCache

2020-02-03 Thread Karl Stoney
Hi,
Could anyone let me know if a real time get would return a cached, up to date 
version of a document if we enabled documentCache?

Thanks
Karl


Connection spike when slight solr latency spike

2020-02-03 Thread Karl Stoney
Hey all,
When our searcher refreshes on a soft-commit, we get a slight latency spike 
(p99th response times can jump up to about 200ms from 100ms), however what we 
see in the upstream clients using org.apache.solr.client.solrj SolrClient is a 
big spike in connections outbound (70-80 per client, from usually around 25/26) 
and a much higher response time (orders of magnitude).

Naturally the increase in connections could be a symptom of the slight increase 
in latency (to maintain throughput more connections are needed), but it feels 
like we're hitting some sort of limit causing some requests to stall/get 
blocked.

Has anyone seen any behaviour like this before?

SolrClient 8.4.1
Solr 7.7

Thanks in advance
Karl


Re: G1GC Pauses (Young Gen)

2020-02-02 Thread Karl Stoney
So interesting fact: setting -XX:MaxGCPauseMillis causes G1GC to dynamically 
adjust the size of your young generation, and setting it too low makes the young 
generation shrink as small as possible during the memory allocations that happen 
around soft commits.

Setting -XX:MaxGCPauseMillis much higher has actually improved my gc experience.

I'm still getting pretty large pauses during the soft commits though seemingly 
because of the amount of memory being allocated.  I'm not sure there's much I 
can do about it other than more memory, however 31gb heap seems pretty 
reasonable given the size of my index?
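For anyone tuning along at home, the change described above amounts to something like this. A sketch only, not a recommendation — the 250ms target and the young-gen percentages are trial values of my own, not benchmarked numbers; raising the pause goal and pinning a young-gen floor stops G1 collapsing eden during the allocation burst around a soft commit:

```shell
# Hedged sketch: raise the pause goal and give G1 an explicit young-gen
# floor so it stops shrinking eden to nothing under allocation spikes.
# G1NewSizePercent is experimental, hence the unlock flag.
GC_TUNE="-XX:+UseG1GC \
-XX:+UnlockExperimentalVMOptions \
-XX:MaxGCPauseMillis=250 \
-XX:G1NewSizePercent=20 \
-XX:G1MaxNewSizePercent=30"
export GC_TUNE
echo "$GC_TUNE"
```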

From: Karl Stoney 
Sent: 01 February 2020 16:13
To: solr-user@lucene.apache.org 
Subject: G1GC Pauses (Young Gen)

Hey all, me again.

I'm still investigating the pauses that I get when a soft commit happens.  I'm 
now convinced they're coming from G1GC pauses that happen when the soft commit 
happens and wondering if anyone can see what's up.  Caveat: I'm no JVM expert.

I've uploaded a small time window to gceasy and the latency spikes we see 
correlate to young gen completely emptying (which is I presume as a result of 
the new searcher loading?)

https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjAvMDIvMS8tLWdjLXNvbHItMy0yMDIwXzAyXzAxLTE1XzA2LmxvZy0tMTUtNTItNDA%3D&channel=WEB

Here's one of the longer pauses.  The only thing that stands out to me is that 
it's getting rid of a lot more Eden region than the other pauses

[1294.080s][info ][gc,start  ] GC(369) Pause Young (Normal) (G1 Evacuation 
Pause)
[1294.080s][info ][gc,task   ] GC(369) Using 8 workers of 8 for evacuation
[1294.080s][debug][gc,age] GC(369) Desired survivor size 785173704 
bytes, new threshold 1 (max threshold 1)
[1294.194s][trace][gc,age] GC(369) Age table with threshold 1 (max 
threshold 1)
[1294.194s][trace][gc,age] GC(369) - age   1:  128787840 bytes,  
128787840 total
[1294.194s][info ][gc,mmu] GC(369) MMU target violated: 51.0ms 
(50.0ms/51.0ms)
[1294.194s][info ][gc,phases ] GC(369)   Pre Evacuate Collection Set: 0.2ms
[1294.194s][info ][gc,phases ] GC(369)   Evacuate Collection Set: 110.7ms
[1294.194s][info ][gc,phases ] GC(369)   Post Evacuate Collection Set: 3.4ms
[1294.194s][info ][gc,phases ] GC(369)   Other: 0.5ms
[1294.194s][info ][gc,heap   ] GC(369) Eden regions: 397->0(217)
[1294.194s][info ][gc,heap   ] GC(369) Survivor regions: 19->8(52)
[1294.194s][info ][gc,heap   ] GC(369) Old regions: 1340->1359
[1294.194s][info ][gc,heap   ] GC(369) Humongous regions: 2->2
[1294.194s][info ][gc,metaspace  ] GC(369) Metaspace: 56772K->56772K(1101824K)
[1294.194s][info ][gc] GC(369) Pause Young (Normal) (G1 Evacuation 
Pause) 28120M->21883M(31744M) 114.912ms
[1294.194s][info ][gc,cpu] GC(369) User=0.67s Sys=0.23s Real=0.11s
[1294.195s][info ][gc,stringdedup] Concurrent String Deduplication (1294.195s)
[1294.196s][info ][gc,stringdedup] Concurrent String Deduplication 
106.2K->15696.0B(93032.0B) avg 84.1% (1294.195s, 1294.196s) 1.397ms

We're using the following settings:

The machine has 64gb RAM, 31gb heap.

The core we're constantly querying and updating is 40GB.  It's got a soft auto 
commit every 10 minutes.

  export GC_TUNE="-XshowSettings:vm \
-XX:+UnlockExperimentalVMOptions \
-XX:+ExitOnOutOfMemoryError \
-XX:+UseG1GC \
-XX:+PerfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1MaxNewSizePercent=30 \
-XX:G1NewSizePercent=6 \
-XX:G1HeapRegionSize=16M \
-XX:G1HeapWastePercent=10 \
-XX:G1MixedGCCountTarget=16 \
-XX:InitiatingHeapOccupancyPercent=70 \
-XX:MaxGCPauseMillis=50 \
-XX:-ResizePLAB \
-XX:MaxTenuringThreshold=1 \
-XX:ParallelGCThreads=8 \
-XX:ConcGCThreads=2 \
-XX:TargetSurvivorRatio=90 \
-XX:+UseStringDeduplication"

Any advice would be greatly appreciated!

G1GC Pauses (Young Gen)

2020-02-01 Thread Karl Stoney
Hey all, me again.

I'm still investigating the pauses that I get when a soft commit happens.  I'm 
now convinced they're coming from G1GC pauses that happen when the soft commit 
happens and wondering if anyone can see what's up.  Caveat: I'm no JVM expert.

I've uploaded a small time window to gceasy and the latency spikes we see 
correlate to young gen completely emptying (which is I presume as a result of 
the new searcher loading?)

https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjAvMDIvMS8tLWdjLXNvbHItMy0yMDIwXzAyXzAxLTE1XzA2LmxvZy0tMTUtNTItNDA%3D&channel=WEB

Here's one of the longer pauses.  The only thing that stands out to me is that 
it's getting rid of a lot more Eden region than the other pauses

[1294.080s][info ][gc,start  ] GC(369) Pause Young (Normal) (G1 Evacuation 
Pause)
[1294.080s][info ][gc,task   ] GC(369) Using 8 workers of 8 for evacuation
[1294.080s][debug][gc,age] GC(369) Desired survivor size 785173704 
bytes, new threshold 1 (max threshold 1)
[1294.194s][trace][gc,age] GC(369) Age table with threshold 1 (max 
threshold 1)
[1294.194s][trace][gc,age] GC(369) - age   1:  128787840 bytes,  
128787840 total
[1294.194s][info ][gc,mmu] GC(369) MMU target violated: 51.0ms 
(50.0ms/51.0ms)
[1294.194s][info ][gc,phases ] GC(369)   Pre Evacuate Collection Set: 0.2ms
[1294.194s][info ][gc,phases ] GC(369)   Evacuate Collection Set: 110.7ms
[1294.194s][info ][gc,phases ] GC(369)   Post Evacuate Collection Set: 3.4ms
[1294.194s][info ][gc,phases ] GC(369)   Other: 0.5ms
[1294.194s][info ][gc,heap   ] GC(369) Eden regions: 397->0(217)
[1294.194s][info ][gc,heap   ] GC(369) Survivor regions: 19->8(52)
[1294.194s][info ][gc,heap   ] GC(369) Old regions: 1340->1359
[1294.194s][info ][gc,heap   ] GC(369) Humongous regions: 2->2
[1294.194s][info ][gc,metaspace  ] GC(369) Metaspace: 56772K->56772K(1101824K)
[1294.194s][info ][gc] GC(369) Pause Young (Normal) (G1 Evacuation 
Pause) 28120M->21883M(31744M) 114.912ms
[1294.194s][info ][gc,cpu] GC(369) User=0.67s Sys=0.23s Real=0.11s
[1294.195s][info ][gc,stringdedup] Concurrent String Deduplication (1294.195s)
[1294.196s][info ][gc,stringdedup] Concurrent String Deduplication 
106.2K->15696.0B(93032.0B) avg 84.1% (1294.195s, 1294.196s) 1.397ms

We're using the following settings:

The machine has 64gb RAM, 31gb heap.

The core we're constantly querying and updating is 40GB.  It's got a soft auto 
commit every 10 minutes.

  export GC_TUNE="-XshowSettings:vm \
-XX:+UnlockExperimentalVMOptions \
-XX:+ExitOnOutOfMemoryError \
-XX:+UseG1GC \
-XX:+PerfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1MaxNewSizePercent=30 \
-XX:G1NewSizePercent=6 \
-XX:G1HeapRegionSize=16M \
-XX:G1HeapWastePercent=10 \
-XX:G1MixedGCCountTarget=16 \
-XX:InitiatingHeapOccupancyPercent=70 \
-XX:MaxGCPauseMillis=50 \
-XX:-ResizePLAB \
-XX:MaxTenuringThreshold=1 \
-XX:ParallelGCThreads=8 \
-XX:ConcGCThreads=2 \
-XX:TargetSurvivorRatio=90 \
-XX:+UseStringDeduplication"

Any advice would be greatly appreciated!


Re: Shards.preference to current leader

2020-01-31 Thread Karl Stoney
Once again Erick much appreciate for the reply!
I didn't realise updates are forwarded to all NRT replicas for real-time gets; I 
was under the impression it was eventual.  That totally answers my question 
and relieves my concerns!

Cheers
Karl


On 31/01/2020, 00:38, "Erick Erickson"  wrote:

I’m not sure it’s worth the effort. Or deterministic for that matter.

First, when you say “get” are you talking about the get request handler 
(real time get)? Or a search? Let’s take the two cases separately (and I’m 
assuming NRT replicas).

The get request handler: This pulls the most current record whether it’s 
been committed or not. So consider an update request. Before it acks back to 
the client, the raw document has been forwarded to _all_ replicas and is 
available for real time get from any of them. There might be some tiny 
advantage in getting it from the leader, but I suspect it’s not really 
measurable.

Searching. There’s no guarantee at all that the leader will have the doc 
available for searching first due to clock skew for autocommit (and here I’m 
assuming you are letting autocommit or commitwithin handle commits, you 
should). So say the leader (L) and follower (F) have their autocommit timers 
start when they get their first update. At the start, L will start at time T 
and F's timer will start a bit later due to propagation delays. So the wall 
clock time for commits well start out different and can continue to skew. If 
they get far enough apart, an incoming doc can hit the leader just _after_ the 
last commit but hit the follower just _before_ the next commit. So the doc on 
the follower can actually return the doc from a search _sooner_ than the leader.

None of that is true about TLOG and PULL replicas however.

This is one of those things I'd recommend you prove users could actually 
notice before spending any time trying to implement; it strikes me as a red 
herring.

Best,
Erick

> On Jan 30, 2020, at 6:16 PM, Karl Stoney 
 wrote:
>
> Hey all,
> Is it possible to perform a get request which favours the current leader, 
therefore guaranteeing the most up to date record?
>
> I was looking at shards.preference but couldn’t see a way to prefer the 
leader.
>
> Get Outlook for iOS<https://aka.ms/o0ukef>
> This e-mail is sent on behalf of Auto Trader Group Plc, Registered 
Office: 1 Tony Wilson Place, Manchester, Lancashire, M15 4FN (Registered in 
England No. 9439967). This email and any files transmitted with it are 
confidential and may be legally privileged, and intended solely for the use of 
the individual or entity to whom they are addressed. If you have received this 
email in error please notify the sender. This email message has been swept for 
the presence of computer viruses.





Shards.preference to current leader

2020-01-30 Thread Karl Stoney
Hey all,
Is it possible to perform a get request which favours the current leader, 
therefore guaranteeing the most up to date record?

I was looking at shards.preference but couldn’t see a way to prefer the leader.

Get Outlook for iOS


Re: Replica type affinity

2020-01-30 Thread Karl Stoney
Hey,
Thanks for the reply but I'm trying to have something fully automated and 
dynamic.  For context I run solr on kubernetes, and at the moment it works 
beautifully with autoscaling (I can scale up the kubernetes deployment and Solr 
adds replicas and removes them).

I'm trying to add a new type of node though, backed by very fast but ephemeral 
disks and the idea was to have only PULL replicas running on those nodes 
automatically and NRT on the persistent disk instances.

Might be a pipe dream but I'm striving for no manual configuration.
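One avenue that might keep this fully automated (an untested sketch, not something I've run): the autoscaling policy language can match nodes on arbitrary system properties, so starting each node class with its own -D flag (e.g. -Dnode_type=ephemeral vs -Dnode_type=persistent — the property name here is my own invention) and pinning replica types to it could approximate rack affinity:

```json
{
  "set-cluster-policy": [
    {"replica": "#ALL", "type": "PULL", "sysprop.node_type": "ephemeral"},
    {"replica": "#ALL", "type": "NRT",  "sysprop.node_type": "persistent"}
  ]
}
```

POSTed to the autoscaling API, this would ask Solr to place every PULL replica on an "ephemeral" node and every NRT replica on a "persistent" one, so scaling either statefulset only ever grows its own replica type.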

From: Edward Ribeiro 
Sent: 30 January 2020 16:56
To: solr-user@lucene.apache.org 
Subject: Re: Replica type affinity

Hi Karl,

During collection creation you can specify the `createNodeSet` parameter as
specified by the Solr Reference Guide snippet below:

"createNodeSet
Allows defining the nodes to spread the new collection across. The format
is a comma-separated list of node_names, such as
localhost:8983_solr,localhost:8984_solr,localhost:8985_solr.
If not provided, the CREATE operation will create shard-replicas spread
across all live Solr nodes.
Alternatively, use the special value of EMPTY to initially create no
shard-replica within the new collection and then later use the ADDREPLICA
operation to add shard-replicas when and where required."


There's also Collections API that you can use the node parameter of
ADDREPLICA to specify the node that replica shard should be created on.
See:
https://lucene.apache.org/solr/guide/6_6/collections-api.html#CollectionsAPI-Input.9
Other commands that can be useful are REPLACENODE and MOVEREPLICA.

Edward


On Thu, Jan 30, 2020 at 1:00 PM Karl Stoney
 wrote:

> Hey everyone,
> Does anyone know of a way to have solr replicas assigned to specific nodes
> by some sort of identifying value (in solrcloud).
>
> In summary I'm trying to have some read-only replicas only ever be
> assigned to nodes named “solr-ephemeral-x” and my nrt and masters assigned
> to “solr-index”.
>
> Kind of like rack affinity in elasticsearch!
>
> Get Outlook for iOS<https://aka.ms/o0ukef>
> This e-mail is sent on behalf of Auto Trader Group Plc, Registered Office:
> 1 Tony Wilson Place, Manchester, Lancashire, M15 4FN (Registered in England
> No. 9439967). This email and any files transmitted with it are confidential
> and may be legally privileged, and intended solely for the use of the
> individual or entity to whom they are addressed. If you have received this
> email in error please notify the sender. This email message has been swept
> for the presence of computer viruses.
>


Re: Performance Issue since Solr 7.7 with wt=javabin

2020-01-30 Thread Karl Stoney
Sorry, to be specific: we already build 7.7 from source, but I don't have 
confidence in back-porting the fix myself, so it would be awesome if someone 
could help out other 7.7 users who aren't able to upgrade to 8.4 yet with this 
important fix :(

Get Outlook for iOS<https://aka.ms/o0ukef>

From: Karl Stoney 
Sent: Thursday, January 30, 2020 3:56:31 PM
To: solr-user@lucene.apache.org 
Subject: Re: Performance Issue since Solr 7.7 with wt=javabin

I don’t have confidence in my ability to do that, I was hoping someone could 
help out as moving to 8.4 is too much of a jump for me right now!

Would really appreciate it..

Get Outlook for iOS<https://aka.ms/o0ukef>

From: Jan Høydahl 
Sent: Thursday, January 30, 2020 2:23:40 PM
To: solr-user 
Subject: Re: Performance Issue since Solr 7.7 with wt=javabin

No further releases are planned for 7.x, so your best bet is to patch 
branch_7_7 yourself and build a custom Solr version.

Jan

> On 29 Jan 2020, at 20:54, Karl Stoney 
> wrote:
>
> Could anyone produce a patch for 7.7 please?
> 
> From: Florent Sithi 
> Sent: 29 January 2020 14:34
> To: solr-user@lucene.apache.org 
> Subject: Re: Performance Issue since Solr 7.7 with wt=javabin
>
> yes thanks so much, fixed in 8.4.0
>
>
>
> --
> Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>



Replica type affinity

2020-01-30 Thread Karl Stoney
Hey everyone,
Does anyone know of a way to have Solr replicas assigned to specific nodes by 
some sort of identifying value (in SolrCloud)?

In summary I’m trying to have some read-only replicas only ever be assigned to 
nodes named “solr-ephemeral-x” and my NRT and masters assigned to “solr-index”.

Kind of like rack affinity in elasticsearch!

Get Outlook for iOS
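For what it’s worth, the Solr 7.x autoscaling policy framework can express this kind of node affinity by matching replica type against a system property set on each node. A sketch follows; the `nodetype` property, its values, and the policy name are my own assumptions (each node would be started with a matching `-Dnodetype=...`), not something from this thread:

```json
{
  "set-policy": {
    "replica-affinity": [
      {"replica": "#ALL", "type": "PULL", "sysprop.nodetype": "ephemeral"},
      {"replica": "#ALL", "type": "NRT",  "sysprop.nodetype": "index"}
    ]
  }
}
```

POSTed to `/api/cluster/autoscaling`, and then referenced with `policy=replica-affinity` when creating the collection, this should keep PULL replicas on the ephemeral nodes and NRT replicas on the indexing nodes.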


Re: Performance Issue since Solr 7.7 with wt=javabin

2020-01-30 Thread Karl Stoney
I don’t have confidence in my ability to do that, I was hoping someone could 
help out as moving to 8.4 is too much of a jump for me right now!

Would really appreciate it..

Get Outlook for iOS<https://aka.ms/o0ukef>

From: Jan Høydahl 
Sent: Thursday, January 30, 2020 2:23:40 PM
To: solr-user 
Subject: Re: Performance Issue since Solr 7.7 with wt=javabin

No further releases are planned for 7.x, so your best bet is to patch 
branch_7_7 yourself and build a custom Solr version.

Jan

> 29. jan. 2020 kl. 20:54 skrev Karl Stoney 
> :
>
> Could anyone produce a patch for 7.7 please?
> 
> From: Florent Sithi 
> Sent: 29 January 2020 14:34
> To: solr-user@lucene.apache.org 
> Subject: Re: Performance Issue since Solr 7.7 with wt=javabin
>
> yes thanks so much, fixed in 8.4.0
>
>
>
> --
> Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>



Re: Solr Searcher 100% Latency Spike

2020-01-30 Thread Karl Stoney
Hey Erick,
Firstly - thank you so much for your detailed response - it is really 
appreciated!
Unfortunately some of the context of my original message was lost because 
the screenshots weren't included.
The additional latency spike does absolutely result in a poor user experience 
for us, some of our legacy applications hit solr quite a few times in order to 
render the client experience so the compound effect can take a search result 
render from 500ms to 3-4 seconds for a chunk of our users every 10 minutes.

I know I'll never get this down to 0, I'm just striving to make what changes 
are feasible without going down too much of a rabbit hole.  Please note I'm 
relatively new to Solr and have inherited a legacy stack.

The memory footprint is lower because I also reduced the size, not just the 
warming value.  The warmup time is now sub 1 second, which I'm good with.

I am working through the static warming queries today with one of the teams, so 
hopefully that will also have an impact.

I will look at the docValues as well.

Thanks again
Karl


On 30/01/2020, 00:24, "Erick Erickson"  wrote:

Autowarming is significantly misunderstood. One of its purposes in “the 
bad old days” was to rebuild very expensive on-heap structures for 
searching, sorting, grouping, and function queries.

These are exactly what docValues are designed to make much, much faster.

If you are still using spinning disks, the other benefit of warming queries 
is to read the index off disk and into MMapDirectory space. SSDs make this much 
faster too.


I often see two common mistakes:
1> no autowarming
2> excessive autowarming

I usually recommend people start with, say autowarm counts in the 10-20 as 
a start.

One implication of what you’ve said so far is that the additional 9 seconds 
your old autowarming took didn’t get you any benefit either, so putting it back 
isn’t indicated. I’m not quite clear why you say your memory footprint is 
lower, it’s unrelated to autowarming unless you also decreased your size 
parameter. If you’re saying that your reduced cache size hasn’t changed your 
95th percentile, I’d keep reducing it until it _did_ have a measurable effect.

The hit ratio is only loosely related to autowarming. So focusing on 
autowarming as a way to improve the hit ratio is probably the wrong focus.

So the first thing I’d do is make very, very sure that all the fields I 
used for grouping/sorting/faceting/function operations are docValues. Second, a 
static warming query that insured this rather relying on autowarming of the 
queryResultCache to happen to exercise those functions would be another step. 
NOTE: you don’t have to do all those operations on every field, just sorting on 
each field would suffice. NOTE: as of Solr 7.6, you can add “uninvertible=true” 
to your field types to insure that you have docValues set, see: SOLR-12962
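As a sketch of what Erick describes, the schema fragment below enables docValues on fields used for sorting/faceting (field and type names here are illustrative, not from the thread):

```xml
<!-- docValues="true" on every field used for sorting/faceting/grouping/functions -->
<fieldType name="string_dv" class="solr.StrField" docValues="true"/>
<field name="make"  type="string_dv" indexed="true" stored="true"/>
<field name="price" type="plong"     indexed="true" stored="true" docValues="true"/>
```

The related `uninvertible` attribute from SOLR-12962 (Solr 7.6+) controls whether Solr may fall back to building the equivalent structure on-heap when docValues are absent; setting it to `false` turns any such fallback into a hard error, which makes missing docValues easy to spot.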

And then I’d ask how much effort is smoothing out that kind of spike worth? 
You certainly see it with monitoring tools, but do users notice at all? If not, 
I wouldn’t spend all that much effort pursuing it…

Best,
Erick


> On Jan 29, 2020, at 4:48 PM, Karl Stoney 
 wrote:
>
> So interestingly tweaking my filter cache i've got the warming time down 
to 1s (from 10!) and also reduced my memory footprint due to the smaller cache 
size.
>
> However, I still get these latency spikes (these changes have made no 
difference to them).
>
> So the theory about them being due to the warming being too intensive is 
wrong.
>
> I know the images didn't load btw so when I say spike I mean p95th 
response time going from 50ms to 100-120ms momentarily.
> 
> From: Walter Underwood 
> Sent: 29 January 2020 21:30
> To: solr-user@lucene.apache.org 
> Subject: Re: Solr Searcher 100% Latency Spike
>
> Looking at the log, that takes one or two seconds after a complete batch 
reload (master/slave). So that is loading a cold index, all new files. This is 
not a big index, about a half million book titles.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>> On Jan 29, 2020, at 1:21 PM, Karl Stoney 
 wrote:
>>
>> Out of curiosity, could you define "fast"?
>> I'm wondering what sort of figures people target their searcher warm 
time at
>> 
>> From: Walter Underwood 
>> Sent: 29 January 2020 21:13
>> To: solr-user@l

Re: Solr Searcher 100% Latency Spike

2020-01-29 Thread Karl Stoney
So interestingly, tweaking my filter cache I've got the warming time down to 1s 
(from 10!) and also reduced my memory footprint due to the smaller cache size.

However, I still get these latency spikes (these changes have made no 
difference to them).

So the theory about them being due to the warming being too intensive is wrong.

I know the images didn't load btw so when I say spike I mean p95th response 
time going from 50ms to 100-120ms momentarily.

From: Walter Underwood 
Sent: 29 January 2020 21:30
To: solr-user@lucene.apache.org 
Subject: Re: Solr Searcher 100% Latency Spike

Looking at the log, that takes one or two seconds after a complete batch reload 
(master/slave). So that is loading a cold index, all new files. This is not a 
big index, about a half million book titles.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jan 29, 2020, at 1:21 PM, Karl Stoney 
>  wrote:
>
> Out of curiosity, could you define "fast"?
> I'm wondering what sort of figures people target their searcher warm time at
> 
> From: Walter Underwood 
> Sent: 29 January 2020 21:13
> To: solr-user@lucene.apache.org 
> Subject: Re: Solr Searcher 100% Latency Spike
>
> I use a static set of warming queries, about 20 of them. That is fast and 
> gets a decent amount of the index into file buffers. Your top queries won’t 
> change much unless you have a news site or a seasonal business.
>
> Like this:
>
>   introduction
>   intermediate
>   fundamentals
>   understanding
>   introductory
>   precalculus
>   foundations
>   microeconomics
>   microbiology
>   macroeconomics
>   discovering
>   international
>   mathematics
>   organizational
>   criminology
>   developmental
>   engineering
>
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>> On Jan 29, 2020, at 1:01 PM, Shawn Heisey  wrote:
>>
>> On 1/29/2020 12:44 PM, Karl Stoney wrote:
>>> Looking for a bit of support here.  When we soft commit (every 10 minutes), 
>>> we get a latency spike that means response times for solr are loosely 
>>> double, as you can see in this screenshot:
>>
>> Attachments almost never make it to the list.  We cannot see any of your 
>> screenshots.
>>
>>> They do correlate to filterCache warmup, which seem to take between 10s and 
>>> 30s:
>>> We don't have any other caches enabled, due to the high level of 
>>> cardinality of the queries.
>>> The spikes are specifically on /select
>>> We have the following autowarm configuration for the filterCache:
>> <filterCache size="8192"
>>              initialSize="8192"
>>              cleanupThread="true"
>>              autowarmCount="900"/>
>>
>> Autowarm, especially on filterCache, can be an extremely lengthy process.  
>> What Solr must do in order to warm the cache here is execute up to 900 
>> queries, sequentially, on the new index.  That can take a lot of time and 
>> use a lot of resources like CPU and I/O.
>>
>> In order to reduce the impact of cache warming, I had to reduce my own 
>> autowarmCount on the filterCache to 4.
>>
>> Thanks,
>> Shawn
>



Re: Solr Searcher 100% Latency Spike

2020-01-29 Thread Karl Stoney
Out of curiosity, could you define "fast"?
I'm wondering what sort of figures people target their searcher warm time at

From: Walter Underwood 
Sent: 29 January 2020 21:13
To: solr-user@lucene.apache.org 
Subject: Re: Solr Searcher 100% Latency Spike

I use a static set of warming queries, about 20 of them. That is fast and gets 
a decent amount of the index into file buffers. Your top queries won’t change 
much unless you have a news site or a seasonal business.

Like this:

  introduction
  intermediate
  fundamentals
  understanding
  introductory
  precalculus
  foundations
  microeconomics
  microbiology
  macroeconomics
  discovering
  international
  mathematics
  organizational
  criminology
  developmental
  engineering


wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)
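The XML markup of Walter's warming listener was stripped by the list archive; the usual shape of such a static warming block in solrconfig.xml is roughly this (a sketch, with one entry per warming term):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">introduction</str></lst>
    <lst><str name="q">intermediate</str></lst>
    <!-- ...one <lst> per remaining term... -->
  </arr>
</listener>
```

A matching `firstSearcher` listener is often configured alongside it so a cold start gets the same warming as a commit.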

> On Jan 29, 2020, at 1:01 PM, Shawn Heisey  wrote:
>
> On 1/29/2020 12:44 PM, Karl Stoney wrote:
>> Looking for a bit of support here.  When we soft commit (every 10 minutes), 
>> we get a latency spike that means response times for solr are loosely 
>> double, as you can see in this screenshot:
>
> Attachments almost never make it to the list.  We cannot see any of your 
> screenshots.
>
>> They do correlate to filterCache warmup, which seem to take between 10s and 
>> 30s:
>> We don't have any other caches enabled, due to the high level of cardinality 
>> of the queries.
>> The spikes are specifically on /select
>> We have the following autowarm configuration for the filterCache:
>> <filterCache size="8192"
>>              initialSize="8192"
>>              cleanupThread="true"
>>              autowarmCount="900"/>
>
> Autowarm, especially on filterCache, can be an extremely lengthy process.  
> What Solr must do in order to warm the cache here is execute up to 900 
> queries, sequentially, on the new index.  That can take a lot of time and use 
> a lot of resources like CPU and I/O.
>
> In order to reduce the impact of cache warming, I had to reduce my own 
> autowarmCount on the filterCache to 4.
>
> Thanks,
> Shawn



Re: Solr Searcher 100% Latency Spike

2020-01-29 Thread Karl Stoney
Hey Shawn,
Thanks for the reply - funnily enough that is exactly what i'm trialing now.  
I've significantly lowered the autoWarm (as well as the size) and still have a 
0.95+ cache hit rate through searcher loads.

I'm going to continue to tweak these values down so long as I keep the hit rate 
above 90, which should reduce some memory pressure at least.

Thanks
Karl
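Tracking the hit rate while tuning the cache down can be scripted against the metrics API (`/solr/admin/metrics?group=core`). The sketch below parses the relevant entry out of a response payload; the metric key names follow a 7.x cluster as I understand them, and the sample payload is purely illustrative, not captured output:

```python
def filter_cache_hitratio(metrics: dict) -> float:
    """Pull the searcher filterCache hit ratio out of a
    /solr/admin/metrics?group=core response (key names assumed from 7.x)."""
    for core, values in metrics.get("metrics", {}).items():
        cache = values.get("CACHE.searcher.filterCache")
        if cache is not None:
            return cache["hitratio"]
    raise KeyError("no filterCache metrics found")

# Illustrative payload shape, not real cluster output
sample = {
    "metrics": {
        "solr.core.at-uk.shard1.replica_n1": {
            "CACHE.searcher.filterCache": {
                "lookups": 1000, "hits": 954, "hitratio": 0.954
            }
        }
    }
}

print(filter_cache_hitratio(sample))  # prints 0.954
```

Polling this after each searcher load makes it easy to see when shrinking `size` or `autowarmCount` starts to hurt the ratio.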

From: Shawn Heisey 
Sent: 29 January 2020 21:01
To: solr-user@lucene.apache.org 
Subject: Re: Solr Searcher 100% Latency Spike

On 1/29/2020 12:44 PM, Karl Stoney wrote:
> Looking for a bit of support here.  When we soft commit (every 10
> minutes), we get a latency spike that means response times for solr are
> loosely double, as you can see in this screenshot:

Attachments almost never make it to the list.  We cannot see any of your
screenshots.

> They do correlate to filterCache warmup, which seem to take between 10s
> and 30s:
>
> We don't have any other caches enabled, due to the high level of
> cardinality of the queries.
>
> The spikes are specifically on /select
>
>
> We have the following autowarm configuration for the filterCache:
>
> <filterCache size="8192"
>              initialSize="8192"
>              cleanupThread="true"
>              autowarmCount="900"/>

Autowarm, especially on filterCache, can be an extremely lengthy
process.  What Solr must do in order to warm the cache here is execute
up to 900 queries, sequentially, on the new index.  That can take a lot
of time and use a lot of resources like CPU and I/O.

In order to reduce the impact of cache warming, I had to reduce my own
autowarmCount on the filterCache to 4.

Thanks,
Shawn
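A configuration along the lines Shawn describes, with autowarm cut right down, would look something like this (class and sizes illustrative, not from the thread):

```xml
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="4"/>
```

Only the most recently used filter entries get re-executed on the new searcher, so warmup finishes in a fraction of the time at the cost of a few cold lookups after each commit.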



Re: Performance Issue since Solr 7.7 with wt=javabin

2020-01-29 Thread Karl Stoney
Could anyone produce a patch for 7.7 please?

From: Florent Sithi 
Sent: 29 January 2020 14:34
To: solr-user@lucene.apache.org 
Subject: Re: Performance Issue since Solr 7.7 with wt=javabin

yes thanks so much, fixed in 8.4.0



--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Solr Searcher 100% Latency Spike

2020-01-29 Thread Karl Stoney
Hi All,
Looking for a bit of support here.  When we soft commit (every 10 minutes), we 
get a latency spike that means response times for solr are loosely double, as 
you can see in this screenshot:

[attached screenshot]

These do correlate to GC spikes (albeit not particularly bad):
[attached screenshot]

But don't really correlate to disk or cpu/ram stress:
[attached screenshot]

They do correlate to filterCache warmup, which seem to take between 10s and 30s:
[attached screenshot]

We don't have any other caches enabled, due to the high level of cardinality of 
the queries.

The spikes are specifically on /select
[attached screenshot]

We have the following autowarm configuration for the filterCache:

<filterCache size="8192"
             initialSize="8192"
             cleanupThread="true"
             autowarmCount="900"/>

And some suitable queries in our newSearcher warmup config.

I'm at a loss as to what else to do to try and minimise these spikes.  Does anyone 
have any ideas?

Thanks


Unable to start solr-exporter on branch_7x

2019-04-10 Thread Karl Stoney
Hi,
I’m getting the following error when trying to start `solr-exporter` on branch 
`branch_7x`.

INFO  - 2019-04-10 23:36:10.872; org.apache.solr.core.SolrResourceLoader; solr 
home defaulted to 'solr/' (could not find system property or JNDI)
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/lucene/util/IOUtils
at 
org.apache.solr.core.SolrResourceLoader.close(SolrResourceLoader.java:881)
at 
org.apache.solr.prometheus.exporter.SolrExporter.loadMetricsConfiguration(SolrExporter.java:221)
at 
org.apache.solr.prometheus.exporter.SolrExporter.main(SolrExporter.java:205)
Caused by: java.lang.ClassNotFoundException: org.apache.lucene.util.IOUtils
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 3 more

Any ideas?


Errors during solrcloud replication (7.7.x)

2019-03-01 Thread Karl Stoney
Hey all,
I’m looking for some support with replication errors we’re seeing in SolrCloud 
7.7.x (tried both .0 and .1).

I’ve created a StackOverflow issue:

We have errors in SolrCloud (7.7.1) during replication, which we can't 
understand.  We thought it may be 
https://issues.apache.org/jira/browse/SOLR-13255 or 
https://issues.apache.org/jira/browse/SOLR-13249 which is why we upgraded to 
7.7.1 but it’s still there.

On our currently elected leader, we see:
```
request: 
http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2
Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence 
cannot be cast to java.lang.String
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385)
 ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
- 2019-02-23 02:39:09]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183)
 ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
- 2019-02-23 02:39:09]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.6.jar:3.2.6]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
- 2019-02-23 02:39:09]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
```

And if you go an look in the logs for the replica, you see:
```
08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - 
null:java.lang.ClassCastException: 
org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to 
java.lang.String
at 
org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
at 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
at 
org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235)
at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:126)
at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:123)
at 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:70)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2551)
at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at