Replica goes into recovery mode in Solr 6.1.0

2020-07-20 Thread vishal patel
I am using Solr version 6.1.0 with Java 8 and G1GC in production. We have 2 
shards and each shard has 1 replica.
Sometimes my replica goes into recovery mode, and when I check my GC log I 
cannot find any GC pause longer than 600 milliseconds. Occasionally the GC 
pause time gets close to 1 second, but at those times the replica does not go 
into recovery mode.

My Error Log:
shard: https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view
replica: https://drive.google.com/file/d/1y0fC_n5u3MBMQbXrvxtqaD8vBBXDLR6I/view

When I searched for my error "org.apache.http.NoHttpResponseException: failed 
to respond" on Google, I found one Solr JIRA issue:
https://issues.apache.org/jira/browse/SOLR-7483

Can anyone give me details about that JIRA issue? Was it resolved in another 
issue?

Regards,
Vishal patel






Reinstall broken?

2020-07-20 Thread Andrew O. Dugas
I installed Solr 8.x but then realized I needed 7.7.x for my purposes. I
uninstalled 8.x and installed 7.7.3 with no problem.

However, it seems to have an issue with port 8983. This is what I get when I
try to open the UI in a browser, which I assume is due to some leftover 8.x
artifact that was not removed during the uninstall:

HTTP ERROR 404

Problem accessing /solr/. Reason:

Not Found

Caused by:

javax.servlet.ServletException: javax.servlet.UnavailableException:
Error processing the request. CoreContainer is either not initialized
or shutting down.
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: javax.servlet.UnavailableException: Error processing the
request. CoreContainer is either not initialized or shutting down.
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
... 12 more

Caused by:

javax.servlet.UnavailableException: Error processing the request.
CoreContainer is either not initialized or shutting down.
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)

Re: zookeeper data and collection properties were lost

2020-07-20 Thread Shawn Heisey

On 7/20/2020 10:30 AM, yaswanth kumar wrote:

1# I did make sure that zoo.cfg has the proper data dir and it's not
pointing to the temp folder; do I need to set the variables in ZK_ENV.sh as 
well, on top of zoo.cfg?


Those are questions about the ZK server, which we are not completely 
qualified to answer.  ZK and Solr are separate Apache projects, with 
separate mailing lists.  We have some familiarity with ZK because it is 
required to run Solr in cloud mode, but are not experts.  We can only 
provide minimal help with standalone ZK servers ... you would need to 
talk to the ZK project for the best information.



Here is my confusion: as I said, we have a two-node architecture in DEV
but maintain only one instance of zookeeper. Is it true that I need to
maintain the same folder structure that we specify in the dataDir of
zoo.cfg on both nodes?


Each ZK server is independent of the others and should have its own data 
directory.  ZK handles creating the contents of that directory; it is not 
something you would normally do yourself.  Each server could have a 
different setting for the data directory, or the same setting.  Note 
that if the setting is the same on multiple servers, each of those 
directories should point to separate storage.  If you try to use a 
shared directory (perhaps with NFS) then I would imagine that ZK will 
not function correctly.
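
For illustration, a minimal sketch of the relevant zoo.cfg entries for a 
three-server ensemble (paths and hostnames are placeholders, not taken from 
your setup):

# dataDir must be local storage on each machine -- not a shared/NFS path
dataDir=/var/lib/zookeeper/data
clientPort=2181
# the same three server lines go on every member of the ensemble
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888

Each server also needs a myid file inside its dataDir containing its own 
server number (1, 2, or 3 above).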


A fault tolerant install of ZK cannot be created with only two servers. 
It requires a minimum of three.  For the Solr part, only two servers are 
required for minimal fault tolerance.  Each Solr server must be 
configured with the addresses and ports of all 3 (or more) zookeeper 
servers.
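
On the Solr side that is usually just the ZK_HOST setting in solr.in.sh 
(hostnames below are placeholders); every Solr node gets the same value:

ZK_HOST="zk1:2181,zk2:2181,zk3:2181"

A chroot such as /solr can be appended to the end of that string if you keep 
Solr's data under a subtree of ZK.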


See the Note in the following sections of the ZK documentation:

https://zookeeper.apache.org/doc/r3.5.8/zookeeperAdmin.html#sc_zkMulitServerSetup

https://zookeeper.apache.org/doc/r3.5.8/zookeeperStarted.html#sc_RunningReplicatedZooKeeper

Thanks,
Shawn


Re: AtomicUpdate on SolrCloud is not working

2020-07-20 Thread Shawn Heisey

On 7/19/2020 1:37 AM, yo tomi wrote:

I have no choice but use post-processor.
However bug of SOLR-8030 makes me not feel like using it.


Can you explain why you need the trim field and remove blank field 
processors to be post processors?  When I think about these 
functionalities, they should work fully as expected even when executed 
as "pre" processors.


Thanks,
Shawn


Re: zookeeper data and collection properties were lost

2020-07-20 Thread yaswanth kumar
Thanks Erick for the quick response.

Here are my responses to your questions:
1# I did make sure that zoo.cfg has the proper data dir and it's not
pointing to the temp folder; do I need to set the variables in ZK_ENV.sh as
well, on top of zoo.cfg?

2# I can confirm that we are not using the embedded one; we are using a
standalone zookeeper 3.4.14, and the admin UI also shows what we
configured (port 2181).

Here is my confusion: as I said, we have a two-node architecture in DEV
but maintain only one instance of zookeeper. Is it true that I need to
maintain the same folder structure that we specify in the dataDir of
zoo.cfg on both nodes?

Thanks,

On Mon, Jul 20, 2020 at 12:22 PM Erick Erickson 
wrote:

> Some possibilities:
>
> 1> you haven’t changed your data dir for Zookeeper from the default
> "/tmp/zookeeper”
>
> 2> you aren’t pointing to the Zookeepers you think you are. In particular
> are you running embedded zookeeper? This should be apparent if you look on
> the admin page and the zookeeper URLs you’re pointing at are on port 9983
>
> this is almost certainly some kind of misconfiguration, zookeeper data
> doesn’t just disappear on its own that I know of. The admin UI will also
> show you the exact parameters that Solr starts up with, check that they’re
> all pointing to the ZK ensemble you expect and that the data directory is
> preserved across restarts/reboots etc.
>
> Best,
> Erick
>
> > On Jul 20, 2020, at 12:02 PM, yaswanth kumar 
> wrote:
> >
> > HI Team,
> >
> > Can someone help me understand on what could be the reason to lose both
> > zookeeper data and also the collection information that will be stored
> for
> > each collection in the path ../solr/server/solr/
> >
> > Here are the details of what versions that we use
> >
> > Solr - 8.2
> > Zookeeper 3.4.14
> >
> > Two node solr cloud with zookeeper on single node, and when ever we see
> an
> > issue with networking between these two nodes, and once the connectivity
> is
> > restored, but when we restart the zookeeper service , everything was lost
> > under /zookeeper_data/version-2/ and also the collection folders that
> used
> > to exists under ../solr/server/solr/
> >
> > *Note*: We are testing this in DEV environment, but with this behavior we
> > are afraid of moving this to production without knowing if that's an
> issue
> > with some configuration or zookeeper behavior and we need to adjust
> > something else to not to wipe out the configs.
> >
> > --
> > Thanks & Regards,
> > Yaswanth Kumar Konathala.
> > yaswanth...@gmail.com
>
>

-- 
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com


Re: zookeeper data and collection properties were lost

2020-07-20 Thread Erick Erickson
Some possibilities:

1> you haven’t changed your data dir for Zookeeper from the default 
"/tmp/zookeeper”

2> you aren’t pointing to the Zookeepers you think you are. In particular, are 
you running embedded zookeeper? This should be apparent if you look on the 
admin page and the zookeeper URLs you’re pointing at are on port 9983.

This is almost certainly some kind of misconfiguration; zookeeper data doesn’t 
just disappear on its own that I know of. The admin UI will also show you the 
exact parameters that Solr starts up with; check that they’re all pointing to 
the ZK ensemble you expect and that the data directory is preserved across 
restarts/reboots etc.
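
For example (a rough sketch, assuming Solr listens on localhost:8983 and that 
ZooKeeper was configured via ZK_HOST / bin/solr -z, so it shows up as a 
-DzkHost system property), the startup arguments can also be pulled from the 
system info endpoint:

curl -s "http://localhost:8983/solr/admin/info/system?wt=json" | grep -o '"-DzkHost=[^"]*"'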

Best,
Erick

> On Jul 20, 2020, at 12:02 PM, yaswanth kumar  wrote:
> 
> HI Team,
> 
> Can someone help me understand on what could be the reason to lose both
> zookeeper data and also the collection information that will be stored for
> each collection in the path ../solr/server/solr/
> 
> Here are the details of what versions that we use
> 
> Solr - 8.2
> Zookeeper 3.4.14
> 
> Two node solr cloud with zookeeper on single node, and when ever we see an
> issue with networking between these two nodes, and once the connectivity is
> restored, but when we restart the zookeeper service , everything was lost
> under /zookeeper_data/version-2/ and also the collection folders that used
> to exists under ../solr/server/solr/
> 
> *Note*: We are testing this in DEV environment, but with this behavior we
> are afraid of moving this to production without knowing if that's an issue
> with some configuration or zookeeper behavior and we need to adjust
> something else to not to wipe out the configs.
> 
> -- 
> Thanks & Regards,
> Yaswanth Kumar Konathala.
> yaswanth...@gmail.com



RE: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue

2020-07-20 Thread Austin Kimmel
This error seems similar to what we are having:

http://mail-archives.apache.org/mod_mbox/lucene-dev/201907.mbox/%3ccapswd+oa5n3f2xqj0o58dqokpwudaefhynagoqg7rg3y8cu...@mail.gmail.com%3E


Austin Kimmel
Software Developer
Vail Resorts, Inc.
303-404-1922 
akim...@vailresorts.com

VAILRESORTS®
EXPERIENCE OF A LIFETIME


-Original Message-
From: Austin Kimmel 
Sent: Monday, July 20, 2020 9:10 AM
To: solr-user@lucene.apache.org
Cc: Simon Croak ; Shelby Busby 

Subject: RE: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue

Hello,

I've been in contact with Sitecore Support, but they say that the issue is on 
the Solr side.  I see the errors when I am trying to rebuild the indexes.  The 
index is populated the first time, and then each subsequent time that I try to 
rebuild or access the index there is an error.

This morning I was able to rebuild the index but saw the following error while 
it was processing in the log:

2020-07-20 14:36:56.273 ERROR 
(updateExecutor-5-thread-23-processing-x:pj4_sitecore_core_index_shard1_replica_n4
 r:core_node6 null n:10.5.64.40:8984_solr c:pj4_sitecore_core_index s:shard1) 
[c:pj4_sitecore_core_index s:shard1 r:core_node6 
x:pj4_sitecore_core_index_shard1_replica_n4] 
o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling 
SolrCmdDistributor$Req: cmd=add{,id=(null)}; node=ForwardNode: 
https://10.5.64.41:8984/solr/pj4_sitecore_core_index_shard1_replica_n1/ to 
https://10.5.64.41:8984/solr/pj4_sitecore_core_index_shard1_replica_n1/ => 
java.io.IOException: java.io.IOException: Broken pipe
at 
org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
java.io.IOException: java.io.IOException: Broken pipe

That error is then the same error that I see after each subsequent rebuild or 
when I try to access the index.

Here are the instructions that Sitecore provides for integrating Solr with 
their platform: 
https://doc.sitecore.com/developers/93/platform-administration-and-architecture/en/walkthrough--setting-up-solr.html

Thanks!

Austin Kimmel
Software Developer
Vail Resorts, Inc.
303-404-1922 
akim...@vailresorts.com

VAILRESORTS®
EXPERIENCE OF A LIFETIME

-Original Message-
From: matthew sporleder [mailto:msporle...@gmail.com] 
Sent: Monday, July 20, 2020 5:18 AM
To: solr-user@lucene.apache.org
Subject: Re: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue




FWIW the real error is "msg":"SolrCore is loading", which is bad if you are in 
the middle of indexing.

What is happening on solr at this time?

> On Jul 20, 2020, at 4:46 AM, Charlie Hull  wrote:
>
> Hi Austin,
>
> Sitecore is a commercial product so your first port of call should be whoever 
> sold you or is supporting Sitecore. A quick (and by no means deep) bit of 
> research shows this error may be generated by the Sitecore indexer process 
> calling Solr. We won't be able to see how it does that if it's closed source 
> code.
>
> Cheers
>
> Charlie
>
>> On 20/07/2020 04:53, Austin Kimmel wrote:
>> Hello,
>>
>> We are seeing the following errors with Sitecore 9.3 connecting to a Solr 
>> 8.1.1 cluster running on Zookeeper and haven't been able to resolve:
>>
>>
>> 2020-07-17 18:10:58.238 WARN  (zkCallback-8-thread-3) 
>> [c:pj4_sitecore_web_index s:shard1 r:core_node5 
>> x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.u.PeerSync PeerSync: 
>> core=pj4_sitecore_web_index_shard1_replica_n2 
>> url=https://10.5.64.40:8984/solr  got a 503 from 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1/, 
>> counting as success => 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", "code":503}}
>>
>> at 
>> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:613)
>>  org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", 

zookeeper data and collection properties were lost

2020-07-20 Thread yaswanth kumar
Hi Team,

Can someone help me understand what could be the reason for losing both the
zookeeper data and the collection information that is stored for each
collection in the path ../solr/server/solr/ ?

Here are the details of the versions that we use:

Solr - 8.2
Zookeeper 3.4.14

We run a two-node Solr cloud with zookeeper on a single node. Whenever we see
an issue with networking between these two nodes and then restart the
zookeeper service once connectivity is restored, everything is lost under
/zookeeper_data/version-2/, along with the collection folders that used to
exist under ../solr/server/solr/

*Note*: We are testing this in a DEV environment, but with this behavior we
are afraid of moving it to production without knowing whether this is an
issue with some configuration or expected zookeeper behavior, and whether we
need to adjust something else to avoid wiping out the configs.

-- 
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com


Re: Question regarding replica leader

2020-07-20 Thread Vishal Vaibhav
So how do we recover from such a state? When I try ADDREPLICA, it returns a
503. Also, my node has multiple replicas, most of which are dead. How do we
get rid of those dead replicas via a script? Is that a possibility?
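
For reference, removing a dead replica by script would normally go through the
Collections API; a rough sketch (the host is a placeholder, and the
collection/shard/replica names are taken from the CLUSTERSTATUS excerpt quoted
below, so adjust them to the replica you actually want to drop):

curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=rules&shard=shard1&replica=core_node74"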

On Mon, 20 Jul 2020 at 11:00 AM, Radu Gheorghe 
wrote:

> Hi Vishal,
>
> I think that’s true, yes. The cluster has a leader (overseer), but this
> particular shard doesn’t seem to have a leader (yet). Logs should give you
> some pointers about why this happens (it may be, for example, that each
> replica is waiting for the other to become a leader, because each missed
> some updates).
>
> Best regards,
> Radu
> --
> Sematext Cloud - Full Stack Observability - https://sematext.com
> Solr and Elasticsearch Consulting, Training and Production Support
>
> > On 20 Jul 2020, at 04:17, Vishal Vaibhav  wrote:
> >
> > Hi any pointers on this ?
> >
> > On Wed, 15 Jul 2020 at 11:13 AM, Vishal Vaibhav 
> wrote:
> >
> >> Hi Solr folks,
> >>
> >> I am using solr cloud 8.4.1. I am using
> >> `/solr/admin/collections?action=CLUSTERSTATUS`. Hitting this endpoint I
> >> get a list of replicas in which one is active but neither of them is
> >> the leader. Something like this:
> >>
> >> "core_node72": {"core": "rules_shard1_replica_n71", "base_url": "node3",
> >>   "node_name": "node3 base url", "state": "active", "type": "NRT",
> >>   "force_set_state": "false"},
> >> "core_node74": {"core": "rules_shard1_replica_n73", "base_url": "node1",
> >>   "node_name": "node1_base_url", "state": "down", "type": "NRT",
> >>   "force_set_state": "false"}}}},
> >> "router": {"name": "compositeId"}, "maxShardsPerNode": "1",
> >> "autoAddReplicas": "false", "nrtReplicas": "1", "tlogReplicas": "0",
> >> "znodeVersion": 276, "configName": "rules"}},
> >> "live_nodes": ["node1", "node2", "node3", "node4"]
> >>
> >> And when I see the overseer status
> >> (solr/admin/collections?action=OVERSEERSTATUS) I get a response like this,
> >> which shows node 3 as the leader:
> >>
> >> "responseHeader": {"status": 0, "QTime": 66}, "leader": "node 3",
> >> "overseer_queue_size": 0, "overseer_work_queue_size": 0,
> >> "overseer_collection_queue_size": 2, "overseer_operations": ["addreplica", ...
> >>
> >> Does it mean the cluster has a leader node but there is no leader
> >> replica as of now? And why is the leader election not happening?
>
>


RE: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue

2020-07-20 Thread Austin Kimmel
Hello,

I've been in contact with Sitecore Support, but they say that the issue is on 
the Solr side.  I see the errors when I am trying to rebuild the indexes.  The 
index is populated the first time, and then each subsequent time that I try to 
rebuild or access the index there is an error.

This morning I was able to rebuild the index but saw the following error while 
it was processing in the log:

2020-07-20 14:36:56.273 ERROR 
(updateExecutor-5-thread-23-processing-x:pj4_sitecore_core_index_shard1_replica_n4
 r:core_node6 null n:10.5.64.40:8984_solr c:pj4_sitecore_core_index s:shard1) 
[c:pj4_sitecore_core_index s:shard1 r:core_node6 
x:pj4_sitecore_core_index_shard1_replica_n4] 
o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling 
SolrCmdDistributor$Req: cmd=add{,id=(null)}; node=ForwardNode: 
https://10.5.64.41:8984/solr/pj4_sitecore_core_index_shard1_replica_n1/ to 
https://10.5.64.41:8984/solr/pj4_sitecore_core_index_shard1_replica_n1/ => 
java.io.IOException: java.io.IOException: Broken pipe
at 
org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
java.io.IOException: java.io.IOException: Broken pipe

That error is then the same error that I see after each subsequent rebuild or 
when I try to access the index.

Here are the instructions that Sitecore provides for integrating Solr with 
their platform: 
https://doc.sitecore.com/developers/93/platform-administration-and-architecture/en/walkthrough--setting-up-solr.html

Thanks!

Austin Kimmel
Software Developer
Vail Resorts, Inc.
303-404-1922 
akim...@vailresorts.com

VAILRESORTS®
EXPERIENCE OF A LIFETIME

-Original Message-
From: matthew sporleder [mailto:msporle...@gmail.com] 
Sent: Monday, July 20, 2020 5:18 AM
To: solr-user@lucene.apache.org
Subject: Re: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue




FWIW the real error is "msg":"SolrCore is loading", which is bad if you are in 
the middle of indexing.

What is happening on solr at this time?

> On Jul 20, 2020, at 4:46 AM, Charlie Hull  wrote:
>
> Hi Austin,
>
> Sitecore is a commercial product so your first port of call should be whoever 
> sold you or is supporting Sitecore. A quick (and by no means deep) bit of 
> research shows this error may be generated by the Sitecore indexer process 
> calling Solr. We won't be able to see how it does that if it's closed source 
> code.
>
> Cheers
>
> Charlie
>
>> On 20/07/2020 04:53, Austin Kimmel wrote:
>> Hello,
>>
>> We are seeing the following errors with Sitecore 9.3 connecting to a Solr 
>> 8.1.1 cluster running on Zookeeper and haven't been able to resolve:
>>
>>
>> 2020-07-17 18:10:58.238 WARN  (zkCallback-8-thread-3) 
>> [c:pj4_sitecore_web_index s:shard1 r:core_node5 
>> x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.u.PeerSync PeerSync: 
>> core=pj4_sitecore_web_index_shard1_replica_n2 
>> url=https://10.5.64.40:8984/solr  got a 503 from 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1/, 
>> counting as success => 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", "code":503}}
>>
>> at 
>> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:613)
>>  org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", "code":503}}
>>
>>
>> 2020-07-17 18:10:58.276 ERROR (zkCallback-8-thread-3) 
>> [c:pj4_sitecore_web_index s:shard1 r:core_node5 
>> x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.c.SyncStrategy Sync 
>> request error: 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> 

Re: How to route requests to a specific core of a node hosting multiple shards?

2020-07-20 Thread Erick Erickson
Hmm, ok. 

I’d have to defer to David Smiley about whether that was an intended change.

I’m curious whether you can actually measure the difference in performance. If
you can then that changes the urgency. Of course it’ll be a little more 
expensive
for the replica serving shard2 on that machine to forward it to the replica
serving shard1, but since it’s not going across the network IDK if it’s a 
consequential difference.

Best,
Erick

> On Jul 20, 2020, at 10:04 AM, Jason J Baik  wrote:
> 
> Our use case here is that we want to highlight a single document (against
> user-provided keywords), and we know the document's unique key already.
> So this is really not a distributed query, but more of a get by id, but we
> use SolrClient.query() for highlighting capabilities.
> And since we know the unique key, for speed gains, we've been making use of
> the "_route_" param to limit the request to the shard containing the
> document.
> 
> Our use case aside, SOLR-11444
>  generally seems to be at
> odds with the advertised use of the "_route_" param
> https://lucene.apache.org/solr/guide/7_5/solrcloud-query-routing-and-read-tolerance.html#_route_-parameter
> .
> Solr is routing the request to the correct "node", but it no longer routes
> to the correct "shard" on that node?
> 
> 
> On Mon, Jul 20, 2020 at 9:33 AM Erick Erickson 
> wrote:
> 
>> First I want to check if this is an XY problem. Why do you want to do this?
>> 
>> If you’re using CloudSolrClient, requests are automatically load balanced.
>> And
>> even if you send a top-level request (assuming you do NOT set
>> distrib=false),
>> then the request may be forwarded to another Solr node anyway. This is to
>> handle the case where people are sending requests to a specific node, you
>> don’t
>> really want that node doing all the aggregating.
>> 
>> Of course if you’re using an external load balancer, you can avoid all
>> that.
>> 
>> I’m not sure what the value is of sending a general request to a specific
>> core in the same JVM. A “node” is really Solr running in a JVM, so there
>> may be multiple of these on a particular machine, but the resolution
>> takes that into account.
>> 
>> If you have reason to ping a specific replica _only_ (I’ve often done this
>> for
>> troubleshooting), address the full replica and add “distrib=false”, i.e.
>> http://…../solr/collection1_shard1_replica1?q=*:*&distrib=false
>> 
>> Best,
>> Erick
>> 
>>> On Jul 20, 2020, at 9:02 AM, Jason J Baik 
>> wrote:
>>> 
>>> Hi,
>>> 
>>> After upgrading from Solr 6.6.2 to 7.6.0, we're seeing an issue with
>>> request routing in CloudSolrClient. It seems that we've lost the ability
>> to
>>> route a request to a specific core of a node.
>>> 
>>> For example, if a host is serving shard 1 core 1, and shard 2 core
>>> 1, @6.6.2, adding a "_route_="
>>> param was sufficient for CloudSolrClient to figure out the request should
>>> go to shard 1 core 1, but @7.6.0, the request is routed to one of them
>>> randomly.
>>> 
>>> It seems the core-level url resolution has been removed from
>>> CloudSolrClient at commit e001f352895c83652c3cf31e3c724d29a46bb721 around
>>> L1053, as part of SOLR-11444
>>> . The url the request
>> is
>>> sent to is now constructed only to the node level, and no longer to the
>>> core level.
>>> 
>>> There's a related issue for this at SOLR-10695
>>> , and SOLR-9063
>>>  but not quite the
>> same.
>>> Can somebody please advise what the new way to achieve this nowadays is?
>> 
>> 



Re: How to route requests to a specific core of a node hosting multiple shards?

2020-07-20 Thread Jason J Baik
Our use case here is that we want to highlight a single document (against
user-provided keywords), and we know the document's unique key already.
So this is really not a distributed query, but more of a get by id, but we
use SolrClient.query() for highlighting capabilities.
And since we know the unique key, for speed gains, we've been making use of
the "_route_" param to limit the request to the shard containing the
document.
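
Concretely, the call we make looks roughly like this (a sketch only; the zk
address, collection name, document id and field names are placeholders, and
the highlighting params are trimmed down):

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RouteExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
      client.setDefaultCollection("myCollection");

      SolrQuery q = new SolrQuery("id:\"doc-42\"");  // we already know the unique key
      q.set("_route_", "doc-42");                    // restrict the request to the shard owning this key
      q.setHighlight(true);
      q.set("hl.q", "user provided keywords");       // highlight against the user's keywords
      q.set("hl.fl", "body");

      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getHighlighting());
    }
  }
}

Under 6.6.2 the _route_ param was enough for CloudSolrClient to pick the exact
core; under 7.6.0 it only narrows the choice down to the node.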

Our use case aside, SOLR-11444
 generally seems to be at
odds with the advertised use of the "_route_" param
https://lucene.apache.org/solr/guide/7_5/solrcloud-query-routing-and-read-tolerance.html#_route_-parameter
.
Solr is routing the request to the correct "node", but it no longer routes
to the correct "shard" on that node?


On Mon, Jul 20, 2020 at 9:33 AM Erick Erickson 
wrote:

> First I want to check if this is an XY problem. Why do you want to do this?
>
> If you’re using CloudSolrClient, requests are automatically load balanced.
> And
> even if you send a top-level request (assuming you do NOT set
> distrib=false),
> then the request may be forwarded to another Solr node anyway. This is to
> handle the case where people are sending requests to a specific node, you
> don’t
> really want that node doing all the aggregating.
>
> Of course if you’re using an external load balancer, you can avoid all
> that.
>
> I’m not sure what the value is of sending a general request to a specific
> core in the same JVM. A “node” is really Solr running in a JVM, so there
> may be multiple of these on a particular machine, but the resolution
> takes that into account.
>
> If you have reason to ping a specific replica _only_ (I’ve often done this
> for
> troubleshooting), address the full replica and add “distrib=false”, i.e.
> http://…../solr/collection1_shard1_replica1?q=*:*&distrib=false
>
> Best,
> Erick
>
> > On Jul 20, 2020, at 9:02 AM, Jason J Baik 
> wrote:
> >
> > Hi,
> >
> > After upgrading from Solr 6.6.2 to 7.6.0, we're seeing an issue with
> > request routing in CloudSolrClient. It seems that we've lost the ability
> to
> > route a request to a specific core of a node.
> >
> > For example, if a host is serving shard 1 core 1, and shard 2 core
> > 1, @6.6.2, adding a "_route_="
> > param was sufficient for CloudSolrClient to figure out the request should
> > go to shard 1 core 1, but @7.6.0, the request is routed to one of them
> > randomly.
> >
> > It seems the core-level url resolution has been removed from
> > CloudSolrClient at commit e001f352895c83652c3cf31e3c724d29a46bb721 around
> > L1053, as part of SOLR-11444
> > . The url the request
> is
> > sent to is now constructed only to the node level, and no longer to the
> > core level.
> >
> > There's a related issue for this at SOLR-10695
> > , and SOLR-9063
> >  but not quite the
> same.
> > Can somebody please advise what the new way to achieve this nowadays is?
>
>


Re: How to route requests to a specific core of a node hosting multiple shards?

2020-07-20 Thread Erick Erickson
First I want to check if this is an XY problem. Why do you want to do this?

If you’re using CloudSolrClient, requests are automatically load balanced. And
even if you send a top-level request (assuming you do NOT set distrib=false),
then the request may be forwarded to another Solr node anyway. This is to
handle the case where people are sending requests to a specific node, you don’t
really want that node doing all the aggregating.

Of course if you’re using an external load balancer, you can avoid all that.

I’m not sure what the value is of sending a general request to a specific
core in the same JVM. A “node” is really Solr running in a JVM, so there
may be multiple of these on a particular machine, but the resolution
takes that into account.

If you have reason to ping a specific replica _only_ (I’ve often done this for
troubleshooting), address the full replica and add “distrib=false”, i.e.
http://…../solr/collection1_shard1_replica1?q=*:*&distrib=false

Best,
Erick

> On Jul 20, 2020, at 9:02 AM, Jason J Baik  wrote:
> 
> Hi,
> 
> After upgrading from Solr 6.6.2 to 7.6.0, we're seeing an issue with
> request routing in CloudSolrClient. It seems that we've lost the ability to
> route a request to a specific core of a node.
> 
> For example, if a host is serving shard 1 core 1, and shard 2 core
> 1, @6.6.2, adding a "_route_="
> param was sufficient for CloudSolrClient to figure out the request should
> go to shard 1 core 1, but @7.6.0, the request is routed to one of them
> randomly.
> 
> It seems the core-level url resolution has been removed from
> CloudSolrClient at commit e001f352895c83652c3cf31e3c724d29a46bb721 around
> L1053, as part of SOLR-11444
> . The url the request is
> sent to is now constructed only to the node level, and no longer to the
> core level.
> 
> There's a related issue for this at SOLR-10695
> , and SOLR-9063
>  but not quite the same.
> Can somebody please advise what the new way to achieve this nowadays is?



How to route requests to a specific core of a node hosting multiple shards?

2020-07-20 Thread Jason J Baik
Hi,

After upgrading from Solr 6.6.2 to 7.6.0, we're seeing an issue with
request routing in CloudSolrClient. It seems that we've lost the ability to
route a request to a specific core of a node.

For example, if a host is serving shard 1 core 1, and shard 2 core
1, @6.6.2, adding a "_route_="
param was sufficient for CloudSolrClient to figure out the request should
go to shard 1 core 1, but @7.6.0, the request is routed to one of them
randomly.

It seems the core-level url resolution has been removed from
CloudSolrClient at commit e001f352895c83652c3cf31e3c724d29a46bb721 around
L1053, as part of SOLR-11444
. The url the request is
sent to is now constructed only to the node level, and no longer to the
core level.

There's a related issue for this at SOLR-10695
, and SOLR-9063
 but not quite the same.
Can somebody please advise what the new way to achieve this nowadays is?


Re: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue

2020-07-20 Thread matthew sporleder
FWIW the real error is "msg":"SolrCore is loading", which is bad if you are in 
the middle of indexing.

What is happening on solr at this time?

> On Jul 20, 2020, at 4:46 AM, Charlie Hull  wrote:
> 
> Hi Austin,
> 
> Sitecore is a commercial product so your first port of call should be whoever 
> sold you or is supporting Sitecore. A quick (and by no means deep) bit of 
> research shows this error may be generated by the Sitecore indexer process 
> calling Solr. We won't be able to see how it does that if it's closed source 
> code.
> 
> Cheers
> 
> Charlie
> 
>> On 20/07/2020 04:53, Austin Kimmel wrote:
>> Hello,
>> 
>> We are seeing the following errors with Sitecore 9.3 connecting to a Solr 
>> 8.1.1 cluster running on Zookeeper and haven't been able to resolve:
>> 
>> 
>> 2020-07-17 18:10:58.238 WARN  (zkCallback-8-thread-3) 
>> [c:pj4_sitecore_web_index s:shard1 r:core_node5 
>> x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.u.PeerSync PeerSync: 
>> core=pj4_sitecore_web_index_shard1_replica_n2 
>> url=https://10.5.64.40:8984/solr  got a 503 from 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1/, 
>> counting as success => 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", "code":503}}
>> 
>> at 
>> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:613)
>>  org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", "code":503}}
>> 
>> 
>> 2020-07-17 18:10:58.276 ERROR (zkCallback-8-thread-3) 
>> [c:pj4_sitecore_web_index s:shard1 r:core_node5 
>> x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.c.SyncStrategy Sync 
>> request error: 
>> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
>> from server at 
>> https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: 
>> Expected mime type application/octet-stream but got application/json. {   
>> "error":{ "metadata":[   
>> "error-class","org.apache.solr.common.SolrException",   
>> "root-error-class","org.apache.solr.common.SolrException"], 
>> "msg":"SolrCore is loading", "code":503}}
>> 
>> 
>> 2020-07-17 18:10:59.598 ERROR (qtp1661210650-149) [   ] o.a.s.s.HttpSolrCall 
>> null:org.apache.solr.common.SolrException: Error trying to proxy request for 
>> url: https://10.5.64.42:8984/solr/pj4_sitecore_web_index/admin/ping at 
>> org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:692) 
>> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:526) at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
>>  at 
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
>>  at 
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>>  at 
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)   
>>   at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>>  at 
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)  
>>at 
>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>>  at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>>  at 
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
>>  at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>>  at 
>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
>>  at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>>  at 
>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>>  at 
>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
>>  at 
>> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>>  at 
>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
>>  at 
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>>  at 
>> 

Re: Querying solr using many QueryParser in one call

2020-07-20 Thread Charlie Hull

Hi,

It's very hard to answer questions like 'how fast/slow might this be' - 
the best way to find out is to try, e.g. to build a prototype that you 
can time. To be useful this prototype should use representative data and 
queries. Once you have this, you can try improving performance with 
strategies like the caching you describe.
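
As a starting point, something as small as this (a sketch; the Solr URL,
collection, qf and mm values are placeholders) already gives you comparable
numbers by averaging QTime over repeated runs of your combined query:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class QueryTimingPrototype {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder(
        "http://localhost:8983/solr/mycollection").build()) {
      SolrQuery q = new SolrQuery();
      q.set("q", "({!edismax qf=$qf v=$searchQuery mm=$mm}) OR ({!edismax qf=$qf v=$docIdQuery mm=0 sow=true})");
      q.set("qf", "title");
      q.set("mm", "2<75%");
      q.set("searchQuery", "blah");
      q.set("docIdQuery", "5985612 6339445 5357348");
      q.setFields("id", "title",
          "isTermMatch:exists(query({!type=edismax qf=$qf v=blah}))", "score");

      int runs = 100;
      long totalQTime = 0;
      for (int i = 0; i < runs; i++) {
        totalQTime += client.query(q).getQTime();  // server-side time per request, in ms
      }
      System.out.println("mean QTime: " + (totalQTime / (double) runs) + " ms");
    }
  }
}

Run it once against the two-parser version of the query and once against a
plain keyword-only query, and compare.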


Charlie

On 16/07/2020 18:14, harjag...@gmail.com wrote:

Hi All,
Below are questions regarding querying Solr using many query parsers in one
call.
We need to do a search by keyword and also include a few specific documents
in the result. We don't want to use the elevation component, as that would
put those mandatory documents at the top of the result. We would like to mix
those mandatory documents with the organic keyword-lookup result set and
also make sure those mandatory documents take part in other scoring
mechanisms like bq's. On top of this we would also need to classify
documents matched by the keyword lookup against the mandatory docs. We ended
up with the solr query params below to achieve it.

fl=id,title,isTermMatch:exists(query({!type=edismax qf=$qf v=blah})),score
q=({!edismax qf=$qf v=$searchQuery mm=$mm}) OR ({!edismax qf=$qf
v=$docIdQuery mm=0 sow=true})
docIdQuery=5985612 6339445 5357348
searchQuery=blah

Below are my questions:
1. As you can see, we are calling three query parsers in one call; what
would be the performance implication of the search?
2. Two of those queries, the one in q and the one in fl, are the same.
Would the query result cache help?
3. In general, what are the implications for performance when we do a
search calling multiple query parsers in a single call?






--
Charlie Hull
OpenSource Connections, previously Flax

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.o19s.com



Re: Sitecore 9.3 / Solr 8.1.1 - Zookeeper Issue

2020-07-20 Thread Charlie Hull

Hi Austin,

Sitecore is a commercial product so your first port of call should be 
whoever sold you or is supporting Sitecore. A quick (and by no means 
deep) bit of research shows this error may be generated by the Sitecore 
indexer process calling Solr. We won't be able to see how it does that 
if it's closed source code.


Cheers

Charlie

On 20/07/2020 04:53, Austin Kimmel wrote:

Hello,

We are seeing the following errors with Sitecore 9.3 connecting to a Solr 8.1.1 
cluster running on Zookeeper and haven't been able to resolve:


2020-07-17 18:10:58.238 WARN  (zkCallback-8-thread-3) [c:pj4_sitecore_web_index s:shard1 r:core_node5 x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.u.PeerSync PeerSync: 
core=pj4_sitecore_web_index_shard1_replica_n2 url=https://10.5.64.40:8984/solr  got a 503 from https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1/, 
counting as success => org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at 
https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: Expected mime type application/octet-stream but got application/json. {   "error":{ 
"metadata":[   "error-class","org.apache.solr.common.SolrException",   
"root-error-class","org.apache.solr.common.SolrException"], "msg":"SolrCore is loading", "code":503}}

 at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:613) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: 
Error from server at https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: Expected mime type application/octet-stream but got application/json. {   
"error":{ "metadata":[   "error-class","org.apache.solr.common.SolrException",   
"root-error-class","org.apache.solr.common.SolrException"], "msg":"SolrCore is loading", "code":503}}


2020-07-17 18:10:58.276 ERROR (zkCallback-8-thread-3) [c:pj4_sitecore_web_index s:shard1 r:core_node5 x:pj4_sitecore_web_index_shard1_replica_n2] o.a.s.c.SyncStrategy 
Sync request error: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at 
https://10.5.64.41:8984/solr/pj4_sitecore_web_index_shard1_replica_n1: Expected mime type application/octet-stream but got application/json. {   "error":{ 
"metadata":[   "error-class","org.apache.solr.common.SolrException",   
"root-error-class","org.apache.solr.common.SolrException"], "msg":"SolrCore is loading", "code":503}}


2020-07-17 18:10:59.598 ERROR (qtp1661210650-149) [   ] o.a.s.s.HttpSolrCall 
null:org.apache.solr.common.SolrException: Error trying to proxy request for 
url: https://10.5.64.42:8984/solr/pj4_sitecore_web_index/admin/ping at 
org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:692) at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:526) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
 at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)   
  at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) 
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
 at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) 
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)   
  at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
at org.eclipse.jetty.server.Server.handle(Server.java:502) at