Re: Solr background merge in case of pull replicas

2021-01-06 Thread kshitij tyagi
Hi,

I am not querying the tlog replicas. The Solr version is 8.6, and the setup
is 2 tlog and 4 pull replicas.

Why should pull replicas be affected during background segment merges?

Regards,
kshitij

On Wed, Jan 6, 2021 at 9:48 PM Ritvik Sharma  wrote:

> Hi
> It may be caused by rebalancing; querying may not be available on the
> tlog replicas at that moment.
> You can check the tlog logs and the pull replica logs when you are facing
> this issue.
>
> May I know which version of Solr you are using, and what is the ratio of
> tlog to pull nodes?
>
> On Wed, 6 Jan 2021 at 2:46 PM, kshitij tyagi 
> wrote:
>
> > Hi,
> >
> > I am having a tlog + pull replica SolrCloud setup.
> >
> > 1. I am observing that whenever a background segment merge is triggered
> > automatically, I see high response times on all of my Solr nodes.
> >
> > As far as I know, merges happen on the tlog replicas, hence the increased
> > response time; I am not able to understand why my pull replicas are
> > affected during background index merges.
> >
> > Can someone give some insights on this? What is affecting my pull replicas
> > during index merges?
> >
> > Regards,
> > kshitij
> >
>


Re: How pull replica works

2021-01-06 Thread Tomás Fernández Löbbe
Hi Abhishek,
Pull replicas use the "/replication" endpoint to copy full segment
files (sections of the index) from the leader. This works in a similar way to
the legacy leader/follower replication. This talk [1] tries to explain the
different replica types and how they work.

HTH,

Tomás

[1] https://www.youtube.com/watch?v=C8C9GRTCSzY
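As a rough illustration of that mechanism (hostname and replica name below are made up), the same "/replication" handler that pull replicas poll internally can be queried by hand:

```
# Ask the leader which index generation/version it is currently on
curl "http://leader:8983/solr/mycoll_shard1_replica_t1/replication?command=indexversion"

# List the segment files that make up a given generation
curl "http://leader:8983/solr/mycoll_shard1_replica_t1/replication?command=filelist&generation=5"
```

A pull replica periodically checks the leader's index version and, when it changes, fetches the new segment files reported by the file list.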

On Tue, Jan 5, 2021 at 10:29 PM Abhishek Mishra 
wrote:

> I want to know how a pull replica actually replicates from the leader. Does
> it internally use an admin API to get data from the leader in batches?
>
> Regards,
> Abhishek
>


"Failed to reserve shared memory."

2021-01-06 Thread TK Solr
My client is seeing sudden deaths of Solr 8.3.1: Solr stops responding
suddenly and they have to restart it.
(It is not clear whether the Solr/Jetty process was dead, or alive but not
responding. No OOM log was found.)


In the Solr start up log, these three error messages were found:

OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 1)
OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 12)
OpenJDK 64-Bit Server VM warning: Failed to reserve shared memory. (error = 12)

I am wondering if anyone has seen these errors.


I found this article

https://stackoverflow.com/questions/45968433/java-hotspottm-64-bit-server-vm-warning-failed-to-reserve-shared-memory-er

which suggests removing the JVM option -XX:+UseLargePages, which is added by
the bin/solr script if GC_TUNE is not defined. Would that be a good idea? I'm
not quite sure what kind of variable GC_TUNE is. It is used as in:


  if [ -z ${GC_TUNE+x} ]; then
...

    '-XX:+AlwaysPreTouch')
  else
    GC_TUNE=($GC_TUNE)
  fi

I'm not familiar with the ${GC_TUNE+x} and ($GC_TUNE) syntax. Is this a
special kind of environment variable?
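For what it's worth, neither is special to the environment; both are plain bash. A minimal illustration of the two constructs (the flag values below are just placeholders):

```shell
# ${VAR+x} expands to "x" if VAR is set (even to the empty string),
# and to nothing if VAR is unset, so [ -z ${VAR+x} ] means "is VAR unset?".
unset GC_TUNE
if [ -z ${GC_TUNE+x} ]; then
  echo "GC_TUNE is unset"        # this branch runs
fi

GC_TUNE=""
if [ -z ${GC_TUNE+x} ]; then
  echo "unset"
else
  echo "GC_TUNE is set (possibly empty)"   # this branch runs
fi

# GC_TUNE=($GC_TUNE) re-parses a space-separated string into a bash
# array, one element per word, so later code can expand it safely.
GC_TUNE="-XX:+UseG1GC -XX:+AlwaysPreTouch"
GC_TUNE=($GC_TUNE)
echo "${#GC_TUNE[@]}"   # number of array elements
echo "${GC_TUNE[1]}"    # second element
```

So bin/solr fills in its default GC flags only when GC_TUNE is unset, and otherwise splits whatever string you exported into an array of individual JVM options.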



TK





Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1

2021-01-06 Thread dinesh naik
Thanks Hoss,
Yes, I was making the change to solr.xml in the wrong directory earlier.

Also as you said:

: You need to update EVERY solrconfig.xml that the JVM is loading for this
to
: actually work.

that has not been true for a while, see SOLR-13336 / SOLR-10921 ...

I validated this and it's working as expected. We don't need to update
every solrconfig.xml.

The value in solr.xml is the global limit; if maxBooleanClauses for any
collection's solrconfig.xml exceeds the limit specified in solr.xml, then
we get the exception.
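For anyone finding this thread later, a sketch of the two settings side by side (the value 2048 is just the example from this thread, and the exact solr.xml element shape is from my reading of the 8.x reference guide): the node-wide hard limit lives in solr.xml, the per-collection soft limit in solrconfig.xml.

```
<!-- solr.xml: hard, node-wide upper limit (applies to all queries,
     including internally expanded ones) -->
<solr>
  <int name="maxBooleanClauses">${solr.max.booleanClauses:2048}</int>
</solr>

<!-- solrconfig.xml: per-collection soft limit for user-supplied
     boolean queries -->
<config>
  <query>
    <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
  </query>
</config>
```

Both default through the same `solr.max.booleanClauses` system property here, which keeps the two limits in sync when you raise them.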

Thanks for replying.

On Wed, Jan 6, 2021 at 10:57 PM dinesh naik 
wrote:

> Thanks Shawn,
>
> This entry <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
> in solr.xml was introduced only in the Solr 8.x line and was not present
> in 7.6.
>
> We have this in solrconfig.xml in 8.4.1:
> <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
> I was updating the solr.xml in the installation directory and not in the
> installed data directory, hence the change was not reflected.
> After updating the correct solr.xml and restarting the Solr nodes, the new
> value works as expected.
>
> On Wed, Jan 6, 2021 at 10:34 PM Chris Hostetter 
> wrote:
>
>>
>> : You need to update EVERY solrconfig.xml that the JVM is loading for
>> this to
>> : actually work.
>>
>> that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
>>
>> : > 2. updated  solr.xml :
>> : > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
>> :
>> : I don't think it's currently possible to set the value with solr.xml.
>>
>> Not only is it possible, it's necessary -- the value in solr.xml acts as
>> a hard upper limit (and affects all queries, even internally expanded
>> queries) on the "soft limit" in solrconfig.xml (that only affects
>> explicitly supplied boolean queries from users)
>>
>> As to the original question...
>>
>> > 2021-01-05 14:03:59.603 WARN  (qtp1545077099-27)
>> x:col1_shard1_replica_n3
>> > o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
>> > than global limit of 1024 and will have no effect
>>
>> I attempted to reproduce this with 8.4.1 and did not see the problem you
>> are describing.
>>
>> Are you 100% certain you are updating the correct solr.xml file?  If you
>> add some non-XML gibberish to the solr.xml you are editing, does the Solr
>> node fail to start up?
>>
>> Remember that when using SolrCloud, solr will try to load solr.xml from
>> zk
>> first, and only look on local disk if it can't be found in ZK ... look
>> for
>> log messages like "solr.xml found in ZooKeeper. Loading..." vs "Loading
>> solr.xml from SolrHome (not found in ZooKeeper)"
>>
>>
>>
>>
>> -Hoss
>> http://www.lucidworks.com/
>>
>
>
> --
> Best Regards,
> Dinesh Naik
>


-- 
Best Regards,
Dinesh Naik


Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1

2021-01-06 Thread dinesh naik
Thanks Shawn,

This entry <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
in solr.xml was introduced only in the Solr 8.x line and was not present in 7.6.

We have this in solrconfig.xml in 8.4.1:
<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
I was updating the solr.xml in the installation directory and not in the
installed data directory, hence the change was not reflected.
After updating the correct solr.xml and restarting the Solr nodes, the new
value works as expected.

On Wed, Jan 6, 2021 at 10:34 PM Chris Hostetter 
wrote:

>
> : You need to update EVERY solrconfig.xml that the JVM is loading for this
> to
> : actually work.
>
> that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
>
> : > 2. updated  solr.xml :
> : > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
> :
> : I don't think it's currently possible to set the value with solr.xml.
>
> Not only is it possible, it's necessary -- the value in solr.xml acts as
> a hard upper limit (and affects all queries, even internally expanded
> queries) on the "soft limit" in solrconfig.xml (that only affects
> explicitly supplied boolean queries from users)
>
> As to the original question...
>
> > 2021-01-05 14:03:59.603 WARN  (qtp1545077099-27) x:col1_shard1_replica_n3
> > o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
> > than global limit of 1024 and will have no effect
>
> I attempted to reproduce this with 8.4.1 and did not see the problem you
> are describing.
>
> Are you 100% certain you are updating the correct solr.xml file?  If you
> add some non-XML gibberish to the solr.xml you are editing, does the Solr
> node fail to start up?
>
> Remember that when using SolrCloud, solr will try to load solr.xml from zk
> first, and only look on local disk if it can't be found in ZK ... look for
> log messages like "solr.xml found in ZooKeeper. Loading..." vs "Loading
> solr.xml from SolrHome (not found in ZooKeeper)"
>
>
>
>
> -Hoss
> http://www.lucidworks.com/
>


-- 
Best Regards,
Dinesh Naik


Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1

2021-01-06 Thread Chris Hostetter


: You need to update EVERY solrconfig.xml that the JVM is loading for this to
: actually work.

that has not been true for a while, see SOLR-13336 / SOLR-10921 ...

: > 2. updated  solr.xml :
: > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
: 
: I don't think it's currently possible to set the value with solr.xml.

Not only is it possible, it's necessary -- the value in solr.xml acts as
a hard upper limit (and affects all queries, even internally expanded 
queries) on the "soft limit" in solrconfig.xml (that only affects 
explicitly supplied boolean queries from users)

As to the original question...

> 2021-01-05 14:03:59.603 WARN  (qtp1545077099-27) x:col1_shard1_replica_n3
> o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
> than global limit of 1024 and will have no effect

I attempted to reproduce this with 8.4.1 and did not see the problem you
are describing.

Are you 100% certain you are updating the correct solr.xml file?  If you
add some non-XML gibberish to the solr.xml you are editing, does the Solr
node fail to start up?

Remember that when using SolrCloud, solr will try to load solr.xml from zk 
first, and only look on local disk if it can't be found in ZK ... look for 
log messages like "solr.xml found in ZooKeeper. Loading..." vs "Loading 
solr.xml from SolrHome (not found in ZooKeeper)"




-Hoss
http://www.lucidworks.com/


Re: Solr background merge in case of pull replicas

2021-01-06 Thread Ritvik Sharma
Hi
It may be caused by rebalancing; querying may not be available on the
tlog replicas at that moment.
You can check the tlog logs and the pull replica logs when you are facing
this issue.

May I know which version of Solr you are using, and what is the ratio of
tlog to pull nodes?

On Wed, 6 Jan 2021 at 2:46 PM, kshitij tyagi  wrote:

> Hi,
>
> I am having a tlog + pull replica SolrCloud setup.
>
> 1. I am observing that whenever a background segment merge is triggered
> automatically, I see high response times on all of my Solr nodes.
>
> As far as I know, merges happen on the tlog replicas, hence the increased
> response time; I am not able to understand why my pull replicas are
> affected during background index merges.
>
> Can someone give some insights on this? What is affecting my pull replicas
> during index merges?
>
> Regards,
> kshitij
>


Re: Possible bug on LTR when using solr 8.6.3 - index out of bounds DisiPriorityQueue.add(DisiPriorityQueue.java:102)

2021-01-06 Thread Florin Babes
Hello, Christine and thank you for your help!

So, we've investigated further based on your suggestions and have the
following things to note:

Reproducibility: We can reproduce the same queries on multiple runs, with
the same error.
Data as a factor: Our setup is single-sharded, so we can't investigate
further on this.
Feature vs. Model: We've also tried a dummy LinearModel with only two
features and the problem still occurs.
Identification of the troublesome feature(s): We've narrowed our model to
only two features and the problem always occurs (for some queries, not all)
when we have a feature with mm=1 and a feature with mm>=3. The problem
also occurs when we only do feature extraction, and it seems to always
occur on the feature with the larger mm. The errors seem to be related
to the size of the head DisiPriorityQueue created here:
https://github.com/apache/lucene-solr/blob/branch_8_6/lucene/core/src/java/org/apache/lucene/search/MinShouldMatchSumScorer.java#L107
as the error changes as we change the mm for the second feature:

1 feature with mm=1 and one with mm=3 -> Index 4 out of bounds for length 4
1 feature with mm=1 and one with mm=5 -> Index 2 out of bounds for length 2

You can find below the dummy feature-store.

[
  {
    "store": "dummystore",
    "name": "similarity_name_mm_1",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": {
      "q": "{!dismax qf=name mm=1}${term}"
    }
  },
  {
    "store": "dummystore",
    "name": "similarity_names_mm_3",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": {
      "q": "{!dismax qf=name mm=3}${term}"
    }
  }
]

The problem starts occurring in Solr 8.6.0: we tried multiple versions
below and above 8.6, and the problem first appeared in 8.6.0. We believe it
is caused by the following changes:
https://issues.apache.org/jira/browse/SOLR-14364, as they are the only major
LTR-related changes introduced in Solr 8.6.0.

I've created a Solr JIRA bug/issue ticket here:
https://issues.apache.org/jira/browse/SOLR-15071

Thank you for your help!

On Tue, Jan 5, 2021 at 19:40, Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> wrote:

> Hello Florin Babes,
>
> Thanks for this detailed report! I agree that the
> ArrayIndexOutOfBoundsException you are experiencing during SolrFeature
> computation sounds like a bug; would you like to open a SOLR JIRA issue
> for it?
>
> Here are some investigative ideas I would have, in no particular order:
>
> Reproducibility: if a failed query is run again, does it also fail second
> time around (when some caches may be used)?
>
> Data as a factor: is your setup single-sharded or multi-sharded? in a
> multi-sharded setup if the same query fails on some shards but succeeds on
> others (and all shards have some documents that match the query) then this
> could support a theory that a certain combination of data and features
> leads to the exception.
>
> Feature vs. Model: you mention use of a MultipleAdditiveTrees model, if
> the same features are used in a LinearModel instead, do the same errors
> happen? or if no model is used but only feature extraction is done, does
> that give errors?
>
> Identification of the troublesome feature(s): narrowing down to a single
> feature or a small combination of features could make it easier to figure
> out the problem. assuming the existing logging doesn't identify the
> features, replacing the org.apache.solr.ltr.feature.SolrFeature with a
> com.mycompany.solr.ltr.feature.MySolrFeature containing instrumentation
> could provide insights e.g. the existing code [2] logs feature names for
> UnsupportedOperationException and if it also caught
> ArrayIndexOutOfBoundsException then it could log the feature name before
> rethrowing the exception.
>
> Based on your detail below and this [3] conditional in the code probably
> at least two features will be necessary to hit the issue, but for
> investigative purposes two features could still be simplified potentially
> to effectively one feature e.g. if one feature is a SolrFeature and the
> other is a ValueFeature or if featureA and featureB are both SolrFeature
> features with _identical_ parameters but different names.
>
> Hope that helps.
>
> Regards,
>
> Christine
>
> [1]
> https://lucene.apache.org/solr/guide/8_6/learning-to-rank.html#extracting-features
> [2]
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.6.3/solr/contrib/ltr/src/java/org/apache/solr/ltr/feature/SolrFeature.java#L243
> [3]
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.6.3/solr/contrib/ltr/src/java/org/apache/solr/ltr/LTRScoringQuery.java#L520-L525
>
> From: solr-user@lucene.apache.org At: 01/04/21 17:31:44To:
> solr-user@lucene.apache.org
> Subject: Possible bug on LTR when using solr 8.6.3 - index out of bounds
> DisiPriorityQueue.add(DisiPriorityQueue.java:102)
>
> Hello,
> We are trying to update Solr 

Proximity Searches with Phrases

2021-01-06 Thread Mark R
Use Case: Is it possible to perform a proximity search using phrases? For
example: "phrase 1" within 10 words of "phrase 2".

SOLR Version: 8.4.1

Query using: "(\"word1 word2\"(\"word3 word4\")"~10

While this returns results, it seems to be evaluating the individual words
against each other (word1 with word2, word1 with word3, word2 with word3)
rather than phrase1 with phrase2.
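One hedged suggestion (not something the standard phrase slop syntax supports directly, and the field name here is hypothetical): Solr's surround query parser provides nestable ordered (w) and unordered (n) proximity operators, so phrase-to-phrase proximity can be expressed roughly as:

```
q={!surround df=text}10n(w(word1, word2), w(word3, word4))
```

Here w(word1, word2) matches the two words adjacent and in order (the "phrase"), and 10n(...) requires the two sub-matches to occur within a distance of 10 of each other, in either order. Note the surround parser does not run analysis on the terms, so stop-word and stemming behavior differs from regular field queries.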

Are stop words removed when querying? I assume yes?

Thanks in advance

Mark



Re: No Live server exception: Solr Cloud 6.6.6

2021-01-06 Thread Ritvik Sharma
Thanks for the reply Eric !

I have tried multiple versions of SolrCloud: 8.3, 8.6.0, 8.6.2. Every
version has some issues with either indexing or querying. For example, with
8.3, indexing throws the error below:
request: http://X:8983/solr/searchcollection_shard2_replica_t103/

Remote error message: ERROR: [doc=] unknown field '314257s_seourls'
at
org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:125)
at
org.apache.solr.client.solrj.impl.CloudSolrClient.getRouteException(CloudSolrClient.java:46)
at
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.directUpdate(BaseCloudSolrClient.java:549)
at
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1037)
at
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at
org.springframework.data.solr.core.SolrTemplate.lambda$saveBeans$3(SolrTemplate.java:227)
at
org.springframework.data.solr.core.SolrTemplate.execute(SolrTemplate.java:167)
... 29 common frames omitted
Caused by:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at http://x:8983/solr/searchcollection_shard1_replica_t105:
Async
exception during distributed update: Error from server at
http://yy:8983/solr/searchcollection_shard2_replica_t103/: null

I googled and googled (lol), and someone advised upgrading the version. But
there are many other issues with querying as well.
Recently I tried Solr 6.6.6, but it seems there are model-store issues along
with other errors.

Please advise which stable SolrCloud version can be used!


On Wed, 6 Jan 2021 at 18:09, Eric Pugh 
wrote:

> I think you are going in the wrong direction in your upgrade path…. While
> it may *seem* simpler to go from master/slave 6.6.6 to SolrCloud 6.6.6, you
> are much better off just going from master/slave 6.6.6 to SolrCloud on 8.7
> (or whatever is the latest).
>
> SolrCloud has evolved since Solr 6 by two MAJOR versions, and is much more
> robust with so many fixes.  Today, I suspect very few folks who know the
> innards of Solr are actually still familiar with the 6.x line!
>
> This is also a really good opportunity to relook at your schema as well,
> and make sure you are using all the features in the best way possible.
>
>
>
>
> > On Jan 6, 2021, at 1:40 AM, Ritvik Sharma  wrote:
> >
> > Hi Guys,
> >
> > Any update.
> >
> > On Tue, 5 Jan 2021 at 18:06, Ritvik Sharma 
> wrote:
> >
> >> Hi Guys
> >>
> >> Happy New Year.
> >>
> >> We are trying to move to SolrCloud 6.6.6, as we are using the same
> >> version in a master-slave architecture.
> >>
> >> solr cloud: 6.6.6
> >> zk: 3.4.10
> >>
> >> We are facing a few errors:
> >> 1. Every time we upload a model-store using a curl -XPUT command, it
> >> shows up at that time, but after reloading the collection it is removed
> >> automatically.
> >>
> >> 2. While querying the data, we are getting the exception below:
> >>
> >> "msg": "org.apache.solr.client.solrj.SolrServerException: No live
> >> SolrServers available to handle this request:[
> >> http://x.x.x.x:8983/solr/solrcollection_shard1_replica2,
> >> http://x.x.x.y:8983/solr/solrcollection_shard1_replica1]","trace":
> "org.apache.solr.common.SolrException:
> >> org.apache.solr.client.solrj.SolrServerException: No live SolrServers
> >> available to handle this request:[
> >> http://x.x.x.x:8983/solr/solrcollection_shard1_replica2,
> >> http://x.x.x.y:8983/solr/solrcollection_shard1_replica1]\n\tat
> >>
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)\n\tat
> >>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
> >> org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)\n\tat
> >>
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:724)\n\tat
> >> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:530)\n\tat
> >>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
> >>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
> >>
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
> >>
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
> >>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
> >>
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
> >>
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
> >>
> >>
> >>
> >>
> 

Re: No Live server exception: Solr Cloud 6.6.6

2021-01-06 Thread Eric Pugh
I think you are going in the wrong direction in your upgrade path…. While it 
may *seem* simpler to go from master/slave 6.6.6 to SolrCloud 6.6.6, you are 
much better off just going from master/slave 6.6.6 to SolrCloud on 8.7 (or 
whatever is the latest).

SolrCloud has evolved since Solr 6 by two MAJOR versions, and is much more 
robust with so many fixes.  Today, I suspect very few folks who know the 
innards of Solr are actually still familiar with the 6.x line!

This is also a really good opportunity to relook at your schema as well, and 
make sure you are using all the features in the best way possible.




> On Jan 6, 2021, at 1:40 AM, Ritvik Sharma  wrote:
> 
> Hi Guys,
> 
> Any update.
> 
> On Tue, 5 Jan 2021 at 18:06, Ritvik Sharma  wrote:
> 
>> Hi Guys
>> 
>> Happy New Year.
>> 
>> We are trying to move to SolrCloud 6.6.6, as we are using the same
>> version in a master-slave architecture.
>>
>> solr cloud: 6.6.6
>> zk: 3.4.10
>>
>> We are facing a few errors:
>> 1. Every time we upload a model-store using a curl -XPUT command, it
>> shows up at that time, but after reloading the collection it is removed
>> automatically.
>>
>> 2. While querying the data, we are getting the exception below:
>> 
>> "msg": "org.apache.solr.client.solrj.SolrServerException: No live
>> SolrServers available to handle this request:[
>> http://x.x.x.x:8983/solr/solrcollection_shard1_replica2,
>> http://x.x.x.y:8983/solr/solrcollection_shard1_replica1]","trace": 
>> "org.apache.solr.common.SolrException:
>> org.apache.solr.client.solrj.SolrServerException: No live SolrServers
>> available to handle this request:[
>> http://x.x.x.x:8983/solr/solrcollection_shard1_replica2,
>> http://x.x.x.y:8983/solr/solrcollection_shard1_replica1]\n\tat
>> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)\n\tat
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat
>> org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)\n\tat
>> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:724)\n\tat
>> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:530)\n\tat
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat
>> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>> 
>> 
>> 
>> 
>> 

___
Eric Pugh | Founder & CEO | OpenSource Connections, LLC | 434.466.1467 |
http://www.opensourceconnections.com | My Free/Busy
Co-Author: Apache Solr Enterprise Search Server, 3rd Ed

This e-mail and all contents, including attachments, is considered to be 
Company Confidential unless explicitly stated otherwise, regardless of whether 
attachments are marked as such.



Re: Identifying open segments.

2021-01-06 Thread Jacob Ward
Thanks Ilan.

Yes, I'm working on a process for distributing and backing up indexes
externally. I discovered the beauty of the snapshot API, which does exactly
what I want: it temporarily protects closed segments and returns a list of
all files required to restore that snapshot.
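For the archives, a rough sketch of that flow via the core admin API (host, core name, and snapshot name below are made up):

```
# Create a named snapshot: Solr preserves the files of the current commit point
curl "http://localhost:8983/solr/admin/cores?action=CREATESNAPSHOT&core=mycore&commitName=backup1"

# List existing snapshots for the core (generation, index directory, etc.)
curl "http://localhost:8983/solr/admin/cores?action=LISTSNAPSHOTS&core=mycore"

# Delete the snapshot once the external copy is done, so Solr can reclaim
# old segments during future merges
curl "http://localhost:8983/solr/admin/cores?action=DELETESNAPSHOT&core=mycore&commitName=backup1"
```

While the snapshot exists, the referenced segment files are safe to copy even though the Solr JVM keeps running, which addresses the deletion risk Ilan mentions below.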

On Tue, 5 Jan 2021 at 23:11, Ilan Ginzburg  wrote:

> Are you trying to copy the index by an external process not running in
> the Solr JVM? I believe this is risky if the Solr JVM is running at
> the same time. For example segments can be deleted by Solr.
> There might also be closed segments that you do not need but that are
> still on the disk (no longer part of the current commit point).
>
> You could look at backup options in Solr, I believe they basically do
> what you need (I'm not familiar with what's available but I'm sure you
> can find the info).
>
> Ilan
>
>
> On Tue, Jan 5, 2021 at 12:46 PM Jacob Ward  wrote:
> >
> > Hello,
> >
> > I am looking for a way to identify the open segment files in a lucene
> > index, so that I can export only the closed segments (and the segmentsN
> > file). My current ideas are:
> >
> > - Ignore any segment files newer than the segmentsN file.
> > OR
> > - Open the segmentsN file using Lucene core's SegmentInfos class (which I
> > presume would allow me to identify which are the closed segments).
> >
> > Could anyone provide suggestions on how to do this best? Ideally I'd like
> > to do this without the SegmentInfos class if there is a suitable method.
> >
> > Thanks.
> >
> > --
> >
> > Jacob Ward|Graduate Data Infrastructure Engineer
> >
> > jw...@brandwatch.com
> >
> >
> > NEW YORK   | BOSTON   | BRIGHTON   | LONDON   | BERLIN |   STUTTGART |
> > PARIS   | SINGAPORE | SYDNEY
>


-- 

Jacob Ward|Graduate Data Infrastructure Engineer

jw...@brandwatch.com


NEW YORK   | BOSTON   | BRIGHTON   | LONDON   | BERLIN |   STUTTGART |
PARIS   | SINGAPORE | SYDNEY


Solr background merge in case of pull replicas

2021-01-06 Thread kshitij tyagi
Hi,

I am having a  tlog + pull replica solr cloud setup.

1. I am observing that whenever background segment merge is triggered
automatically, i see high response time on all of my solr nodes.

As far as I know merges must be happening on tlog and hence the increase
response time, i am not able to understand that why my pull replicas are
affected during background index merges.

Can someone give some insights on this? What is affecting my pull replicas
during index merges?

Regards,
kshitij