Help needed with Solrcloud error messages

2019-02-04 Thread Webster Homer
We have a number of collections in a SolrCloud.

The cloud has 2 shards, each with 2 replicas, across 4 nodes. On one of the
nodes I am seeing a lot of errors in the log like this:
2019-02-04 20:27:11.831 ERROR (qtp1595212853-88527) [c:sial-catalog-product 
s:shard1 r:core_node4 x:sial-catalog-product_shard1_replica2] 
o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error reading 
document with docId 417762
2019-02-04 20:29:49.779 ERROR (qtp1595212853-87296) [c:sial-catalog-product 
s:shard1 r:core_node4 x:sial-catalog-product_shard1_replica2] 
o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error reading 
document with docId 417676
2019-02-04 20:23:47.505 ERROR (qtp1595212853-87538) [c:sial-catalog-product 
s:shard1 r:core_node4 x:sial-catalog-product_shard1_replica2] 
o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error reading 
document with docId 414871

There are many more than these three. What does this mean?

On the same node I also see problems with 2 other collections:
ehs-catalog-qmdoc_shard1_replica2: 
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
Error opening new searcher
sial-catalog-category-180721_shard2_replica_n4: 
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
Error opening new searcher

Yet another replica on this node is down.

What could cause the "error reading document" problems? Why is there a problem
opening a new searcher on 2 unrelated collections which just happen to be on
the same node? How do I go about diagnosing the problems?

We've been seeing a lot of problems with SolrCloud.

We are on Solr 7.2.




Re: Why solr sends a request for a metrics every minute?

2019-02-04 Thread levtannen
Thank you, Jan.
Now that I know what it is, I probably will not try to suppress the metrics
collection itself, but will instead suppress the log message in log4j2.xml
using an appropriate filter. This way I will still have the metrics in case I
figure out how to use them, and they will not clog the log. I hope this will
not cause performance degradation.
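One way such a log4j2 filter might look, assuming the stock Solr log4j2.xml layout (the logger target and the appender name MainLogFile are assumptions; match them to your own configuration):

```xml
<!-- Sketch: drop the per-minute /admin/metrics request lines while keeping
     other HttpSolrCall logging. Adjust the AppenderRef to your own setup. -->
<Logger name="org.apache.solr.servlet.HttpSolrCall" level="info" additivity="false">
  <RegexFilter regex=".*path=/admin/metrics.*" onMatch="DENY" onMismatch="ACCEPT"/>
  <AppenderRef ref="MainLogFile"/>
</Logger>
```

The filter denies only messages mentioning path=/admin/metrics, so ordinary request logging is unaffected.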
 



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: BBox question

2019-02-04 Thread Scott Stults
Hi Fernando,

Solr (Lucene) uses a tree-based index structure called a BKD tree. There's a
good write-up of the approach over on the Elasticsearch blog:

https://www.elastic.co/blog/lucene-points-6.0

and a cool animation of it in action on Youtube:

https://www.youtube.com/watch?v=x9WnzOvsGKs

The blog write-up and the Jira issue discuss performance versus other approaches.


k/r,
Scott

On Mon, Feb 4, 2019 at 1:17 PM Fernando Otero 
wrote:

> Hey guys,
> I was wondering whether BBoxes use filters (i.e., go through all
> documents) or use the index to do a range filter.
> The docs make it clear that the performance is better than geodist, but I
> couldn't find implementation details. I'm not sure if the performance comes
> from doing fewer comparisons, simpler calculations, or both (which I assume
> is the case).
>
> Thanks!
>
> --
>
> Fernando Otero
>
> Sr Engineering Manager, Panamera
>
> Buenos Aires - Argentina
>
> Email:  fernando.ot...@olx.com
>


-- 
Scott Stults | Founder & Solutions Architect | OpenSource Connections, LLC
| 434.409.2780
http://www.opensourceconnections.com


Re: by: java.util.zip.DataFormatException: invalid distance too far back reported by Solr API

2019-02-04 Thread Monique Monteiro
Hi all,

In fact, moving the parsing to the client solved the problem!

Thanks!
Monique

On Thu, Jan 31, 2019 at 8:25 AM Jan Høydahl  wrote:

> Hi
>
> This is Apache Tika failing to parse a zip file, or possibly a
> zip-formatted office file.
> You have to post the full stack trace (which you'll find in the solr.log
> on the server side) if you want help locating the source of the issue;
> you may also be able to configure Tika.
>
> Have you tried to specify ignoreTikaException=true on the request? See
> https://lucene.apache.org/solr/guide/7_6/uploading-data-with-solr-cell-using-apache-tika.html
>
> At the end of the day it would be a much better architecture to parse the
> PDFs using a plain standalone TikaServer and then construct a Solr document
> in your Python code, which is then posted to Solr. The reason is that you
> get much better control over parse errors and over how metadata maps to your
> schema fields. Also, you don't want to overload Solr with all this work; it
> can even crash the whole Solr server if some parser crashes or gets stuck in
> an infinite loop.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 30. jan. 2019 kl. 20:49 skrev Monique Monteiro  >:
> >
> > Hi all,
> >
> > I'm writing a Python routine to upload thousands of PDF files to Solr,
> and
> > after trying to upload some files, Solr reports the following error in a
> > HTTP 500 response:
> >
> > "by: java.util.zip.DataFormatException: invalid distance too far back"
> >
> > Does anyone have any idea about how to overcome this?
> >
> > Thanks in advance,
> > Monique Monteiro
>
>

-- 
Monique Monteiro
Twitter: http://twitter.com/monilouise
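As a footnote, the client-side pipeline Jan describes (a standalone Tika server extracts the text, Python builds the Solr document) might be sketched roughly like this; the URLs, collection name, and field names are placeholders, not from the original thread:

```python
import json
import urllib.request

TIKA_URL = "http://localhost:9998/tika"  # assumed standalone Tika server
SOLR_URL = "http://localhost:8983/solr/docs/update/json/docs?commit=true"  # hypothetical collection

def extract_text(pdf_path: str) -> str:
    # Ask the Tika server for plain text; a parse failure raises here and
    # affects only this one file, never the Solr process.
    with open(pdf_path, "rb") as f:
        req = urllib.request.Request(TIKA_URL, data=f.read(), method="PUT",
                                     headers={"Accept": "text/plain"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

def build_doc(doc_id: str, text: str) -> dict:
    # Construct the Solr document in the client, mapping fields explicitly.
    return {"id": doc_id, "content": text}

def index_pdf(doc_id: str, pdf_path: str) -> None:
    # Post the finished document to Solr as JSON.
    body = json.dumps(build_doc(doc_id, extract_text(pdf_path))).encode("utf-8")
    req = urllib.request.Request(SOLR_URL, data=body, method="POST",
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req).close()
```

Keeping extraction in the client means one bad PDF fails one request rather than destabilizing the Solr server.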


Re: Why solr sends a request for a metrics every minute?

2019-02-04 Thread Jan Høydahl
Yes, it is the Metrics History Handler that does this, to collect some key
metrics in a central place.
Read more here: https://lucene.apache.org/solr/guide/7_6/metrics-history.html
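If the extra traffic is ever unwanted altogether, the metrics-history documentation describes disabling the handler in solr.xml; a sketch (hedged, based on that guide page):

```xml
<!-- Sketch: turn off metrics history collection entirely in solr.xml. -->
<metrics>
  <history>
    <bool name="enable">false</bool>
  </history>
</metrics>
```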

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 4. feb. 2019 kl. 18:45 skrev levtannen :
> 
> Hello Solr community, 
> 
> My SolrCloud system consists of 3 machines, each running a ZooKeeper and a
> Solr server. It manages about 200 collections with 1 shard each.
> When I run it, I see that every minute somebody sends a request for some
> metrics to my system. Because nobody else can send requests to my development
> system, I assume that Solr sends these requests to itself.
> Could anybody explain to me what is going on and how I can control such
> requests?
> Below is an example of the log messages produced by these requests.
> 
> Regards
> Lev Tannen
> 
> Message example:
> 2019-02-04 16:39:13.487 INFO  (qtp817348612-16) [   ] o.a.s.s.HttpSolrCall
> [admin] webapp=null path=/admin/metrics
> params={wt=javabin=2=solr.core.OKN-A-documents.shard1.replica_n4:QUERY./select.requests=solr.core.CA5-
>
> [remainder of the long /admin/metrics parameter list trimmed; the full
> example appears in the original message below]
> 

BBox question

2019-02-04 Thread Fernando Otero
Hey guys,
  I was wondering whether BBoxes use filters (i.e., go through all
documents) or use the index to do a range filter.
The docs make it clear that the performance is better than geodist, but I
couldn't find implementation details. I'm not sure if the performance comes
from doing fewer comparisons, simpler calculations, or both (which I assume
is the case).

Thanks!

-- 

Fernando Otero

Sr Engineering Manager, Panamera

Buenos Aires - Argentina

Email:  fernando.ot...@olx.com


Why solr sends a request for a metrics every minute?

2019-02-04 Thread levtannen
Hello Solr community, 

My SolrCloud system consists of 3 machines, each running a ZooKeeper and a
Solr server. It manages about 200 collections with 1 shard each.
When I run it, I see that every minute somebody sends a request for some
metrics to my system. Because nobody else can send requests to my development
system, I assume that Solr sends these requests to itself.
Could anybody explain to me what is going on and how I can control such
requests?
Below is an example of the log messages produced by these requests.

Regards
Lev Tannen

Message example:
2019-02-04 16:39:13.487 INFO  (qtp817348612-16) [   ] o.a.s.s.HttpSolrCall
[admin] webapp=null path=/admin/metrics
params={wt=javabin=2=solr.core.OKN-A-documents.shard1.replica_n4:QUERY./select.requests=solr.core.CA5-
   
[remainder of the long /admin/metrics parameter list trimmed; it continues
with one QUERY, UPDATE, or INDEX metric key per replica]

Solr SSL encryption degrades Solr performance

2019-02-04 Thread Anchal Sharma2


Hi All,

We recently enabled SSL on Solr, but afterwards our application's
performance degraded significantly: the time for the source application to
fetch a record from Solr increased from approximately 4 ms to 200 ms (for a
single record). This amounts to a lot of time when multiple calls are made
to Solr.

Has anyone experienced this? Please share if you have any suggestions.

Thanks & Regards,
-
Anchal Sharma


LFUCache

2019-02-04 Thread Markus Jelsma
Hello,

Thanks to SOLR-12743 (one of our collections can't use FastLRUCache) we are
considering LFUCache instead. But there is SOLR-3393 as well, claiming the
current implementation is inefficient.

But ConcurrentLRUCache and ConcurrentLFUCache both use ConcurrentHashMap under
the hood, and the get() code is practically identical. So based on the code, I
would think that, despite LFUCache being inefficient, it is neither slower nor
faster than FastLRUCache for get(), right?

Or am I missing something obvious here?

Thanks,
Markus

https://issues.apache.org/jira/browse/SOLR-12743
https://issues.apache.org/jira/browse/SOLR-3393
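For reference, trying LFUCache is a one-line swap in solrconfig.xml; a sketch (the cache sizes are illustrative, not from the original post):

```xml
<!-- Sketch: switching the filter cache implementation to LFUCache. -->
<filterCache class="solr.LFUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```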


Ignore accent in a request

2019-02-04 Thread SAUNIER Maxence
Hello,

How can I ignore accents in the query results?

Request : 
http://*:8983/solr/***/select?defType=dismax=je+suis+avarié=title%5e20+subject%5e15+category%5e1+content%5e0.5=757

I want to get docs with both avarié and avarie.

I have added this to my schema:

  {
"name": "string",
"positionIncrementGap": "100",
"analyzer": {
  "filters": [
{
  "class": "solr.LowerCaseFilterFactory"
},
{
  "class": "solr.ASCIIFoldingFilterFactory"
},
{
  "class": "solr.EdgeNGramFilterFactory",
  "minGramSize": "3",
  "maxGramSize": "50"
}
  ],
  "tokenizer": {
"class": "solr.KeywordTokenizerFactory"
  }
},
"stored": true,
"indexed": true,
"sortMissingLast": true,
"class": "solr.TextField"
  },

But it is not working.

Thanks.
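For comparison, a field type that folds accents at both index and query time might look like this in managed-schema XML (the type name text_folded and the tokenizer choice are illustrative, not from the original post; accent folding only works when the same chain applies to both the indexed terms and the query terms):

```xml
<!-- Sketch: accent-insensitive text type so "avarié" and "avarie" match. -->
<fieldType name="text_folded" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```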


Re: Asynchronous Calls to Backup/Restore Collections ignoring errors

2019-02-04 Thread Jason Gerlowski
Hi Steffen,

There are a few "known issues" in this area.  Probably most relevant
is SOLR-6595, which covers a few error-reporting issues for
"collection-admin" operations.  I don't think we've gotten any reports
yet of success/failure determination being broken for asynchronous
operations, but that's not too surprising given my understanding of
how that bit of the code works.  So "yes", this is a known issue.
We've made some progress towards improving the situation, but there's
still work to be done.

As for workarounds, I can't think of any clever suggestions.  You
might be able to issue a query to the collection to see if it returns
any docs, or a particular number of expected docs.  But that may not
be possible, depending on what you meant by the collection being
"unusable" above.

Best,

Jason
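Jason's query-based workaround could be scripted roughly like this (a sketch; the base URL, collection name, and expected count are placeholders):

```python
import json
import urllib.request

def num_found(solr_response: dict) -> int:
    # Total hit count from a standard Solr JSON select response.
    return solr_response["response"]["numFound"]

def restore_looks_ok(base_url: str, collection: str, expected_docs: int) -> bool:
    # After an async RESTORE reports "completed", query the collection and
    # compare its doc count against the count recorded before the backup.
    url = f"{base_url}/{collection}/select?q=*:*&rows=0&wt=json"
    with urllib.request.urlopen(url) as resp:
        return num_found(json.load(resp)) == expected_docs
```

This only detects missing documents, so as Jason notes it may not help if the collection is broken in a way that still answers queries.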

On Thu, Jan 31, 2019 at 10:10 AM Steffen Moldenhauer
 wrote:
>
> Hi all,
>
> we are using the collection API backup and restore to transfer collections
> from a pre-prod to a production system. We are currently using Solr version
> 6.6.5.
> But sometimes that automated process fails and collections are left not
> working on the production system.
>
> It seems that the asynchronous backup and restore API calls do not report
> some errors/exceptions.
>
> I tried it with the solrcloud gettingstarted example:
>
> http://localhost:8983/solr/admin/collections?action=BACKUP&name=backup-gettingstarted&collection=gettingstarted&location=D:\solr_backup
>
> http://localhost:8983/solr/admin/collections?action=DELETE&name=gettingstarted
>
> Now I simulate an error by deleting something from the backup in the
> file system and try to restore the incomplete backup:
>
> http://localhost:8983/solr/admin/collections?action=RESTORE&name=backup-gettingstarted&collection=gettingstarted&location=D:\solr_backup&async=1000
>
> http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000
> <response><lst name="responseHeader"><int name="status">0</int><int
> name="QTime">2</int></lst><lst name="status"><str
> name="state">completed</str><str name="msg">found [1000] in completed
> tasks</str></lst></response>
>
> The status is completed but the collection is not usable.
>
> With a synchronous restore call I get:
>
> http://localhost:8983/solr/admin/collections?action=RESTORE&name=backup-gettingstarted&collection=gettingstarted&location=D:\solr_backup
> <response><lst name="responseHeader"><int name="status">500</int><int
> name="QTime">6456</int></lst><lst name="error"><str
> name="msg">org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Could not restore core</str><lst name="metadata"><str
> name="error-class">org.apache.solr.common.SolrException</str><str
> name="root-error-class">org.apache.solr.common.SolrException</str></lst><str
> name="trace">org.apache.solr.common.SolrException: Could not restore core
>at 
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:300)
>at 
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:237)
>at 
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:215)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:748)
>at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:729)
>at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:510)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> 

Re: AIX platform: Solr goes down with java.lang.OutOfMemoryError with Open JDK 11

2019-02-04 Thread Shawn Heisey

On 2/4/2019 5:53 AM, balu...@gmail.com wrote:

I am running solr 7.5.0 with Open JDK11 in AIX platform. When i trigger data
import operation , solr is going down with below error on AIX platform but,
the same thing works in RHEL platform.

The same solr 7.5.0 data import operation is success with JDK8 in same AIX
platform.


Java 11 is not qualified with any version of Solr yet.  We don't know 
whether it works or not.  Java 9 is known to work with Solr 7.x.  My 
recommendation here is to stick with Java 8 until we can find and fix 
any problems with 11.



*Error from solr.log  Solr 7.5.0 with Open JDK 11 on AIX platform:*

/*ERROR (coreContainerWorkExecutor-2-thread-1) [   ] o.a.s.c.CoreContainer
Error waiting for SolrCore to be loaded on startup
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830,
errno 11*/


This OutOfMemoryError is not actually due to memory.  Java is saying 
that it failed to create a thread.


Typically this is caused by the OS restricting the number of processes a 
user is allowed to start.  Sometimes the OS might treat threads 
differently than processes so a different limit might need to be 
increased ... I have no idea whether AIX behaves that way or not.


The fact that it works with Java 8 is a little odd.  Maybe Java 11 
itself creates more threads than 8 does.


Thanks,
Shawn
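The per-user process/thread limit Shawn describes can be inspected programmatically; a sketch (Linux semantics shown; AIX exposes its limits through its own ulimit settings, so treat this as illustrative):

```python
import resource

# RLIMIT_NPROC caps the number of processes (and, on Linux, threads) the
# current user may create -- the usual culprit behind
# "java.lang.OutOfMemoryError: Failed to create a thread".
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("process/thread limit: soft=%s hard=%s" % (soft, hard))
```

If the soft limit is low relative to the thread count Solr plus the JVM needs, raising it (rather than the heap) is the fix.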


AIX platform: Solr goes down with java.lang.OutOfMemoryError with Open JDK 11

2019-02-04 Thread balu...@gmail.com
Hi,

I am running Solr 7.5.0 with OpenJDK 11 on the AIX platform. When I trigger a
data import operation, Solr goes down with the error below on AIX, but
the same thing works on the RHEL platform.

The same Solr 7.5.0 data import operation succeeds with JDK 8 on the same AIX
platform.

*Error from solr.log  Solr 7.5.0 with Open JDK 11 on AIX platform:*

/*ERROR (coreContainerWorkExecutor-2-thread-1) [   ] o.a.s.c.CoreContainer
Error waiting for SolrCore to be loaded on startup
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830,
errno 11*/

I tried setting the Java heap to 8G (-Xms8g -Xmx8g), but I still face the
issue on the AIX platform.

Is anyone facing a similar issue?

Please help.

Thanks,
Bala



