Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Natarajan, Rajeswari
Thank you so much for the response. Below are the configs I have in solr.in.sh, 
and I followed the https://lucene.apache.org/solr/guide/8_5/enabling-ssl.html 
documentation.

# Enables HTTPS. It is implicitly true if you set SOLR_SSL_KEY_STORE. Use this
# config to enable the https module with custom jetty configuration.
SOLR_SSL_ENABLED=true
# Uncomment to set SSL-related system properties
# Be sure to update the paths to the correct keystore for your environment
SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.p12
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.p12
SOLR_SSL_TRUST_STORE_PASSWORD=secret
# Require clients to authenticate
SOLR_SSL_NEED_CLIENT_AUTH=false
# Enable clients to authenticate (but not require)
SOLR_SSL_WANT_CLIENT_AUTH=false
# SSL Certificates contain host/ip "peer name" information that is validated
# by default. Setting this to false can be useful to disable these checks
# when re-using a certificate on many hosts
SOLR_SSL_CHECK_PEER_NAME=true

In local, with the below certificate, it works:
---

keytool -list -keystore solr-ssl.keystore.p12
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry

solr-18, Jun 26, 2020, PrivateKeyEntry, 
Certificate fingerprint (SHA1): 
AB:F2:C8:84:E8:E7:A2:BF:2D:0D:2F:D3:95:4A:98:5B:2A:88:81:50
C02W48C6HTD6:solr-8.5.1 i843100$ keytool -list -v -keystore 
solr-ssl.keystore.p12
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: solr-18
Creation date: Jun 26, 2020
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=localhost, OU=Organizational Unit, O=Organization, L=Location, 
ST=State, C=Country
Issuer: CN=localhost, OU=Organizational Unit, O=Organization, L=Location, 
ST=State, C=Country
Serial number: 45a822c8
Valid from: Fri Jun 26 00:13:03 PDT 2020 until: Sun Nov 10 23:13:03 PST 2047
Certificate fingerprints:
 MD5:  0B:80:54:89:44:65:93:07:1F:81:88:8D:EC:BD:38:41
 SHA1: AB:F2:C8:84:E8:E7:A2:BF:2D:0D:2F:D3:95:4A:98:5B:2A:88:81:50
 SHA256: 
9D:65:A6:55:D7:22:B2:72:C2:20:55:66:F8:0C:9C:48:B1:F6:48:40:A4:FB:CB:26:77:DE:C4:97:34:69:25:42
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Extensions: 

#1: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: localhost
  IPAddress: 172.20.10.4
  IPAddress: 127.0.0.1
]

#2: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
: 1B 6F BB 65 A4 3C 6A F4   C9 05 08 89 88 0E 9E 76  .o.e...
]
]

> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only
> supported on Server
>   at
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:223)
>

In both cases the config is the same except for the keystore certificates. In the 
JIRA (https://issues.apache.org/jira/browse/SOLR-14105), I see the fix says it 
supports multiple DNS entries and multiple certificates, so I thought it should 
be OK. Please let me know.

keytool -list -keystore  /etc/nginx/certs/sidecar.p12 
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry

1, Jul 7, 2020, PrivateKeyEntry, 
Certificate fingerprint (SHA1): 
E2:3B:4B:4A:0E:05:CF:DA:59:09:55:8D:4E:6D:8A:1D:4E:DD:D4:62
bash-5.0# 
-

bash-5.0#  keytool -list -v -keystore /etc/nginx/certs/sidecar.p12 
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8
Enter keystore password:  
Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: 1
Creation date: Jul 7, 2020
Entry type: PrivateKeyEntry
Certificate chain length: 2
Certificate[1]:
Owner: OU=Cobalt, O=SAP, L=Walldorf, ST=Walldorf, C=DE
Issuer: CN=SAP Ariba Cobalt Sidecar Intermediate CA, OU=COBALT, O=SAP Ariba, 
ST=CA, C=US
Serial number: 1000
Valid from: Tue Jul 07 05:14:37 GMT 2020 until: Thu Jul 07 05:14:37 GMT 2022
Certificate fingerprints:
 MD5:  C0:13:87:37:96:C2:E2:DD:B9:D7:B4:E3:6B:73:A0:EC
 SHA1: E2:3B:4B:4A:0E:05:CF:DA:59:09:55:8D:4E:6D:8A:1D:4E:DD:D4:62
 SHA256: 
89:AB:8E:3B:D4:EC:A6:D0:0E:D7:CB:65:8C:92:13:32:F2:FD:7E:41:C9:39:F5:66:D5:7D:F1:04:13:8A:4E:92
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3

Extensions: 

#1: ObjectId: 2.16.840.1.113730.1.13 Criticality=false
: 16 24 4F 70 65 6E 53 53   4C 20 47 65 6E 65 72 61  .$OpenSSL Genera
0010: 74 65 64 20 53 65 72 76   65 72 20 43 65 72 74 69  ted Server Certi
0020: 66 69 63 61 74 65  ficate


#2: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
: E9 5C 42 72 5E 70 D9 02   05 AA 11 BA 0D 4D 8D 0D  .\Br^p...M..
0010: F3 37 2C 95                                        .7,.
]
[CN=SAP Ariba Cobalt CA, OU=ES, O=SAP Ariba, L=Palo Alto, ST=CA, C=US]
SerialNumber: [1001]
]

#3: ObjectId: 2.5.29.19 

Re: Solr heap Old generation grows and it is not recovered by G1GC

2020-07-13 Thread Odysci
Shawn,

thanks for the extra info.
The OOM errors were indeed because of heap space. In my case, most of the GC
calls were not full GCs; only when the heap was really near the top was a full
GC done.
I'll try out your suggestion of increasing the G1 heap region size. I've
been using 4m, and from what you said, a 2m allocation would be considered
humongous. My test cases have a few allocations that are definitely bigger
than 2m (estimating based on the number of docs returned), but most of them
are not.
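
For anyone following along, the region-size change discussed here can be made
in solr.in.sh; a minimal sketch, assuming the stock GC_TUNE hook is in use:

# solr.in.sh -- raise the G1 region size so a ~2 MB allocation is no longer
# "humongous" (the threshold is half the region size; with 16 MB regions,
# only allocations of 8 MB or more bypass the young generation).
GC_TUNE="-XX:+UseG1GC \
  -XX:G1HeapRegionSize=16m \
  -XX:+ParallelRefProcEnabled"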

When I was using maxRamMB, the size used was "compatible" with the size
values, assuming the avg 2K-byte docs that our index has.
As far as I could tell in my runs, removing maxRamMB did change the GC
behavior for the better. That is, now the heap goes up and down as expected,
whereas before (with maxRamMB) it seemed to increase continuously.
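
For scale, a back-of-the-envelope check on the size values quoted below (the
~30-million-doc index and the 16384-entry limit from this thread), assuming
these are filterCache-style bitset entries; a rough sketch only, since the
real per-entry cost depends on shard size:

# Each filterCache entry can be a bitset of roughly maxDoc/8 bytes:
#   30,000,000 docs / 8      ~ 3.6 MB per entry (worst case)
#   16384 entries x 3.6 MB   ~ 57 GB at the configured size limit
echo "$(( 30000000 / 8 * 16384 / 1024 / 1024 / 1024 )) GB"   # prints "57 GB"
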
Thanks

Reinaldo

On Sun, Jul 12, 2020 at 1:02 AM Shawn Heisey  wrote:

> On 6/25/2020 2:08 PM, Odysci wrote:
> > I have a solrcloud setup with 12GB heap and I've been trying to optimize
> it
> > to avoid OOM errors. My index has about 30million docs and about 80GB
> > total, 2 shards, 2 replicas.
>
> Have you seen the full OutOfMemoryError exception text?  OOME can be
> caused by problems that are not actually memory-related.  Unless the
> error specifically mentions "heap space" we might be chasing the wrong
> thing here.
>
> > When the queries return a smallish number of docs (say, below 1000), the
> > heap behavior seems "normal". Monitoring the gc log I see that young
> > generation grows then when GC kicks in, it goes considerably down. And
> the
> > old generation grows just a bit.
> >
> > However, at some point i have a query that returns over 300K docs (for a
> > total size of approximately 1GB). At this very point the OLD generation
> > size grows (almost by 2GB), and it remains high for all remaining time.
> > Even as new queries are executed, the OLD generation size does not go
> down,
> > despite multiple GC calls done afterwards.
>
> Assuming the OOME exceptions were indeed caused by running out of heap,
> then the following paragraphs will apply:
>
> G1 has this concept called "humongous allocations".  In order to reach
> this designation, a memory allocation must get to half of the G1 heap
> region size.  You have set this to 4 megabytes, so any allocation of 2
> megabytes or larger is humongous.  Humongous allocations bypass the new
> generation entirely and go directly into the old generation.  The max
> value that can be set for the G1 region size is 32MB.  If you increase
> the region size and the behavior changes, then humongous allocations
> could be something to investigate.
>
> In the versions of Java that I have used, humongous allocations can only
> be reclaimed as garbage by a full GC.  I do not know if Oracle has
> changed this so the smaller collections will do it or not.
>
> Were any of those multiple GCs a Full GC?  If they were, then there is
> probably little or no garbage to collect.  You've gotten a reply from
> "Zisis T." with some possible causes for this.  I do not have anything
> to add.
>
> I did not know about any problems with maxRamMB ... but if I were
> attempting to limit cache sizes, I would do so by the size values, not a
> specific RAM size.  The size values you have chosen (8192 and 16384)
> will most likely result in a total cache size well beyond the limits
> you've indicated with maxRamMB.  So if there are any bugs in the code
> with the maxRamMB parameter, you might end up using a LOT of memory that
> you didn't expect to be using.
>
> Thanks,
> Shawn
>


Re: JTS, IsWithin predicate, and multivalued fields

2020-07-13 Thread Murray Johnston
Replying to myself and for posterity:


This is expected behavior per the comments on 
https://issues.apache.org/jira/browse/LUCENE-4644 that originally added the 
IsWithin predicate.


Too bad; I was really pleased with my idea, but I can see why it was implemented 
the way it was.  Back to the drawing board.



From: Murray Johnston 
Sent: Monday, July 13, 2020 11:52:20 AM
To: solr-user@lucene.apache.org
Subject: JTS, IsWithin predicate, and multivalued fields

Message from External Sender

Hi all,


I'm trying to use (abuse[1]) a SpatialRecursivePrefixTreeFieldType field that 
is multivalued with the IsWithin JTS predicate.  After some testing, it appears 
that all values of the field must satisfy the predicate in order for the 
document to be returned.  Is that expected?  It seems somewhat different than 
other semantics for a multivalued field.


Thanks,


-Murray



[1] My use case is an extension of the SpatialForTimeDurations usage.  My 
prices have a "time in advance" that must also be satisfied.  My idea was to 
model this as a line instead of just a point and verify that the entire line 
IsWithin the bounding box but I might be blocked from doing this if it won't 
work for a multivalued field.




Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Kevin Risden
>
> In local, with just one certificate and one domain name, the SSL communication
> worked. With multiple DNS entries and 2 certificates, SSL fails with the below
> exception.
>

A client keystore by definition can only have a single certificate. A
server keystore can have multiple certificates. The reason is that a
client can only be identified by a single certificate.
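
One way to act on this is sketched below: copy a single PrivateKeyEntry into
its own PKCS12 store and point the client-side settings at it. solr.in.sh has
separate SOLR_SSL_CLIENT_KEY_STORE settings for this; the alias, paths, and
passwords here are placeholders taken from earlier in the thread:

# Extract one entry (alias "1") into a dedicated single-entry client keystore:
keytool -importkeystore \
  -srckeystore /etc/nginx/certs/sidecar.p12 -srcstoretype PKCS12 \
  -srcalias 1 -srcstorepass secret \
  -destkeystore etc/solr-client.p12 -deststoretype PKCS12 \
  -deststorepass secret

# solr.in.sh -- use the single-entry store for outbound (client) TLS while
# the multi-certificate store stays on the server listener:
SOLR_SSL_CLIENT_KEY_STORE=etc/solr-client.p12
SOLR_SSL_CLIENT_KEY_STORE_PASSWORD=secret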

Can you share more details about specifically what your solr.in.sh configs
look like related to keystore/truststore and which files? Specifically
highlight which files have multiple certificates in them.

It looks like for the Solr internal http client, the client keystore has
more than one certificate in it and the error is correct. This is more
strict with recent versions of Jetty 9.4.x. Previously this would silently
fail, but it was still incorrect. Now the error is bubbled up so that there are
no silent misconfigurations.

Kevin Risden


On Mon, Jul 13, 2020 at 4:54 PM Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> I looked at the patch mentioned in the JIRA
> https://issues.apache.org/jira/browse/SOLR-14105 reporting the below
> issue. I looked at the Solr 8.5.1 code base and I see the patch is applied,
> but I am still seeing the same exception with a different stack trace. The
> initial exception stack trace was at
>
> at
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>
>
> Now the exception we encounter is at httpsolrclient creation
>
>
> Caused by: java.lang.RuntimeException:
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only
> supported on Server
>   at
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:223)
>
> I commented the JIRA also. Let me know if this is still an issue.
>
> Thanks,
> Rajeswari
>
> On 7/13/20, 2:03 AM, "Natarajan, Rajeswari" 
> wrote:
>
> Re-sending to see if anyone has had this combination and
> encountered this issue. In local, with just one certificate and one domain
> name, the SSL communication worked. With multiple DNS entries and 2
> certificates, SSL fails with the below exception. The below JIRA says it is
> fixed for Http2SolrClient; wondering if this is fixed for the http1 Solr
> client, as we pass -Dsolr.http1=true.
>
> Thanks,
> Rajeswari
>
> https://issues.apache.org/jira/browse/SOLR-14105
>
> On 7/6/20, 10:02 PM, "Natarajan, Rajeswari" <
> rajeswari.natara...@sap.com> wrote:
>
> Hi,
>
> We are using Solr 8.5.1 in cloud mode with Java 8. We are
> enabling TLS with http1 (as we get a warning that with Java 8 + Solr 8.5,
> SSL can't be enabled) and we get the below exception
>
>
>
> 2020-07-07 03:58:53.078 ERROR (main) [   ] o.a.s.c.SolrCore
> null:org.apache.solr.common.SolrException: Error instantiating
> shardHandlerFactory class [HttpShardHandlerFactory]:
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only
> supported on Server
>   at
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
>   at
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
>   at
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
>   at
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
>   at
> org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
>   at
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>   at
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
>   at
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
>   at
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
>   at
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
>   at
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
>   at
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
>   at
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
>   at
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
>   at
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
>   at
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
>   at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
>   at
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
>   at
> org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
>   at
> 

Re: [CAUTION] SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Natarajan, Rajeswari
I looked at the patch mentioned in the JIRA 
https://issues.apache.org/jira/browse/SOLR-14105 reporting the below issue. I 
looked at the Solr 8.5.1 code base and I see the patch is applied, but I am 
still seeing the same exception with a different stack trace. The initial 
exception stack trace was at

at 
org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)


Now the exception we encounter is at httpsolrclient creation


Caused by: java.lang.RuntimeException: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:223)

I commented the JIRA also. Let me know if this is still an issue.

Thanks,
Rajeswari

On 7/13/20, 2:03 AM, "Natarajan, Rajeswari"  
wrote:

Re-sending to see if anyone has had this combination and encountered this 
issue. In local, with just one certificate and one domain name, the SSL 
communication worked. With multiple DNS entries and 2 certificates, SSL fails 
with the below exception. The below JIRA says it is fixed for Http2SolrClient; 
wondering if this is fixed for the http1 Solr client, as we pass 
-Dsolr.http1=true.

Thanks,
Rajeswari

https://issues.apache.org/jira/browse/SOLR-14105

On 7/6/20, 10:02 PM, "Natarajan, Rajeswari"  
wrote:

Hi,

We are using Solr 8.5.1 in cloud mode with Java 8. We are enabling 
TLS with http1 (as we get a warning that with Java 8 + Solr 8.5, SSL can't be 
enabled) and we get the below exception



2020-07-07 03:58:53.078 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
  at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
  at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
  at 
org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
  at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
  at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
  at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
  at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
  at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
  at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
  at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
  at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
  at 
org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
  at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
  at 
org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
  at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
  at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 

JTS, IsWithin predicate, and multivalued fields

2020-07-13 Thread Murray Johnston
Hi all,


I'm trying to use (abuse[1]) a SpatialRecursivePrefixTreeFieldType field that 
is multivalued with the IsWithin JTS predicate.  After some testing, it appears 
that all values of the field must satisfy the predicate in order for the 
document to be returned.  Is that expected?  It seems somewhat different than 
other semantics for a multivalued field.


Thanks,


-Murray



[1] My use case is an extension of the SpatialForTimeDurations usage.  My 
prices have a "time in advance" that must also be satisfied.  My idea was to 
model this as a line instead of just a point and verify that the entire line 
IsWithin the bounding box but I might be blocked from doing this if it won't 
work for a multivalued field.
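
For concreteness, the kind of filter under discussion, as a sketch; the
collection name, field name, and polygon are hypothetical stand-ins:

# IsWithin matches documents whose indexed shape lies entirely inside the
# query shape; per this thread, with a multivalued field EVERY value must
# lie inside for the document to match.
curl 'http://localhost:8983/solr/mycoll/select' -G \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'fq=duration_geo:"IsWithin(POLYGON((0 0, 100 0, 100 100, 0 100, 0 0))) distErrPct=0"'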




Performance difference between query by id and filter query on property

2020-07-13 Thread Drew Kidder
We're switching to using composite routing in our solr cloud collection,
and of course that changes the document id. If I'm setting the document id
myself, what is the performance difference between q=id:123!4567 and
q=*:*&fq=some_field:4567?

Example:

Pre-indexed document:
   - field1: 4567
   - field2: 123
   - field3: some random string

Post-indexed document in solr (using field2 as the shard prefix and field1
as the document id):
- id: 123!4567
- field1: 4567
- field2: 123
- field3: some random string

What would be the performance difference between "q=id:123!4567" and
"q=*:*&fq=field1:4567"? In the end, they're both returning documents by id.
The main difference I can see is that it would take up a cache entry to do
what is essentially a lookup by id (or the value that I want to be the id,
before composite routing comes into play).

Since Solr doesn't drop the shard prefix from the document id when it
indexes, is there any way to get around this to use another discrete field as
a pseudo-document id?
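
For reference, the two variants spelled out as full requests; a sketch, with a
hypothetical collection name:

# Lookup by the routed document id:
curl 'http://localhost:8983/solr/mycoll/select' -G \
  --data-urlencode 'q=id:123!4567'

# Match-all with a filter on the discrete field. The fq normally occupies a
# filterCache entry (the cost mentioned above); {!cache=false} skips that
# entry for one-off lookups:
curl 'http://localhost:8983/solr/mycoll/select' -G \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'fq={!cache=false}field1:4567'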

--
Drew(i...@gmail.com)
http://wyntermute.dyndns.org/blog/

-- I Drive Way Too Fast To Worry About Cholesterol.


Re: [EXTERNAL] Re: JSON Facet with local parameter

2020-07-13 Thread Mohamed Sirajudeen Mayitti Ahamed Pillai
Thanks Chris, 

It works like a charm. 

http://server:8983/solr/collection/select?arrivalRange=121YEARS&json.facet={%22NEW%20ARRIVALS%22:{%22start%22:NOW/DAY-${arrivalRange},%20%22sort%22:%22index%22,%22type%22:%22range%22,%22field%22:%22pdp_activation_date_dt%22,%22gap%22:%22%2B${arrivalRange}%22,%22mincount%22:1,%22limit%22:-1,%22end%22:%22NOW/DAY%22}}&q=*:*&rows=0


"facets": {
"count": 14751,
"NEW ARRIVALS": {
"buckets": [
{
"val": "1899-07-13T00:00:00Z",
"count": 14750
}
]
}
}

On 7/13/20, 11:12 AM, "Chris Hostetter"  wrote:


The JSON based query APIs (including JSON Faceting) use an (unfortunately 
subtly different) '${NAME}' syntax for dereferencing variables in the 
"body" of a JSON data structure...


https://lucene.apache.org/solr/guide/8_5/json-request-api.html#parameter-substitution-macro-expansion

...but note that you may need to put "quotes" around the variable 
de-reference in order to make it a valid JSON string.


: Date: Mon, 13 Jul 2020 04:03:50 +
: From: Mohamed Sirajudeen Mayitti Ahamed Pillai
: 
: Reply-To: solr-user@lucene.apache.org
: To: "solr-user@lucene.apache.org" 
: Subject: JSON Facet with local parameter
: 
: Is it possible to reference a local parameter for a Range JSON Facet’s 
start/end/gap inputs?
: 
: 
: I am trying something like the below, but it is not working.
: 
http://server:8983/solr/kfl/select?arrivalRange=NOW/DAY-10DAYS&json.facet={"NEW 
ARRIVALS":{"start":$arrivalRange, 
"sort":"index","type":"range","field":"pdp_activation_date_dt","gap":"+10DAYS","mincount":1,"limit":-1,"end":"NOW/DAY"}}&q=*:*&rows=0
: 
: Getting below error,
: 
: "error": {"metadata": 
["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":
 "Can't parse value $arrivalRange for field: pdp_activation_date_dt","code": 
400}
: 
: 
: How to instruct Solr JSON Facet to reference another parameter that is 
added to the search request ?
: 
: 
: 
: 
: 

-Hoss

http://www.lucidworks.com/



Re: JSON Facet with local parameter

2020-07-13 Thread Chris Hostetter

The JSON based query APIs (including JSON Faceting) use an (unfortunately 
subtly different) '${NAME}' syntax for dereferencing variables in the 
"body" of a JSON data structure...

https://lucene.apache.org/solr/guide/8_5/json-request-api.html#parameter-substitution-macro-expansion

...but note that you may need to put "quotes" around the variable 
de-reference in order to make it a valid JSON string.
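
Spelled out as a full request, based on the parameters in the message below; a
sketch only. Note the whole json.facet body stays in shell single quotes, so
the ${...} macros reach Solr unexpanded, and each macro sits inside a quoted
JSON string:

curl 'http://server:8983/solr/kfl/select' -G \
  --data-urlencode 'arrivalRange=10DAYS' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'rows=0' \
  --data-urlencode 'json.facet={
    "NEW ARRIVALS": {
      "type": "range",
      "field": "pdp_activation_date_dt",
      "start": "NOW/DAY-${arrivalRange}",
      "end": "NOW/DAY",
      "gap": "+${arrivalRange}",
      "mincount": 1, "limit": -1, "sort": "index"
    }
  }'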


: Date: Mon, 13 Jul 2020 04:03:50 +
: From: Mohamed Sirajudeen Mayitti Ahamed Pillai
: 
: Reply-To: solr-user@lucene.apache.org
: To: "solr-user@lucene.apache.org" 
: Subject: JSON Facet with local parameter
: 
: Is it possible to reference a local parameter for a Range JSON Facet’s 
start/end/gap inputs?
: 
: 
: I am trying something like the below, but it is not working.
: 
http://server:8983/solr/kfl/select?arrivalRange=NOW/DAY-10DAYS&json.facet={"NEW 
ARRIVALS":{"start":$arrivalRange, 
"sort":"index","type":"range","field":"pdp_activation_date_dt","gap":"+10DAYS","mincount":1,"limit":-1,"end":"NOW/DAY"}}&q=*:*&rows=0
: 
: Getting below error,
: 
: "error": {"metadata": 
["error-class","org.apache.solr.common.SolrException","root-error-class","org.apache.solr.common.SolrException"],"msg":
 "Can't parse value $arrivalRange for field: pdp_activation_date_dt","code": 
400}
: 
: 
: How to instruct Solr JSON Facet to reference another parameter that is added 
to the search request ?
: 
: 
: 
: 
: 

-Hoss
http://www.lucidworks.com/

Re: Replica goes into recovery mode in Solr 6.1.0

2020-07-13 Thread vishal patel
Thanks for your reply.

When I searched for my error "org.apache.http.NoHttpResponseException: failed to 
respond" in Google, I found one Solr JIRA case: 
https://issues.apache.org/jira/browse/SOLR-7483. I saw a comment from @Erick 
Erickson.
Is this issue resolved? Can I get an update on that JIRA case?

The same error which I got appears in that JIRA.
My Error Log:
shard: https://drive.google.com/file/d/1F8Bn7jSXspe2HRelh_vJjKy9DsTRl9h0/view
replica: https://drive.google.com/file/d/1y0fC_n5u3MBMQbXrvxtqaD8vBBXDLR6I/view

Regards,
Vishal Patel

From: Walter Underwood 
Sent: Friday, July 10, 2020 11:15 PM
To: solr-user@lucene.apache.org 
Subject: Re: Replica goes into recovery mode in Solr 6.1.0

Sorting and faceting takes a lot of memory. From your charts, I would try
a 31 GB heap. That would make GC faster. 680 ms is very long for a GC
and can cause problems.

Combine a 680 ms GC with a 100 ms soft commit time and you can have
lots of trouble.

Change your soft commit time to 10000 (ten seconds) or longer.
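
If solrconfig.xml reads the stock property placeholder for autoSoftCommit (the
default configsets do), this can be changed without editing the config; a
sketch:

# solr.in.sh -- soft commit every 10 seconds instead of every 100 ms.
# Only effective when solrconfig.xml contains something like
#   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"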

Look at a 24 hour graph of heap usage. It should look like a sawtooth,
increasing, then dropping after every full GC. The bottom of the sawtooth
is the memory that Solr actually needs. Take the highest number from
the bottom of the sawtooth and add some extra, maybe 2 GB. Try that
heap size.

Upgrade to 6.6.2. That includes all bug fixes for the 6.x release. The 6.x
release had several bad bugs, especially in the middle releases. We were
switching prod to Solr Cloud while those were being released and it was
not fun.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jul 10, 2020, at 4:59 AM, vishal patel  
> wrote:
>
> Thanks for quick reply.
>
> I assume caches (are they too large?), perhaps uninverted indexes.
> Docvalues would help with the latter ones. Do you use them?
>>> We do not use any cache. We disabled the cache in solrconfig.xml.
> Here are my solrconfig.xml and schema.xml:
> https://drive.google.com/file/d/12SHl3YGP7jT4goikBkeyB2s1NX5_C2gz/view
> https://drive.google.com/file/d/1LwA1d4OiMhQQv806tR0HbZoEjA8IyfdR/view
>
> We used Docvalues on that field which is used for sorting or faceting.
>
> You could also try upgrading to the latest version in 6.x series as a starter.
>>> I will surely try.
>
> So, the node in question isn't responding quickly enough to http requests and 
> gets put into recovery. The log for the recovering node starts too late, so I 
> can't say anything about what happened before 14:42:43.943 that lead to 
> recovery.
>>> There is no error before 14:42:43.943, just search and insert requests. I 
>>> understand that the node is not responding, but why is it not responding? 
>>> Due to lack of memory or some other cause? Why can we not get an idea of 
>>> the reason for not responding from the log?
>
> Is there any monitoring for Solr from which we can find the root cause?
>
> Regards,
> Vishal Patel
>
>
> 
> From: Ere Maijala 
> Sent: Friday, July 10, 2020 4:27 PM
> To: solr-user@lucene.apache.org 
> Subject: Re: Replica goes into recovery mode in Solr 6.1.0
>
> vishal patel wrote on 10.7.2020 at 12.45:
>> Thanks for your input.
>>
>> Walter already said that setting soft commit max time to 100 ms is a recipe 
>> for disaster
 I know that, but our application was developed and has run in a live 
 environment for the last 5 years. Actually, we want to show data very 
 quickly after the insert.
>>
>> you have huge JVM heaps without an explanation for the reason
 We gave it 55GB of RAM because our usage is like that: large query searches 
 and very frequent searching and indexing.
>> Here is my memory snapshot which I have taken from GC.
>
> Yes, I can see that a lot of memory is in use, but the question is why.
> I assume caches (are they too large?), perhaps uninverted indexes.
> Docvalues would help with the latter ones. Do you use them?
>
>> I have tried upgrading Solr from 6.1.0 to 8.5.1, but due to some issue we 
>> cannot. I have also asked here:
>> https://lucene.472066.n3.nabble.com/Sorting-in-other-collection-in-Solr-8-5-1-td4459506.html#a4459562
>
> You could also try upgrading to the latest version in 6.x series as a
> starter.
>
>> Why can we not find the reason for the recovery in the log? Like a memory or 
>> CPU issue, frequent indexing or searching, or a large query hit?
>> My log at the time of recovery
>> 

Re: SOLR and Zookeeper compatibility

2020-07-13 Thread Bernd Fehling



On 13.07.20 at 09:55, Mithun Seal wrote:
> Hi Team,
> 
> Could you please help me with the below compatibility questions.
> 
> 1. We are trying to install zookeeper externally along with SOLR 7.5.0. 
> As noted, SOLR 7.5.0 comes with Zookeeper 1.3.11. 

Where did you get that info from?
AFAIK, Solr 7.5.0 comes with Apache ZooKeeper 3.4.11.

Regards
Bernd

> Can I install Zookeeper
> 1.3.10 with SOLR 7.5.0? Will Zookeeper 1.3.10 be compatible with SOLR 7.5.0?
> 
> 2. What is the suggested version of Zookeeper to be used with SOLR
> 7.5.0?
> 
> 
> Thanks,
> Mithun
> 


SOLR and Zookeeper compatibility

2020-07-13 Thread Mithun Seal
Hi Team,

Could you please help me with the below compatibility questions.

1. We are trying to install zookeeper externally along with SOLR 7.5.0. As
noted, SOLR 7.5.0 comes with Zookeeper 1.3.11. Can I install Zookeeper
1.3.10 with SOLR 7.5.0? Will Zookeeper 1.3.10 be compatible with SOLR 7.5.0?

2. What is the suggested version of Zookeeper to be used with SOLR
7.5.0?


Thanks,
Mithun


SSL + Solr 8.5.1 in cloud mode + Java 8

2020-07-13 Thread Natarajan, Rajeswari
Re-sending to see if anyone has had this combination and encountered this 
issue. In local, with just one certificate and one domain name, the SSL 
communication worked. With multiple DNS entries and 2 certificates, SSL fails 
with the below exception. The below JIRA says it is fixed for Http2SolrClient; 
wondering if this is fixed for the http1 Solr client, as we pass 
-Dsolr.http1=true.

Thanks,
Rajeswari

https://issues.apache.org/jira/browse/SOLR-14105

On 7/6/20, 10:02 PM, "Natarajan, Rajeswari"  
wrote:

Hi,

We are using Solr 8.5.1 in cloud mode with Java 8. We are enabling TLS 
with http1 (as we get a warning that with Java 8 + Solr 8.5, SSL can't be 
enabled) and we get the below exception



2020-07-07 03:58:53.078 ERROR (main) [   ] o.a.s.c.SolrCore 
null:org.apache.solr.common.SolrException: Error instantiating 
shardHandlerFactory class [HttpShardHandlerFactory]: 
java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported 
on Server
  at 
org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
  at 
org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
  at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
  at 
org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
  at 
org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
  at 
java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
  at 
java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
  at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
  at 
org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
  at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
  at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
  at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
  at 
org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
  at 
org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
  at 
org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:154)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:173)
  at 
org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:447)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:66)
  at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:784)
  at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:753)
  at org.eclipse.jetty.util.Scanner.scan(Scanner.java:641)
  at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:540)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:146)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:599)
  at 
org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:249)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
  at org.eclipse.jetty.server.Server.start(Server.java:407)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
  at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:100)
  at org.eclipse.jetty.server.Server.doStart(Server.java:371)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
  at 
org.eclipse.jetty.xml.XmlConfiguration.lambda$main$0(XmlConfiguration.java:1888)
  at java.security.AccessController.doPrivileged(Native Method)
  at 

Re: Solr multi word search across multiple fields with mm

2020-07-13 Thread Venu
After some research I came across the articles below:
1. edismax-and-multiterm-synonyms-oddities/
2. apache mail archive
3. apache mail archive2

Looks like this is already a known problem.

The only way I see is clubbing all the required fields into a single field
and doing mm on that field, as sketched below.
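
A sketch of one way to do that clubbing via the Schema API; the collection and
field names here are hypothetical:

# Add a catch-all field and copy the individually-searched fields into it;
# edismax can then apply mm against the single all_text field (qf=all_text).
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/mycoll/schema' -d '{
  "add-field": {"name":"all_text", "type":"text_general",
                "multiValued":true, "stored":false},
  "add-copy-field": {"source":"title", "dest":"all_text"},
  "add-copy-field": {"source":"description", "dest":"all_text"}
}'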



--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html