[jira] [Updated] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14504:

Description: 
If a NODELOST event happens before the cloudManager is initialized then a 
NullPointerException will occur on this line 
[https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
{code:java}
byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
cloudManager.getTimeSource().getEpochTimeNs())); {code}
Rather than accessing cloudManager directly, getSolrCloudManager() should be 
called.
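
A minimal sketch of that suggested change (not the committed patch; the surrounding listener lambda is assumed to stay as in the linked source):
{code:java}
// Sketch only: go through the accessor so a NODELOST callback that fires before the
// constructor has assigned cloudManager still sees a non-null SolrCloudManager.
byte[] json = Utils.toJSON(Collections.singletonMap("timestamp",
    getSolrCloudManager().getTimeSource().getEpochTimeNs()));
{code}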

 

This happens very rarely, but when it does it stops Solr from starting, resulting 
in "CoreContainer is either not initialized or shutting down". Snippet from 
8.3.1:
{noformat}
2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated live 
nodes from ZooKeeper... (0) -> (1)
2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
2020-05-19 03:44:56.614 ERROR (main) [   ] 
o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
property and the logs
2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
null:java.lang.NullPointerException
at 
org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
at 
org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
at 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
 

 

  was:
If a NODELOST event happens before the cloudManager is initialized then a 
NullPointerException will occur on this line 
[https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
{code:java}
byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
cloudManager.getTimeSource().getEpochTimeNs())); {code}
Rather than accessing cloudManager directly, getSolrCloudManager() should be 
called

 

 


> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  

[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-21 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112881#comment-17112881
 ] 

Cao Manh Dat commented on SOLR-14419:
-

Ok, then it's +1 from me.

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-21 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112869#comment-17112869
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

[~caomanhdat], I might not fully understand your point. Do you expect to put 
DSL objects into {{json.param}} like
{code}
{
  "param": {
    "q1": { "lucene": { "query": "foo" } }
  }
}
{code} ?
I've found it unreachable in SOLR-12490 and introduced "json.queries". 
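
For illustration only, a rough sketch (from memory, not checked against the SOLR-12490 documentation, so the exact referencing syntax is an assumption) of the shape "json.queries" enables: DSL objects declared under a top-level "queries" key and referenced by name elsewhere in the request, instead of being squeezed into plain params:
{code}
{
  "queries": {
    "prnts": { "lucene": { "query": "type:parent" } }
  },
  "query": "{!parent which=$prnts}..."
}
{code}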

Ok. Here is a sample snippet for facet exclusion and nested docs facets:
{code}
q=*:*&prnts=content_type:parentDocument&
fq={!parent tag=sku_filters which=$prnts filters=$sku_fqs}&   // two refs, would be nice to express in DSL
sku_fqs={!tag=sku_attr1_tag}sku_attr1:foo&
sku_fqs={!tag=sku_attr2_tag}sku_attr2:bar&
json.facet={
  "sku_attr1":{
    "type":"terms",
    "field":"sku_attr1",
    "limit":-1,
    "domain":{
      "excludeTags":"sku_filters",   // drop top level {!parent
      "blockChildren":"{!v=$prnts}",
      "filter":                      // $sku_fqs refers to many params; that's not how it works by default
        "{!filters param=$sku_fqs excludeTags=sku_attr1_tag}"
    },
    "facet":{
      "by_parent":"uniqueBlock(_root_)"
    }
  },
  ///  "sku_attr2":{ ... etc
}
{code}



> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Created] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)
Colvin Cowie created SOLR-14504:
---

 Summary: ZkController LiveNodesListener has NullPointerException 
in startup race
 Key: SOLR-14504
 URL: https://issues.apache.org/jira/browse/SOLR-14504
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 8.5.1, 8.4.1, 8.5, 8.3.1, 8.4, 8.3, 8.1.1, 7.7.3, 8.2, 
8.1, 8.0, 7.7.2, 7.7.1, 7.7
Reporter: Colvin Cowie


If a NODELOST event happens before the cloudManager is initialized then a 
NullPointerException will occur on this line 
[https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
{code:java}
byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
cloudManager.getTimeSource().getEpochTimeNs())); {code}
Rather than accessing cloudManager directly, getSolrCloudManager() should be 
called

 

 






[jira] [Commented] (SOLR-14505) Intermittent NPE in ZkController.registerLiveNodesListener()

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113005#comment-17113005
 ] 

Colvin Cowie commented on SOLR-14505:
-

Oh dear, another race condition: I created 
https://issues.apache.org/jira/browse/SOLR-14504 a little while ago :)

> Intermittent NPE in ZkController.registerLiveNodesListener()
> 
>
> Key: SOLR-14505
> URL: https://issues.apache.org/jira/browse/SOLR-14505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3.1
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Minor
>
> Reported by [~cjcowie] in SOLR-13072 & mailing lists:
> "Running on Solr 8.3.1
> {code:java}
> 2020-05-19 03:44:40.220 INFO  (main) [   ] o.a.s.c.ZkContainer Zookeeper 
> client=X:9983/_cluster
> 2020-05-19 03:44:40.238 INFO  (main) [   ] o.a.s.c.c.SolrZkClient Using 
> ZkCredentialsProvider: 
> xxx.zookeeper.auth.internal.EncodedZkCredentialsProvider
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
> at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
> at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
> at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
> at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
> at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:631)at 
> xxx.solr.servlet.RobustSolrDispatchFilter.createCoreContainer(RobustSolrDispatchFilter.java:71)
>  {code}
> I couldn't find any bug reports in JIRA for the NPE.
>   
>  Here's the full log
>  [https://drive.google.com/open?id=1hQrF25blNgKLXijOMYJ30wn-Lfy6uKVm]
>   
>  The NPE is coming from 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
>   
>  _byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs()));_ 
>  so I don't know whether it's the cloudManager or the time source that's null.
>  That bit of the ZkController was added by 
> https://issues.apache.org/jira/browse/SOLR-13072 and I see it is only hit if
>  
> _zkStateReader.getAutoScalingConfig().hasTriggerForEvents(TriggerEventType.NODELOST);_
>   
>  We have never (knowingly) configured autoscaling and we don't use it, but I 
> see the autoscaling files are present in ZK. Is the autoscaling.json etc 
> created by default when it is absent in ZooKeeper?
>   
>  The interesting bit of the log above, aside from the NPE, is this:
>  _2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)_
>  _2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)_
>  Which suggests to me that there's either a race condition or the problem is 
> caused by some zookeeper outage during startup. Since there's a 16 second gap 
> between those messages.
>   
>  It's possible that the problem is in some way caused by our own code in 
> xxx.solr.servlet.RobustSolrDispatchFilter which wraps the 
> SolrDispatchFIlter, and creates a SolrZkClient in a try/with resources (so 
> should be autoclosed), but that all happens before createCoreContainer is 
> called."




[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113034#comment-17113034
 ] 

Colvin Cowie commented on SOLR-14504:
-

So with breakpoints on in a debugger it's easy to reproduce this, but I'm not 
seeing a _nice_ way to write a test for it, at least not in the 
ZkControllerTest. There's a lot of stuff happening in the constructor of the 
ZkController (both registerLiveNodesListener and getSolrCloudManager are 
called, so the race only exists between those calls) and 
registerLiveNodesListener is private.

 

I've gotten part way to testing it by constructing the ZkController on another 
thread, but then the NullPointerException is just lost in a callback thread's 
stack... The NullPointerException that causes the CoreContainer load to fail can 
only happen when a node lost event occurs during the initial registration, i.e. the 
{{org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)}} 
frame in the stack trace above.

 

 

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  






[jira] [Comment Edited] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113034#comment-17113034
 ] 

Colvin Cowie edited comment on SOLR-14504 at 5/21/20, 10:16 AM:


So with breakpoints on in a debugger it's easy to reproduce this, but I'm not 
seeing a _nice_ way to write a test for it, at least not in the 
ZkControllerTest. There's a lot of stuff happening in the constructor of the 
ZkController (both registerLiveNodesListener and getSolrCloudManager are 
called, so the race only exists between those calls) and 
registerLiveNodesListener is private.

 

I've gotten part way to testing it by constructing the ZkController on another 
thread, but then the NullPointerException can just be lost in a callback 
thread's stack... The NullPointerException that causes the CoreContainer load to 
fail can only happen when a node lost event occurs during the initial registration, i.e. 
the {{org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)}} 
frame in the stack trace above.

 

In fact it seems as though the NPE on startup can only happen if a node is lost 
between the first call to getLiveNodes() and the second call to getLiveNodes()  
in org.apache.solr.common.cloud.ZkStateReader#registerLiveNodesListener
{code:java}
if (listener.onChange(new TreeSet<>(getClusterState().getLiveNodes()), new 
TreeSet<>(getClusterState().getLiveNodes()))) { {code}
since otherwise the LiveNodesListener will just exit before it reaches the 
point where it uses the cloud manager.

 

So the window for it to happen seems to be very small.
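
A toy, self-contained illustration of why that window is so narrow (hypothetical names and plain Java collections, not Solr code): the listener only takes the node-lost branch when the two back-to-back snapshots differ, which requires a node to disappear between the two getLiveNodes() calls on that one line.
{code:java}
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentSkipListSet;

public class LiveNodesRaceSketch {
  public static void main(String[] args) throws InterruptedException {
    // Stand-in for ZkStateReader's live-nodes view.
    Set<String> liveNodes = new ConcurrentSkipListSet<>(Set.of("node1"));

    // Simulated NODELOST arriving from a ZooKeeper callback thread.
    Thread zkCallback = new Thread(() -> liveNodes.remove("node1"));

    Set<String> oldNodes = new TreeSet<>(liveNodes);  // first snapshot
    zkCallback.start();
    zkCallback.join();                                // force the unlucky interleaving
    Set<String> newNodes = new TreeSet<>(liveNodes);  // second snapshot

    if (!oldNodes.equals(newNodes)) {
      // Only in this interleaving does the listener proceed far enough to dereference
      // the not-yet-initialized cloudManager and hit the reported NullPointerException.
      System.out.println("NODELOST seen during registration: " + oldNodes + " -> " + newNodes);
    } else {
      System.out.println("No change between snapshots; the listener would exit early.");
    }
  }
}
{code}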


was (Author: cjcowie):
So with breakpoints on in a debugger it's easy to reproduce this, but I'm not 
seeing a _nice_ way to write a test for it, at least not in the 
ZkControllerTest. There's a lot of stuff happening in the constructor of the 
ZkController (both registerLiveNodesListener and getSolrCloudManager are 
called, so the race only exists between those calls) and 
registerLiveNodesListener is private.

 

I've gotten part way to testing it by constructing the ZkController on another 
thread, but then the NullPointerException is just lost in a callback thread's 
stack... The NullPointerException causing the Core container load to fail can 
only happen when a node lost event occurs in the initial {{at 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)}}

 

In fact it seems as though the NPE on startup can only happen if a node is lost 
between the first call to getLiveNodes() and the second call to getLiveNodes()  
in org.apache.solr.common.cloud.ZkStateReader#registerLiveNodesListener
{code:java}
if (listener.onChange(new TreeSet<>(getClusterState().getLiveNodes()), new 
TreeSet<>(getClusterState().getLiveNodes()))) { {code}
since otherwise the LiveNodesListener will just exit before it gets to the 
point it uses the cloud manager at

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes 

[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113064#comment-17113064
 ] 

Colvin Cowie commented on SOLR-14504:
-

Without making big changes to ZkController's actual implementation, I don't 
really see a way to write a useful (automated) regression test for this.

[~ab] what are your thoughts on just fixing it without a test? Or can you see a 
good way to test it?

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  






[jira] [Updated] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14504:

Attachment: SOLR-14504.patch

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  






[jira] [Updated] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14504:

Status: Patch Available  (was: Open)

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  






[jira] [Comment Edited] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113064#comment-17113064
 ] 

Colvin Cowie edited comment on SOLR-14504 at 5/21/20, 11:01 AM:


Without making big changes to ZkController's actual implementation, I don't 
really see a way to write a useful (automated) regression test for this.

[~ab] what are your thoughts on just fixing it without a test? Or can you see a 
good way to test it?


was (Author: cjcowie):
Without making big changes to ZkController's actual implementation, I don't 
really see a way to write a useful (automated) regression test for this.

[~ab] what are you thoughts on just fixing it withtout a test? Or can you see a 
good way to test it?

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  






[jira] [Commented] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2020-05-21 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113004#comment-17113004
 ] 

Andrzej Bialecki commented on SOLR-13072:
-

[~cjcowie] thanks for reporting this - I created a separate issue to track 
this: SOLR-14505.

> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 7.7, 8.0, master (9.0)
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. As similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart if the autoscaling configuration didn't contain 
> any triggers that consume {{nodeLost}} events then these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events then 
> these triggers would read the markers, remove them and generate appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because in addition 
> to any user-defined triggers there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger then the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So as it is now this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.
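
A toy model of the non-determinism described above (hypothetical names and plain Java collections, not the actual trigger code): two "triggers" drain the same shared marker set concurrently, so each ends up with an unpredictable subset.
{code:java}
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.CopyOnWriteArrayList;

public class MarkerConsumptionSketch {
  public static void main(String[] args) throws InterruptedException {
    // Stand-in for the shared nodeLost marker znodes.
    Set<String> markers = new ConcurrentSkipListSet<>(
        Set.of("nodeLost-nodeA", "nodeLost-nodeB", "nodeLost-nodeC"));

    List<String> seenByAutoAddReplicas = new CopyOnWriteArrayList<>();
    List<String> seenByUserTrigger = new CopyOnWriteArrayList<>();

    // Each "trigger" reads and deletes markers independently, as described above.
    Thread t1 = new Thread(() -> {
      for (String m : markers) if (markers.remove(m)) seenByAutoAddReplicas.add(m);
    });
    Thread t2 = new Thread(() -> {
      for (String m : markers) if (markers.remove(m)) seenByUserTrigger.add(m);
    });
    t1.start(); t2.start();
    t1.join(); t2.join();

    // Depending on scheduling, each trigger may see all, some, or none of the markers.
    System.out.println(".auto_add_replicas saw " + seenByAutoAddReplicas
        + ", user trigger saw " + seenByUserTrigger);
  }
}
{code}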






[jira] [Comment Edited] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113034#comment-17113034
 ] 

Colvin Cowie edited comment on SOLR-14504 at 5/21/20, 10:15 AM:


So with breakpoints on in a debugger it's easy to reproduce this, but I'm not 
seeing a _nice_ way to write a test for it, at least not in the 
ZkControllerTest. There's a lot of stuff happening in the constructor of the 
ZkController (both registerLiveNodesListener and getSolrCloudManager are 
called, so the race only exists between those calls) and 
registerLiveNodesListener is private.

 

I've gotten part way to testing it by constructing the ZkController on another 
thread, but then the NullPointerException is just lost in a callback thread's 
stack... The NullPointerException causing the Core container load to fail can 
only happen when a node lost event occurs in the initial {{at 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)}}

 

In fact it seems as though the NPE on startup can only happen if a node is lost 
between the first call to getLiveNodes() and the second call to getLiveNodes()  
in org.apache.solr.common.cloud.ZkStateReader#registerLiveNodesListener
{code:java}
if (listener.onChange(new TreeSet<>(getClusterState().getLiveNodes()), new 
TreeSet<>(getClusterState().getLiveNodes()))) { {code}
since otherwise the LiveNodesListener will just exit before it gets to the 
point it uses the cloud manager at


was (Author: cjcowie):
So with breakpoints on in a debugger it's easy to reproduce this, but I'm not 
seeing a _nice_ way to write a test for it, at least not in the 
ZkControllerTest. There's a lot of stuff happening in the constructor of the 
ZkController (both registerLiveNodesListener and getSolrCloudManager are 
called, so the race only exists between those calls) and 
registerLiveNodesListener is private.

 

I've gotten part way to testing it by constructing the ZkController on another 
thread, but then the NullPointerException is just lost in a callback thread's 
stack... The NullPointerException causing the Core container load to fail can 
only happen when a node lost event occurs in the initial {{at 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)}}

 

 

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> 

[jira] [Commented] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-21 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113066#comment-17113066
 ] 

Colvin Cowie commented on SOLR-14503:
-

Hi [~caomanhdat] can you take a look at this? Thanks

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14503.patch, SOLR-14503.patch
>
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container intialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used even when you specify a different waitForZk value
>  # bin/solr contains script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.
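
A hedged sketch of the first fix (not the attached patch; the property lookup and the seconds-to-milliseconds conversion here are assumptions for illustration): pass the connect timeout to the three-argument constructor so a custom waitForZk value is honored instead of the 30-second DEFAULT_CLIENT_CONNECT_TIMEOUT.
{code:java}
// Sketch only: propagate waitForZk (assumed to be in seconds) into the third argument.
int connectTimeoutMs = Integer.getInteger("waitForZk", 30) * 1000;
SolrZkClient zkClient = new SolrZkClient(zkServerAddress, zkClientTimeout, connectTimeoutMs);
{code}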






[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113118#comment-17113118
 ] 

Andrzej Bialecki commented on SOLR-14504:
-

The proposed fix makes sense, I'll commit it shortly - thanks!

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  






[jira] [Commented] (SOLR-14505) Intermittent NPE in ZkController.registerLiveNodesListener()

2020-05-21 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113008#comment-17113008
 ] 

Andrzej Bialecki commented on SOLR-14505:
-

:D ok, let's close this one.

> Intermittent NPE in ZkController.registerLiveNodesListener()
> 
>
> Key: SOLR-14505
> URL: https://issues.apache.org/jira/browse/SOLR-14505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3.1
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Minor
>
> Reported by [~cjcowie] in SOLR-13072 & mailing lists:
> "Running on Solr 8.3.1
> {code:java}
> 2020-05-19 03:44:40.220 INFO  (main) [   ] o.a.s.c.ZkContainer Zookeeper 
> client=X:9983/_cluster
> 2020-05-19 03:44:40.238 INFO  (main) [   ] o.a.s.c.c.SolrZkClient Using 
> ZkCredentialsProvider: 
> xxx.zookeeper.auth.internal.EncodedZkCredentialsProvider
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
> at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
> at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
> at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
> at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
> at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:631)at 
> xxx.solr.servlet.RobustSolrDispatchFilter.createCoreContainer(RobustSolrDispatchFilter.java:71)
>  {code}
> I couldn't find any bug reports in JIRA for the NPE.
>   
>  Here's the full log
>  [https://drive.google.com/open?id=1hQrF25blNgKLXijOMYJ30wn-Lfy6uKVm]
>   
>  The NPE is coming from 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
>   
>  _byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs()));_ 
>  so I don't know whether it's the cloudManager or the time source that's null.
>  That bit of the ZkController was added by 
> https://issues.apache.org/jira/browse/SOLR-13072 and I see it is only hit if
>  
> _zkStateReader.getAutoScalingConfig().hasTriggerForEvents(TriggerEventType.NODELOST);_
>   
>  We have never (knowingly) configured autoscaling and we don't use it, but I 
> see the autoscaling files are present in ZK. Is the autoscaling.json etc 
> created by default when it is absent in ZooKeeper?
>   
>  The interesting bit of the log above, aside from the NPE, is this:
>  _2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)_
>  _2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)_
>  Which suggests to me that there's either a race condition or the problem is 
> caused by some zookeeper outage during startup. Since there's a 16 second gap 
> between those messages.
>   
>  It's possible that the problem is in some way caused by our own code in 
> xxx.solr.servlet.RobustSolrDispatchFilter which wraps the 
> SolrDispatchFIlter, and creates a SolrZkClient in a try/with resources (so 
> should be autoclosed), but that all happens before createCoreContainer is 
> called."





[jira] [Resolved] (SOLR-14505) Intermittent NPE in ZkController.registerLiveNodesListener()

2020-05-21 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14505.
-
Resolution: Duplicate

> Intermittent NPE in ZkController.registerLiveNodesListener()
> 
>
> Key: SOLR-14505
> URL: https://issues.apache.org/jira/browse/SOLR-14505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.3.1
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Minor
>
> Reported by [~cjcowie] in SOLR-13072 & mailing lists:
> "Running on Solr 8.3.1
> {code:java}
> 2020-05-19 03:44:40.220 INFO  (main) [   ] o.a.s.c.ZkContainer Zookeeper 
> client=X:9983/_cluster
> 2020-05-19 03:44:40.238 INFO  (main) [   ] o.a.s.c.c.SolrZkClient Using 
> ZkCredentialsProvider: 
> xxx.zookeeper.auth.internal.EncodedZkCredentialsProvider
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could 
> not start Solr. Check solr/home property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
> at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
> at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
> at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
> at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
> at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:631)at 
> xxx.solr.servlet.RobustSolrDispatchFilter.createCoreContainer(RobustSolrDispatchFilter.java:71)
>  {code}
> I couldn't find any bug reports in JIRA for the NPE.
>   
>  Here's the full log
>  [https://drive.google.com/open?id=1hQrF25blNgKLXijOMYJ30wn-Lfy6uKVm]
>   
>  The NPE is coming from 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
>   
>  _byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs()));_ 
>  so I don't know whether it's the cloudManager or the time source that's null.
>  That bit of the ZkController was added by 
> https://issues.apache.org/jira/browse/SOLR-13072 and I see it is only hit if
>  
> _zkStateReader.getAutoScalingConfig().hasTriggerForEvents(TriggerEventType.NODELOST);_
>   
>  We have never (knowingly) configured autoscaling and we don't use it, but I 
> see the autoscaling files are present in ZK. Is the autoscaling.json etc 
> created by default when it is absent in ZooKeeper?
>   
>  The interesting bit of the log above, aside from the NPE, is this:
>  _2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)_
>  _2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)_
>  Which suggests to me that there's either a race condition or the problem is 
> caused by some ZooKeeper outage during startup, since there's a 16-second gap 
> between those messages.
>   
>  It's possible that the problem is in some way caused by our own code in 
> xxx.solr.servlet.RobustSolrDispatchFilter, which wraps the 
> SolrDispatchFilter and creates a SolrZkClient in a try-with-resources block (so 
> it should be autoclosed), but that all happens before createCoreContainer is 
> called."



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: 

[jira] [Created] (SOLR-14505) Intermittent NPE in ZkController.registerLiveNodesListener()

2020-05-21 Thread Andrzej Bialecki (Jira)
Andrzej Bialecki created SOLR-14505:
---

 Summary: Intermittent NPE in 
ZkController.registerLiveNodesListener()
 Key: SOLR-14505
 URL: https://issues.apache.org/jira/browse/SOLR-14505
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 8.3.1
Reporter: Andrzej Bialecki
Assignee: Andrzej Bialecki


Reported by [~cjcowie] in SOLR-13072 & mailing lists:

"Running on Solr 8.3.1
{code:java}
2020-05-19 03:44:40.220 INFO  (main) [   ] o.a.s.c.ZkContainer Zookeeper 
client=X:9983/_cluster
2020-05-19 03:44:40.238 INFO  (main) [   ] o.a.s.c.c.SolrZkClient Using 
ZkCredentialsProvider: 
xxx.zookeeper.auth.internal.EncodedZkCredentialsProvider
2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated live 
nodes from ZooKeeper... (0) -> (1)
2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
2020-05-19 03:44:56.614 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could not 
start Solr. Check solr/home property and the logs
2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
null:java.lang.NullPointerException
at 
org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
at 
org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
at 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:631)at 
xxx.solr.servlet.RobustSolrDispatchFilter.createCoreContainer(RobustSolrDispatchFilter.java:71)
 {code}
I couldn't find any bug reports in JIRA for the NPE.
 
Here's the full log
[https://drive.google.com/open?id=1hQrF25blNgKLXijOMYJ30wn-Lfy6uKVm]
 
The NPE is coming from 
[https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
 
_byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
cloudManager.getTimeSource().getEpochTimeNs()));_ 
so I don't know whether it's the cloudManager or the time source that's null.
That bit of the ZkController was added by 
https://issues.apache.org/jira/browse/SOLR-13072 and I see it is only hit if
_zkStateReader.getAutoScalingConfig().hasTriggerForEvents(TriggerEventType.NODELOST);_
 
We have never (knowingly) configured autoscaling and we don't use it, but I see 
the autoscaling files are present in ZK. Is the autoscaling.json etc created by 
default when it is absent in ZooKeeper?
 
The interesting bit of the log above, aside from the NPE, is this:
_2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
live nodes from ZooKeeper... (0) -> (1)_
_2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)_
Which suggests to me that there's either a race condition or the problem is 
caused by some ZooKeeper outage during startup, since there's a 16-second gap 
between those messages.
 
It's possible that the problem is in some way caused by our own code in 
xxx.solr.servlet.RobustSolrDispatchFilter, which wraps the 
SolrDispatchFilter and creates a SolrZkClient in a try-with-resources block (so 
it should be autoclosed), but that all happens before createCoreContainer is 
called.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14505) Intermittent NPE in ZkController.registerLiveNodesListener()

2020-05-21 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-14505:

Description: 
Reported by [~cjcowie] in SOLR-13072 & mailing lists:

"Running on Solr 8.3.1
{code:java}
2020-05-19 03:44:40.220 INFO  (main) [   ] o.a.s.c.ZkContainer Zookeeper 
client=X:9983/_cluster
2020-05-19 03:44:40.238 INFO  (main) [   ] o.a.s.c.c.SolrZkClient Using 
ZkCredentialsProvider: 
xxx.zookeeper.auth.internal.EncodedZkCredentialsProvider
2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated live 
nodes from ZooKeeper... (0) -> (1)
2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)
2020-05-19 03:44:56.614 ERROR (main) [   ] o.a.s.s.SolrDispatchFilter Could not 
start Solr. Check solr/home property and the logs
2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
null:java.lang.NullPointerException
at 
org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
at 
org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
at 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:631)at 
xxx.solr.servlet.RobustSolrDispatchFilter.createCoreContainer(RobustSolrDispatchFilter.java:71)
 {code}
I couldn't find any bug reports in JIRA for the NPE.
  
 Here's the full log
 [https://drive.google.com/open?id=1hQrF25blNgKLXijOMYJ30wn-Lfy6uKVm]
  
 The NPE is coming from 
[https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
  
 _byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
cloudManager.getTimeSource().getEpochTimeNs()));_ 
 so I don't know whether it's the cloudManager or the time source that's null.
 That bit of the ZkController was added by 
https://issues.apache.org/jira/browse/SOLR-13072 and I see it is only hit if
 
_zkStateReader.getAutoScalingConfig().hasTriggerForEvents(TriggerEventType.NODELOST);_
  
 We have never (knowingly) configured autoscaling and we don't use it, but I 
see the autoscaling files are present in ZK. Is the autoscaling.json etc 
created by default when it is absent in ZooKeeper?
  
 The interesting bit of the log above, aside from the NPE, is this:
 _2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
live nodes from ZooKeeper... (0) -> (1)_
 _2020-05-19 03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (0)_
>  Which suggests to me that there's either a race condition or the problem is 
> caused by some ZooKeeper outage during startup, since there's a 16-second gap 
> between those messages.
>   
>  It's possible that the problem is in some way caused by our own code in 
> xxx.solr.servlet.RobustSolrDispatchFilter, which wraps the 
> SolrDispatchFilter and creates a SolrZkClient in a try-with-resources block (so 
> it should be autoclosed), but that all happens before createCoreContainer is 
> called."

  was:
Reported by [~cjcowie] in SOLR-13072 & mailing lists:

"Running on Solr 8.3.1
{code:java}
2020-05-19 03:44:40.220 INFO  (main) [   ] o.a.s.c.ZkContainer Zookeeper 
client=X:9983/_cluster
2020-05-19 03:44:40.238 INFO  (main) [   ] o.a.s.c.c.SolrZkClient Using 
ZkCredentialsProvider: 
xxx.zookeeper.auth.internal.EncodedZkCredentialsProvider
2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to ZooKeeper
2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
o.a.s.c.c.ConnectionManager zkClient has connected
2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
is connected to ZooKeeper
2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Waiting 
for client to connect to 

[jira] [Commented] (SOLR-14384) Stack SolrRequestInfo

2020-05-21 Thread Nazerke Seidan (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113070#comment-17113070
 ] 

Nazerke Seidan commented on SOLR-14384:
---

[~mkhl] I think it is a good idea to add some more information to the 
description for clarification purposes. I couldn't edit the description. 

> Stack SolrRequestInfo
> -
>
> Key: SOLR-14384
> URL: https://issues.apache.org/jira/browse/SOLR-14384
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Sometimes SolrRequestInfo needs to be suspended or overridden. [~dsmiley] 
> suggests introducing stacking for it. See linked issues for the context and 
> discussion.
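
A rough illustration of the stacking being proposed, using a plain thread-local
stack (hypothetical class; this is not the actual SolrRequestInfo API, just the
pattern under discussion):
{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch only -- not the SolrRequestInfo API. It shows the "stacking"
// idea: rather than one thread-local value that gets clobbered, keep a per-thread
// stack so a caller can push a temporary context and restore the previous one.
final class RequestInfoStack<T> {
  private final ThreadLocal<Deque<T>> stack = ThreadLocal.withInitial(ArrayDeque::new);

  void push(T info) {          // suspend the current info by stacking a new one on top
    stack.get().push(info);
  }

  T current() {                // the active info for this thread, or null if none
    return stack.get().peek();
  }

  T pop() {                    // remove the pushed info; the previous one becomes active
    Deque<T> deque = stack.get();
    T popped = deque.pop();
    if (deque.isEmpty()) {
      stack.remove();          // avoid leaking the thread-local on pooled threads
    }
    return popped;
  }
}
{code}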



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe closed pull request #1511: SOLR-13289: minExactHits -> minExactCount

2020-05-21 Thread GitBox


tflobbe closed pull request #1511:
URL: https://github.com/apache/lucene-solr/pull/1511


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9376) Fix or suppress 20 resource leak precommit warnings in lucene/search

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113652#comment-17113652
 ] 

ASF subversion and git services commented on LUCENE-9376:
-

Commit 21b08d5cab743f171b982bc8f929a75556a44ab6 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=21b08d5 ]

LUCENE-9376: Fix or suppress 20 resource leak precommit warnings in 
lucene/search


> Fix or suppress 20 resource leak precommit warnings in lucene/search
> 
>
> Key: LUCENE-9376
> URL: https://issues.apache.org/jira/browse/LUCENE-9376
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Andras Salamon
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-9376.patch
>
>
> There are 20 resource leak precommit warnings in org/apache/lucene/search:
> {noformat}
>  [ecj-lint] 71. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 414)
>  [ecj-lint]   MockAnalyzer analyzer = new MockAnalyzer(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 72. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 557)
>  [ecj-lint]   RandomIndexWriter w = new RandomIndexWriter(random(), dir);
>  [ecj-lint] ^
>  [ecj-lint] Resource leak: 'w' is never closed
> --
>  [ecj-lint] 73. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'mgr' is not closed at this location
> --
>  [ecj-lint] 74. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'w' is not closed at this location
> --
>  [ecj-lint] 75. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSameScoresWithThreads.java
>  (at line 49)
>  [ecj-lint]   LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 76. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSearcherManager.java
>  (at line 313)
>  [ecj-lint]   SearcherManager sm = new SearcherManager(writer, false, false, 
> new SearcherFactory());
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'sm' is never closed
> --
>  [ecj-lint] 79. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestTermQuery.java
>  (at line 52)
>  [ecj-lint]   new TermQuery(new Term("foo", "bar"), TermStates.build(new 
> MultiReader().getContext(), new Term("foo", "bar"), true)));
>  [ecj-lint]  
> ^
>  [ecj-lint] Resource leak: '' is never closed
> --
>  [ecj-lint] 15. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/test-framework/src/java/org/apache/lucene/search/ShardSearchingTestBase.java
>  (at line 554)
>  [ecj-lint]   final LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]  
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/UnifiedHighlighter.java
>  (at line 598)
>  [ecj-lint]   IndexReader indexReaderWithTermVecCache =
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'indexReaderWithTermVecCache' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/HighlighterTest.java
>  (at line 1365)
>  [ecj-lint]   Analyzer analyzer = new SynonymAnalyzer(synonyms);
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 2. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/TokenSourcesTest.java
>  (at line 379)
>  [ecj-lint]   final BaseTermVectorsFormatTestCase.RandomTokenStream 
> rTokenStream =
>  [ecj-lint] 
> 
>  [ecj-lint] Resource leak: 'rTokenStream' is never closed
> --
>  [ecj-lint] 3. 
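
For context, warnings like "Resource leak: 'analyzer' is never closed" are usually
resolved either by scoping the resource in try-with-resources or, where closing is
genuinely handled elsewhere, by suppressing at the narrowest scope. A stand-alone
sketch of the first option (generic names; the real warnings above are in Lucene
test classes such as TestFuzzyQuery, this only shows the pattern):
{code:java}
import java.io.Closeable;

public class ResourceLeakFixExample {
  static class NoisyResource implements Closeable {
    @Override
    public void close() {
      System.out.println("closed");
    }
  }

  public static void main(String[] args) {
    // Before: NoisyResource r = new NoisyResource();   // ecj-lint flags this
    try (NoisyResource r = new NoisyResource()) {        // After: closed automatically
      System.out.println("using " + r);
    }
  }
}
{code}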

[jira] [Updated] (SOLR-14479) Fix or suppress warnings in solr/analysis

2020-05-21 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14479:
--
Summary: Fix or suppress warnings in solr/analysis  (was: Fix or suppress 
warnings in solr/core/analysis)

> Fix or suppress warnings in solr/analysis
> -
>
> Key: SOLR-14479
> URL: https://issues.apache.org/jira/browse/SOLR-14479
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Erick Erickson
>Assignee: Gus Heck
>Priority: Major
>
> [~gus] Ask and ye shall receive.
> Here's how I'd like to approach this:
>  * Let's start with solr/core, one subdirectory at a time.
>  * See SOLR-14474 for how we want to address auxiliary classes, especially 
> the question to move them to their own file or nest them. It'll be fuzzy 
> until we get some more experience.
>  * Let's just clean everything up _except_ deprecations. My thinking here is 
> that there will be a bunch of code changes that we can/should backport to 8x 
> to clean up the warnings. Deprecations will be (probably) 9.0 only so 
> there'll be fewer problems with maintaining the two branches if we leave 
> deprecations out of the mix for the present.
>  * Err on the side of adding @SuppressWarnings rather than code changes for 
> this phase. If it's reasonably safe to change the code (say by adding ) do 
> so, but substantive changes are too likely to have unintended consequences. 
> I'd like to reach a consensus on what changes are "safe", that'll probably be 
> an ongoing discussion as we run into them for a while.
>  * I expect there'll be a certain amount of stepping on each other's toes: no 
> doubt cleaning some things up in one of the subdirectories will require changing 
> something in an ancestor directory, but we can deal with those as they come up; 
> probably that'll just mean merging the current master with the fork we're 
> working on...
> Let me know what you think or if you'd like to change the approach.
> Oh, and all I did here was take the first subdirectory of solr/core that I 
> found, feel free to take on something else.
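
A tiny, self-contained example of the "suppress at the narrowest scope" option
mentioned above (class and method names are made up; the real cleanups are spread
across solr/core):
{code:java}
import java.util.List;
import java.util.Map;

// Made-up class/method to illustrate suppressing an unchecked-cast warning on the
// smallest possible scope rather than restructuring working code.
class WarningCleanupExample {
  @SuppressWarnings("unchecked")
  static List<String> readListParam(Map<String, Object> args, String key) {
    // The cast is known safe from how 'args' is built elsewhere, so the unchecked
    // warning is suppressed on just this method instead of rewriting the caller.
    return (List<String>) args.get(key);
  }
}
{code}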



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14480) Fix or suppress warnings in solr/api

2020-05-21 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14480:
--
Summary: Fix or suppress warnings in solr/api  (was: Fix or suppress 
warnings in solr/core/api)

> Fix or suppress warnings in solr/api
> 
>
> Key: SOLR-14480
> URL: https://issues.apache.org/jira/browse/SOLR-14480
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Erick Erickson
>Assignee: Atri Sharma
>Priority: Major
>
> [~atri] Here's one for you!
> Here's how I'd like to approach this:
>  * Let's start with solr/core, one subdirectory at a time.
>  * See SOLR-14474 for how we want to address auxiliary classes, especially 
> the question to move them to their own file or nest them. It'll be fuzzy 
> until we get some more experience.
>  * Let's just clean everything up _except_ deprecations. My thinking here is 
> that there will be a bunch of code changes that we can/should backport to 8x 
> to clean up the warnings. Deprecations will be (probably) 9.0 only so 
> there'll be fewer problems with maintaining the two branches if we leave 
> deprecations out of the mix for the present.
>  * Err on the side of adding @SuppressWarnings rather than code changes for 
> this phase. If it's reasonably safe to change the code (say by adding ) do 
> so, but substantive changes are too likely to have unintended consequences. 
> I'd like to reach a consensus on what changes are "safe", that'll probably be 
> an ongoing discussion as we run into them for a while.
>  * I expect there'll be a certain amount of stepping on each other's toes: no 
> doubt cleaning some things up in one of the subdirectories will require changing 
> something in an ancestor directory, but we can deal with those as they come up; 
> probably that'll just mean merging the current master with the fork we're 
> working on...
> Let me know what you think or if you'd like to change the approach.
> Oh, and all I did here was take the second subdirectory of solr/core that I 
> found, feel free to take on something else.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe opened a new pull request #1511: SOLR-13289: minExactHits -> minExactCount

2020-05-21 Thread GitBox


tflobbe opened a new pull request #1511:
URL: https://github.com/apache/lucene-solr/pull/1511


   Rename the parameter used to define the number of hits to count
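
For anyone tracking the rename, a minimal request using the new name (illustrative
query; minExactCount is the post-rename parameter, and once at least that many hits
have been counted exactly, numFound may be a lower bound rather than an exact count):
{noformat}
/select?q=text:solr&rows=10&minExactCount=100
{noformat}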



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7368) Remove queryNorm

2020-05-21 Thread Dumitru Daniliuc (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113558#comment-17113558
 ] 

Dumitru Daniliuc edited comment on LUCENE-7368 at 5/21/20, 9:55 PM:


[~jpountz], thanks for looking into this! Here's the old explain message 
(Lucene 6.6.6):
{noformat}
202743.53 = , product of:
  587.6624 = sum of:
587.6624 = sum of:
  587.6624 = sum of:
587.6624 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  587.6624 = score(doc=0,freq=1.0), product of:
33.93845 = queryWeight, product of:
  1.96 = boost
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = queryNorm
17.315535 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

And here's the new one (Lucene 7.7.2):
{noformat}
11708.552 = , product of:
  33.93783 = sum of:
33.93783 = sum of:
  33.93783 = sum of:
33.93783 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  33.93783 = score(doc=0,freq=1.0), product of:
1.96 = boost
17.31522 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.31522 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 
from:
1.0 = docFreq
2.4357912E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

I'll take a look at the methods you mentioned.


was (Author: ddaniliuc):
[~jpountz], thanks for looking into this! Here's the old explain message:
{noformat}
202743.53 = , product of:
  587.6624 = sum of:
587.6624 = sum of:
  587.6624 = sum of:
587.6624 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  587.6624 = score(doc=0,freq=1.0), product of:
33.93845 = queryWeight, product of:
  1.96 = boost
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = queryNorm
17.315535 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

And here's the new one:
{noformat}
11708.552 = , product of:
  33.93783 = sum of:
33.93783 = sum of:
  33.93783 = sum of:
33.93783 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  33.93783 = score(doc=0,freq=1.0), product of:
1.96 = boost
17.31522 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.31522 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 
from:
1.0 = docFreq
2.4357912E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

I'll take a look at the methods you mentioned.

> Remove queryNorm
> 
>
> Key: LUCENE-7368
> URL: https://issues.apache.org/jira/browse/LUCENE-7368
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Major
> Fix For: 7.0
>
> Attachments: LUCENE-7368.patch
>
>
> Splitting LUCENE-7347 into smaller tasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14507) Option to pass solr.hdfs.home in API backup/restore calls

2020-05-21 Thread Haley Reeve (Jira)
Haley Reeve created SOLR-14507:
--

 Summary: Option to pass solr.hdfs.home in API backup/restore calls
 Key: SOLR-14507
 URL: https://issues.apache.org/jira/browse/SOLR-14507
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Backup/Restore
Reporter: Haley Reeve


The Solr backup/restore API has an optional parameter for specifying the 
directory to back up to. However, the HdfsBackupRepository class doesn't use 
this location when creating the HDFS Filesystem object. Instead it uses the 
solr.hdfs.home setting configured in solr.xml. This functionally means that the 
backup location, which can be passed to the API call dynamically, is limited by 
the static home directory defined in solr.xml. This requirement means that if 
the solr.hdfs.home path and backup location don't share the same URI scheme and 
hostname, the backup will fail, even if the backup could otherwise have been 
written to the specified location successfully.

If we had the option to pass the solr.hdfs.home path as part of the API call, 
it would remove this limitation on the backup location.
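
Roughly what the request shapes look like, assuming an "hdfs" repository is
configured in solr.xml. The first form exists today; the extra parameter in the
second is only the proposal from this issue and its name here is illustrative:
{noformat}
# Today: 'location' is accepted, but the filesystem is still created from the
# solr.hdfs.home configured in solr.xml.
/solr/admin/collections?action=BACKUP&name=nightly&collection=products&repository=hdfs&location=hdfs://backupnn:8020/backups

# Proposed (parameter name illustrative): let the call override the HDFS home too.
/solr/admin/collections?action=BACKUP&name=nightly&collection=products&repository=hdfs&location=hdfs://backupnn:8020/backups&solr.hdfs.home=hdfs://backupnn:8020/solr
{noformat}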



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe merged pull request #1517: SOLR-13289: Use the final collector's scoreMode

2020-05-21 Thread GitBox


tflobbe merged pull request #1517:
URL: https://github.com/apache/lucene-solr/pull/1517


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13289) Support for BlockMax WAND

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113605#comment-17113605
 ] 

ASF subversion and git services commented on SOLR-13289:


Commit 5e9483e7885cab47b7d0e6249cfeb1fc02ffc257 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5e9483e ]

SOLR-13289: Use the final collector's scoreMode (#1517)

This is needed in case a PostFilter changes the scoreMode

> Support for BlockMax WAND
> -
>
> Key: SOLR-13289
> URL: https://issues.apache.org/jira/browse/SOLR-13289
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Attachments: SOLR-13289.patch, SOLR-13289.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to 
> expose this via Solr. When enabled, the numFound returned will not be exact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1501: SOLR-13289: Add Refguide changes

2020-05-21 Thread GitBox


tflobbe commented on a change in pull request #1501:
URL: https://github.com/apache/lucene-solr/pull/1501#discussion_r428974497



##
File path: solr/solr-ref-guide/src/common-query-parameters.adoc
##
@@ -361,3 +361,42 @@ This is what happens if a similar request is sent that 
adds `echoParams=all` to
   }
 }
 
+
+== minExactHits Parameter

Review comment:
   Discussed in https://issues.apache.org/jira/browse/SOLR-13289





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7368) Remove queryNorm

2020-05-21 Thread Dumitru Daniliuc (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113558#comment-17113558
 ] 

Dumitru Daniliuc commented on LUCENE-7368:
--

[~jpountz], thanks for looking into this! Here's the old explain message:
{noformat}
202743.53 = , product of:
  587.6624 = sum of:
587.6624 = sum of:
  587.6624 = sum of:
587.6624 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  587.6624 = score(doc=0,freq=1.0), product of:
33.93845 = queryWeight, product of:
  1.96 = boost
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = queryNorm
17.315535 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

And here's the new one:
{noformat}
11708.552 = , product of:
  33.93783 = sum of:
33.93783 = sum of:
  33.93783 = sum of:
33.93783 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  33.93783 = score(doc=0,freq=1.0), product of:
1.96 = boost
17.31522 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.31522 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 
from:
1.0 = docFreq
2.4357912E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

I'll take a look at the IndexSearcher methods you mentioned and see if we 
missed anything in our code (it's possible we override some of this behavior, 
and did not make the appropriate changes).
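
Purely as arithmetic on the numbers quoted above, the per-clause scores line up
once the removed queryWeight factor is taken into account, which suggests the
underlying inputs (tf, idf, boost) are unchanged:
{noformat}
old:  587.6624  ~= queryWeight * fieldWeight
                ~= (1.96 * 17.315535 * 1.0) * 17.315535
new:   33.93783 ~= boost * fieldWeight ~= 1.96 * 17.31522

scale factor ~= 17.32 (the dropped idf * queryNorm term), which matches the
top-level totals: 202743.53 / 11708.552 ~= 17.32
{noformat}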

> Remove queryNorm
> 
>
> Key: LUCENE-7368
> URL: https://issues.apache.org/jira/browse/LUCENE-7368
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Major
> Fix For: 7.0
>
> Attachments: LUCENE-7368.patch
>
>
> Splitting LUCENE-7347 into smaller tasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9376) Fix or suppress 20 resource leak precommit warnings in lucene/search

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113654#comment-17113654
 ] 

ASF subversion and git services commented on LUCENE-9376:
-

Commit 8e578b4e51cbab206c31653077ce4a3e3a6879b8 in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8e578b4 ]

LUCENE-9376: Fix or suppress 20 resource leak precommit warnings in 
lucene/search


> Fix or suppress 20 resource leak precommit warnings in lucene/search
> 
>
> Key: LUCENE-9376
> URL: https://issues.apache.org/jira/browse/LUCENE-9376
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Andras Salamon
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-9376.patch
>
>
> There are 20 resource leak precommit warnings in org/apache/lucene/search:
> {noformat}
>  [ecj-lint] 71. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 414)
>  [ecj-lint]   MockAnalyzer analyzer = new MockAnalyzer(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 72. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 557)
>  [ecj-lint]   RandomIndexWriter w = new RandomIndexWriter(random(), dir);
>  [ecj-lint] ^
>  [ecj-lint] Resource leak: 'w' is never closed
> --
>  [ecj-lint] 73. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'mgr' is not closed at this location
> --
>  [ecj-lint] 74. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'w' is not closed at this location
> --
>  [ecj-lint] 75. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSameScoresWithThreads.java
>  (at line 49)
>  [ecj-lint]   LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 76. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSearcherManager.java
>  (at line 313)
>  [ecj-lint]   SearcherManager sm = new SearcherManager(writer, false, false, 
> new SearcherFactory());
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'sm' is never closed
> --
>  [ecj-lint] 79. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestTermQuery.java
>  (at line 52)
>  [ecj-lint]   new TermQuery(new Term("foo", "bar"), TermStates.build(new 
> MultiReader().getContext(), new Term("foo", "bar"), true)));
>  [ecj-lint]  
> ^
>  [ecj-lint] Resource leak: '' is never closed
> --
>  [ecj-lint] 15. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/test-framework/src/java/org/apache/lucene/search/ShardSearchingTestBase.java
>  (at line 554)
>  [ecj-lint]   final LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]  
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/UnifiedHighlighter.java
>  (at line 598)
>  [ecj-lint]   IndexReader indexReaderWithTermVecCache =
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'indexReaderWithTermVecCache' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/HighlighterTest.java
>  (at line 1365)
>  [ecj-lint]   Analyzer analyzer = new SynonymAnalyzer(synonyms);
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 2. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/TokenSourcesTest.java
>  (at line 379)
>  [ecj-lint]   final BaseTermVectorsFormatTestCase.RandomTokenStream 
> rTokenStream =
>  [ecj-lint] 
> 
>  [ecj-lint] Resource leak: 'rTokenStream' is never closed
> --
>  [ecj-lint] 3. 

[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113687#comment-17113687
 ] 

ASF subversion and git services commented on SOLR-14504:


Commit 0728ef06e98cee5a278b8d75054d0f0c9d33a5ac in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0728ef0 ]

SOLR-14504: ZkController LiveNodesListener has NullPointerException in startup 
race.


> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19
>  03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> 
> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  
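
A minimal sketch of the fix direction described above, i.e. the quoted line with
the direct field access swapped for the accessor call (the attached
SOLR-14504.patch is the authoritative change; this only shows the shape of it):
{code:java}
// Sketch only. Going through getSolrCloudManager(), as the description suggests,
// avoids dereferencing the cloudManager field, which may still be null when a
// NODELOST live-nodes event races with ZkController startup.
byte[] json = Utils.toJSON(Collections.singletonMap("timestamp",
    getSolrCloudManager().getTimeSource().getEpochTimeNs()));
{code}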



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9374) Port check-broken-links to gradle

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113683#comment-17113683
 ] 

ASF subversion and git services commented on LUCENE-9374:
-

Commit 84ea0cb87dd7071648bd8efb97644f2af148fa7c in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=84ea0cb ]

LUCENE-9374: Add checkBrokenLinks gradle task (#1522)



> Port check-broken-links to gradle
> -
>
> Key: LUCENE-9374
> URL: https://issues.apache.org/jira/browse/LUCENE-9374
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> This is a sub-task of LUCENE-9321; adds a gradle task "checkBrokenLinks" that 
> verifies links in the entire documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13289) Support for BlockMax WAND

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113689#comment-17113689
 ] 

ASF subversion and git services commented on SOLR-13289:


Commit 3ca7628c43747a2f81188b9848a870cc7fc37f63 in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ca7628 ]

SOLR-13289: Rename minExactHits to minExactCount (#1511)



> Support for BlockMax WAND
> -
>
> Key: SOLR-13289
> URL: https://issues.apache.org/jira/browse/SOLR-13289
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Attachments: SOLR-13289.patch, SOLR-13289.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to 
> expose this via Solr. When enabled, the numFound returned will not be exact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13289) Support for BlockMax WAND

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113688#comment-17113688
 ] 

ASF subversion and git services commented on SOLR-13289:


Commit 5e9483e7885cab47b7d0e6249cfeb1fc02ffc257 in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5e9483e ]

SOLR-13289: Use the final collector's scoreMode (#1517)

This is needed in case a PostFilter changes the scoreMode

> Support for BlockMax WAND
> -
>
> Key: SOLR-13289
> URL: https://issues.apache.org/jira/browse/SOLR-13289
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Attachments: SOLR-13289.patch, SOLR-13289.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to 
> expose this via Solr. When enabled, the numFound returned will not be exact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9376) Fix or suppress 20 resource leak precommit warnings in lucene/search

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113691#comment-17113691
 ] 

ASF subversion and git services commented on LUCENE-9376:
-

Commit 21b08d5cab743f171b982bc8f929a75556a44ab6 in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=21b08d5 ]

LUCENE-9376: Fix or suppress 20 resource leak precommit warnings in 
lucene/search


> Fix or suppress 20 resource leak precommit warnings in lucene/search
> 
>
> Key: LUCENE-9376
> URL: https://issues.apache.org/jira/browse/LUCENE-9376
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Andras Salamon
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 8.6
>
> Attachments: LUCENE-9376.patch
>
>
> There are 20 resource leak precommit warnings in org/apache/lucene/search:
> {noformat}
>  [ecj-lint] 71. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 414)
>  [ecj-lint]   MockAnalyzer analyzer = new MockAnalyzer(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 72. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 557)
>  [ecj-lint]   RandomIndexWriter w = new RandomIndexWriter(random(), dir);
>  [ecj-lint] ^
>  [ecj-lint] Resource leak: 'w' is never closed
> --
>  [ecj-lint] 73. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'mgr' is not closed at this location
> --
>  [ecj-lint] 74. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'w' is not closed at this location
> --
>  [ecj-lint] 75. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSameScoresWithThreads.java
>  (at line 49)
>  [ecj-lint]   LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 76. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSearcherManager.java
>  (at line 313)
>  [ecj-lint]   SearcherManager sm = new SearcherManager(writer, false, false, 
> new SearcherFactory());
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'sm' is never closed
> --
>  [ecj-lint] 79. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestTermQuery.java
>  (at line 52)
>  [ecj-lint]   new TermQuery(new Term("foo", "bar"), TermStates.build(new 
> MultiReader().getContext(), new Term("foo", "bar"), true)));
>  [ecj-lint]  
> ^
>  [ecj-lint] Resource leak: '' is never closed
> --
>  [ecj-lint] 15. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/test-framework/src/java/org/apache/lucene/search/ShardSearchingTestBase.java
>  (at line 554)
>  [ecj-lint]   final LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]  
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/UnifiedHighlighter.java
>  (at line 598)
>  [ecj-lint]   IndexReader indexReaderWithTermVecCache =
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'indexReaderWithTermVecCache' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/HighlighterTest.java
>  (at line 1365)
>  [ecj-lint]   Analyzer analyzer = new SynonymAnalyzer(synonyms);
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 2. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/TokenSourcesTest.java
>  (at line 379)
>  [ecj-lint]   final BaseTermVectorsFormatTestCase.RandomTokenStream 
> rTokenStream =
>  [ecj-lint] 
> 
>  [ecj-lint] Resource leak: 

[jira] [Commented] (SOLR-13289) Support for BlockMax WAND

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113690#comment-17113690
 ] 

ASF subversion and git services commented on SOLR-13289:


Commit 16a22fcf564c54cf6e05e5e5c117477fb21aaa04 in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=16a22fc ]

SOLR-13289: Add Refguide changes (#1501)



> Support for BlockMax WAND
> -
>
> Key: SOLR-13289
> URL: https://issues.apache.org/jira/browse/SOLR-13289
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Attachments: SOLR-13289.patch, SOLR-13289.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to 
> expose this via Solr. When enabled, the numFound returned will not be exact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14492) many json.facet aggregations can throw ArrayIndexOutOfBoundsException when using DVHASH due to incorrect resize impl

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113684#comment-17113684
 ] 

ASF subversion and git services commented on SOLR-14492:


Commit 28209cb8b1fe2a4d8050e4877c4df2ad5d85509b in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=28209cb ]

SOLR-14492: Fix ArrayIndexOutOfBoundsException in json.facet 'terms' when 
FacetFieldProcessorByHashDV is used with aggregations over multivalued numeric 
fields

SOLR-14477: Fix incorrect 'relatedness()' calculations in json.facet 'terms' 
when 'prefix' option is used


> many json.facet aggregations can throw ArrayIndexOutOfBoundsException when 
> using DVHASH due to incorrect resize impl
> 
>
> Key: SOLR-14492
> URL: https://issues.apache.org/jira/browse/SOLR-14492
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14492.patch, SOLR-14492.patch
>
>
> It appears we have quite a few SlotAcc impls that don't properly implement 
> resize: they ask the {{Resizer}} to resize their arrays, but throw away the 
> result. (Arrays can't be resized in place; the {{Resizer}} is designed to 
> return a new replacement map, initializing empty values and/or mapping old 
> indices to new indices.)
> For many FacetFieldProcessors, this isn't (normally) a problem because they 
> create their Accs using a "max upper bound" on the possible number of slots 
> in advance -- and only use resize later to "shrink" the number of slots.
> But in the case of {{method:dvhash}} / FacetFieldProcessorByHashDV, this 
> processor starts out using a number of slots based on the size of the base 
> DocSet (rounded up to the next power of 2) maxed out at 1024, and then 
> _grows_ the SlotAccs if it encounters more values than that.
> This means that if the "base" context of the term facet is significantly 
> smaller than the number of values in the docValues field being faceted on 
> (i.e. multiValued fields), then these problematic SlotAccs won't grow properly 
> and you'll get an ArrayIndexOutOfBoundsException.
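
A stand-alone illustration of the resize pattern described above (hypothetical
names, not the actual SlotAcc/Resizer API): the helper has to return a new array
because Java arrays can't grow in place, so discarding the return value silently
leaves the accumulator at its old size.
{code:java}
import java.util.Arrays;

class ResizeExample {
  // Hypothetical stand-in for the resize contract: returns a NEW, larger array.
  static int[] resize(int[] old, int newSize, int defaultValue) {
    int[] fresh = new int[newSize];
    Arrays.fill(fresh, defaultValue);
    System.arraycopy(old, 0, fresh, 0, Math.min(old.length, newSize));
    return fresh;
  }

  int[] counts = new int[1024];

  void growBuggy(int newSize) {
    resize(counts, newSize, 0);          // BUG: result thrown away, counts stays at 1024,
  }                                      // so slot >= 1024 later throws AIOOBE

  void growFixed(int newSize) {
    counts = resize(counts, newSize, 0); // fix: keep the returned replacement array
  }
}
{code}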



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14482) Fix or suppress warnings in solr/search/facet

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113686#comment-17113686
 ] 

ASF subversion and git services commented on SOLR-14482:


Commit 9c066f60f1804c26db8be226429a0be046c5a4db in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9c066f6 ]

SOLR-14482: Fix or suppress warnings in solr/search/facet


> Fix or suppress warnings in solr/search/facet
> -
>
> Key: SOLR-14482
> URL: https://issues.apache.org/jira/browse/SOLR-14482
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Taking this on next since I've just worked on it in SOLR-10810.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14477) relatedness() values can be wrong when using 'prefix'

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113685#comment-17113685
 ] 

ASF subversion and git services commented on SOLR-14477:


Commit 28209cb8b1fe2a4d8050e4877c4df2ad5d85509b in lucene-solr's branch 
refs/heads/SOLR-14461-fileupload from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=28209cb ]

SOLR-14492: Fix ArrayIndexOutOfBoundsException in json.facet 'terms' when 
FacetFieldProcessorByHashDV is used with aggregations over multivalued numeric 
fields

SOLR-14477: Fix incorrect 'relatedness()' calculations in json.facet 'terms' 
when 'prefix' option is used


> relatedness() values can be wrong when using 'prefix'
> -
>
> Key: SOLR-14477
> URL: https://issues.apache.org/jira/browse/SOLR-14477
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14477.patch, SOLR-14477.patch, SOLR-14477.patch
>
>
> Another {{relatedness()}} bug found in json facets while working on 
> increased test coverage for SOLR-13132.
> If the {{prefix}} option is used when doing a terms facet, then the 
> {{relatedness()}} calculations can be wrong in some situations -- most notably 
> when using {{limit:-1}}, but I'm pretty sure the bug also impacts the code 
> paths where the (first) {{sort}} (or {{prelim_sort}}) is computed against the 
> {{relatedness()}} values.
> Real-world impact of this bug should be relatively low since I can't really 
> think of any practical use cases for using {{relatedness()}} in conjunction 
> with {{prefix}}.
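
For reference, the affected request shape is a json.facet terms facet that 
combines {{prefix}} with a {{relatedness()}} aggregation, along these lines. 
This is only a sketch with placeholder facet and field names; {{$fore}} and 
{{$back}} refer to foreground/background query parameters supplied separately 
with the request.
{code}
{
  "categories": {
    "type": "terms",
    "field": "cat_s",
    "prefix": "a",
    "limit": -1,
    "facet": {
      "skg": "relatedness($fore,$back)"
    }
  }
}
{code}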



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13289) Support for BlockMax WAND

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113621#comment-17113621
 ] 

ASF subversion and git services commented on SOLR-13289:


Commit 3ca7628c43747a2f81188b9848a870cc7fc37f63 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ca7628 ]

SOLR-13289: Rename minExactHits to minExactCount (#1511)



> Support for BlockMax WAND
> -
>
> Key: SOLR-13289
> URL: https://issues.apache.org/jira/browse/SOLR-13289
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Attachments: SOLR-13289.patch, SOLR-13289.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to 
> expose this via Solr. When enabled, the numFound returned will not be exact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe merged pull request #1511: SOLR-13289: minExactHits -> minExactCount

2020-05-21 Thread GitBox


tflobbe merged pull request #1511:
URL: https://github.com/apache/lucene-solr/pull/1511


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7368) Remove queryNorm

2020-05-21 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113528#comment-17113528
 ] 

Adrien Grand commented on LUCENE-7368:
--

[~ddaniliuc] It was intentional. This second IDF factor was only used for the 
normalization logic; the IDF would not be squared in the final score.

See how IndexSearcher#createNormalizedWeight works: 
[https://github.com/apache/lucene-solr/blob/branch_6x/lucene/core/src/java/org/apache/lucene/search/IndexSearcher.java#L732-L742].
Here is what would happen for a TermQuery and ClassicSimilarity:

 - The term weight is initially computed as {{boost * IDF^2}} as you noted.
 - {{float v = weight.getValueForNormalization(); // v == boost^2 * IDF^2}}
 - {{float norm = getSimilarity(needsScores).queryNorm(v); // norm == 1/sqrt(v) 
== 1/(boost * IDF)}}
 - {{weight.normalize(norm, 1.0f); // value == norm * boost * IDF^2 == IDF}}

Can you share the output of {{IndexSearcher#explain}} before and after the 
change?
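
As a back-of-the-envelope check of those steps (illustration only, not the 
actual Lucene code), plugging in the boost (1.96) and IDF (~17.3155) values 
from the explain output shared later in this thread:
{code:java}
// Arithmetic check of the normalization steps above (illustration only).
public class QueryNormCheck {
  public static void main(String[] args) {
    double boost = 1.96;     // boost from the explain output in this thread
    double idf = 17.315535;  // ClassicSimilarity IDF from the same output

    double weight = boost * idf * idf;             // initial term weight: boost * IDF^2
    double v = boost * boost * idf * idf;          // getValueForNormalization(): boost^2 * IDF^2
    double norm = 1.0 / Math.sqrt(v);              // queryNorm(v): 1/(boost * IDF)
    double normalized = norm * boost * idf * idf;  // after normalize(): == IDF

    System.out.println(weight);      // ~587.66, boost * IDF^2 before normalization
    System.out.println(boost * idf); // ~33.938, the "queryWeight" in the old explain
    System.out.println(normalized);  // ~17.3155, i.e. just the IDF
  }
}
{code}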

> Remove queryNorm
> 
>
> Key: LUCENE-7368
> URL: https://issues.apache.org/jira/browse/LUCENE-7368
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Major
> Fix For: 7.0
>
> Attachments: LUCENE-7368.patch
>
>
> Splitting LUCENE-7347 into smaller tasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-05-21 Thread Michael Sokolov (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113535#comment-17113535
 ] 

Michael Sokolov commented on LUCENE-9378:
-

Here are the index file sizes (after merging to a single segment). In total 
there was a ~6.5% reduction in index size, although the doc values (dvd) file 
shrank quite a bit more, by ~28%.
h3. Before
|4|_h4.dii|
|276168|_h4.dim|
|892220|_h4.fdt|
|840|_h4.fdx|
|4|_h4.fnm|
|1981564|_h4_Lucene80_0.dvd|
|24|_h4_Lucene80_0.dvm|
|5111752|_h4_Lucene84_0.doc|
|4108112|_h4_Lucene84_0.pos|
|1145544|_h4_Lucene84_0.tim|
|23268|_h4_Lucene84_0.tip|
|65104|_h4.nvd|
|4|_h4.nvm|
|4|_h4.si|
|4|segments_3|
|0|write.lock|
|13604636|TOTAL|
h3. After
|4|_h5.dii|
|276480|_h5.dim|
|12|_h5.fdm|
|889700|_h5.fdt|
|820|_h5.fdx|
|4|_h5.fnm|
|1421700|_h5_Lucene80_0.dvd|
|4|_h5_Lucene80_0.dvm|
|5111616|_h5_Lucene84_0.doc|
|4108024|_h5_Lucene84_0.pos|
|848876|_h5_Lucene84_0.tim|
|23244|_h5_Lucene84_0.tip|
|65104|_h5.nvd|
|4|_h5.nvm|
|4|_h5.si|
|4|segments_3|
|12745620|TOTAL|

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]
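
For what the opt-in could look like from a user's point of view, here is a 
rough sketch of a custom codec that wraps the default codec and overrides the 
doc values format. {{FilterCodec}} and {{Codec.getDefault()}} are existing 
Lucene APIs; the mode-taking doc values format constructor is hypothetical -- 
it is the proposal in this issue, not something that exists today.
{code:java}
import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.DocValuesFormat;
import org.apache.lucene.codecs.FilterCodec;

// Sketch: a custom codec that would pick a (hypothetical) uncompressed doc values mode.
public class UncompressedDocValuesCodec extends FilterCodec {

  public UncompressedDocValuesCodec() {
    super("UncompressedDocValuesCodec", Codec.getDefault());
  }

  @Override
  public DocValuesFormat docValuesFormat() {
    // Hypothetical, as proposed above:
    //   return new Lucene84DocValuesFormat(Lucene84DocValuesFormat.Mode.UNCOMPRESSED);
    // Until such a mode exists, simply delegate to the wrapped codec.
    return delegate.docValuesFormat();
  }
}
{code}
Such a codec would then typically be set on the writer, e.g. via 
{{IndexWriterConfig.setCodec(new UncompressedDocValuesCodec())}}.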



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14507) Option to pass solr.hdfs.home in API backup/restore calls

2020-05-21 Thread Haley Reeve (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haley Reeve updated SOLR-14507:
---
Status: Patch Available  (was: Open)

> Option to pass solr.hdfs.home in API backup/restore calls
> -
>
> Key: SOLR-14507
> URL: https://issues.apache.org/jira/browse/SOLR-14507
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Haley Reeve
>Priority: Major
> Attachments: SOLR-14507.patch
>
>
> The Solr backup/restore API has an optional parameter for specifying the 
> directory to backup to. However, the HdfsBackupRepository class doesn't use 
> this location when creating the HDFS Filesystem object. Instead it uses the 
> solr.hdfs.home setting configured in solr.xml. This functionally means that 
> the backup location, which can be passed to the API call dynamically, is 
> limited by the static home directory defined in solr.xml. This requirement 
> means that if the solr.hdfs.home path and backup location don't share the 
> same URI scheme and hostname, the backup will fail, even if the backup could 
> otherwise have been written to the specified location successfully.
> If we had the option to pass the solr.hdfs.home path as part of the API call, 
> it would remove this limitation on the backup location.
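
For context, this is roughly what an HDFS backup call with an explicit 
location looks like via SolrJ today (a sketch -- the collection, backup name, 
repository name, ZooKeeper address, and HDFS URI are placeholders). The point 
of this issue is that even when a location like this is passed, 
HdfsBackupRepository still builds its FileSystem from the static 
solr.hdfs.home configured in solr.xml.
{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

// Sketch of a collection backup request with an explicit location (placeholder values).
public class BackupLocationExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("zk1:2181"), Optional.empty()).build()) {

      CollectionAdminRequest.Backup backup =
          CollectionAdminRequest.backupCollection("myCollection", "myBackup");
      backup.setRepositoryName("hdfs"); // an HdfsBackupRepository configured in solr.xml
      backup.setLocation("hdfs://other-namenode:8020/backups"); // requested backup location
      backup.process(client);
    }
  }
}
{code}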



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9376) Fix or suppress 20 resource leak precommit warnings in lucene/search

2020-05-21 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-9376:
---
Fix Version/s: 8.6
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks Andras!

> Fix or suppress 20 resource leak precommit warnings in lucene/search
> 
>
> Key: LUCENE-9376
> URL: https://issues.apache.org/jira/browse/LUCENE-9376
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Andras Salamon
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 8.6
>
> Attachments: LUCENE-9376.patch
>
>
> There are 20 resource leak precommit warnings in org/apache/lucene/search:
> {noformat}
>  [ecj-lint] 71. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 414)
>  [ecj-lint]   MockAnalyzer analyzer = new MockAnalyzer(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 72. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>  (at line 557)
>  [ecj-lint]   RandomIndexWriter w = new RandomIndexWriter(random(), dir);
>  [ecj-lint] ^
>  [ecj-lint] Resource leak: 'w' is never closed
> --
>  [ecj-lint] 73. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'mgr' is not closed at this location
> --
>  [ecj-lint] 74. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
>  (at line 185)
>  [ecj-lint]   throw error.get();
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'w' is not closed at this location
> --
>  [ecj-lint] 75. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSameScoresWithThreads.java
>  (at line 49)
>  [ecj-lint]   LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 76. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestSearcherManager.java
>  (at line 313)
>  [ecj-lint]   SearcherManager sm = new SearcherManager(writer, false, false, 
> new SearcherFactory());
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'sm' is never closed
> --
>  [ecj-lint] 79. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/core/src/test/org/apache/lucene/search/TestTermQuery.java
>  (at line 52)
>  [ecj-lint]   new TermQuery(new Term("foo", "bar"), TermStates.build(new 
> MultiReader().getContext(), new Term("foo", "bar"), true)));
>  [ecj-lint]  
> ^
>  [ecj-lint] Resource leak: '' is never closed
> --
>  [ecj-lint] 15. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/test-framework/src/java/org/apache/lucene/search/ShardSearchingTestBase.java
>  (at line 554)
>  [ecj-lint]   final LineFileDocs docs = new LineFileDocs(random());
>  [ecj-lint]  
>  [ecj-lint] Resource leak: 'docs' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/UnifiedHighlighter.java
>  (at line 598)
>  [ecj-lint]   IndexReader indexReaderWithTermVecCache =
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'indexReaderWithTermVecCache' is never closed
> --
>  [ecj-lint] 1. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/HighlighterTest.java
>  (at line 1365)
>  [ecj-lint]   Analyzer analyzer = new SynonymAnalyzer(synonyms);
>  [ecj-lint]
>  [ecj-lint] Resource leak: 'analyzer' is never closed
> --
>  [ecj-lint] 2. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/TokenSourcesTest.java
>  (at line 379)
>  [ecj-lint]   final BaseTermVectorsFormatTestCase.RandomTokenStream 
> rTokenStream =
>  [ecj-lint] 
> 
>  [ecj-lint] Resource leak: 'rTokenStream' is never closed
> --
>  [ecj-lint] 3. WARNING in 
> /Users/andrassalamon/src/lucene-solr-upstream/lucene/highlighter/src/test/org/apache/lucene/search/highlight/custom/HighlightCustomQueryTest.java
>  (at line 108)
>  [ecj-lint]   

[GitHub] [lucene-solr] ErickErickson closed pull request #1526: SOLR-14495: Fix or suppress warnings in solr/search/function

2020-05-21 Thread GitBox


ErickErickson closed pull request #1526:
URL: https://github.com/apache/lucene-solr/pull/1526


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1506: SOLR-14470: Add streaming expressions to /export handler

2020-05-21 Thread GitBox


dsmiley commented on a change in pull request #1506:
URL: https://github.com/apache/lucene-solr/pull/1506#discussion_r428932332



##
File path: 
solr/core/src/java/org/apache/solr/handler/sql/FilterCalciteConnection.java
##
@@ -0,0 +1,382 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.handler.sql;
+
+import java.lang.reflect.Type;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.CallableStatement;
+import java.sql.Clob;
+import java.sql.DatabaseMetaData;
+import java.sql.NClob;
+import java.sql.PreparedStatement;
+import java.sql.SQLClientInfoException;
+import java.sql.SQLException;
+import java.sql.SQLWarning;
+import java.sql.SQLXML;
+import java.sql.Savepoint;
+import java.sql.Statement;
+import java.sql.Struct;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.Executor;
+
+import org.apache.calcite.adapter.java.JavaTypeFactory;
+import org.apache.calcite.config.CalciteConnectionConfig;
+import org.apache.calcite.jdbc.CalciteConnection;
+import org.apache.calcite.jdbc.CalcitePrepare;
+import org.apache.calcite.linq4j.Enumerator;
+import org.apache.calcite.linq4j.Queryable;
+import org.apache.calcite.linq4j.tree.Expression;
+import org.apache.calcite.schema.SchemaPlus;
+
+/**
+ * A filter that contains another {@link CalciteConnection} and
+ * allows adding pre- post-method behaviors.
+ */
+class FilterCalciteConnection implements CalciteConnection {

Review comment:
   What a class... it reminds me of one of the benefits of Kotlin lang

##
File path: solr/core/src/java/org/apache/solr/handler/export/ExportWriter.java
##
@@ -216,14 +376,53 @@ public void write(OutputStream os) throws IOException {
   return;
 }
 
+String expr = params.get(StreamParams.EXPR);
+if (expr != null) {
+  StreamFactory streamFactory = initialStreamContext.getStreamFactory();
+  try {
+StreamExpression expression = StreamExpressionParser.parse(expr);
+if (streamFactory.isEvaluator(expression)) {
+  streamExpression = new StreamExpression(StreamParams.TUPLE);
+  streamExpression.addParameter(new 
StreamExpressionNamedParameter(StreamParams.RETURN_VALUE, expression));
+} else {
+  streamExpression = expression;
+}
+  } catch (Exception e) {
+writeException(e, writer, true);
+return;
+  }
+  streamContext = new StreamContext();
+  streamContext.setRequestParams(params);
+  // nocommit enforce this?
+  streamContext.setLocal(true);
+
+  streamContext.workerID = 0;
+  streamContext.numWorkers = 1;
+  
streamContext.setSolrClientCache(initialStreamContext.getSolrClientCache());
+  streamContext.setModelCache(initialStreamContext.getModelCache());
+  streamContext.setObjectCache(initialStreamContext.getObjectCache());
+  streamContext.put("core", req.getCore().getName());

Review comment:
   naming here is confusing -- name of the thing vs the thing itself.  Do 
you have to adhere to these specific names?  Perhaps "core-object" might be 
more clear than ambiguously solr-core.

##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/StatsStream.java
##
@@ -266,91 +252,106 @@ public void close() throws IOException {
   }
 
   public Tuple read() throws IOException {
-if(index == 0) {
-  ++index;
+if(!done) {
+  done = true;
   return tuple;
 } else {
-  Map fields = new HashMap();
-  fields.put("EOF", true);
-  Tuple tuple = new Tuple(fields);
-  return tuple;
+  return Tuple.EOF();
 }
   }
 
-  private String getJsonFacetString(Metric[] _metrics) {
-StringBuilder buf = new StringBuilder();
-appendJson(buf, _metrics);
-return "{"+buf.toString()+"}";
+  public StreamComparator getStreamSort() {
+return null;
   }
 
-  private void appendJson(StringBuilder buf,
-  Metric[] _metrics) {
-
-int metricCount = 0;
+  private void addStats(ModifiableSolrParams params, Metric[] _metrics) {
+Map> m = new HashMap<>();
 for(Metric metric : _metrics) {
-  String identifier = 

[jira] [Comment Edited] (LUCENE-7368) Remove queryNorm

2020-05-21 Thread Dumitru Daniliuc (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113558#comment-17113558
 ] 

Dumitru Daniliuc edited comment on LUCENE-7368 at 5/21/20, 9:31 PM:


[~jpountz], thanks for looking into this! Here's the old explain message:
{noformat}
202743.53 = , product of:
  587.6624 = sum of:
587.6624 = sum of:
  587.6624 = sum of:
587.6624 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  587.6624 = score(doc=0,freq=1.0), product of:
33.93845 = queryWeight, product of:
  1.96 = boost
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = queryNorm
17.315535 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

And here's the new one:
{noformat}
11708.552 = , product of:
  33.93783 = sum of:
33.93783 = sum of:
  33.93783 = sum of:
33.93783 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  33.93783 = score(doc=0,freq=1.0), product of:
1.96 = boost
17.31522 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.31522 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 
from:
1.0 = docFreq
2.4357912E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

I'll take a look at the methods you mentioned.


was (Author: ddaniliuc):
[~jpountz], thanks for looking into this! Here's the old explain message:
{noformat}
202743.53 = , product of:
  587.6624 = sum of:
587.6624 = sum of:
  587.6624 = sum of:
587.6624 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  587.6624 = score(doc=0,freq=1.0), product of:
33.93845 = queryWeight, product of:
  1.96 = boost
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = queryNorm
17.315535 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.315535 = idf, computed as log((docCount+1)/(docFreq+1)) + 
1 from:
1.0 = docFreq
2.4365572E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

And here's the new one:
{noformat}
11708.552 = , product of:
  33.93783 = sum of:
33.93783 = sum of:
  33.93783 = sum of:
33.93783 = weight(username:barackobama in 0) 
[UserSimilarityProvider], result of:
  33.93783 = score(doc=0,freq=1.0), product of:
1.96 = boost
17.31522 = fieldWeight in 0, product of:
  1.0 = tf(freq=1.0), with freq of:
1.0 = termFreq=1.0
  17.31522 = idf, computed as log((docCount+1)/(docFreq+1)) + 1 
from:
1.0 = docFreq
2.4357912E7 = docCount
  1.0 = fieldNorm(doc=0)
  345.0 = 
{noformat}

I'll take a look at the IndexSearcher methods you mentioned and see if we 
missed anything in our code (it's possible we override some of this behavior, 
and did not make the appropriate changes).

> Remove queryNorm
> 
>
> Key: LUCENE-7368
> URL: https://issues.apache.org/jira/browse/LUCENE-7368
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Major
> Fix For: 7.0
>
> Attachments: LUCENE-7368.patch
>
>
> Splitting LUCENE-7347 into smaller tasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14507) Option to pass solr.hdfs.home in API backup/restore calls

2020-05-21 Thread Haley Reeve (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haley Reeve updated SOLR-14507:
---
Attachment: SOLR-14507.patch
Status: Open  (was: Open)

> Option to pass solr.hdfs.home in API backup/restore calls
> -
>
> Key: SOLR-14507
> URL: https://issues.apache.org/jira/browse/SOLR-14507
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Haley Reeve
>Priority: Major
> Attachments: SOLR-14507.patch
>
>
> The Solr backup/restore API has an optional parameter for specifying the 
> directory to backup to. However, the HdfsBackupRepository class doesn't use 
> this location when creating the HDFS Filesystem object. Instead it uses the 
> solr.hdfs.home setting configured in solr.xml. This functionally means that 
> the backup location, which can be passed to the API call dynamically, is 
> limited by the static home directory defined in solr.xml. This requirement 
> means that if the solr.hdfs.home path and backup location don't share the 
> same URI scheme and hostname, the backup will fail, even if the backup could 
> otherwise have been written to the specified location successfully.
> If we had the option to pass the solr.hdfs.home path as part of the API call, 
> it would remove this limitation on the backup location.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe merged pull request #1501: SOLR-13289: Add Refguide changes

2020-05-21 Thread GitBox


tflobbe merged pull request #1501:
URL: https://github.com/apache/lucene-solr/pull/1501


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13289) Support for BlockMax WAND

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113629#comment-17113629
 ] 

ASF subversion and git services commented on SOLR-13289:


Commit 16a22fcf564c54cf6e05e5e5c117477fb21aaa04 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=16a22fc ]

SOLR-13289: Add Refguide changes (#1501)



> Support for BlockMax WAND
> -
>
> Key: SOLR-13289
> URL: https://issues.apache.org/jira/browse/SOLR-13289
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Major
> Attachments: SOLR-13289.patch, SOLR-13289.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> LUCENE-8135 introduced BlockMax WAND as a major speed improvement. Need to 
> expose this via Solr. When enabled, the numFound returned will not be exact.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11334) UnifiedSolrHighlighter returns an error when hl.fl delimited by ", "

2020-05-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11334:

Status: Patch Available  (was: Open)

> UnifiedSolrHighlighter returns an error when hl.fl delimited by ", "
> 
>
> Key: SOLR-11334
> URL: https://issues.apache.org/jira/browse/SOLR-11334
> Project: Solr
>  Issue Type: Bug
>  Components: highlighter
>Affects Versions: 6.6
> Environment: Ubuntu 17.04 (GNU/Linux 4.10.0-33-generic x86_64)
> Java HotSpot 64-Bit Server VM(build 25.114-b01, mixed mode)
>Reporter: Yasufumi Mizoguchi
>Priority: Trivial
> Attachments: SOLR-11334.patch
>
>
> UnifiedSolrHighlighter(hl.method=unified) misjudge the zero-length string as 
> a field name and returns an error when hl.fl delimited by ", "
> request:
> {code}
> $ curl -XGET 
> "http://localhost:8983/solr/techproducts/select?fl=name,%20manu&hl.fl=name,%20manu&hl.method=unified&hl=on&indent=on&q=corsair&wt=json"
> {code}
> response:
> {code}
> {
>   "responseHeader":{
> "status":400,
> "QTime":8,
> "params":{
>   "q":"corsair",
>   "hl":"on",
>   "indent":"on",
>   "fl":"name, manu",
>   "hl.fl":"name, manu",
>   "hl.method":"unified",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "name":"CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 
> (PC 3200) System Memory - Retail",
> "manu":"Corsair Microsystems Inc."},
>   {
> "name":"CORSAIR  XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 
> 400 (PC 3200) Dual Channel Kit System Memory - Retail",
> "manu":"Corsair Microsystems Inc."}]
>   },
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"undefined field ",
> "code":400}}
> {code}
> DefaultHighlighter's response:
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"corsair",
>   "hl":"on",
>   "indent":"on",
>   "fl":"name, manu",
>   "hl.fl":"name, manu",
>   "hl.method":"original",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"docs":[
>   {
> "name":"CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 
> (PC 3200) System Memory - Retail",
> "manu":"Corsair Microsystems Inc."},
>   {
> "name":"CORSAIR  XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM Unbuffered DDR 
> 400 (PC 3200) Dual Channel Kit System Memory - Retail",
> "manu":"Corsair Microsystems Inc."}]
>   },
>   "highlighting":{
> "VS1GB400C3":{
>   "name":["CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM Unbuffered 
> DDR 400 (PC 3200) System Memory - Retail"],
>   "manu":["Corsair Microsystems Inc."]},
> "TWINX2048-3200PRO":{
>   "name":["CORSAIR  XMS 2GB (2 x 1GB) 184-Pin DDR SDRAM 
> Unbuffered DDR 400 (PC 3200) Dual Channel Kit System"],
>   "manu":["Corsair Microsystems Inc."]}}}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley merged pull request #1490: SOLR-14461: Replace commons-fileupload with Jetty

2020-05-21 Thread GitBox


dsmiley merged pull request #1490:
URL: https://github.com/apache/lucene-solr/pull/1490


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12320) Not all multi-part post requests should create tmp files.

2020-05-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-12320.
-
Resolution: Not A Problem

> Not all multi-part post requests should create tmp files.
> -
>
> Key: SOLR-12320
> URL: https://issues.apache.org/jira/browse/SOLR-12320
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: David Smiley
>Priority: Minor
>
> We create tmp files for multi-part posts because often they are uploaded 
> files for Solr cell or something but we also sometimes write params only or 
> params and updates as multi-part post. These should not create any tmp files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-14461.
-
Fix Version/s: 8.6
   Resolution: Fixed

> Replace commons-fileupload use with standard Servlet/Jetty
> --
>
> Key: SOLR-14461
> URL: https://issues.apache.org/jira/browse/SOLR-14461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.6
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Commons-fileupload had utility back in the day before the Servlet 3.0 spec 
> but I think it's now obsolete.  I'd rather not maintain this dependency, 
> which includes keeping it up to date from security vulnerabilities.
> (I have work in-progress)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] shalinmangar merged pull request #1512: SOLR-13325: Add a collection selector to ComputePlanAction

2020-05-21 Thread GitBox


shalinmangar merged pull request #1512:
URL: https://github.com/apache/lucene-solr/pull/1512


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13325) Add a collection selector to ComputePlanAction

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113750#comment-17113750
 ] 

ASF subversion and git services commented on SOLR-13325:


Commit 338671e511b753955f7186e7063cd95824cdf4e0 in lucene-solr's branch 
refs/heads/master from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=338671e ]

SOLR-13325: Add a collection selector to ComputePlanAction (#1512)

ComputePlanAction now supports a collection selector of the form `collections: 
{policy: my_policy}` which can be used to select multiple collections that 
match collection property/value pairs. This is useful to maintain a whitelist 
of collections for which actions should be taken without needing to hard-code 
the collection names. The collection hints are pushed down to the policy engine 
so operations for non-matching collections are not computed at all. The 
AutoAddReplicasPlanAction now becomes a thin shim over ComputePlanAction and 
simply adds a collection selector for the collection property 
autoAddReplicas=true.
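
For reference, a trigger configuration using this selector might look roughly 
like the following (a sketch with placeholder trigger, policy, and action 
names; only the {{collections}} map on the compute_plan action is the new 
part):
{code}
{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "actions": [
      {
        "name": "compute_plan",
        "class": "solr.ComputePlanAction",
        "collections": {"policy": "my_policy"}
      },
      {
        "name": "execute_plan",
        "class": "solr.ExecutePlanAction"
      }
    ]
  }
}
{code}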

> Add a collection selector to ComputePlanAction
> --
>
> Key: SOLR-13325
> URL: https://issues.apache.org/jira/browse/SOLR-13325
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (9.0), 8.6
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Similar to SOLR-13273, it'd be nice to have a collection selector that 
> applies to compute plan action. An example use-case would be to selectively 
> add replicas on new nodes for certain collections only.
> Here is a selector that returns collections that match the given collection 
> property/value pair:
> {code}
> "collection": {"property_name": "property_value"}
> {code}
> Here's another selector that returns collections that have the given policy 
> applied
> {code}
> "collection": {"#policy": "policy_name"}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13878) Update commons-fileupload to 1.4 to fix potential resource leak issue

2020-05-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-13878.
-
Fix Version/s: 8.6
   Resolution: Not A Problem

As of 8.6, this will no longer be a dependency thanks to SOLR-14461.
I'm not sure if earlier Solr branches ought to be updated for this but it can 
be re-opened if someone chooses to do that.

> Update commons-fileupload to 1.4 to fix potential resource leak issue
> -
>
> Key: SOLR-13878
> URL: https://issues.apache.org/jira/browse/SOLR-13878
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 8.2
>Reporter: Charles Dumont
>Priority: Minor
> Fix For: 8.6
>
>
> commons-fileupload version 1.3 has a potential resource leak issue as 
> described here: https://issues.apache.org/jira/browse/FILEUPLOAD-250.  solr 
> currently has a dependency on 1.3 and should update to 1.4 to avoid being 
> potentially impacted by this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] shalinmangar commented on a change in pull request #1512: SOLR-13325: Add a collection selector to ComputePlanAction

2020-05-21 Thread GitBox


shalinmangar commented on a change in pull request #1512:
URL: https://github.com/apache/lucene-solr/pull/1512#discussion_r429042522



##
File path: solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc
##
@@ -29,12 +29,13 @@ commands which can re-balance the cluster in response to 
trigger events.
 The following parameters are configurable:
 
 `collections`::
-A comma-separated list of collection names. If this list is not empty then
-the computed operations will only calculate collection operations that affect
-listed collections and ignore any other collection operations for collections
+A comma-separated list of collection names. This can also be a selector on the 
collection property e.g. `collections: {'policy': 'my_custom_policy'}` will 
match all collections which use the policy named `my_customer_policy`.

Review comment:
   Thanks. I had reworded the description so the `my_custom_policy` and 
`my_customer_policy` are both replaced with `my_policy` everywhere. I'll keep 
the example here because it conveys how to use more than one key/value pair.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113738#comment-17113738
 ] 

ASF subversion and git services commented on SOLR-14461:


Commit 3fba3daa954938553a5bcd67d0c32d4171eeadb6 in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3fba3daa ]

SOLR-14461: Replace commons-fileupload with Jetty (#1490)



> Replace commons-fileupload use with standard Servlet/Jetty
> --
>
> Key: SOLR-14461
> URL: https://issues.apache.org/jira/browse/SOLR-14461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Commons-fileupload had utility back in the day before the Servlet 3.0 spec 
> but I think it's now obsolete.  I'd rather not maintain this dependency, 
> which includes keeping it up to date from security vulnerabilities.
> (I have work in-progress)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14461) Replace commons-fileupload use with standard Servlet/Jetty

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113755#comment-17113755
 ] 

ASF subversion and git services commented on SOLR-14461:


Commit 41b4bec51b6b2b083c5fb2170057e69693b2ff77 in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=41b4bec ]

SOLR-14461: Replace commons-fileupload with Jetty (#1490)

(cherry picked from commit 3fba3daa954938553a5bcd67d0c32d4171eeadb6)


> Replace commons-fileupload use with standard Servlet/Jetty
> --
>
> Key: SOLR-14461
> URL: https://issues.apache.org/jira/browse/SOLR-14461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Commons-fileupload had utility back in the day before the Servlet 3.0 spec 
> but I think it's now obsolete.  I'd rather not maintain this dependency, 
> which includes keeping it up to date from security vulnerabilities.
> (I have work in-progress)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113336#comment-17113336
 ] 

ASF subversion and git services commented on SOLR-14504:


Commit 0728ef06e98cee5a278b8d75054d0f0c9d33a5ac in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0728ef0 ]

SOLR-14504: ZkController LiveNodesListener has NullPointerException in startup 
race.


> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19
>  03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> 
> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13749) Implement support for joining across collections with multiple shards ( XCJF )

2020-05-21 Thread Gus Heck (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113505#comment-17113505
 ] 

Gus Heck commented on SOLR-13749:
-

The zkHost parameter is a lot less exciting, because ZooKeeper isn't going to 
accept parameters on the URL and there's no way to send data to the connection 
via this, so the most that can happen to ZooKeeper itself is spurious, empty 
connections. Perhaps it could be a DoS, but I'm not convinced that the weight 
of that on ZooKeeper would be more than the weight of the initiating requests 
to Solr anyway (perhaps with a very large Solr cluster that has a very small 
ZooKeeper managing it?). When I reviewed this initially I did fiddle around 
trying to see if I could hack ZK with this, but was unable to get more than an 
empty connection, so I decided not to worry about ZooKeeper URLs. If there's a 
hole in that logic (or ZooKeeper changes to accept FLW on the URL or 
something), then we would have a problem and need a whitelist.

If the destination server is unsecured and the zookeeper is similarly insecure 
(or the security path has been enabled intentionally for some other operation), 
then I think the result is that the data in the current server can be filtered 
by data in the destination server (which is the point of the feature). With 
some creativity this could probably be used to infer some details about the 
contents of a destination server or add some query load to it, but not directly 
expose its data.  So the risk is much lower. If this risk is unacceptable, then 
yes we need a whitelist there too. 

The primary reason for the url whitelist here is that Solr has a lot of very 
dangerous GET urls, and a great many other applications also use http. Allowing 
Solr to act as a relay is super dangerous.  

> Implement support for joining across collections with multiple shards ( XCJF )
> --
>
> Key: SOLR-13749
> URL: https://issues.apache.org/jira/browse/SOLR-13749
> Project: Solr
>  Issue Type: New Feature
>Reporter: Kevin Watters
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 8.6
>
> Attachments: 2020-03 Smiley with ASF hat.jpeg
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This ticket includes 2 query parsers.
> The first one is the "Cross-Collection Join Filter" (XCJF) query parser. It 
> can call out to a remote collection to get a set of join keys to be used as 
> a filter against the local collection.
> The second one is the Hash Range query parser, which lets you specify a field 
> name and a hash range; only the documents whose values hash into that range 
> will be returned.
> This query parser will do an intersection based on join keys between 2 
> collections.
> The local collection is the collection that you are searching against.
> The remote collection is the collection that contains the join keys that you 
> want to use as a filter.
> Each shard participating in the distributed request will execute a query 
> against the remote collection.  If the local collection is setup with the 
> compositeId router to be routed on the join key field, a hash range query is 
> applied to the remote collection query to only match the documents that 
> contain a potential match for the documents that are in the local shard/core. 
>  
>  
> Here's some vocab to help with the descriptions of the various parameters.
> ||Term||Description||
> |Local Collection|This is the main collection that is being queried.|
> |Remote Collection|This is the collection that the XCJFQuery will query to 
> resolve the join keys.|
> |XCJFQuery|The lucene query that executes a search to get back a set of join 
> keys from a remote collection|
> |HashRangeQuery|The lucene query that matches only the documents whose hash 
> code on a field falls within a specified range.|
>  
>  
> ||Param ||Required ||Description||
> |collection|Required|The name of the external Solr collection to be queried 
> to retrieve the set of join key values ( required )|
> |zkHost|Optional|The connection string to be used to connect to Zookeeper.  
> zkHost and solrUrl are both optional parameters, and at most one of them 
> should be specified.  
> If neither of zkHost or solrUrl are specified, the local Zookeeper cluster 
> will be used. ( optional )|
> |solrUrl|Optional|The URL of the external Solr node to be queried ( optional 
> )|
> |from|Required|The join key field name in the external collection ( required 
> )|
> |to|Required|The join key field name in the local collection|
> |v|See Note|The query to be executed against the external Solr collection to 
> retrieve the set of join key values.  
> Note:  The original query 

[jira] [Assigned] (SOLR-13939) Extract any non-gradle related patches (deprecations, URL fixes, etc.) from gradle effort

2020-05-21 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-13939:
-

Assignee: Erick Erickson

> Extract any non-gradle related patches (deprecations, URL fixes, etc.) from 
> gradle effort
> -
>
> Key: SOLR-13939
> URL: https://issues.apache.org/jira/browse/SOLR-13939
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13939.patch, eoe_merged.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13939) Extract any non-gradle related patches (deprecations, URL fixes, etc.) from gradle effort

2020-05-21 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17113407#comment-17113407
 ] 

Erick Erickson commented on SOLR-13939:
---

Ah, good point. I'll leave it open then and assign it to myself to get to 
"sometime"

> Extract any non-gradle related patches (deprecations, URL fixes, etc.) from 
> gradle effort
> -
>
> Key: SOLR-13939
> URL: https://issues.apache.org/jira/browse/SOLR-13939
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Priority: Major
> Attachments: SOLR-13939.patch, eoe_merged.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-05-21 Thread Viral Gandhi (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viral Gandhi updated LUCENE-9378:
-
Description: 
Lucene 8.5.1 includes a change to always [compress 
BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This caused 
(~30%) reduction in our red-line QPS (throughput). 

We think users should be given some way to opt-in for this compression feature 
instead of always being enabled which can have a substantial query time cost as 
we saw during our upgrade. [~mikemccand] suggested one possible approach by 
introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and UNCOMPRESSED) 
and allowing users to create a custom Codec subclassing the default Codec and 
pick the format they want.

Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
Mode.BEST_SPEED and Mode.BEST_COMPRESSION.

Here's related issues for adding benchmark covering BINARY doc values 
query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]

  was:
Lucene 8.5.1 includes a change to always compress BinaryDocValues. This caused 
(~30%) reduction in our red-line QPS (throughput). 

We think users should be given some way to opt-in for this compression feature 
instead of always being enabled which can have a substantial query time cost as 
we saw during our upgrade. [~mikemccand] suggested one possible approach by 
introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and UNCOMPRESSED) 
and allowing users to create a custom Codec subclassing the default Codec and 
pick the format they want.

Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
Mode.BEST_SPEED and Mode.BEST_COMPRESSION.

Here's related issues for adding benchmark covering BINARY doc values 
query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]


> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-05-21 Thread Viral Gandhi (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viral Gandhi updated LUCENE-9378:
-
Description: 
Lucene 8.5.1 includes a change to always compress BinaryDocValues. This caused 
(~30%) reduction in our red-line QPS (throughput). 

We think users should be given some way to opt-in for this compression feature 
instead of always being enabled which can have a substantial query time cost as 
we saw during our upgrade. [~mikemccand] suggested one possible approach by 
introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and UNCOMPRESSED) 
and allowing users to create a custom Codec subclassing the default Codec and 
pick the format they want.

Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
Mode.BEST_SPEED and Mode.BEST_COMPRESSION.

Here's related issues for adding benchmark covering BINARY doc values 
query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]

  was:
Lucene 8.5.1 includes a change to always [compress 
BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This caused 
(~30%) reduction in our red-line QPS (throughput). 

We think users should be given some way to opt-in for this compression feature 
instead of always being enabled which can have a substantial query time cost as 
we saw during our upgrade. [~mikemccand] suggested one possible approach by 
introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and UNCOMPRESSED) 
and allowing users to create a custom Codec subclassing the default Codec and 
pick the format they want.

Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
Mode.BEST_SPEED and Mode.BEST_COMPRESSION.

Here' related issues for adding benchmark covering BINARY doc values query-time 
performance - [https://github.com/mikemccand/luceneutil/issues/61]


> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
>
> Lucene 8.5.1 includes a change to always compress BinaryDocValues. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-05-21 Thread Viral Gandhi (Jira)
Viral Gandhi created LUCENE-9378:


 Summary: Configurable compression for BinaryDocValues
 Key: LUCENE-9378
 URL: https://issues.apache.org/jira/browse/LUCENE-9378
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Viral Gandhi


Lucene 8.5.1 includes a change to always [compress 
BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This caused 
(~30%) reduction in our red-line QPS (throughput). 

We think users should be given some way to opt-in for this compression feature 
instead of always being enabled which can have a substantial query time cost as 
we saw during our upgrade. [~mikemccand] suggested one possible approach by 
introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and UNCOMPRESSED) 
and allowing users to create a custom Codec subclassing the default Codec and 
pick the format they want.

Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
Mode.BEST_SPEED and Mode.BEST_COMPRESSION.

Here' related issues for adding benchmark covering BINARY doc values query-time 
performance - [https://github.com/mikemccand/luceneutil/issues/61]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9211) Adding compression to BinaryDocValues storage

2020-05-21 Thread Viral Gandhi (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113384#comment-17113384
 ] 

Viral Gandhi commented on LUCENE-9211:
--

This improvement had a negative impact on our internal benchmarking when we 
tried to upgrade to Lucene 8.5.1. I have created an issue regarding that - 
https://issues.apache.org/jira/browse/LUCENE-9378.

> Adding compression to BinaryDocValues storage
> -
>
> Key: LUCENE-9211
> URL: https://issues.apache.org/jira/browse/LUCENE-9211
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Mark Harwood
>Assignee: Mark Harwood
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 8.5
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While SortedSetDocValues can be used today to store identical values in a 
> compact form this is not effective for data with many unique values.
> The proposal is that BinaryDocValues should be stored in LZ4 compressed 
> blocks which can dramatically reduce disk storage costs in many cases. The 
> proposal is blocks of a number of documents are stored as a single compressed 
> blob along with metadata that records offsets where the original document 
> values can be found in the uncompressed content.
> There's a trade-off here between efficient compression (more docs-per-block = 
> better compression) and fast retrieval times (fewer docs-per-block = faster 
> read access for single values). A fixed block size of 32 docs seems like it 
> would be a reasonable compromise for most scenarios.
> A PR is up for review here [https://github.com/apache/lucene-solr/pull/1234]
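
To make that layout concrete, here is a small self-contained sketch of the block encoding being described; java.util.zip.Deflater stands in for LZ4 purely so the sketch compiles without Lucene internals, and it is not the codec's actual implementation.
{code:java}
// Conceptual sketch: the BINARY values of up to 32 documents are concatenated,
// per-document offsets are recorded, and the whole block is compressed as one blob.
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class BinaryDocValuesBlockSketch {
  static final int BLOCK_SIZE = 32; // docs per block, the fixed size proposed here

  // offsetsOut must have length docValues.length + 1; the extra slot marks the end of
  // the last value so each value's length can be recovered.
  static byte[] compressBlock(byte[][] docValues, int[] offsetsOut) {
    ByteArrayOutputStream uncompressed = new ByteArrayOutputStream();
    for (int i = 0; i < docValues.length; i++) {
      offsetsOut[i] = uncompressed.size();        // where doc i starts, pre-compression
      uncompressed.write(docValues[i], 0, docValues[i].length);
    }
    offsetsOut[docValues.length] = uncompressed.size();

    Deflater deflater = new Deflater(Deflater.BEST_SPEED); // stand-in for LZ4
    deflater.setInput(uncompressed.toByteArray());
    deflater.finish();
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    while (!deflater.finished()) {
      compressed.write(buf, 0, deflater.deflate(buf));
    }
    deflater.end();
    // On disk: this blob plus the offsets metadata. A reader has to decompress the whole
    // 32-doc block to serve a single value, which is the retrieval cost in the trade-off.
    return compressed.toByteArray();
  }
}
{code}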



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13749) Implement support for joining across collections with multiple shards ( XCJF )

2020-05-21 Thread Gus Heck (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113517#comment-17113517
 ] 

Gus Heck commented on SOLR-13749:
-

Let me clarify the above... some of it is forward looking in the even that the 
NPE I mentioned above gets changed, or some aspect of when we do/don't 
encode/decode URL's gets changed, etc... or in the event that there are 
parameter hacking/hiding/encoding tricks I didn't think of... HTTP is just too 
ubiquitous, and it initiates the connection with a path string of arbitrary 
size... the ZK protocol is only relevant to ZK servers and there is no way 
(that I know of) to make the initial zk connection send a lot of data.

> Implement support for joining across collections with multiple shards ( XCJF )
> --
>
> Key: SOLR-13749
> URL: https://issues.apache.org/jira/browse/SOLR-13749
> Project: Solr
>  Issue Type: New Feature
>Reporter: Kevin Watters
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 8.6
>
> Attachments: 2020-03 Smiley with ASF hat.jpeg
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This ticket includes 2 query parsers.
> The first one is the "Cross collection join filter"  (XCJF) parser. This is 
> the "Cross-collection join filter" query parser. It can do a call out to a 
> remote collection to get a set of join keys to be used as a filter against 
> the local collection.
> The second one is the Hash Range query parser that you can specify a field 
> name and a hash range, the result is that only the documents that would have 
> hashed to that range will be returned.
> This query parser will do an intersection based on join keys between 2 
> collections.
> The local collection is the collection that you are searching against.
> The remote collection is the collection that contains the join keys that you 
> want to use as a filter.
> Each shard participating in the distributed request will execute a query 
> against the remote collection.  If the local collection is setup with the 
> compositeId router to be routed on the join key field, a hash range query is 
> applied to the remote collection query to only match the documents that 
> contain a potential match for the documents that are in the local shard/core. 
>  
>  
> Here's some vocab to help with the descriptions of the various parameters.
> ||Term||Description||
> |Local Collection|This is the main collection that is being queried.|
> |Remote Collection|This is the collection that the XCJFQuery will query to 
> resolve the join keys.|
> |XCJFQuery|The lucene query that executes a search to get back a set of join 
> keys from a remote collection|
> |HashRangeQuery|The lucene query that matches only the documents whose hash 
> code on a field falls within a specified range.|
>  
>  
> ||Param ||Required ||Description||
> |collection|Required|The name of the external Solr collection to be queried 
> to retrieve the set of join key values ( required )|
> |zkHost|Optional|The connection string to be used to connect to Zookeeper.  
> zkHost and solrUrl are both optional parameters, and at most one of them 
> should be specified.  
> If neither of zkHost or solrUrl are specified, the local Zookeeper cluster 
> will be used. ( optional )|
> |solrUrl|Optional|The URL of the external Solr node to be queried ( optional 
> )|
> |from|Required|The join key field name in the external collection ( required 
> )|
> |to|Required|The join key field name in the local collection|
> |v|See Note|The query to be executed against the external Solr collection to 
> retrieve the set of join key values.  
> Note:  The original query can be passed at the end of the string or as the 
> "v" parameter.  
> It's recommended to use query parameter substitution with the "v" parameter 
> to ensure no issues arise with the default query parsers.|
> |routed| |true / false.  If true, the XCJF query will use each shard's hash 
> range to determine the set of join keys to retrieve for that shard.
> This parameter improves the performance of the cross-collection join, but 
> it depends on the local collection being routed by the toField.  If this 
> parameter is not specified, 
> the XCJF query will try to determine the correct value automatically.|
> |ttl| |The length of time that an XCJF query in the cache will be considered 
> valid, in seconds.  Defaults to 3600 (one hour).  
> The XCJF query will not be aware of changes to the remote collection, so 
> if the remote collection is updated, cached XCJF queries may give inaccurate 
> results.  
> After the ttl period has expired, the XCJF query will re-execute the join 
> against the remote collection.|
> |_All others_| |Any normal Solr parameter can also be specified as a local 
> param.|
>  
> 
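
As a usage sketch of the parameters in that table (collection and field names are made up, and the local-param name {{xcjf}} is assumed from this ticket's naming rather than taken from released documentation):
{code:java}
// Illustrative only; see the caveats in the sentence above.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CrossCollectionJoinExample {
  public static QueryResponse search(CloudSolrClient client) throws Exception {
    SolrQuery q = new SolrQuery("*:*");
    // Filter the local collection by join keys fetched from the remote collection.
    q.addFilterQuery("{!xcjf collection=remoteProducts from=product_id_s to=product_id_s "
        + "v=$joinQuery}");
    q.set("joinQuery", "category_s:electronics"); // query run against the remote collection
    return client.query("localOrders", q);
  }
}
{code}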

[jira] [Comment Edited] (SOLR-13749) Implement support for joining across collections with multiple shards ( XCJF )

2020-05-21 Thread Gus Heck (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113517#comment-17113517
 ] 

Gus Heck edited comment on SOLR-13749 at 5/21/20, 8:26 PM:
---

Let me clarify the above... some of it is forward looking in the event that the 
NPE I mentioned above gets changed, or some aspect of when we do/don't 
encode/decode URL's gets changed, etc... or in the event that there are 
parameter hacking/hiding/encoding tricks I didn't think of... HTTP is just too 
ubiquitous, and it initiates the connection with a path string of arbitrary 
size... the ZK protocol is only relevant to ZK servers and there is no way 
(that I know of) to make the initial zk connection send a lot of data.


was (Author: gus_heck):
Let me clarify the above... some of it is forward looking in the even that the 
NPE I mentioned above gets changed, or some aspect of when we do/don't 
encode/decode URL's gets changed, etc... or in the event that there are 
parameter hacking/hiding/encoding tricks I didn't think of... HTTP is just too 
ubiquitous, and it initiates the connection with a path string of arbitrary 
size... the ZK protocol is only relevant to ZK servers and there is no way 
(that I know of) to make the initial zk connection send a lot of data.

> Implement support for joining across collections with multiple shards ( XCJF )
> --
>
> Key: SOLR-13749
> URL: https://issues.apache.org/jira/browse/SOLR-13749
> Project: Solr
>  Issue Type: New Feature
>Reporter: Kevin Watters
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 8.6
>
> Attachments: 2020-03 Smiley with ASF hat.jpeg
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This ticket includes 2 query parsers.
> The first one is the "Cross collection join filter"  (XCJF) parser. This is 
> the "Cross-collection join filter" query parser. It can do a call out to a 
> remote collection to get a set of join keys to be used as a filter against 
> the local collection.
> The second one is the Hash Range query parser that you can specify a field 
> name and a hash range, the result is that only the documents that would have 
> hashed to that range will be returned.
> This query parser will do an intersection based on join keys between 2 
> collections.
> The local collection is the collection that you are searching against.
> The remote collection is the collection that contains the join keys that you 
> want to use as a filter.
> Each shard participating in the distributed request will execute a query 
> against the remote collection.  If the local collection is setup with the 
> compositeId router to be routed on the join key field, a hash range query is 
> applied to the remote collection query to only match the documents that 
> contain a potential match for the documents that are in the local shard/core. 
>  
>  
> Here's some vocab to help with the descriptions of the various parameters.
> ||Term||Description||
> |Local Collection|This is the main collection that is being queried.|
> |Remote Collection|This is the collection that the XCJFQuery will query to 
> resolve the join keys.|
> |XCJFQuery|The lucene query that executes a search to get back a set of join 
> keys from a remote collection|
> |HashRangeQuery|The lucene query that matches only the documents whose hash 
> code on a field falls within a specified range.|
>  
>  
> ||Param ||Required ||Description||
> |collection|Required|The name of the external Solr collection to be queried 
> to retrieve the set of join key values ( required )|
> |zkHost|Optional|The connection string to be used to connect to Zookeeper.  
> zkHost and solrUrl are both optional parameters, and at most one of them 
> should be specified.  
> If neither of zkHost or solrUrl are specified, the local Zookeeper cluster 
> will be used. ( optional )|
> |solrUrl|Optional|The URL of the external Solr node to be queried ( optional 
> )|
> |from|Required|The join key field name in the external collection ( required 
> )|
> |to|Required|The join key field name in the local collection|
> |v|See Note|The query to be executed against the external Solr collection to 
> retrieve the set of join key values.  
> Note:  The original query can be passed at the end of the string or as the 
> "v" parameter.  
> It's recommended to use query parameter substitution with the "v" parameter 
> to ensure no issues arise with the default query parsers.|
> |routed| |true / false.  If true, the XCJF query will use each shard's hash 
> range to determine the set of join keys to retrieve for that shard.
> This parameter improves the performance of the cross-collection join, but 
> it depends on the local collection being routed by the toField.  If this 
> 

[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113324#comment-17113324
 ] 

Andrzej Bialecki commented on SOLR-14504:
-

Right, I tried to come up with a unit test too, but it's such an awkward place 
that it would require some restructuring to do it. I'm not happy about that, but 
the fix is simple and makes sense... so let's do it.
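
Concretely, the change being discussed amounts to something like this (a sketch of the suggested fix, not the committed patch):
{code:java}
// Go through the lazy accessor rather than the cloudManager field, which may not be
// initialized yet when a NODELOST event arrives during startup.
byte[] json = Utils.toJSON(Collections.singletonMap("timestamp",
    getSolrCloudManager().getTimeSource().getEpochTimeNs()));
{code}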

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19
>  03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> 
> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-05-21 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113506#comment-17113506
 ] 

Adrien Grand commented on LUCENE-9378:
--

If you still have the indices, I'd be curious to know what the difference in 
disk usage is too.

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9378) Configurable compression for BinaryDocValues

2020-05-21 Thread Michael Sokolov (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113489#comment-17113489
 ] 

Michael Sokolov commented on LUCENE-9378:
-

I updated luceneutil to enable sorting by a BinaryDocValues field over title 
and ran a test across a wide range of tasks, comparing branch_8_4 (before) 
and branch_8_5 (after). In this test, all tasks have a title sort criterion 
applied. Interestingly, BrowseDateTaxoFacets shows a big improvement! But 
otherwise we see a pretty significant degradation in performance.

 
||Task||QPS before||StdDev||QPS after||StdDev||Pct diff||
|MedTerm|9.40|(3.1%)|1.68|(0.4%)|-82.1% ( -83% - -81%)|
|LowTerm|20.17|(1.8%)|3.74|(0.4%)|-81.5% ( -82% - -80%)|
|Wildcard|5.25|(3.3%)|1.02|(0.4%)|-80.6% ( -81% - -79%)|
|Prefix3|12.83|(2.3%)|2.52|(0.4%)|-80.4% ( -81% - -79%)|
|OrHighLow|3.07|(4.1%)|0.71|(0.6%)|-76.9% ( -78% - -75%)|
|HighTerm|2.79|(4.6%)|0.72|(0.5%)|-74.1% ( -75% - -72%)|
|Fuzzy2|19.88|(2.7%)|5.16|(0.5%)|-74.0% ( -75% - -72%)|
|IntNRQ|329.04|(1.4%)|85.42|(0.4%)|-74.0% ( -74% - -73%)|
|AndHighHigh|5.44|(3.1%)|1.52|(0.6%)|-72.1% ( -73% - -70%)|
|AndHighMed|7.85|(2.4%)|2.55|(0.6%)|-67.4% ( -68% - -65%)|
|LowSloppyPhrase|5.11|(2.4%)|1.90|(0.6%)|-62.9% ( -64% - -61%)|
|OrHighHigh|1.47|(4.2%)|0.56|(1.0%)|-61.7% ( -64% - -58%)|
|LowPhrase|8.21|(1.9%)|3.23|(0.6%)|-60.6% ( -61% - -59%)|
|HighSloppyPhrase|1.48|(3.2%)|0.61|(0.9%)|-58.9% ( -61% - -56%)|
|Fuzzy1|112.25|(5.7%)|46.46|(1.1%)|-58.6% ( -61% - -54%)|
|MedSloppyPhrase|2.16|(3.0%)|0.94|(0.7%)|-56.5% ( -58% - -54%)|
|OrHighMed|1.23|(4.4%)|0.54|(1.2%)|-55.9% ( -58% - -52%)|
|MedPhrase|2.87|(2.6%)|1.77|(1.0%)|-38.5% ( -40% - -35%)|
|HighPhrase|0.28|(3.3%)|0.21|(1.9%)|-24.1% ( -28% - -19%)|
|HighIntervalsOrdered|0.48|(4.7%)|0.41|(2.9%)|-16.2% ( -22% - -9%)|
|Respell|99.24|(1.7%)|86.51|(0.8%)|-12.8% ( -15% - -10%)|
|AndHighLow|302.35|(2.5%)|276.95|(2.6%)|-8.4% ( -13% - -3%)|
|BrowseDayOfYearTaxoFacets|4202.04|(3.0%)|4057.48|(2.6%)|-3.4% ( -8% - 2%)|
|BrowseMonthTaxoFacets|4160.07|(2.8%)|4080.02|(2.2%)|-1.9% ( -6% - 3%)|
|BrowseDayOfYearSSDVFacets|3.29|(4.9%)|3.29|(7.1%)|0.0% ( -11% - 12%)|
|BrowseMonthSSDVFacets|3.68|(15.7%)|3.69|(16.9%)|0.3% ( -27% - 39%)|
|BrowseDateTaxoFacets|0.54|(6.3%)|0.96|(5.8%)|77.3% ( 61% - 95%)|

> Configurable compression for BinaryDocValues
> 
>
> Key: LUCENE-9378
> URL: https://issues.apache.org/jira/browse/LUCENE-9378
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Viral Gandhi
>Priority: Minor
>
> Lucene 8.5.1 includes a change to always [compress 
> BinaryDocValues|https://issues.apache.org/jira/browse/LUCENE-9211]. This 
> caused (~30%) reduction in our red-line QPS (throughput). 
> We think users should be given some way to opt-in for this compression 
> feature instead of always being enabled which can have a substantial query 
> time cost as we saw during our upgrade. [~mikemccand] suggested one possible 
> approach by introducing a *mode* in Lucene84DocValuesFormat (COMPRESSED and 
> UNCOMPRESSED) and allowing users to create a custom Codec subclassing the 
> default Codec and pick the format they want.
> Idea is similar to Lucene50StoredFieldsFormat which has two modes, 
> Mode.BEST_SPEED and Mode.BEST_COMPRESSION.
> Here's related issues for adding benchmark covering BINARY doc values 
> query-time performance - [https://github.com/mikemccand/luceneutil/issues/61]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Security: (was: Public)

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 7.7.2, 8.1, 8.2, 8.1.1, 8.3, 8.4, 8.3.1, 8.4.1
>Reporter: Colvin Cowie
>Priority: Minor
> Fix For: 8.5
>
> Attachments: screenshot-1.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
> Windows 10
> In the Nodes view of the Admin 
> UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
> However when you click it, the only thing that gets visibly refreshed is the 
> 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !screenshot-1.png! 
> Firefox dev console shows:
> {noformat}
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> {noformat}
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> "node":"localhost:8983_solr"}}
> {noformat}
> The SystemInfoHandler does this:
> {code}
> // Try some command line things:
> try {
>   if (!Constants.WINDOWS) {
> info.add( "uname",  execute( "uname -a" ) );
> info.add( "uptime", execute( "uptime" ) );
>   }
> } catch( Exception ex ) {
>   log.warn("Unable to execute command line tools to get operating system 
> properties.", ex);
> }
> {code}
> Which appears to be the problem 

[jira] [Updated] (SOLR-14422) Solr 8.5 Admin UI shows Angular placeholders on first load / refresh

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14422:

Security: (was: Public)

> Solr 8.5 Admin UI shows Angular placeholders on first load / refresh
> 
>
> Key: SOLR-14422
> URL: https://issues.apache.org/jira/browse/SOLR-14422
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 8.5, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14422.patch, image-2020-04-21-14-51-18-923.png
>
>
> When loading / refreshing the Admin UI in 8.5.1, it briefly but _visibly_ 
> shows a placeholder for the "SolrCore Initialization Failures" error message, 
> with a lot of redness. It looks like there is a real problem. Obviously the 
> message then disappears, and it can be ignored.
> However, if I was a first time user, it would not give me confidence that 
> everything is okay. In a way, an error message that appears briefly then 
> disappears before I can finish reading it is worse than one which just stays 
> there.
>  
> Here's a screenshot of what I mean  !image-2020-04-21-14-51-18-923.png!
>  
> I suspect that SOLR-14132 will have caused this
>  
> From a (very) brief googling it seems like using the ng-cloak attribute is 
> the right way to fix this, and it certainly seems to work for me. 
> https://docs.angularjs.org/api/ng/directive/ngCloak
> I will attach a patch with it, but if someone who actually knows Angular etc 
> has a better approach then please go for it



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14469) Removed deprecated code in solr/core (master only)

2020-05-21 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113131#comment-17113131
 ] 

Erick Erickson commented on SOLR-14469:
---

One idea that has come up is to deal with the problem of accumulating cruft via 
precommit rather than by failing compilation. That is, fail precommit 
(again, master only and gradle) if:

1. the deprecation annotation doesn't have a "remove in..." message, exact form 
TBD
2. the deprecation annotation doesn't have an indication of what to do instead 
(use method ###, delete, ...), exact form TBD
3. given <1>, the Lucene/Solr version being checked is >= the 
version mentioned in the annotation.

That would make cutting the next major version more painful, since all the 
deprecations to be removed in that major version would have to be dealt with 
at once. In extremis, we could defer the code changes by bumping the 
version mentioned in the annotation.

One consequence of this approach is that merging from master to master-1 
might get messier, but it's not at all clear to me how much messier...
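
For illustration, a deprecation carrying the information in <1> and <2> might look something like the following; the names are made up and, as noted above, the exact form is TBD.
{code:java}
// Hypothetical example of the convention being discussed.
public class SomeApi {
  /**
   * @deprecated Remove in 10.0. Use {@link #newWay()} instead.
   */
  @Deprecated
  public void oldWay() {
    newWay();
  }

  public void newWay() {
    // ...
  }
}
{code}
A precommit check could then parse the "Remove in" version out of the javadoc and fail once the version being built reaches it, per <3>.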

> Removed deprecated code in solr/core (master only)
> --
>
> Key: SOLR-14469
> URL: https://issues.apache.org/jira/browse/SOLR-14469
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I'm currently working on getting all the warnings out of the code, so this is 
> something of a placeholder for a week or two.
> There will be sub-tasks, please create them when you start working on a 
> project.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14482) Fix or suppress warnings in solr/search/facet

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113170#comment-17113170
 ] 

ASF subversion and git services commented on SOLR-14482:


Commit 9e041ed80e6ab9525c692bf54cfda30267cc594b in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9e041ed ]

SOLR-14482: Fix or suppress warnings in solr/search/facet


> Fix or suppress warnings in solr/search/facet
> -
>
> Key: SOLR-14482
> URL: https://issues.apache.org/jira/browse/SOLR-14482
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Taking this on next since I've just worked on it in SOLR-10810.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14482) Fix or suppress warnings in solr/search/facet

2020-05-21 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-14482.
---
Fix Version/s: 8.6
   Resolution: Fixed

> Fix or suppress warnings in solr/search/facet
> -
>
> Key: SOLR-14482
> URL: https://issues.apache.org/jira/browse/SOLR-14482
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Taking this on next since I've just worked on it in SOLR-10810.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13939) Extract any non-gradle related patches (deprecations, URL fixes, etc.) from gradle effort

2020-05-21 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113193#comment-17113193
 ] 

Erick Erickson commented on SOLR-13939:
---

[~dweiss] Should I close this? I think it's totally obsolete

> Extract any non-gradle related patches (deprecations, URL fixes, etc.) from 
> gradle effort
> -
>
> Key: SOLR-13939
> URL: https://issues.apache.org/jira/browse/SOLR-13939
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Priority: Major
> Attachments: SOLR-13939.patch, eoe_merged.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14421) New examples in solr.in.cmd in Solr 8.5 don't work as provided

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14421:

Security: (was: Public)

> New examples in solr.in.cmd in Solr 8.5 don't work as provided
> --
>
> Key: SOLR-14421
> URL: https://issues.apache.org/jira/browse/SOLR-14421
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5, 8.5.1
>Reporter: Colvin Cowie
>Assignee: Jan Høydahl
>Priority: Trivial
> Fix For: 8.6
>
> Attachments: SOLR-14421.patch
>
>
>  
> These SOLR_OPTS examples need to be prefixed with _set_ and don't work when 
> surrounded with quotes 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr.in.cmd#L194-L199]
> {noformat}
> REM SOLR_OPTS="%SOLR_OPTS% -Dsolr.environment=prod"
> REM Specifies the path to a common library directory that will be shared 
> across all cores.
> REM Any JAR files in this directory will be added to the search path for Solr 
> plugins.
> REM If the specified path is not absolute, it will be relative to 
> `%SOLR_HOME%`.
> REM SOLR_OPTS="%SOLR_OPTS% -Dsolr.sharedLib=/path/to/lib"
> {noformat}
> Without set you will get "_'SOLR_OPTS' is not recognized as an internal or 
> external command, operable program or batch file_."
> After adding _set,_ with the quotes you get _"-Dsolr.environment=prod was 
> unexpected at this time."_
> I'll attach a patch
>  
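
Based on that description, the working form of those example lines would presumably be (prefix with {{set}}, drop the surrounding quotes):
{noformat}
REM set SOLR_OPTS=%SOLR_OPTS% -Dsolr.environment=prod
REM set SOLR_OPTS=%SOLR_OPTS% -Dsolr.sharedLib=/path/to/lib
{noformat}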



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14504:

Security: (was: Public)

> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> (1)
> 2020-05-19
>  03:44:56.606 INFO  (zkCallback-12-thread-2) [   ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> 
> (0)
> 2020-05-19 03:44:56.614 ERROR (main) [   ] 
> o.a.s.s.SolrDispatchFilter Could not start Solr. Check solr/home 
> property and the logs
> 2020-05-19 03:44:56.639 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:java.lang.NullPointerException
>   at 
> org.apache.solr.cloud.ZkController.lambda$registerLiveNodesListener$10(ZkController.java:1020)
>   at 
> org.apache.solr.common.cloud.ZkStateReader.registerLiveNodesListener(ZkStateReader.java:880)
>   at 
> org.apache.solr.cloud.ZkController.registerLiveNodesListener(ZkController.java:1035)
>   at org.apache.solr.cloud.ZkController.init(ZkController.java:917)
>   at org.apache.solr.cloud.ZkController.<init>(ZkController.java:473)
>   at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:631){noformat}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14482) Fix or suppress warnings in solr/search/facet

2020-05-21 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113171#comment-17113171
 ] 

ASF subversion and git services commented on SOLR-14482:


Commit 9c066f60f1804c26db8be226429a0be046c5a4db in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9c066f6 ]

SOLR-14482: Fix or suppress warnings in solr/search/facet


> Fix or suppress warnings in solr/search/facet
> -
>
> Key: SOLR-14482
> URL: https://issues.apache.org/jira/browse/SOLR-14482
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Taking this on next since I've just worked on it in SOLR-10810.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14428) FuzzyQuery has severe memory usage in 8.5

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14428:

Security: (was: Public)

> FuzzyQuery has severe memory usage in 8.5
> -
>
> Key: SOLR-14428
> URL: https://issues.apache.org/jira/browse/SOLR-14428
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5, 8.5.1
>Reporter: Colvin Cowie
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.6
>
> Attachments: FuzzyHammer.java, SOLR-14428-WeakReferences.patch, 
> image-2020-04-23-09-18-06-070.png, image-2020-04-24-20-09-31-179.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I sent this to the mailing list
> I'm moving from 8.3.1 to 8.5.1, and started getting Out Of Memory Errors 
> while running our normal tests. After profiling it was clear that the 
> majority of the heap was allocated through FuzzyQuery.
> LUCENE-9068 moved construction of the automata from the FuzzyTermsEnum to the 
> FuzzyQuery's constructor.
> I created a little test ( [^FuzzyHammer.java] ) that fires off fuzzy queries 
> from random UUID strings for 5 minutes
> {code}
> FIELD_NAME + ":" + UUID.randomUUID().toString().replace("-", "") + "~2"
> {code}
> When running against a vanilla Solr 8.31 and 8.4.1 there is no problem, while 
> the memory usage has increased drastically on 8.5.0 and 8.5.1.
> Comparison of heap usage while running the attached test against Solr 8.3.1 
> and 8.5.1 with a single (empty) shard and 4GB heap:
> !image-2020-04-23-09-18-06-070.png! 
> And with 4 shards on 8.4.1 and 8.5.0:
>  !screenshot-2.png! 
> I'm guessing that the memory might be being leaked if the FuzzyQuery objects 
> are referenced from the cache, while the FuzzyTermsEnum would not have been.
> Query Result Cache on 8.5.1:
>  !screenshot-3.png! 
> ~316mb in the cache
> QRC on 8.3.1
>  !screenshot-4.png! 
> <1mb
> With an empty cache, running this query 
> _field_s:e41848af85d24ac197c71db6888e17bc~2_ results in the following memory 
> allocation
> {noformat}
> 8.3.1: CACHE.searcher.queryResultCache.ramBytesUsed:  1520
> 8.5.1: CACHE.searcher.queryResultCache.ramBytesUsed:648855
> {noformat}
> ~1 gives 98253 and ~0 gives 6339 on 8.5.1. 8.3.1 is constant at 1520



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-21 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Security: (was: Public)

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14503.patch, SOLR-14503.patch
>
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container intialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used even when you specify a different waitForZk value
>  # bin/solr contains script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.
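
In code terms, the first point amounts to something like this sketch (not the attached patch), using the three-argument constructor named in the description and assuming, as bin/solr suggests, that waitForZk is given in seconds:
{code:java}
// Pass the configured wait as the connect timeout instead of letting the
// 30 second DEFAULT_CLIENT_CONNECT_TIMEOUT apply.
int zkClientConnectTimeout = waitForZkSeconds * 1000; // assumes -DwaitForZk is in seconds
SolrZkClient zkClient =
    new SolrZkClient(zkServerAddress, zkClientTimeout, zkClientConnectTimeout);
{code}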



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14506) COLSTATUS Null Pointer Exception

2020-05-21 Thread Austin Weidler (Jira)
Austin Weidler created SOLR-14506:
-

 Summary: COLSTATUS Null Pointer Exception
 Key: SOLR-14506
 URL: https://issues.apache.org/jira/browse/SOLR-14506
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI, JSON Request API, Schema and Analysis
Affects Versions: 8.5.1, 8.5
 Environment: *"Incidents" collection setup* 
 
 "incidents": {
"stateFormat": 2,
"znodeVersion": 5,
"properties": {
  "autoAddReplicas": "false",
  "maxShardsPerNode": "-1",
  "nrtReplicas": "1",
  "pullReplicas": "0",
  "replicationFactor": "1",
  "router": {
"field": "slug",
"name": "implicit"
  },
  "tlogReplicas": "0"
},
"activeShards": 1,
"inactiveShards": 0
  },
Reporter: Austin Weidler


When querying for collection status, a null pointer exception is returned. I 
believe it is caused by the use of "implicit" routing for the shards and the 
Admin Handler trying to set the "Range" attribute of a shard (when one doesn't 
exist).
{code:java}
// org.apache.solr.handler.admin.ColStatus.getColStatus(ColStatus.java:152)
sliceMap.add("range", s.getRange().toString());
{code}
I believe "getRange()" is NULL since implicit routing is used.

 
{code:java}
"trace": "java.lang.NullPointerException\n\tat 
org.apache.solr.handler.admin.ColStatus.getColStatus(ColStatus.java:152)\n\tat 
org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation.lambda$static$1(CollectionsHandler.java:547)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation.execute(CollectionsHandler.java:1326)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:266)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:254)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)\n\tat
 
org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:500)\n\tat 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)\n\tat
 org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n\tat 

[jira] [Commented] (SOLR-14504) ZkController LiveNodesListener has NullPointerException in startup race

2020-05-21 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113272#comment-17113272
 ] 

Lucene/Solr QA commented on SOLR-14504:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m  3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m 
24s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14504 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13003634/SOLR-14504.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 9c066f60f18 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/753/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/753/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ZkController LiveNodesListener has NullPointerException in startup race
> ---
>
> Key: SOLR-14504
> URL: https://issues.apache.org/jira/browse/SOLR-14504
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7, 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 
> 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14504.patch
>
>
> If a NODELOST event happens before the cloudManager is initialized then a 
> NullPointerException will occur on this line 
> [https://github.com/apache/lucene-solr/blob/c18666ad05afc02979c150aacd4810cff02e43f3/solr/core/src/java/org/apache/solr/cloud/ZkController.java#L1020]
> {code:java}
> byte[] json = Utils.toJSON(Collections.singletonMap("timestamp", 
> cloudManager.getTimeSource().getEpochTimeNs())); {code}
> Rather than accessing cloudManager directly, getSolrCloudManager() should be 
> called.
>  
> This happens very rarely, but if it happens it will stop Solr starting, 
> result in "CoreContainer is either not initialized or shutting down". Snippet 
> from 8.3.1
> {noformat}
> 2020-05-19 03:44:40.241 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.245 INFO  (zkConnectionManagerCallback-11-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.245 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.359 INFO  (main) [   ] o.a.s.c.c.ConnectionManager 
> Waiting for client to connect to ZooKeeper
> 2020-05-19 03:44:40.361 INFO  (zkConnectionManagerCallback-13-thread-1) [   ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2020-05-19 03:44:40.361 INFO  (main) [   ] o.a.s.c.c.ConnectionManager Client 
> is connected to ZooKeeper
> 2020-05-19 03:44:40.417 INFO  (main) [   ] o.a.s.c.c.ZkStateReader Updated 
> live nodes from ZooKeeper... (0) -> 

[jira] [Updated] (SOLR-14506) COLSTATUS Null Pointer Exception

2020-05-21 Thread Austin Weidler (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Weidler updated SOLR-14506:
--
Priority: Critical  (was: Major)

> COLSTATUS Null Pointer Exception
> 
>
> Key: SOLR-14506
> URL: https://issues.apache.org/jira/browse/SOLR-14506
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, JSON Request API, Schema and Analysis
>Affects Versions: 8.5, 8.5.1
> Environment: *"Incidents" collection setup* 
>  
>  "incidents": {
> "stateFormat": 2,
> "znodeVersion": 5,
> "properties": {
>   "autoAddReplicas": "false",
>   "maxShardsPerNode": "-1",
>   "nrtReplicas": "1",
>   "pullReplicas": "0",
>   "replicationFactor": "1",
>   "router": {
> "field": "slug",
> "name": "implicit"
>   },
>   "tlogReplicas": "0"
> },
> "activeShards": 1,
> "inactiveShards": 0
>   },
>Reporter: Austin Weidler
>Priority: Critical
>
> When querying for collection status, a NullPointerException is returned. I 
> believe it is caused by the use of "implicit" routing for the shards and the 
> Admin Handler trying to add the "range" attribute of a shard to the response 
> (when one doesn't exist).
> {code:java}
> // org.apache.solr.handler.admin.ColStatus.getColStatus(ColStatus.java:152)
> sliceMap.add("range", s.getRange().toString());
> {code}
> I believe "getRange()" returns null since implicit routing is used.
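> 
> One possible guard for that line (a sketch only; it assumes s is the 
> org.apache.solr.common.cloud.Slice being reported and sliceMap is the response 
> entry being built at that point):
> {code:java}
> // With the implicit router a slice has no hash range, so only report it when set.
> // DocRouter.Range is org.apache.solr.common.cloud.DocRouter.Range.
> DocRouter.Range range = s.getRange();
> if (range != null) {
>   sliceMap.add("range", range.toString());
> }
> {code}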
>  
> {code:java}
> "trace": "java.lang.NullPointerException\n\tat 
> org.apache.solr.handler.admin.ColStatus.getColStatus(ColStatus.java:152)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation.lambda$static$1(CollectionsHandler.java:547)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler$CollectionOperation.execute(CollectionsHandler.java:1326)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:266)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:254)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:842)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:808)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:559)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:420)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:352)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1607)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1577)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)\n\tat
>  
> org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:500)\n\tat 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)\n\tat
>  

[jira] [Updated] (LUCENE-9377) Unknown query type SynonymQuery in ComplexPhraseQueryParser for boolean clauses

2020-05-21 Thread Nikolay Khitrin (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Khitrin updated LUCENE-9377:

   Attachment: LUCENE-9377.patch
Lucene Fields: New,Patch Available  (was: New)
   Status: Open  (was: Open)

> Unknown query type SynonymQuery in ComplexPhraseQueryParser for boolean 
> clauses
> ---
>
> Key: LUCENE-9377
> URL: https://issues.apache.org/jira/browse/LUCENE-9377
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.4
>Reporter: Nikolay Khitrin
>Priority: Major
> Attachments: LUCENE-9377.patch
>
>
> Follow up for LUCENE-7695.
> ComplexPhraseQueryParser fails with
> {code:java}
> Unknown query type:org.apache.lucene.search.SynonymQuery{code}
> exception on queries like name: "(dog cat) something" if "dog" is expanded by 
> SynonymFilter.
> At the moment the parser converts only top-level SynonymQueries to BooleanQuery, 
> not the nested ones.
> It looks like this can be fixed by a simple conversion in the BooleanQuery 
> clause-handling loop, similar to LUCENE-7695.






[jira] [Created] (LUCENE-9377) Unknown query type SynonymQuery in ComplexPhraseQueryParser for boolean clauses

2020-05-21 Thread Nikolay Khitrin (Jira)
Nikolay Khitrin created LUCENE-9377:
---

 Summary: Unknown query type SynonymQuery in 
ComplexPhraseQueryParser for boolean clauses
 Key: LUCENE-9377
 URL: https://issues.apache.org/jira/browse/LUCENE-9377
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 8.4
Reporter: Nikolay Khitrin
 Attachments: LUCENE-9377.patch

Follow up for LUCENE-7695.

ComplexPhraseQueryParser fails with
{code:java}
Unknown query type:org.apache.lucene.search.SynonymQuery{code}
exception on queries like name: "(dog cat) something" if "dog" is expanded by 
SynonymFilter.

At the moment the parser converts only top-level SynonymQueries to BooleanQuery, 
not the nested ones.

It looks like this can be fixed by a simple conversion in the BooleanQuery 
clause-handling loop, similar to LUCENE-7695.
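
A rough sketch of that conversion (illustrative only, not the attached 
LUCENE-9377.patch): while walking the BooleanQuery clauses, a nested 
SynonymQuery could be expanded into SHOULD-ed TermQuery clauses, the same way 
the top-level case has been handled since LUCENE-7695.
{code:java}
// Illustrative helper (uses org.apache.lucene.index.Term and
// org.apache.lucene.search.*): expand a SynonymQuery into an equivalent
// BooleanQuery of SHOULD clauses so the clause-handling loop can keep
// rewriting it the same way the top-level case is handled.
private static Query expandSynonym(Query q) {
  if (q instanceof SynonymQuery) {
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    for (Term term : ((SynonymQuery) q).getTerms()) {
      builder.add(new TermQuery(term), BooleanClause.Occur.SHOULD);
    }
    return builder.build();
  }
  return q;
}
{code}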






[jira] [Commented] (SOLR-13939) Extract any non-gradle related patches (deprecations, URL fixes, etc.) from gradle effort

2020-05-21 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113203#comment-17113203
 ] 

Dawid Weiss commented on SOLR-13939:


{quote}Solr test-related fixes (thread leaks, minor changes to zk stuff I 
have no idea about).
{quote}
There could be some code/ideas taken from the above, but I'm not able to do it 
- I don't know anything about ZooKeeper. Gradle-wise I don't think we can 
extract anything more from that branch.

D.

> Extract any non-gradle related patches (deprecations, URL fixes, etc.) from 
> gradle effort
> -
>
> Key: SOLR-13939
> URL: https://issues.apache.org/jira/browse/SOLR-13939
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Priority: Major
> Attachments: SOLR-13939.patch, eoe_merged.patch
>
>







[jira] [Assigned] (SOLR-14384) Stack SolrRequestInfo

2020-05-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-14384:
---

Assignee: David Smiley
Reporter: David Smiley  (was: Mikhail Khludnev)

> Stack SolrRequestInfo
> -
>
> Key: SOLR-14384
> URL: https://issues.apache.org/jira/browse/SOLR-14384
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Sometimes SolrRequestInfo needs to be suspended/overridden with a new one that 
> is used temporarily. Examples are in the {{[subquery]}} transformer, in the 
> warming of caches, and in QuerySenderListener (another type of warming), maybe 
> others.  This can be annoying to do correctly, and in at least one place it 
> isn't done correctly.  SolrRequestInfoSuspender shows some complexity.  In 
> this issue, [~dsmiley] proposes using a stack internal to SolrRequestInfo 
> that is pushed and popped.  It's not the only way to solve this, but it's one 
> way.
>  See linked issues for the context and discussion.






[jira] [Updated] (SOLR-14384) Stack SolrRequestInfo

2020-05-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-14384:

Description: 
Sometimes SolrRequestInfo needs to be suspended/overridden with a new one that 
is used temporarily. Examples are in the {{[subquery]}} transformer, in the 
warming of caches, and in QuerySenderListener (another type of warming), maybe 
others.  This can be annoying to do correctly, and in at least one place it 
isn't done correctly.  SolrRequestInfoSuspender shows some complexity.  In this 
issue, [~dsmiley] proposes using a stack internal to SolrRequestInfo that is 
pushed and popped.  It's not the only way to solve this, but it's one way.

 See linked issues for the context and discussion.

  was:Sometimes SolrRequestInfo needs to be suspended or overridden. [~dsmiley] 
suggests introducing stacking for it. See linked issues for the context and 
discussion.
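
A rough sketch of the stacking idea described above (names and shape are 
illustrative, not the committed API): keep a per-thread deque so a temporary 
SolrRequestInfo can be pushed for the duration of nested work and popped 
afterwards.
{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.solr.request.SolrRequestInfo;

// Hypothetical sketch of a per-thread stack of SolrRequestInfo instances.
public final class SolrRequestInfoStack {
  private static final ThreadLocal<Deque<SolrRequestInfo>> STACK =
      ThreadLocal.withInitial(ArrayDeque::new);

  /** Install a temporary SolrRequestInfo for the duration of nested work. */
  public static void push(SolrRequestInfo info) {
    STACK.get().push(info);
  }

  /** Remove the temporary info, restoring whatever was active before. */
  public static void pop() {
    STACK.get().pop();
  }

  /** The currently active info: the top of the stack, or null if empty. */
  public static SolrRequestInfo current() {
    return STACK.get().peek();
  }
}
{code}
Callers that need a temporary SolrRequestInfo would then push it, run the 
nested work, and pop it in a finally block so the outer request's info is 
restored even if the nested work fails.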


> Stack SolrRequestInfo
> -
>
> Key: SOLR-14384
> URL: https://issues.apache.org/jira/browse/SOLR-14384
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Sometimes SolrRequestInfo needs to be suspended/overridden with a new one that 
> is used temporarily. Examples are in the {{[subquery]}} transformer, in the 
> warming of caches, and in QuerySenderListener (another type of warming), maybe 
> others.  This can be annoying to do correctly, and in at least one place it 
> isn't done correctly.  SolrRequestInfoSuspender shows some complexity.  In 
> this issue, [~dsmiley] proposes using a stack internal to SolrRequestInfo 
> that is pushed and popped.  It's not the only way to solve this, but it's one 
> way.
>  See linked issues for the context and discussion.


