[jira] [Updated] (SOLR-13142) Create Collection will put two replicas in same node

2019-01-22 Thread Lyle (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyle updated SOLR-13142:

Attachment: diagnostics.json
CLUSTERSTATUS.json
autoscaling.json

> Create Collection will put two replicas in same node
> 
>
> Key: SOLR-13142
> URL: https://issues.apache.org/jira/browse/SOLR-13142
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.6
>Reporter: Lyle
>Priority: Major
> Attachments: CLUSTERSTATUS.json, autoscaling.json, diagnostics.json
>
>
> I have a Solr cluster with two nodes, both of which are alive.
> Solr cluster: 
> [http://10.58.91.47:8082/search]
> [http://10.58.91.83:8082/search]
>  
> In Solr 7.5, creating a collection on a two-node cluster with shards=1 and 
> replicationFactor=2 puts one replica on each node.
> However, in Solr 7.6, the same create-collection request puts both replicas 
> on the same node.
> Example:
> Create collection request:
> [http://10.58.91.47:8082/search/admin/collections?action=CREATE&name=NewCollection&numShards=1&replicationFactor=2&collection.configName=conf&shards=shard1&maxShardsPerNode=1]
> Response:
> {code:java}
> {
>   "responseHeader": {
>     "status": 0,
>     "QTime": 5392
>   },
>   "success": {
>     "10.58.91.47:8082_search": {
>       "responseHeader": {
>         "status": 0,
>         "QTime": 2584
>       },
>       "core": "NewCollection_shard1_replica_n1"
>     }
>   }
> }
> {code}
> Use a CLUSTERSTATUS request to query the replica info:
> Request: 
> [http://10.58.91.83:8082/search/admin/collections?action=CLUSTERSTATUS&collection=NewCollection]
> Response:
> {code:java}
> {
>   "responseHeader": {
>     "status": 0,
>     "QTime": 8
>   },
>   "cluster": {
>     "collections": {
>       "NewCollection": {
>         "pullReplicas": "0",
>         "replicationFactor": "2",
>         "shards": {
>           "shard1": {
>             "range": "8000-7fff",
>             "state": "active",
>             "replicas": {
>               "core_node3": {
>                 "core": "NewCollection_shard1_replica_n1",
>                 "base_url": "http://10.58.91.47:8082/search",
>                 "node_name": "10.58.91.47:8082_search",
>                 "state": "active",
>                 "type": "NRT"
>               },
>               "core_node4": {
>                 "core": "NewCollection_shard1_replica_n2",
>                 "base_url": "http://10.58.91.47:8082/search",
>                 "node_name": "10.58.91.47:8082_search",
>                 "state": "active",
>                 "type": "NRT",
>                 "leader": "true"
>               }
>             }
>           }
>         },
>         "router": {
>           "name": "compositeId"
>         },
>         "maxShardsPerNode": "1",
>         "autoAddReplicas": "false",
>         "nrtReplicas": "2",
>         "tlogReplicas": "0",
>         "znodeVersion": 4,
>         "configName": "conf"
>       }
>     },
>     "properties": {
>       "legacyCloud": "true"
>     },
>     "live_nodes": [
>       "10.58.91.47:8082_search",
>       "10.58.91.83:8082_search"
>     ]
>   }
> }
> {code}
> Has anything changed in Solr 7.6? I haven't found any create-collection API 
> changes in the documentation.
> Could anyone please help?
>  






[jira] [Commented] (SOLR-13142) Create Collection will put two replicas in same node

2019-01-22 Thread Lyle (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748516#comment-16748516
 ] 

Lyle commented on SOLR-13142:
-

Hi [~gus_heck], 

Sorry for putting "can anyone provide help" in the Jira ticket; I just hit 
this issue and don't know whether I have a configuration error or whether Solr 
has changed the default autoscaling behavior.

SOLR-13159 seems to be the same issue as this one. However, according to the 
Autoscaling section of [Solr Upgrade Notes: Solr 
7.6|https://lucene.apache.org/solr/guide/7_6/solr-upgrade-notes.html#solr-7-6], 
the default behavior has changed: *by default a node with the fewest number of 
cores already on it and the highest available freedisk will be selected for 
new core creation*, and *it removes the default setting of 
{{maxShardsPerNode=1}} when an autoscaling policy is in place*. So even though 
I set maxShardsPerNode=1 in the create-collection request, it has no effect, 
since autoscaling has been configured in [^autoscaling.json].

Attaching the files for comparison.

 

Thanks,

Lyle







[jira] [Commented] (LUCENE-8651) Tokenizer implementations can't be reset

2019-01-22 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748522#comment-16748522
 ] 

Alan Woodward commented on LUCENE-8651:
---

OK, I see what's happening.  The root tokenstream is generated from an 
Analyzer, which calls setReader on its Tokenizer implementation before 
returning it.  This then gets passed to your 
ConcatenatingTokenFilterFactory.create() method, which wraps the tokenstream 
but also adds the decorator - in this case a Tokenizer.

We only go through TokenFilterFactory.create() once though.  Analyzers cache 
the output of TokenStreamComponents for re-use; when the tokenstream comes to 
be re-used, the Analyzer will call setReader on the root again, but it doesn't 
go through TFF.create() again so setReader() doesn't get called on the 
KeywordTokenizer.

I think the solution here is to pull Field$StringTokenStream out to be a 
top-level class that can be used in these sorts of cases.
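
To make the failure mode concrete, here is a minimal, self-contained sketch 
(a hypothetical setup, not the reporter's exact factory; it wires a 
ConcatenatingTokenStream directly instead of going through a 
ConcatenatingTokenFilterFactory). The inner KeywordTokenizer receives a Reader 
exactly once, inside createComponents(), so the second tokenStream() call hits 
Tokenizer's ILLEGAL_STATE_READER:

{code:java}
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.ConcatenatingTokenStream;

public class TokenizerReuseDemo {

  static void drain(TokenStream ts) throws IOException {
    ts.reset();
    while (ts.incrementToken()) { /* consume tokens */ }
    ts.end();
    ts.close();
  }

  public static void main(String[] args) throws IOException {
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer root = new WhitespaceTokenizer();
        Tokenizer suffix = new KeywordTokenizer();
        // Mimics TokenFilterFactory.create(): this runs only once, so the
        // inner Tokenizer's setReader() is only ever called here.
        suffix.setReader(new StringReader("SUFFIX"));
        TokenStream concat = new ConcatenatingTokenStream(root, suffix);
        return new TokenStreamComponents(root, concat);
      }
    };
    drain(analyzer.tokenStream("f", "first use"));   // works
    // Reuse: the Analyzer re-sets the root's reader only; the inner
    // KeywordTokenizer was closed, which left it pointing at
    // ILLEGAL_STATE_READER, so this throws IllegalStateException on read.
    drain(analyzer.tokenStream("f", "second use"));
  }
}
{code}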

> Tokenizer implementations can't be reset
> 
>
> Key: LUCENE-8651
> URL: https://issues.apache.org/jira/browse/LUCENE-8651
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Dan Meehl
>Priority: Major
> Attachments: LUCENE-8650-2.patch, LUCENE-8651.patch
>
>
> The fine print here is that they can't be reset without calling setReader() 
> every time before reset() is called. The reason for this is that Tokenizer 
> violates the contract put forth by TokenStream.reset() which is the following:
> "Resets this stream to a clean state. Stateful implementations must implement 
> this method so that they can be reused, just as if they had been created 
> fresh."
> Tokenizer implementations' reset() can't work that way because 
> Tokenizer.close() removes the reference to the underlying Reader 
> (because of LUCENE-2387). The catch-22 here is that we don't want to 
> unnecessarily keep around a Reader (memory leak) but we would like to be able 
> to reset() if necessary.
> The patches include an integration test that attempts to use a 
> ConcatenatingTokenStream to join an input TokenStream with a KeywordTokenizer 
> TokenStream. This test fails with an IllegalStateException thrown by 
> Tokenizer.ILLEGAL_STATE_READER.
>  






[jira] [Commented] (SOLR-13142) Create Collection will put two replicas in same node

2019-01-22 Thread Lyle (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748540#comment-16748540
 ] 

Lyle commented on SOLR-13142:
-

After adding the cluster policy below via /admin/autoscaling, a new 
collection's replicas are placed on two different nodes.
{code:java}
{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#ANY"}
  ]
}
{code}
Hence, closing this ticket, as this is the expected behavior.
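
For reference, a minimal sketch of applying that policy programmatically, 
assuming a plain HTTP POST to the autoscaling write API and this cluster's 
/search context path:

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SetClusterPolicy {
  public static void main(String[] args) throws Exception {
    String payload = "{\"set-cluster-policy\": ["
        + "{\"replica\": \"<2\", \"shard\": \"#EACH\", \"node\": \"#ANY\"}"
        + "]}";
    URL url = new URL("http://10.58.91.47:8082/search/admin/autoscaling");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream os = conn.getOutputStream()) {
      os.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    // 200 means the policy was stored; collections created afterwards are
    // placed so that no node holds more than one replica of the same shard.
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}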

 







[jira] [Resolved] (SOLR-13142) Create Collection will put two replicas in same node

2019-01-22 Thread Lyle (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyle resolved SOLR-13142.
-
Resolution: Not A Bug







[jira] [Closed] (SOLR-13142) Create Collection will put two replicas in same node

2019-01-22 Thread Lyle (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyle closed SOLR-13142.
---







[jira] [Commented] (LUCENE-8651) Tokenizer implementations can't be reset

2019-01-22 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748558#comment-16748558
 ] 

Alan Woodward commented on LUCENE-8651:
---

Here's a patch that applies on top of the patch on LUCENE-8650, adding a new 
SingletonTokenStream with some examples of how to use it in conjunction with 
ConcatenatingTokenStream.







[jira] [Updated] (LUCENE-8651) Tokenizer implementations can't be reset

2019-01-22 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8651:
--
Attachment: LUCENE-8651.patch







[jira] [Commented] (SOLR-13156) Limiting field facet with certain terms via {!terms} not taking into account sorting

2019-01-22 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748581#comment-16748581
 ] 

Mikhail Khludnev commented on SOLR-13156:
-

I'm in doubt. I feel that {{facet.sort}} should work as usual when the terms 
are limited, but {{facet.sort}} doesn't have an _as-given_ option. 

> Limiting field facet with certain terms via {!terms} not taking into account 
> sorting
> 
>
> Key: SOLR-13156
> URL: https://issues.apache.org/jira/browse/SOLR-13156
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Konstantin Perikov
>Priority: Major
>
> When I limit facet keys with \{!terms}, it doesn't take sorting into 
> account.
> First query not limiting the facet keys:
> {{facet.field=title&facet.sort=count&facet=on&q=*:*}}
> Response as expected:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "book2",3, "book1",2, "book3",1]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
>  
> When doing it with limiting:
> {{facet.field=\{!terms=Book3,Book2,Book1}title&facet.sort=count&facet=on&q=*:*}}
> I'm getting the exact order in which I listed the terms:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "Book3",1, "Book2",3, "Book1",2]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
> I've looked at the code, and it's clearly an issue there:
>  
> org.apache.solr.request.SimpleFacets#getListedTermCounts
>  
> {code:java}
> for (String term : terms) {
>   int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), parsed.docs);
>   res.add(term, count);
> }
> {code}
>  
> it basically just iterates over the terms and doesn't do any sorting at all. 
>  
>  
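
A possible shape of a count-sorted variant, as a hedged sketch only (not the 
committed change; {{searcher}}, {{ft}}, {{sf}}, {{parsed}} and {{res}} are the 
variables from the snippet above, and {{java.util}} imports are assumed):

{code:java}
// Collect (term, count) pairs first, then emit in descending-count order
// instead of the as-given order of the terms local param.
List<Map.Entry<String, Integer>> counted = new ArrayList<>();
for (String term : terms) {
  int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), parsed.docs);
  counted.add(new AbstractMap.SimpleEntry<>(term, count));
}
counted.sort(Map.Entry.<String, Integer>comparingByValue().reversed());
for (Map.Entry<String, Integer> e : counted) {
  res.add(e.getKey(), e.getValue());
}
{code}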






[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2019-01-22 Thread Toke Eskildsen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748610#comment-16748610
 ] 

Toke Eskildsen commented on LUCENE-8585:


Skipping the method call makes sense, [~jpountz], thanks. The synthetic 
accessor on private methods is new to me (I have now read up on it and 
understand the problem), so thanks for the enlightenment - there's some older 
code elsewhere that I'll have to revisit with that in mind.
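
For readers who, like me, had not run into this before: a nested class calling 
a private method of its enclosing class makes javac emit a synthetic accessor, 
i.e. an extra generated method on the call path. A minimal illustration (my 
own example, not from the patch):

{code:java}
public class Outer {
  private int value() { return 42; }          // private to Outer

  class Inner {
    int read() {
      // Inner's bytecode cannot invoke Outer.value() directly, so javac
      // routes the call through a generated package-private synthetic
      // accessor (an "access$NNN" method) - one extra call per invocation,
      // which matters on hot paths like DocValues lookups.
      return value();
    }
  }
}
{code}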

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
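
As a worked illustration of the first jump-table's layout (a sketch derived 
from the description above; the committed patch may order the fields 
differently), packing the 33-bit block offset and the 31-bit set-bit index 
into one {{long}} looks like this:

{code:java}
// offset occupies the high 33 bits, index the low 31 bits: 33 + 31 = 64.
static long pack(long blockOffset, int indexUpToBlock) {
  assert blockOffset >= 0 && blockOffset < (1L << 33);
  assert indexUpToBlock >= 0; // fits in 31 bits
  return (blockOffset << 31) | indexUpToBlock;
}

static long blockOffset(long packed) {
  return packed >>> 31;                  // recover the high 33 bits
}

static int indexUpToBlock(long packed) {
  return (int) (packed & 0x7FFF_FFFFL);  // recover the low 31 bits
}
{code}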






[jira] [Commented] (SOLR-13143) Output from ExplainAugmenterFactory and DebugQuery for rerank queries is not same

2019-01-22 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748650#comment-16748650
 ] 

Lucene/Solr QA commented on SOLR-13143:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m  7s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.CollectionsAPISolrJTest |
|   | solr.cloud.autoscaling.sim.TestSimExtremeIndexing |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13143 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955690/SOLR-13143.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 2aa2c16 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/265/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/265/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/265/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Output from ExplainAugmenterFactory and DebugQuery for rerank queries is not 
> same
> -
>
> Key: SOLR-13143
> URL: https://issues.apache.org/jira/browse/SOLR-13143
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.5
>Reporter: Sambhav Kothari
>Priority: Minor
> Attachments: SOLR-13143.patch, bug.patch
>
>
> Currently, if we use the ExplainAugmenterFactory with LtR, instead of using 
> the model/re-ranker's explain method, it uses the default query explain 
> (tf-idf explanation). This happens because the BasicResultContext doesn't 
> wrap the query 
> ([https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/solr/core/src/java/org/apache/solr/response/BasicResultContext.java#L67]) 
> with the RankQuery when it is set as the context's query, which is then used 
> by the ExplainAugmenterFactory 
> ([https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/solr/core/src/java/org/apache/solr/response/transform/ExplainAugmenterFactory.java#L111]).
> As a result there are discrepancies between queries like:
> [http://localhost:8983/solr/collection1/select?q=*:*&collection=collectionName&wt=json&fl=[explain style=nl],score&rq=\{!ltr model=linear-model}]
> [http://localhost:8983/solr/collection1/select?q=*:*&collection=collectionName&wt=json&fl=score&rq=\{!ltr model=linear-model}&debugQuery=true]
> The former outputs the explain from the SimilarityScorer, while the latter 
> uses the correct LtR ModelScorer's explain.
> There are a few other problems with the explain augmenter; for example, it 
> doesn't work with grouping.

[jira] [Commented] (SOLR-13156) Limiting field facet with certain terms via {!terms} not taking into account sorting

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748651#comment-16748651
 ] 

ASF subversion and git services commented on SOLR-13156:


Commit 1911b3f71ef83f568993106ccd97cff35205f8da in lucene-solr's branch 
refs/heads/branch_7x from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1911b3f ]

SOLR-13156: documenting functionality gap.


> Limiting field facet with certain terms via {!terms} not taking into account 
> sorting
> 
>
> Key: SOLR-13156
> URL: https://issues.apache.org/jira/browse/SOLR-13156
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Konstantin Perikov
>Priority: Major
>
> When I'm doing limiting facet keys with \{!terms} it doesn't take into 
> account sorting.
> First query not limiting the facet keys:
> {{facet.field=title&facet.sort=count&facet=on&q=*:*}}
> Response as expected:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "book2",3, "book1",2, "book3",1]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
>  
> When doing it with limiting:
> {{facet.field=\{!terms=Book3,Book2,Book1}title&facet.sort=count&facet=on&q=*:*}}
> I'm getting the exact order of how I list terms:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "Book3",1, "Book2",3, "Book1",2]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
> I've looked at the code, and it's clearly an issue there:
>  
> org.apache.solr.request.SimpleFacets#getListedTermCounts
>  
> {{for (String term : terms) {}}
> {{    int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), 
> parsed.docs);}}
> {{    res.add(term, count);}}
> {{}}}
>  
> it's just basically iterating over terms and don't do any sorting at all. 
>  
>  






[jira] [Commented] (SOLR-13156) Limiting field facet with certain terms via {!terms} not taking into account sorting

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748656#comment-16748656
 ] 

ASF subversion and git services commented on SOLR-13156:


Commit 7c21445daab611aa37010964fd61f1e31bfbb58b in lucene-solr's branch 
refs/heads/branch_8x from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7c21445 ]

SOLR-13156: documenting functionality gap.


> Limiting field facet with certain terms via {!terms} not taking into account 
> sorting
> 
>
> Key: SOLR-13156
> URL: https://issues.apache.org/jira/browse/SOLR-13156
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Konstantin Perikov
>Priority: Major
>
> When I'm doing limiting facet keys with \{!terms} it doesn't take into 
> account sorting.
> First query not limiting the facet keys:
> {{facet.field=title&facet.sort=count&facet=on&q=*:*}}
> Response as expected:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "book2",3, "book1",2, "book3",1]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
>  
> When doing it with limiting:
> {{facet.field=\{!terms=Book3,Book2,Book1}title&facet.sort=count&facet=on&q=*:*}}
> I'm getting the exact order of how I list terms:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "Book3",1, "Book2",3, "Book1",2]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
> I've looked at the code, and it's clearly an issue there:
>  
> org.apache.solr.request.SimpleFacets#getListedTermCounts
>  
> {{for (String term : terms) {}}
> {{    int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), 
> parsed.docs);}}
> {{    res.add(term, count);}}
> {{}}}
>  
> it's just basically iterating over terms and don't do any sorting at all. 
>  
>  






[jira] [Commented] (SOLR-13156) Limiting field facet with certain terms via {!terms} not taking into account sorting

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748648#comment-16748648
 ] 

ASF subversion and git services commented on SOLR-13156:


Commit e68697a6de0b380665e9d2d787953035102c318c in lucene-solr's branch 
refs/heads/master from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e68697a ]

SOLR-13156: documenting functionality gap.


> Limiting field facet with certain terms via {!terms} not taking into account 
> sorting
> 
>
> Key: SOLR-13156
> URL: https://issues.apache.org/jira/browse/SOLR-13156
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Konstantin Perikov
>Priority: Major
>
> When I'm doing limiting facet keys with \{!terms} it doesn't take into 
> account sorting.
> First query not limiting the facet keys:
> {{facet.field=title&facet.sort=count&facet=on&q=*:*}}
> Response as expected:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "book2",3, "book1",2, "book3",1]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
>  
> When doing it with limiting:
> {{facet.field=\{!terms=Book3,Book2,Book1}title&facet.sort=count&facet=on&q=*:*}}
> I'm getting the exact order of how I list terms:
> {{"facet_counts":\{ "facet_queries":{}, "facet_fields":\{ "title":[ 
> "Book3",1, "Book2",3, "Book1",2]}, "facet_ranges":{}, "facet_intervals":{}, 
> "facet_heatmaps":{}
> I've looked at the code, and it's clearly an issue there:
>  
> org.apache.solr.request.SimpleFacets#getListedTermCounts
>  
> {{for (String term : terms) {}}
> {{    int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), 
> parsed.docs);}}
> {{    res.add(term, count);}}
> {{}}}
>  
> it's just basically iterating over terms and don't do any sorting at all. 
>  
>  






[jira] [Commented] (LUCENE-8646) Add multi-term intervals

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748743#comment-16748743
 ] 

ASF subversion and git services commented on LUCENE-8646:
-

Commit ceadb5f1339fe5bf09f9818470872d4957b63ba5 in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ceadb5f ]

LUCENE-8646: Multi-term intervals


> Add multi-term intervals
> 
>
> Key: LUCENE-8646
> URL: https://issues.apache.org/jira/browse/LUCENE-8646
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8646.patch
>
>
> We currently have no support for wildcard-type intervals.  I'd like to 
> explore adding some very basic support for prefix and wildcard interval 
> sources, but we need to ensure that we don't end up with the same performance 
> issues that dog SpanMultiTermQueryWrapper.
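
As merged, the entry points are new factory methods on {{Intervals}}; a hedged 
usage sketch follows (the intervals package location and the exact prefix/ 
wildcard signatures shifted during the 8.x series, so treat this as 
illustrative):

{code:java}
import org.apache.lucene.search.Query;
import org.apache.lucene.search.intervals.IntervalQuery;
import org.apache.lucene.search.intervals.Intervals;

public class MultiTermIntervalsDemo {
  // "big dat*" as an ordered interval: a term source followed by a prefix
  // source. The multi-term sources expand to a bounded disjunction of
  // matching terms to avoid the SpanMultiTermQueryWrapper-style blowup.
  public static Query bigData() {
    return new IntervalQuery("body",
        Intervals.ordered(Intervals.term("big"), Intervals.prefix("dat")));
  }
}
{code}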






[jira] [Commented] (LUCENE-8646) Add multi-term intervals

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748745#comment-16748745
 ] 

ASF subversion and git services commented on LUCENE-8646:
-

Commit 7d7ab14776b7257e09679d840182a4286928e452 in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7d7ab14 ]

LUCENE-8646: Multi-term intervals








[jira] [Commented] (LUCENE-8645) Add fixed field intervals

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748742#comment-16748742
 ] 

ASF subversion and git services commented on LUCENE-8645:
-

Commit 120276295bbcd0a813ca5a74b5eb2a4ba5b35bf3 in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1202762 ]

LUCENE-8645: Intervals.fixField()


> Add fixed field intervals
> -
>
> Key: LUCENE-8645
> URL: https://issues.apache.org/jira/browse/LUCENE-8645
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8645.patch
>
>
> It can be useful to report intervals from one fields as if they came from 
> another.  For example, fast prefix searches can be implemented by indexing 
> text into two fields, one with the full terms and one with edge-ngrams 
> enabled; to do proximity searches against terms and prefixes, you could wrap 
> a term query against the ngrammed field so that its intervals appear to come 
> from the normal field, and use it an an ordered or unordered interval.
> This is analogous to the FieldMaskingSpanQuery, but has the advantage that we 
> don't use term statistics for scoring interval queries, so there is no issue 
> with mixing up field weights from different fields.
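
A hedged usage sketch of {{Intervals.fixField()}} for the prefix scenario 
described above ({{body_ngram}} is a hypothetical edge-ngram sibling field of 
{{body}}; the intervals package location varies across 8.x releases):

{code:java}
import org.apache.lucene.search.Query;
import org.apache.lucene.search.intervals.IntervalQuery;
import org.apache.lucene.search.intervals.Intervals;

public class FixFieldDemo {
  public static Query termNearPrefix() {
    // The prefix term is resolved against the edge-ngram field, but its
    // intervals are reported as if they came from "body", so it can be
    // combined with ordinary "body" intervals in one proximity query.
    return new IntervalQuery("body",
        Intervals.unordered(
            Intervals.term("search"),
            Intervals.fixField("body_ngram", Intervals.term("lucen"))));
  }
}
{code}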






[GitHub] iverase opened a new pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
iverase opened a new pull request #546: LUCENE-8620: LatLonShape contains
URL: https://github.com/apache/lucene-solr/pull/546
 
 
   LatLonShape's implementation of the spatial relationship CONTAINS.





[jira] [Resolved] (LUCENE-8645) Add fixed field intervals

2019-01-22 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8645.
---
   Resolution: Fixed
Fix Version/s: 8.0







[jira] [Resolved] (LUCENE-8646) Add multi-term intervals

2019-01-22 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8646.
---
   Resolution: Fixed
Fix Version/s: 8.0







[jira] [Commented] (LUCENE-8620) Add CONTAINS support for LatLonShape

2019-01-22 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748747#comment-16748747
 ] 

Ignacio Vera commented on LUCENE-8620:
--

PR #546 opened

> Add CONTAINS support for LatLonShape
> 
>
> Key: LUCENE-8620
> URL: https://issues.apache.org/jira/browse/LUCENE-8620
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/sandbox
>Reporter: Ignacio Vera
>Priority: Major
> Fix For: 8.0, 7.7
>
> Attachments: LUCENE-8620.patch, LUCENE-8620.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the only spatial operation that cannot be performed using 
> {{LatLonShape}} is CONTAINS. This issue will add such capability by tracking 
> if an edge of a generated triangle from the {{Tessellator}} is an edge of the 
> polygon.
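
A hedged sketch (a hypothetical type, not from the patch) of the bookkeeping 
the description implies: each tessellated triangle records which of its edges 
lie on the original polygon's boundary, which is what a CONTAINS check needs 
in order to distinguish boundary edges from internal tessellation edges:

{code:java}
// One triangle emitted by the Tessellator, with a flag per edge marking
// whether that edge is part of the original polygon's boundary.
final class IndexedTriangle {
  final double ax, ay, bx, by, cx, cy;
  final boolean abFromPolygon, bcFromPolygon, caFromPolygon;

  IndexedTriangle(double ax, double ay, double bx, double by,
                  double cx, double cy,
                  boolean abFromPolygon, boolean bcFromPolygon,
                  boolean caFromPolygon) {
    this.ax = ax; this.ay = ay;
    this.bx = bx; this.by = by;
    this.cx = cx; this.cy = cy;
    this.abFromPolygon = abFromPolygon;
    this.bcFromPolygon = bcFromPolygon;
    this.caFromPolygon = caFromPolygon;
  }
}
{code}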






[jira] [Commented] (LUCENE-8645) Add fixed field intervals

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748744#comment-16748744
 ] 

ASF subversion and git services commented on LUCENE-8645:
-

Commit 87d68c8253fcb928be4eb2b2d908393252a50ec5 in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=87d68c8 ]

LUCENE-8645: Intervals.fixField()








[jira] [Commented] (LUCENE-8653) Reverse FST storage so it can be read forward

2019-01-22 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748756#comment-16748756
 ] 

Mike Sokolov commented on LUCENE-8653:
--

The reverse reading is required because the FST serializes itself from an 
Object-heavy DAG of Nodes and Arcs into an array of bytes by traversing the DAG 
backwards, but writing forwards into the byte storage. And it optimizes 
straight-line sections of the DAG by eliminating the explicit pointers and just 
implicitly pointing to the (logically) next Node in the byte array, so "next" 
here means *at the next lower byte address*. We can eliminate this reversal by 
reversing the byte array after serialization and fixing up the explicit 
pointers when we read them. We can't really fix them up in place without more 
major surgery because they are VInts.
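
A minimal sketch of the proposed reversal step (my own illustration; the 
read-time pointer fix-up is omitted since, as noted, it can't be done in 
place because targets are VInt-encoded):

{code:java}
// Flip the serialized FST bytes so that "next" arcs sit at increasing
// addresses and can be consumed by a forward-only reader. An explicit
// absolute target t in the old layout becomes (length - 1 - t) in the new.
static byte[] reverse(byte[] fstBytes) {
  byte[] forward = new byte[fstBytes.length];
  for (int i = 0; i < fstBytes.length; i++) {
    forward[i] = fstBytes[fstBytes.length - 1 - i];
  }
  return forward;
}
{code}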

> Reverse FST storage so it can be read forward
> -
>
> Key: LUCENE-8653
> URL: https://issues.apache.org/jira/browse/LUCENE-8653
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/FSTs
>Reporter: Mike Sokolov
>Priority: Major
>
> Discussion of keeping FSTs off-heap led to the idea of ensuring that FSTs can 
> be read forward in order to be more cache-friendly and align better with 
> standard I/O practice. Today FSTs are read in reverse and this leads to some 
> awkwardness, and you can't use standard readers so the code can be confusing 
> to work with.






[GitHub] ctargett commented on issue #545: SOLR-13157: Convert Unicode double quotes to standard quotes

2019-01-22 Thread GitBox
ctargett commented on issue #545: SOLR-13157: Convert Unicode double quotes to 
standard quotes
URL: https://github.com/apache/lucene-solr/pull/545#issuecomment-456412873
 
 
   Added the Jira issue ID to the title; that allows Jira to pick up changes, 
merges, and commits from GitHub.





[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 6 - Unstable

2019-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/6/

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([646AC37EFDF8996A]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([646AC37EFDF8996A]:0)




Build Log:
[...truncated 16044 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest
   [junit4]   2> 1891981 INFO  
(SUITE-ChaosMonkeyNothingIsSafeTest-seed#[646AC37EFDF8996A]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeyNothingIsSafeTest_646AC37EFDF8996A-001/init-core-data-001
   [junit4]   2> 1891982 WARN  
(SUITE-ChaosMonkeyNothingIsSafeTest-seed#[646AC37EFDF8996A]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 1891985 INFO  
(SUITE-ChaosMonkeyNothingIsSafeTest-seed#[646AC37EFDF8996A]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1891987 INFO  
(SUITE-ChaosMonkeyNothingIsSafeTest-seed#[646AC37EFDF8996A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 1891987 INFO  
(SUITE-ChaosMonkeyNothingIsSafeTest-seed#[646AC37EFDF8996A]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /ty_n/d
   [junit4]   2> 1891999 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1892000 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1892000 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1892100 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer start zk server on port:35962
   [junit4]   2> 1892100 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:35962
   [junit4]   2> 1892100 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1 35962
   [junit4]   2> 1892129 INFO  (zkConnectionManagerCallback-38403-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1892153 INFO  (zkConnectionManagerCallback-38405-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1892156 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 1892157 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 1892159 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1892160 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 1892161 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 1892162 INFO  
(TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[646AC37EFDF8996A]) [] 
o.a.s.c.ZkTestServer put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/checkout/solr/core/src/test-files/solr/col

[jira] [Created] (SOLR-13161) Drag and drop replica move - admin ui

2019-01-22 Thread Jeremy Branham (JIRA)
Jeremy Branham created SOLR-13161:
-

 Summary: Drag and drop replica move - admin ui
 Key: SOLR-13161
 URL: https://issues.apache.org/jira/browse/SOLR-13161
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Reporter: Jeremy Branham


On the "cloud > nodes" admin screen, it would be nice to have a drag and drop 
way to move replicas around.
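
Under the hood, such a drag-and-drop gesture would presumably just issue the 
existing Collections API MOVEREPLICA command 
(/admin/collections?action=MOVEREPLICA&collection=...&replica=...&targetNode=...). 
A minimal SolrJ sketch of that call; the collection, replica, and node names 
below are invented for illustration:

{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class MoveReplicaExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      // Move replica "core_node5" of "collection1" onto the target node.
      new CollectionAdminRequest.MoveReplica(
          "collection1", "core_node5", "host2:8983_solr")
          .process(client);
    }
  }
}
{code}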

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Jeremy Branham (JIRA)
Jeremy Branham created SOLR-13162:
-

 Summary: Admin UI development-test cycle is slow
 Key: SOLR-13162
 URL: https://issues.apache.org/jira/browse/SOLR-13162
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Reporter: Jeremy Branham


When developing the admin user interface, it takes a long time to rebuild the 
server to do testing.

It would be nice to have a small test harness for the admin ui, so that 'ant 
server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8651) Tokenizer implementations can't be reset

2019-01-22 Thread Dan Meehl (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748798#comment-16748798
 ] 

Dan Meehl commented on LUCENE-8651:
---

This is basically the same solution I came to in LUCENE-8650 (patch2). I ended 
up calling mine KeywordTokenStream to keep the naming in line because it 
matches what KeywordTokenizer does. 

Honestly though, the lifecycle of a Tokenizer still feels wrong to me. All 
other TokenStreams have a reset(), incrementToken(), end(), close() lifecycle. 
But Tokenizer has an extra setReader() in there, and the consumer must know 
that it's a Tokenizer and therefore must call the extra step (assuming it even 
has access to the Reader). It feels to me like Tokenizer should have to conform 
to the same lifecycle steps as every other TokenStream. Or at least, if that 
can't be true, Tokenizer implementations should be able to set their reader by 
overriding reset(). This currently can't be done because inputPending and 
setReader() and ILLEGAL_STATE_READER are final. If this could be done then one 
could construct a Tokenizer implementation that conformed to the TokenStream 
lifecycle and then the consumer doesn't have to know anything about Tokenizer. 
After all, that is the point of an abstraction like this: If the consumer takes 
a TokenStream, then it knows what the lifecycle is. 

If the lifecycle of Tokenizer is to stay the same, I'd like to propose a 
documentation update on TokenStream and Tokenizer. I can take a swing at that 
and post a patch if you'd like.
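
For reference, a minimal sketch of the lifecycle being discussed, using 
WhitespaceTokenizer purely for illustration:

{code:java}
import java.io.StringReader;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenizerLifecycle {
  public static void main(String[] args) throws Exception {
    try (Tokenizer tok = new WhitespaceTokenizer()) {
      tok.setReader(new StringReader("hello world")); // the Tokenizer-only extra step
      CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
      tok.reset();
      while (tok.incrementToken()) {
        System.out.println(term);
      }
      tok.end();
      // close() (via try-with-resources) drops the Reader reference
      // (LUCENE-2387), so a second reset() without another setReader()
      // trips ILLEGAL_STATE_READER.
    }
  }
}
{code}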

> Tokenizer implementations can't be reset
> 
>
> Key: LUCENE-8651
> URL: https://issues.apache.org/jira/browse/LUCENE-8651
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Dan Meehl
>Priority: Major
> Attachments: LUCENE-8650-2.patch, LUCENE-8651.patch, LUCENE-8651.patch
>
>
> The fine print here is that they can't be reset without calling setReader() 
> every time before reset() is called. The reason for this is that Tokenizer 
> violates the contract put forth by TokenStream.reset() which is the following:
> "Resets this stream to a clean state. Stateful implementations must implement 
> this method so that they can be reused, just as if they had been created 
> fresh."
> Tokenizer implementation's reset function can't reset in that manner because 
> their Tokenizer.close() removes the reference to the underlying Reader 
> because of LUCENE-2387. The catch-22 here is that we don't want to 
> unnecessarily keep around a Reader (memory leak) but we would like to be able 
> to reset() if necessary.
> The patches include an integration test that attempts to use a 
> ConcatenatingTokenStream to join an input TokenStream with a KeywordTokenizer 
> TokenStream. This test fails with an IllegalStateException thrown by 
> Tokenizer.ILLEGAL_STATE_READER.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8340) Efficient boosting by recency

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748800#comment-16748800
 ] 

ASF subversion and git services commented on LUCENE-8340:
-

Commit 452ffa3626a7f535cf77ea8ecbaf8176d8068084 in lucene-solr's branch 
refs/heads/master from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=452ffa3 ]

LUCENE-8340: Fix typo in CHANGES.txt


> Efficient boosting by recency
> -
>
> Key: LUCENE-8340
> URL: https://issues.apache.org/jira/browse/LUCENE-8340
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8340.patch
>
>
> I would like us to support something like 
> \{{FeatureField.newSaturationQuery}} but that works with features that are 
> computed dynamically like recency or geo-distance, and is still optimized for 
> top-hits collection. I'm starting with recency because it makes things a bit 
> easier even though I suspect that geo-distance might be a more common need.
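
For context, a minimal sketch of the existing index-time-feature usage the 
issue contrasts with; the field/feature names and values here are made up:

{code:java}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FeatureField;
import org.apache.lucene.search.Query;

public class StaticFeatureExample {
  public static void main(String[] args) {
    Document doc = new Document();
    // The feature value is frozen at index time -- exactly the limitation
    // for time-dependent signals like recency.
    doc.add(new FeatureField("features", "pagerank", 3.5f));
    Query boost = FeatureField.newSaturationQuery("features", "pagerank", 1f, 4.5f);
    System.out.println(boost);
  }
}
{code}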



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8340) Efficient boosting by recency

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748802#comment-16748802
 ] 

ASF subversion and git services commented on LUCENE-8340:
-

Commit b5cf8dc3c41fa9835ca033fe74b7b13dae03c379 in lucene-solr's branch 
refs/heads/branch_8x from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b5cf8dc ]

LUCENE-8340: Fix typo in CHANGES.txt


> Efficient boosting by recency
> -
>
> Key: LUCENE-8340
> URL: https://issues.apache.org/jira/browse/LUCENE-8340
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.0
>
> Attachments: LUCENE-8340.patch
>
>
> I would like us to support something like 
> \{{FeatureField.newSaturationQuery}} but that works with features that are 
> computed dynamically like recency or geo-distance, and is still optimized for 
> top-hits collection. I'm starting with recency because it makes things a bit 
> easier even though I suspect that geo-distance might be a more common need.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13116) Add Admin UI login support for Kerberos

2019-01-22 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748812#comment-16748812
 ] 

Jason Gerlowski commented on SOLR-13116:


I guess I'm fine with that.  I'm not sure what information we'd add that 
wouldn't be a restatement of the instructions already on the login page.

Probably worth double checking that this is given a good description in 
CHANGES.txt though, since it's such a visible change for anyone using auth.

> Add Admin UI login support for Kerberos
> ---
>
> Key: SOLR-13116
> URL: https://issues.apache.org/jira/browse/SOLR-13116
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.0, 7.7
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 7.7
>
> Attachments: SOLR-13116.patch, SOLR-13116.patch, eventual_auth.png, 
> improved_login_page.png
>
>
> Spinoff from SOLR-7896. Kerberos auth plugin should get Admin UI Login 
> support.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249807883
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/document/LatLonShapeBoundingBoxQuery.java
 ##
 @@ -45,23 +46,38 @@ protected Relation relateRangeBBoxToQuery(int minXOffset, 
int minYOffset, byte[]
 
   /** returns true if the query matches the encoded triangle */
   @Override
-  protected boolean queryMatches(byte[] t, int[] scratchTriangle, 
LatLonShape.QueryRelation queryRelation) {
+  protected boolean queryMatches(byte[] t, LatLonShape.Triangle 
scratchTriangle, LatLonShape.QueryRelation queryRelation) {
 // decode indexed triangle
 LatLonShape.decodeTriangle(t, scratchTriangle);
 
-int aY = scratchTriangle[0];
-int aX = scratchTriangle[1];
-int bY = scratchTriangle[2];
-int bX = scratchTriangle[3];
-int cY = scratchTriangle[4];
-int cX = scratchTriangle[5];
+int aY = scratchTriangle.aY;
+int aX = scratchTriangle.aX;
+int bY = scratchTriangle.bY;
+int bX = scratchTriangle.bX;
+int cY = scratchTriangle.cY;
+int cX = scratchTriangle.cX;
 
 if (queryRelation == LatLonShape.QueryRelation.WITHIN) {
   return rectangle2D.containsTriangle(aX, aY, bX, bY, cX, cY);
 }
 return rectangle2D.intersectsTriangle(aX, aY, bX, bY, cX, cY);
   }
 
+  @Override
+  protected EdgeTree.WithinRelation queryWithin(byte[] t, LatLonShape.Triangle 
scratchTriangle) {
+// decode indexed triangle
+LatLonShape.decodeTriangle(t, scratchTriangle);
+
+int aY = scratchTriangle.aY;
+int aX = scratchTriangle.aX;
+int bY = scratchTriangle.bY;
+int bX = scratchTriangle.bX;
+int cY = scratchTriangle.cY;
+int cX = scratchTriangle.cX;
+
+return rectangle2D.withinTriangle(aX, aY, scratchTriangle.ab, bX, bY, 
scratchTriangle.bc, cX, cY, scratchTriangle.ca);
 
 Review comment:
   extracting variables doesn't seem helpful here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249799963
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/document/LatLonShape.java
 ##
 @@ -118,19 +118,21 @@ public static Query newPolygonQuery(String field, 
QueryRelation queryRelation, P
* these triangles are encoded and inserted as separate indexed POINT fields
*/
   private static class LatLonTriangle extends Field {
-
+//Constructor for points and lines
 
 Review comment:
   :+1:


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249811008
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonLineShapeQueries.java
 ##
 @@ -81,35 +81,52 @@ protected Validator getValidator(QueryRelation 
queryRelation) {
 public boolean testBBoxQuery(double minLat, double maxLat, double minLon, 
double maxLon, Object shape) {
   Line line = (Line)shape;
   Rectangle2D rectangle2D = Rectangle2D.create(new Rectangle(minLat, 
maxLat, minLon, maxLon));
+  EdgeTree.WithinRelation withinRelation = 
EdgeTree.WithinRelation.DISJOINT;
   for (int i = 0, j = 1; j < line.numPoints(); ++i, ++j) {
-int[] decoded = encodeDecodeTriangle(line.getLon(i), line.getLat(i), 
line.getLon(j), line.getLat(j), line.getLon(i), line.getLat(i));
+LatLonShape.Triangle decoded = encodeDecodeTriangle(line.getLon(i), 
line.getLat(i), true, line.getLon(j), line.getLat(j), true, line.getLon(i), 
line.getLat(i), true);
 if (queryRelation == QueryRelation.WITHIN) {
-  if (rectangle2D.containsTriangle(decoded[1], decoded[0], decoded[3], 
decoded[2], decoded[5], decoded[4]) == false) {
+  if (rectangle2D.containsTriangle(decoded.aX, decoded.aY, decoded.bX, 
decoded.bY, decoded.cX, decoded.cY) == false) {
 return false;
   }
+} else if(queryRelation == QueryRelation.CONTAINS) {
 
 Review comment:
   let's add one space between `if` and `(`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249821415
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonMultiPolygonShapeQueries.java
 ##
 @@ -57,6 +67,25 @@ protected ShapeType getShapeType() {
 return polygons;
   }
 
+  private boolean isDisjoint(Polygon[] polygons, List 
triangles, int totalPolygons) {
+if (totalPolygons == 0) {
+  return true;
+}
+Polygon[] currentPolygons = new Polygon[totalPolygons];
+for (int i =0; i < totalPolygons; i++) {
 
 Review comment:
   add a space between `=` and `0`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249810439
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Rectangle2D.java
 ##
 @@ -224,7 +313,7 @@ private static boolean bboxContainsTriangle(int ax, int 
ay, int bx, int by, int
 && bboxContainsPoint(cx, cy, minX, maxX, minY, maxY);
   }
 
-  /** returns true if the edge (defined by (ax, ay) (bx, by)) intersects the 
query */
+  /** returns true if the edge (defined by (ax, ay) (bx, by)) crosses the 
query */
 
 Review comment:
   docs don't match the method name?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249809786
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Rectangle2D.java
 ##
 @@ -107,7 +107,7 @@ public boolean queryContainsPoint(int x, int y) {
 return eastRelation;
   }
 
-  /** Checks if the rectangle intersects the provided triangle **/
+  /** Checks if the rectangle crosses the provided triangle **/
   public boolean intersectsTriangle(int aX, int aY, int bX, int bY, int cX, 
int cY) {
 
 Review comment:
   the method name should match the docs


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249810206
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Rectangle2D.java
 ##
 @@ -202,7 +286,7 @@ private boolean queryIntersects(int ax, int ay, int bx, 
int by, int cx, int cy)
 return false;
   }
 
-  /** returns true if the edge (defined by (ax, ay) (bx, by)) intersects the 
query */
+  /** returns true if the edge (defined by (ax, ay) (bx, by)) crosses the 
query */
 
 Review comment:
   docs don't match the method name


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249809936
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Rectangle2D.java
 ##
 @@ -148,6 +148,90 @@ public boolean intersectsTriangle(int aX, int aY, int bX, 
int bY, int cX, int cY
 return false;
   }
 
+  /**
+   *  Checks if the shape is within the provided triangle.
+   *
+   * @param ax longitude of point a of the triangle
+   * @param ay latitude of point a of the triangle
+   * @param ab if edge ab belongs to the original shape
+   * @param bx longitude of point b of the triangle
+   * @param by latitude of point b of the triangle
+   * @param bc if edge bc belongs to the original shape
+   * @param cx longitude of point c of the triangle
+   * @param cy latitude of point c of the triangle
+   * @param ca if edge ca belongs to the original shape
+   * @return the {@link EdgeTree.WithinRelation}
+   */
+  public EdgeTree.WithinRelation withinTriangle(int ax, int ay, boolean ab, 
int bx, int by, boolean bc, int cx, int cy, boolean ca) {
+if (this.crossesDateline() == true) {
+  //Triangles cannot cross the date line so it is always false
+  return EdgeTree.WithinRelation.INTERSECTS;
 
 Review comment:
   how do we know it's not `DISJOINT`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249797707
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/geo/EdgeTree.java
 ##
 @@ -150,9 +204,88 @@ private Relation internalComponentRelateTriangle(double 
ax, double ay, double bx
 if (tree.crossesTriangle(ax, ay, bx, by, cx, cy)) {
   return Relation.CELL_CROSSES_QUERY;
 }
+if (pointInTriangle(tree.lon1, tree.lat1, ax, ay, bx, by, cx, cy) == true) 
{
+  return Relation.CELL_CROSSES_QUERY;
+}
 
 Review comment:
   This looks like an unrelated bug, let's have a dedicated issue for it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249795317
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/geo/EdgeTree.java
 ##
 @@ -101,6 +101,60 @@ public Relation relateTriangle(double ax, double ay, 
double bx, double by, doubl
 return Relation.CELL_OUTSIDE_QUERY;
   }
 
+  /** Used by {@link withinTriangle} to check the relationship between a 
triangle and the query shape */
+  public enum WithinRelation {
+/** If the shape is a candidate for within. Tipically this is return if 
the query shape is fully inside
+ * the triangle or if the query shape intersects only edges that do not 
belong to the original shape. */
+CANDIDATE,
+/** Return this if if the query shape intersects an edge that does belong 
to the original shape. */
+INTERSECTS,
 
 Review comment:
   is it what we call `CROSSES` elsewhere?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249810176
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Rectangle2D.java
 ##
 @@ -191,7 +275,7 @@ private static void encode(final int minX, final int maxX, 
final int minY, final
 NumericUtils.intToSortableBytes(maxX, b, 3 * BYTES);
   }
 
-  /** returns true if the query intersects the provided triangle (in encoded 
space) */
+  /** returns true if the query crosses the provided triangle (in encoded 
space) */
 
 Review comment:
   docs don't match the method name


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249812595
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/document/TestLatLonLineShapeQueries.java
 ##
 @@ -81,35 +81,52 @@ protected Validator getValidator(QueryRelation 
queryRelation) {
 public boolean testBBoxQuery(double minLat, double maxLat, double minLon, 
double maxLon, Object shape) {
   Line line = (Line)shape;
   Rectangle2D rectangle2D = Rectangle2D.create(new Rectangle(minLat, 
maxLat, minLon, maxLon));
+  EdgeTree.WithinRelation withinRelation = 
EdgeTree.WithinRelation.DISJOINT;
   for (int i = 0, j = 1; j < line.numPoints(); ++i, ++j) {
-int[] decoded = encodeDecodeTriangle(line.getLon(i), line.getLat(i), 
line.getLon(j), line.getLat(j), line.getLon(i), line.getLat(i));
+LatLonShape.Triangle decoded = encodeDecodeTriangle(line.getLon(i), 
line.getLat(i), true, line.getLon(j), line.getLat(j), true, line.getLon(i), 
line.getLat(i), true);
 if (queryRelation == QueryRelation.WITHIN) {
-  if (rectangle2D.containsTriangle(decoded[1], decoded[0], decoded[3], 
decoded[2], decoded[5], decoded[4]) == false) {
+  if (rectangle2D.containsTriangle(decoded.aX, decoded.aY, decoded.bX, 
decoded.bY, decoded.cX, decoded.cY) == false) {
 return false;
   }
+} else if(queryRelation == QueryRelation.CONTAINS) {
+  EdgeTree.WithinRelation relation = 
rectangle2D.withinTriangle(decoded.aX, decoded.aY, decoded.ab, decoded.bX, 
decoded.bY, decoded.bc, decoded.cX, decoded.cY, decoded.ca);
+  if  (relation == EdgeTree.WithinRelation.INTERSECTS) {
 
 Review comment:
   only one space between `if` and `(`? :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
jpountz commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249791047
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/geo/EdgeTree.java
 ##
 @@ -101,6 +101,60 @@ public Relation relateTriangle(double ax, double ay, 
double bx, double by, doubl
 return Relation.CELL_OUTSIDE_QUERY;
   }
 
+  /** Used by {@link withinTriangle} to check the relationship between a 
triangle and the query shape */
+  public enum WithinRelation {
+/** If the shape is a candidate for within. Tipically this is return if 
the query shape is fully inside
 
 Review comment:
   s/Tipically/Typically/


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jdbranham opened a new pull request #547: Admin UI - drag/drop replicas

2019-01-22 Thread GitBox
jdbranham opened a new pull request #547: Admin UI - drag/drop replicas
URL: https://github.com/apache/lucene-solr/pull/547
 
 
   https://issues.apache.org/jira/projects/SOLR/issues/SOLR-13161
   
   drag/drop replica move implemented


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13161) Drag and drop replica move - admin ui

2019-01-22 Thread Jeremy Branham (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748823#comment-16748823
 ] 

Jeremy Branham commented on SOLR-13161:
---

Created PR - https://github.com/apache/lucene-solr/pull/547

> Drag and drop replica move - admin ui
> -
>
> Key: SOLR-13161
> URL: https://issues.apache.org/jira/browse/SOLR-13161
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> On the "cloud > nodes" admin screen, it would be nice to have a drag and drop 
> way to move replicas around.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Jeremy Branham (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748824#comment-16748824
 ] 

Jeremy Branham commented on SOLR-13162:
---

Created PR - https://github.com/apache/lucene-solr/pull/547

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin ui, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Unicode Quotes in query parser

2019-01-22 Thread John Ryan
Thanks Walter,

The testing and research I have done on solr.ICUNormalizer2CharFilterFactory 
leads me to believe that quotes are not normalised.

I attempted to do this with character folding; there are many implementations 
out there, but none actually seem to work. 

I’ll look into the draft.

Thanks
--
John  

> On 21 Jan 2019, at 17:09, Walter Underwood  wrote:
> 
> First, check which transforms are already handled by Unicode normalization. 
> Put this in all of your analyzer chains:
> 
> <charFilter class="solr.ICUNormalizer2CharFilterFactory"/>
> 
> Probably need this in solrconfig.xml:
> 
> <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar" />
> <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar" />
> 
> I really cannot think of a reason to use unnormalized Unicode in Solr. That 
> should be in all the sample files.
> 
> For search character matching, yes, all spaces should be normalized. I have 
> too many hacks fixing non-breaking spaces spread around the code. When 
> matching, there is zero use for stuff like ideographic space (U+3000).
> 
> I’m not sure if quotes are normalized. I did some searching around without 
> success. That might come under character folding. There was a draft, now 
> withdrawn, for standard character folding. I’d probably start there for a 
> Unicode folding char filter.
> 
> https://www.unicode.org/reports/tr30/tr30-4.html 
> 
> 
> wunder
> Walter Underwood
> wun...@wunderwood.org 
> http://observer.wunderwood.org/  (my blog)
> 
>> On Jan 21, 2019, at 7:43 AM, Michael Sokolov wrote:
>> 
>> I think this is probably better to discuss on solr-user, or maybe solr-dev, 
>> since it is dismax parser you are talking about, which really lives in Solr. 
>> However, my 2c  - this seems somewhat dubious. Maybe people want to include 
>> those in their terms? Also, it leads to a kind of slippery slope: would you 
>> also want to convert all the various white space characters (no-break space, 
>> thin space, em space, etc)  as vanilla ascii 32? How about all the other 
>> "operator" characters like brackets?
>> 
>> On Mon, Jan 21, 2019 at 9:50 AM John Ryan wrote:
>> I'm looking to create an issue to add support for Unicode Double Quotes to 
>> the dismax parser. 
>> 
>> I want to replace all types of double quotes with standard ones before they 
>> get stripped 
>> 
>> i.e.
>> “ ” „ “ „ « » ‟ ❝ ❞ ⹂ "
>> 
>> With 
>> "
>> I presume this has been discussed before?
>> 
>> I have a POC here: 
>> https://github.com/apache/lucene-solr/compare/branch_7x...jnyryan:branch_7x 
>> 
>> 
>> Thanks, 
>> 
>> John
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
>> 
>> For additional commands, e-mail: dev-h...@lucene.apache.org 
>> 
>> 
> 



Re: Unicode Quotes in query parser

2019-01-22 Thread Mikhail Khludnev
My impression is that these quotes are part of the dismax query
syntax, i.e. they should be handled before the analysis happens.

On Mon, Jan 21, 2019 at 8:09 PM Walter Underwood 
wrote:

> First, check which transforms are already handled by Unicode
> normalization. Put this in all of your analyzer chains:
>
> <charFilter class="solr.ICUNormalizer2CharFilterFactory"/>
>
> Probably need this in solrconfig.xml:
>
> <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar" />
> <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar" />
>
> I really cannot think of a reason to use unnormalized Unicode in Solr.
> That should be in all the sample files.
>
> For search character matching, yes, all spaces should be normalized. I
> have too many hacks fixing non-breaking spaces spread around the code. When
> matching, there is zero use for stuff like ideographic space (U+3000).
>
> I’m not sure if quotes are normalized. I did some searching around without
> success. That might come under character folding. There was a draft, now
> withdrawn, for standard character folding. I’d probably start there for a
> Unicode folding char filter.
>
> https://www.unicode.org/reports/tr30/tr30-4.html
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> On Jan 21, 2019, at 7:43 AM, Michael Sokolov  wrote:
>
> I think this is probably better to discuss on solr-user, or maybe
> solr-dev, since it is dismax parser you are talking about, which really
> lives in Solr. However, my 2c  - this seems somewhat dubious. Maybe people
> want to include those in their terms? Also, it leads to a kind of slippery
> slope: would you also want to convert all the various white space
> characters (no-break space, thin space, em space, etc)  as vanilla ascii
> 32? How about all the other "operator" characters like brackets?
>
> On Mon, Jan 21, 2019 at 9:50 AM John Ryan 
> wrote:
>
>> I'm looking to create an issue to add support for Unicode Double Quotes
>> to the dismax parser.
>>
>> I want to replace all types of double quotes with standard ones before
>> they get stripped
>>
>> i.e.
>> “ ” „ “ „ « » ‟ ❝ ❞ ⹂ "
>>
>> With
>> "
>> I presume this has been discussed before?
>>
>> I have a POC here:
>> https://github.com/apache/lucene-solr/compare/branch_7x...jnyryan:branch_7x
>>
>> Thanks,
>>
>> John
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>

-- 
Sincerely yours
Mikhail Khludnev


Re: Unicode Quotes in query parser

2019-01-22 Thread Michael Sokolov
Right - QueryParsers generally do a first pass, parsing incoming Strings
using their operator characters to tokenize the input, and only after that
do they pass the tokens (or phrases) to an Analyzer. I haven't checked
Dismax - not sure how it does its parsing exactly, but I doubt you can just
"turn on the right Analyzer" to get it to recognize curly quotes as phrase
operators, e.g.
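
For illustration, a pre-parse rewrite in the spirit of John's POC might look
like the sketch below (the class and method names are hypothetical, not taken
from the actual patch):

import java.util.regex.Pattern;

public class QuoteNormalizer {
  // The quote variants listed earlier in the thread:
  // U+201C U+201D U+201E U+201F U+00AB U+00BB U+275D U+275E U+2E42
  private static final Pattern FANCY_QUOTES =
      Pattern.compile("[\u201C\u201D\u201E\u201F\u00AB\u00BB\u275D\u275E\u2E42]");

  static String normalizeQuotes(String q) {
    return FANCY_QUOTES.matcher(q).replaceAll("\"");
  }

  public static void main(String[] args) {
    System.out.println(normalizeQuotes("\u201Chello world\u201D")); // -> "hello world"
  }
}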

On Tue, Jan 22, 2019 at 10:39 AM Mikhail Khludnev  wrote:

> My impression that these quotes are ones which are part of dismax query
> syntax ie they should be handled before the analysis happens.
>
> On Mon, Jan 21, 2019 at 8:09 PM Walter Underwood 
> wrote:
>
>> First, check which transforms are already handled by Unicode
>> normalization. Put this in all of your analyzer chains:
>>
>> <charFilter class="solr.ICUNormalizer2CharFilterFactory"/>
>>
>> Probably need this in solrconfig.xml:
>>
>> <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex=".*\.jar" />
>> <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs" regex=".*\.jar" />
>>
>> I really cannot think of a reason to use unnormalized Unicode in Solr.
>> That should be in all the sample files.
>>
>> For search character matching, yes, all spaces should be normalized. I
>> have too many hacks fixing non-breaking spaces spread around the code. When
>> matching, there is zero use for stuff like ideographic space (U+3000).
>>
>> I’m not sure if quotes are normalized. I did some searching around
>> without success. That might come under character folding. There was a
>> draft, now withdrawn, for standard character folding. I’d probably start
>> there for a Unicode folding char filter.
>>
>> https://www.unicode.org/reports/tr30/tr30-4.html
>>
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>>
>> On Jan 21, 2019, at 7:43 AM, Michael Sokolov  wrote:
>>
>> I think this is probably better to discuss on solr-user, or maybe
>> solr-dev, since it is dismax parser you are talking about, which really
>> lives in Solr. However, my 2c  - this seems somewhat dubious. Maybe people
>> want to include those in their terms? Also, it leads to a kind of slippery
>> slope: would you also want to convert all the various white space
>> characters (no-break space, thin space, em space, etc)  as vanilla ascii
>> 32? How about all the other "operator" characters like brackets?
>>
>> On Mon, Jan 21, 2019 at 9:50 AM John Ryan 
>> wrote:
>>
>>> I'm looking to create an issue to add support for Unicode Double Quotes
>>> to the dismax parser.
>>>
>>> I want to replace all types of double quotes with standard ones before
>>> they get stripped
>>>
>>> i.e.
>>> “ ” „ “ „ « » ‟ ❝ ❞ ⹂ "
>>>
>>> With
>>> "
>>> I presume this has been discussed before?
>>>
>>> I have a POC here:
>>> https://github.com/apache/lucene-solr/compare/branch_7x...jnyryan:branch_7x
>>>
>>> Thanks,
>>>
>>> John
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>
> --
> Sincerely yours
> Mikhail Khludnev
>


[jira] [Commented] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748882#comment-16748882
 ] 

Jason Gerlowski commented on SOLR-13162:


It depends on what files you're editing, but I think there is an ant command for 
repackaging the admin-ui alone.  You should be able to run {{ant dist}} from 
the {{solr/webapp}} dir.  Could totally be misunderstanding what you're after 
here, or maybe {{ant dist}} is deficient in some way.  Just wanted to mention 
it on the off chance that's what you're looking for.

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin ui, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] tflobbe closed pull request #37: Move to SolrCloudTestCase

2019-01-22 Thread GitBox
tflobbe closed pull request #37: Move to SolrCloudTestCase
URL: https://github.com/apache/lucene-solr/pull/37
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] tflobbe commented on issue #46: Variables in dataimport config doesn't resolve

2019-01-22 Thread GitBox
tflobbe commented on issue #46: Variables in dataimport config doesn't resolve
URL: https://github.com/apache/lucene-solr/pull/46#issuecomment-456475655
 
 
   Already merged 5c8a70f


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] iverase commented on a change in pull request #546: LUCENE-8620: LatLonShape contains

2019-01-22 Thread GitBox
iverase commented on a change in pull request #546: LUCENE-8620: LatLonShape 
contains
URL: https://github.com/apache/lucene-solr/pull/546#discussion_r249871898
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/geo/EdgeTree.java
 ##
 @@ -150,9 +204,88 @@ private Relation internalComponentRelateTriangle(double 
ax, double ay, double bx
 if (tree.crossesTriangle(ax, ay, bx, by, cx, cy)) {
   return Relation.CELL_CROSSES_QUERY;
 }
+if (pointInTriangle(tree.lon1, tree.lat1, ax, ay, bx, by, cx, cy) == true) 
{
+  return Relation.CELL_CROSSES_QUERY;
+}
 
 Review comment:
   I opened https://issues.apache.org/jira/browse/LUCENE-8654


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8654) Polygon2D#relateTriangle returns the wrong answer if polygon is inside the triangle

2019-01-22 Thread Ignacio Vera (JIRA)
Ignacio Vera created LUCENE-8654:


 Summary: Polygon2D#relateTriangle returns the wrong answer if 
polygon is inside the triangle
 Key: LUCENE-8654
 URL: https://issues.apache.org/jira/browse/LUCENE-8654
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ignacio Vera


The method returns CELL_OUTSIDE_QUERY but the right answer should be 
CELL_CROSSES_QUERY.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] tflobbe closed pull request #90: fixed some CSS syntax errors in Solr webapp

2019-01-22 Thread GitBox
tflobbe closed pull request #90: fixed some CSS syntax errors in Solr webapp
URL: https://github.com/apache/lucene-solr/pull/90
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] tflobbe commented on issue #90: fixed some CSS syntax errors in Solr webapp

2019-01-22 Thread GitBox
tflobbe commented on issue #90: fixed some CSS syntax errors in Solr webapp
URL: https://github.com/apache/lucene-solr/pull/90#issuecomment-456479980
 
 
   These changes were fixed as part of 
   https://github.com/apache/lucene-solr/pull/541


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8651) Tokenizer implementations can't be reset

2019-01-22 Thread Dan Meehl (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748798#comment-16748798
 ] 

Dan Meehl edited comment on LUCENE-8651 at 1/22/19 5:30 PM:


This is basically the same solution I came to in LUCENE-8650 (patch2). I ended 
up calling mine KeywordTokenStream to keep the naming in line because it 
matches what KeywordTokenizer does. 

Honestly though, the lifecycle of a Tokenizer still feels wrong to me. All 
other TokenStreams have a reset(), incrementToken(), end(), close() lifecycle. 
But Tokenizer has an extra setReader() in there, and the consumer must know 
that it's a Tokenizer and then call the extra step (assuming it even has access 
to the Reader). It feels to me like Tokenizer should have to conform to the 
same lifecycle steps as every other TokenStream. Or at least, if that can't be 
true, Tokenizer implementations should be able to set their reader by 
overriding reset(). This currently can't be done because inputPending and 
setReader() and ILLEGAL_STATE_READER are final. If this could be done then one 
could construct a Tokenizer implementation that conformed to the TokenStream 
lifecycle and then the consumer doesn't have to know anything about Tokenizer. 
After all, that is the point of an abstraction like this: If the consumer takes 
a TokenStream, then it knows what the lifecycle is. 

If the lifecycle of Tokenizer is to stay the same, I'd like to propose a 
documentation update on TokenStream and Tokenizer. I can take a swing at that 
and post a patch if you'd like.


was (Author: dmeehl):
This is basically the same solution I came to in LUCENE-8650 (patch2). I ended 
up calling mine KeywordTokenStream to keep the naming in line because it 
matches what KeywordTokenizer does. 

Honestly though, the lifecycle of a Tokenizer still feels wrong to me. All 
other TokenStreams have a reset(), incrementToken(), end(), close() lifecycle. 
But Tokenizer has an extra setReader() in there, and the consumer must know 
that it's a Tokenizer and therefore must call the extra step (assuming it even 
has access to the Reader). It feels to me like Tokenizer should have to conform 
to the same lifecycle steps as every other TokenStream. Or at least, if that 
can't be true, Tokenizer implementations should be able to set their reader by 
overriding reset(). This currently can't be done because inputPending and 
setReader() and ILLEGAL_STATE_READER are final. If this could be done then one 
could construct a Tokenizer implementation that conformed to the TokenStream 
lifecycle and then the consumer doesn't have to know anything about Tokenizer. 
After all, that is the point of an abstraction like this: If the consumer takes 
a TokenStream, then it knows what the lifecycle is. 

If the lifecycle of Tokenizer is to stay the same, I'd like to propose a 
documentation update on TokenStream and Tokenizer. I can take a swing at that 
and post a patch if you'd like.

> Tokenizer implementations can't be reset
> 
>
> Key: LUCENE-8651
> URL: https://issues.apache.org/jira/browse/LUCENE-8651
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Dan Meehl
>Priority: Major
> Attachments: LUCENE-8650-2.patch, LUCENE-8651.patch, LUCENE-8651.patch
>
>
> The fine print here is that they can't be reset without calling setReader() 
> every time before reset() is called. The reason for this is that Tokenizer 
> violates the contract put forth by TokenStream.reset() which is the following:
> "Resets this stream to a clean state. Stateful implementations must implement 
> this method so that they can be reused, just as if they had been created 
> fresh."
> Tokenizer implementation's reset function can't reset in that manner because 
> their Tokenizer.close() removes the reference to the underlying Reader 
> because of LUCENE-2387. The catch-22 here is that we don't want to 
> unnecessarily keep around a Reader (memory leak) but we would like to be able 
> to reset() if necessary.
> The patches include an integration test that attempts to use a 
> ConcatenatingTokenStream to join an input TokenStream with a KeywordTokenizer 
> TokenStream. This test fails with an IllegalStateException thrown by 
> Tokenizer.ILLEGAL_STATE_READER.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8654) Polygon2D#relateTriangle returns the wrong answer if polygon is inside the triangle

2019-01-22 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748937#comment-16748937
 ] 

Adrien Grand commented on LUCENE-8654:
--

+1

> Polygon2D#relateTriangle returns the wrong answer if polygon is inside the 
> triangle
> ---
>
> Key: LUCENE-8654
> URL: https://issues.apache.org/jira/browse/LUCENE-8654
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8654.patch
>
>
> The method returns CELL_OUTSIDE_QUERY but the right answer should be 
> CELL_CROSSES_QUERY.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8654) Polygon2D#relateTriangle returns the wrong answer if polygon is inside the triangle

2019-01-22 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748946#comment-16748946
 ] 

Adrien Grand commented on LUCENE-8654:
--

Please add a note to the changelog when pushing.

> Polygon2D#relateTriangle returns the wrong answer if polygon is inside the 
> triangle
> ---
>
> Key: LUCENE-8654
> URL: https://issues.apache.org/jira/browse/LUCENE-8654
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8654.patch
>
>
> The method returns CELL_OUTSIDE_QUERY but the right answer should be 
> CELL_CROSSES_QUERY.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13163) 'searchRate' trigger: belowNodeOp=DELETNODE can result in loss of leader

2019-01-22 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13163:
---

 Summary: 'searchRate' trigger: belowNodeOp=DELETNODE can result in 
loss of leader
 Key: SOLR-13163
 URL: https://issues.apache.org/jira/browse/SOLR-13163
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


While working on SOLR-13140 I discovered that configuring a very high 
belowNodeRate in {{SearchRateTriggerIntegrationTest.testDeleteNode}} can cause 
all nodes -- even the node hosting the shard leader -- to be the target of 
DELETENODE ops.

this indicates at least one serious bug in the code (we should never allow the 
leader to be deleted), but also raises other questions about situations not 
adequately tested:
* even if the code isn't particularly protecting the leader, why isn't 
minReplicas protecting at least one replica?
* what would happen if multiple replicas co-existed on the same node? what if 
the leader was one of the replicas that existed on the same node as another 
replica? 
* what would happen if there were additional collections in the cluster that 
had replicas on these nodes that had low search rate for this target 
collection? would they protect the nodes from being the target of DELETENODE 
ops?
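
For context, a sketch of registering the kind of trigger involved, via 
SolrJ's v2 API -- untested, with the collection name, rates, and trigger name 
invented; the payload field names follow the searchRate trigger properties 
mentioned above:

{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.V2Request;

public class SetSearchRateTrigger {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      new V2Request.Builder("/cluster/autoscaling")
          .withMethod(SolrRequest.METHOD.POST)
          // Field names per the searchRate trigger; values are illustrative.
          .withPayload("{ 'set-trigger': { 'name':'search_rate_trigger',"
              + " 'event':'searchRate', 'collections':'test',"
              + " 'belowNodeRate':0.1, 'belowNodeOp':'DELETENODE',"
              + " 'minReplicas':1, 'waitFor':'30s', 'enabled':true } }")
          .build()
          .process(client);
    }
  }
}
{code}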




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13163) 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader

2019-01-22 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-13163:

Summary: 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of 
leader  (was: 'searchRate' trigger: belowNodeOp=DELETNODE can result in loss of 
leader)

> 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader
> -
>
> Key: SOLR-13163
> URL: https://issues.apache.org/jira/browse/SOLR-13163
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While working on SOLR-13140 I discovered that configuring a very high 
> belowNodeRate in {{SearchRateTriggerIntegrationTest.testDeleteNode}} can 
> cause all nodes -- even the node hosting the shard leader -- to be the target 
> of DELETENODE ops.
> This indicates at least one serious bug in the code (we should never allow 
> the leader to be deleted), but it also raises other questions about situations 
> not adequately tested:
> * even if the code isn't particularly protecting the leader, why isn't 
> minReplicas protecting at least one replica?
> * what would happen if multiple replicas co-existed on the same node? What if 
> the leader was one of the replicas that existed on the same node as 
> another replica?
> * what would happen if there were additional collections in the cluster that 
> had replicas on these nodes but had a low search rate for this target 
> collection? Would they protect the nodes from being the target of DELETENODE 
> ops?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Jeremy Branham (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748970#comment-16748970
 ] 

Jeremy Branham commented on SOLR-13162:
---

Thanks, [~gerlowskija]!

I didn't know about repackaging the admin-ui without running 'ant server' 
(18 min); repackaging alone was considerably faster (1.5 min).

I created a small nodejs project that serves the admin-ui and proxies requests 
to a locally running Solr instance. I like this because I can just refresh the 
page to see the latest changes, without repackaging or restarting Solr.
I'm not sure how much value others would get out of it, though.

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin UI, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13140) Harden SearchRateTriggerIntegrationTest

2019-01-22 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748975#comment-16748975
 ] 

Hoss Man commented on SOLR-13140:
-

I've spun off the issues with DELETENODE into SOLR-13163.

I'll go ahead and:
* mark the {{testDeleteNode}} as @AwaitsFix on that jira
** add additional comments with some details
* commit & backport the rest of the improvements here.
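
For reference, the @AwaitsFix marker looks roughly like this (a sketch; the 
exact placement in the test class may differ):

{code:java}
@LuceneTestCase.AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-13163")
public void testDeleteNode() throws Exception {
  // disabled pending SOLR-13163
}
{code}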


> Harden SearchRateTriggerIntegrationTest
> ---
>
> Key: SOLR-13140
> URL: https://issues.apache.org/jira/browse/SOLR-13140
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13140.patch, SOLR-13140.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13140) Harden SearchRateTriggerIntegrationTest

2019-01-22 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748977#comment-16748977
 ] 

Andrzej Bialecki  commented on SOLR-13140:
--

Sounds good, thanks Hoss!

> Harden SearchRateTriggerIntegrationTest
> ---
>
> Key: SOLR-13140
> URL: https://issues.apache.org/jira/browse/SOLR-13140
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13140.patch, SOLR-13140.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8653) Reverse FST storage so it can be read forward

2019-01-22 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-8653:
-
Attachment: fst-reverse.patch

> Reverse FST storage so it can be read forward
> -
>
> Key: LUCENE-8653
> URL: https://issues.apache.org/jira/browse/LUCENE-8653
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/FSTs
>Reporter: Mike Sokolov
>Priority: Major
> Attachments: fst-reverse.patch
>
>
> Discussion of keeping FST off-heap led to the idea of ensuring that FST's can 
> be read forward in order to be more cache-friendly and align better with 
> standard I/O practice. Today FSTs are read in reverse and this leads to some 
> awkwardness, and you can't use standard readers so the code can be confusing 
> to work with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8653) Reverse FST storage so it can be read forward

2019-01-22 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748985#comment-16748985
 ] 

Michael McCandless commented on LUCENE-8653:


Impressive how simple this was! I think it's simpler to think about, reading 
the {{byte[]}} in forward order, and it ought to be a bit more cache-friendly. 
I agree that jumping between FST nodes is very much random access, but e.g. at a 
given node, as we scan the arcs looking for a match, that would become 
sequential byte reads with this change. Curious that the impact is neutral; 
maybe if we combine this with LUCENE-8635 we can measure an impact?
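
To make the locality argument concrete, here is a tiny illustrative sketch (not 
the actual FST.BytesReader API) of why forward storage turns arc scanning into 
sequential reads:

{code:java}
// Illustrative sketch only: a forward-only reader over an FST byte image.
// Jumping between nodes is still a random-access setPosition(), but
// scanning the arcs of one node becomes a sequential pass over bytes[].
final class ForwardBytesReader {
  private final byte[] bytes;
  private int pos;

  ForwardBytesReader(byte[] bytes) {
    this.bytes = bytes;
  }

  byte readByte() {
    return bytes[pos++]; // sequential, cache-friendly
  }

  void skipBytes(int count) {
    pos += count; // still moves forward
  }

  void setPosition(int position) {
    pos = position; // random jump when following an arc to another node
  }
}
{code}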

> Reverse FST storage so it can be read forward
> -
>
> Key: LUCENE-8653
> URL: https://issues.apache.org/jira/browse/LUCENE-8653
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/FSTs
>Reporter: Mike Sokolov
>Priority: Major
> Attachments: fst-reverse.patch
>
>
> Discussion of keeping FST off-heap led to the idea of ensuring that FST's can 
> be read forward in order to be more cache-friendly and align better with 
> standard I/O practice. Today FSTs are read in reverse and this leads to some 
> awkwardness, and you can't use standard readers so the code can be confusing 
> to work with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13156) Limiting field facet with certain terms via {!terms} not taking into account sorting

2019-01-22 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748990#comment-16748990
 ] 

Mikhail Khludnev commented on SOLR-13156:
-

Here's the spec proposal:
{quote}
Limiting terms changes the default for {{facet.sort}}: if it is omitted, facet 
values are returned in the as-given order. This can be explicitly overridden 
with the values {{index}} and {{count}}.
{quote}
WDYT?   
ccing [~yo...@apache.org]
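
A hypothetical SolrJ illustration of the proposed semantics (the method names 
are standard SolrJ; the behavior shown is the proposal, not current behavior):

{code:java}
// Proposal: with {!terms}, omitting facet.sort keeps the as-given order.
SolrQuery q = new SolrQuery("*:*");
q.setFacet(true);
q.addFacetField("{!terms=Book3,Book2,Book1}title"); // returns Book3, Book2, Book1
// Explicit override restores the usual count ordering:
// q.set("facet.sort", "count");
{code}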

> Limiting field facet with certain terms via {!terms} not taking into account 
> sorting
> 
>
> Key: SOLR-13156
> URL: https://issues.apache.org/jira/browse/SOLR-13156
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Konstantin Perikov
>Priority: Major
>
> When limiting facet keys with \{!terms}, sorting is not taken into account.
> First query, without limiting the facet keys:
> {{facet.field=title&facet.sort=count&facet=on&q=*:*}}
> Response as expected:
> {code:java}
> "facet_counts": {
>   "facet_queries": {},
>   "facet_fields": { "title": ["book2", 3, "book1", 2, "book3", 1] },
>   "facet_ranges": {},
>   "facet_intervals": {},
>   "facet_heatmaps": {}
> }
> {code}
> When doing it with limiting:
> {{facet.field=\{!terms=Book3,Book2,Book1}title&facet.sort=count&facet=on&q=*:*}}
> I get exactly the order in which I listed the terms:
> {code:java}
> "facet_counts": {
>   "facet_queries": {},
>   "facet_fields": { "title": ["Book3", 1, "Book2", 3, "Book1", 2] },
>   "facet_ranges": {},
>   "facet_intervals": {},
>   "facet_heatmaps": {}
> }
> {code}
> I've looked at the code, and it's clearly an issue there:
> org.apache.solr.request.SimpleFacets#getListedTermCounts
> {code:java}
> for (String term : terms) {
>   int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), parsed.docs);
>   res.add(term, count);
> }
> {code}
> It's basically just iterating over the terms and doesn't do any sorting at all.
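
For illustration, a hedged sketch of what honoring {{facet.sort=count}} in that 
loop could look like (this reuses the variables from the snippet above and is 
not the committed fix):

{code:java}
// Count each listed term first, then emit in descending-count order.
List<Map.Entry<String, Integer>> counted = new ArrayList<>();
for (String term : terms) {
  int count = searcher.numDocs(ft.getFieldQuery(null, sf, term), parsed.docs);
  counted.add(new AbstractMap.SimpleEntry<>(term, count));
}
counted.sort((a, b) -> Integer.compare(b.getValue(), a.getValue()));
for (Map.Entry<String, Integer> e : counted) {
  res.add(e.getKey(), e.getValue());
}
{code}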



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-22 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-8635:
-
Attachment: fst-offheap-ra-rev.patch

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, ra.patch, 
> rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.
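>
> As a rough sketch of the mmap idea (the file name and the lazy-loading
> behavior are hypothetical; today's FST constructor copies the bytes onto
> the heap):
>
> {code:java}
> // Hedged sketch: back the FST with an mmapped file so terms are paged
> // in on demand instead of being copied into a heap byte[] at open.
> Directory dir = new MMapDirectory(Paths.get("/path/to/index"));
> try (IndexInput in = dir.openInput("terms.fst", IOContext.READ)) { // name hypothetical
>   FST<BytesRef> fst = new FST<>(in, ByteSequenceOutputs.getSingleton());
>   // ... lookups would fault pages in lazily under the proposed change
> }
> {code}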



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-22 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748996#comment-16748996
 ] 

Mike Sokolov commented on LUCENE-8635:
--

I uploaded a patch that combines these three things: off-heap FST + 
random-access reader + reversal of the FST so it is forward-read. Unit tests 
are passing; I'm running some benchmarks to see what the impact is on 
performance

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, ra.patch, 
> rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13163) 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749000#comment-16749000
 ] 

ASF subversion and git services commented on SOLR-13163:


Commit 15e5ca999ff7e912653db897781b21642d5307f0 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=15e5ca9 ]

SOLR-13140: harden SearchRateTriggerIntegrationTest by using more absolute rate 
thresholds and latches to track when all events have been processed so we don't 
need to 'guess' about sleep calls

This commit also disables testDeleteNode pending an AwaitsFix on SOLR-13163
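
The latch pattern mentioned in the commit message, as a generic sketch (not the 
actual test code):

{code:java}
// Generic sketch: block until all expected trigger events are processed,
// instead of sleeping and guessing.
int expectedEvents = 2; // illustrative
CountDownLatch processed = new CountDownLatch(expectedEvents);
// ... each trigger listener callback calls: processed.countDown();
assertTrue("timed out waiting for trigger events",
    processed.await(60, TimeUnit.SECONDS));
{code}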


> 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader
> -
>
> Key: SOLR-13163
> URL: https://issues.apache.org/jira/browse/SOLR-13163
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While working on SOLR-13140 I discovered that configuring a very high 
> belowNodeRate in {{SearchRateTriggerIntegrationTest.testDeleteNode}} can 
> cause all nodes -- even the node hosting the shard leader -- to be the target 
> of DELETENODE ops.
> This indicates at least one serious bug in the code (we should never allow 
> the leader to be deleted), but it also raises other questions about situations 
> not adequately tested:
> * even if the code isn't particularly protecting the leader, why isn't 
> minReplicas protecting at least one replica?
> * what would happen if multiple replicas co-existed on the same node? What if 
> the leader was one of the replicas that existed on the same node as 
> another replica?
> * what would happen if there were additional collections in the cluster that 
> had replicas on these nodes but had a low search rate for this target 
> collection? Would they protect the nodes from being the target of DELETENODE 
> ops?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13140) Harden SearchRateTriggerIntegrationTest

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748999#comment-16748999
 ] 

ASF subversion and git services commented on SOLR-13140:


Commit 15e5ca999ff7e912653db897781b21642d5307f0 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=15e5ca9 ]

SOLR-13140: harden SearchRateTriggerIntegrationTest by using more absolute rate 
thresholds and latches to track when all events have been processed so we don't 
need to 'guess' about sleep calls

This commit also disables testDeleteNode pending an AwaitsFix on SOLR-13163


> Harden SearchRateTriggerIntegrationTest
> ---
>
> Key: SOLR-13140
> URL: https://issues.apache.org/jira/browse/SOLR-13140
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13140.patch, SOLR-13140.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13140) Harden SearchRateTriggerIntegrationTest

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749015#comment-16749015
 ] 

ASF subversion and git services commented on SOLR-13140:


Commit 4d9c835376e9feb2ca7b9baa514c2306c5fa61c0 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4d9c835 ]

SOLR-13140: harden SearchRateTriggerIntegrationTest by using more absolute rate 
thresholds and latches to track when all events have been processed so we don't 
need to 'guess' about sleep calls

This commit also disables testDeleteNode pending an AwaitsFix on SOLR-13163

(cherry picked from commit 15e5ca999ff7e912653db897781b21642d5307f0)


> Harden SearchRateTriggerIntegrationTest
> ---
>
> Key: SOLR-13140
> URL: https://issues.apache.org/jira/browse/SOLR-13140
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13140.patch, SOLR-13140.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 438 - Still Unstable

2019-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/438/

2 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:34915/qggl/t/forceleader_test_collection

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:34915/qggl/t/forceleader_test_collection
at 
__randomizedtesting.SeedInfo.seed([15B6F57437E384CD:F321C1B40E617DAC]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:479)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1075)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1047)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRul

[jira] [Commented] (SOLR-13163) 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749016#comment-16749016
 ] 

ASF subversion and git services commented on SOLR-13163:


Commit 4d9c835376e9feb2ca7b9baa514c2306c5fa61c0 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4d9c835 ]

SOLR-13140: harden SearchRateTriggerIntegrationTest by using more absolute rate 
thresholds and latches to track when all events have been processed so we don't 
need to 'guess' about sleep calls

This commit also disables testDeleteNode pending an AwaitsFix on SOLR-13163

(cherry picked from commit 15e5ca999ff7e912653db897781b21642d5307f0)


> 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader
> -
>
> Key: SOLR-13163
> URL: https://issues.apache.org/jira/browse/SOLR-13163
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While working on SOLR-13140 I discovered that configuring a very high 
> belowNodeRate in {{SearchRateTriggerIntegrationTest.testDeleteNode}} can 
> cause all nodes -- even the node hosting the shard leader -- to be the target 
> of DELETENODE ops.
> This indicates at least one serious bug in the code (we should never allow 
> the leader to be deleted), but it also raises other questions about situations 
> not adequately tested:
> * even if the code isn't particularly protecting the leader, why isn't 
> minReplicas protecting at least one replica?
> * what would happen if multiple replicas co-existed on the same node? What if 
> the leader was one of the replicas that existed on the same node as 
> another replica?
> * what would happen if there were additional collections in the cluster that 
> had replicas on these nodes but had a low search rate for this target 
> collection? Would they protect the nodes from being the target of DELETENODE 
> ops?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13140) Harden SearchRateTriggerIntegrationTest

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749032#comment-16749032
 ] 

ASF subversion and git services commented on SOLR-13140:


Commit 6882f43b96c6dc0fec7a1677d6687fa83f5a1669 in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6882f43 ]

SOLR-13140: harden SearchRateTriggerIntegrationTest by using more absolute rate 
thresholds and latches to track when all events have been processed so we don't 
need to 'guess' about sleep calls

This commit also disables testDeleteNode pending an AwaitsFix on SOLR-13163

(cherry picked from commit 15e5ca999ff7e912653db897781b21642d5307f0)


> Harden SearchRateTriggerIntegrationTest
> ---
>
> Key: SOLR-13140
> URL: https://issues.apache.org/jira/browse/SOLR-13140
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13140.patch, SOLR-13140.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13163) 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749034#comment-16749034
 ] 

ASF subversion and git services commented on SOLR-13163:


Commit 6882f43b96c6dc0fec7a1677d6687fa83f5a1669 in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6882f43 ]

SOLR-13140: harden SearchRateTriggerIntegrationTest by using more absolute rate 
thresholds and latches to track when all events have been processed so we don't 
need to 'guess' about sleep calls

This commit also disables testDeleteNode pending an AwaitsFix on SOLR-13163

(cherry picked from commit 15e5ca999ff7e912653db897781b21642d5307f0)


> 'searchRate' trigger: belowNodeOp=DELETENODE can result in loss of leader
> -
>
> Key: SOLR-13163
> URL: https://issues.apache.org/jira/browse/SOLR-13163
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> While working on SOLR-13140 I discovered that configuring a very high 
> belowNodeRate in {{SearchRateTriggerIntegrationTest.testDeleteNode}} can 
> cause all nodes -- even the node hosting the shard leader -- to be the target 
> of DELETENODE ops.
> This indicates at least one serious bug in the code (we should never allow 
> the leader to be deleted), but it also raises other questions about situations 
> not adequately tested:
> * even if the code isn't particularly protecting the leader, why isn't 
> minReplicas protecting at least one replica?
> * what would happen if multiple replicas co-existed on the same node? What if 
> the leader was one of the replicas that existed on the same node as 
> another replica?
> * what would happen if there were additional collections in the cluster that 
> had replicas on these nodes but had a low search rate for this target 
> collection? Would they protect the nodes from being the target of DELETENODE 
> ops?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13140) Harden SearchRateTriggerIntegrationTest

2019-01-22 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-13140.
-
   Resolution: Fixed
Fix Version/s: master (9.0)
   7.7
   8.0

> Harden SearchRateTriggerIntegrationTest
> ---
>
> Key: SOLR-13140
> URL: https://issues.apache.org/jira/browse/SOLR-13140
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0)
>
> Attachments: SOLR-13140.patch, SOLR-13140.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8640) validate delimiters when parsing date ranges

2019-01-22 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749060#comment-16749060
 ] 

Mikhail Khludnev commented on LUCENE-8640:
--

[~lsharma3] I left a few comments on GitHub; would you mind considering them?

> validate delimiters when parsing date ranges
> 
>
> Key: LUCENE-8640
> URL: https://issues.apache.org/jira/browse/LUCENE-8640
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: LUCENE-8640.patch, LUCENE-8640.patch, LUCENE-8640.patch, 
> mypatch.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {{DateRangePrefixTree.parseCalendar()}} should validate delimiters to reject 
> dates like {{2000-11T13}} 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Upayavira (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749166#comment-16749166
 ] 

Upayavira commented on SOLR-13162:
--

This sounds like a very simple idea that would have saved me a lot of time back 
in the day (when I was working on the UI a lot).

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin UI, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-22 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749180#comment-16749180
 ] 

Ankit Jain commented on LUCENE-8635:


{quote}Technically we could make things work for existing segments since 
your patch doesn't change the file format.{quote}
[~jpountz] - I'm curious how this can be done. I looked at the code, and it 
seemed that all settings are passed to the segment writer, and the writer should 
put those settings in the codec for the reader to consume. Do you have any 
pointers on this?

{quote}I agree it's a bit unlikely that the terms index gets paged out, but you 
can still end up with a cold FS cache eg. when the host restarts?{quote}
There could be an option for preloading the terms index during index open. 
Lucene already provides an option for preloading mapped buffers 
[here|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L95],
 but it is done at the directory level, not the file level. Elasticsearch worked 
around that to provide a [file-level 
setting|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

{quote}For the record, Lucene also performs implicit PK lookups when indexing 
with updateDocument. So this might have an impact on indexing speed as 
well.{quote}
If the customer workload is updateDocument-heavy, the impact should be minimal: 
the terms index gets loaded into memory after the first fault on each page, and 
after that there should not be any further page faults. Customers who are 
sensitive to latency can use the preload option for the terms index.
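
For reference, the directory-level preload knob as it exists today (a sketch; 
the path is illustrative):

{code:java}
// Preload applies to every file opened from this directory; there is
// currently no per-file switch in Lucene itself.
MMapDirectory dir = new MMapDirectory(Paths.get("/path/to/index"));
dir.setPreload(true);
{code}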

{quote}Wondering whether avoiding 'array reversal' in the second patch is what 
helped rather than moving to random access and removing skip? May be we should 
try with reading one byte at a time with original patch.{quote}
I overlooked that earlier and attributed the performance gain to the absence of 
the seek operation. This makes a lot more sense; I'll try some runs after 
changing readBytes as below:
{code:title=ReverseIndexInputReader.java|borderStyle=solid}
public byte readByte() throws IOException {
  // Read one byte, then skip two: the net effect is stepping backward
  // one byte at a time over a forward-positioned input.
  final byte b = this.in.readByte();
  this.skipBytes(2);
  return b;
}

public void readBytes(byte[] b, int offset, int len) throws IOException {
  // Fill the buffer back to front, one reversed byte at a time.
  for (int i = offset + len - 1; i >= offset; i--) {
    b[i] = this.readByte();
  }
}
{code}

{quote}I uploaded a patch that combines these three things: off-heap FST + 
random-access reader + reversal of the FST so it is forward-read. Unit tests 
are passing; I'm running some benchmarks to see what the impact is on 
performance{quote}
That's great, Mike. If this works, we don't need the reverse reader. We don't 
even need the random-access reader, as we can simply change readBytes to the 
following:
{code:title=ReverseIndexInputReader.java|borderStyle=solid}
public void readBytes(byte[] b, int offset, int len) throws IOException {
  // Plain sequential read: no reversal needed once the FST is stored forward.
  this.in.readBytes(b, offset, len);
}
{code}

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, ra.patch, 
> rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-22 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749180#comment-16749180
 ] 

Ankit Jain edited comment on LUCENE-8635 at 1/22/19 9:40 PM:
-

{quote}Technically we could make things work for existing segments since 
your patch doesn't change the file format.{quote}
[~jpountz] - I'm curious how this can be done. I looked at the code, and it 
seemed that all settings are passed to the segment writer, and the writer should 
put those settings in the codec for the reader to consume. Do you have any 
pointers on this?

{quote}I agree it's a bit unlikely that the terms index gets paged out, but you 
can still end up with a cold FS cache eg. when the host restarts?{quote}
There could be an option for preloading the terms index during index open. 
Lucene already provides an option for preloading mapped buffers 
[here|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L95],
 but it is done at the directory level, not the file level. Elasticsearch worked 
around that to provide a [file-level 
setting|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

{quote}For the record, Lucene also performs implicit PK lookups when indexing 
with updateDocument. So this might have an impact on indexing speed as 
well.{quote}
If the customer workload is updateDocument-heavy, the impact should be minimal: 
the terms index gets loaded into memory after the first fault on each page, and 
after that there should not be any further page faults. Customers who are 
sensitive to latency can use the preload option for the terms index.

{quote}Wondering whether avoiding 'array reversal' in the second patch is what 
helped rather than moving to random access and removing skip? May be we should 
try with reading one byte at a time with original patch.{quote}
I overlooked that earlier and attributed the performance gain to the absence of 
the seek operation. This makes a lot more sense; I'll try some runs after 
changing readBytes as below:
{code:title=ReverseIndexInputReader.java|borderStyle=solid}  
public byte readByte() throws IOException {
final byte b = this.in.readByte();
this.skipBytes(2);
return b;
}

public void readBytes(byte[] b, int offset, int len) throws IOException {
for (int i=offset+len-1; i>=offset; i--) {
b[i] = this.readByte();
}
}
{code}

{quote}I uploaded a patch that combines these three things: off-heap FST + 
random-access reader + reversal of the FST so it is forward-read. Unit tests 
are passing; I'm running some benchmarks to see what the impact is on 
performance{quote}
That's great, Mike. If this works, we don't need the reverse reader. We don't 
even need the random-access reader, as we can simply change readBytes to the 
following:
{code:title=ReverseIndexInputReader.java|borderStyle=solid}  
public void readBytes(byte[] b, int offset, int len) throws IOException {
this.in.readBytes(b, offset, len);
}
{code}


was (Author: akjain):
bq. {quote}Technically we could make things work for existing segments since 
your patch doesn't change the file format.{quote}
[~jpountz] - I'm curious on how this can be done. I looked at the code and it 
seemed that all settings are passed to the segment writer and writer should put 
those settings in codec for reader to consume. Do you have any pointers on this?

{quote}I agree it's a bit unlikely that the terms index gets paged out, but you 
can still end up with a cold FS cache eg. when the host restarts?{quote}
There can be option for preloading terms index during index open. Even though, 
lucene already provides option for preloading mapped buffer 
[here|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L95],
 it is done at directory level and not file level. Though, elasticsearch worked 
around that to provide [file level 
setting|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

{quote}For the record, Lucene also performs implicit PK lookups when indexing 
with updateDocument. So this might have an impact on indexing speed as 
well.{quote}
If customer workload is updateDocument heavy, the impact should be minimal, as 
terms index will get loaded into memory after first fault for every page and 
then there should not be any page faults. If customers are sensitive to 
latency, they can use the preload option for terms index.

{quote}Wondering whether avoiding 'array reversal' in the second patch is what 
helped rather than moving to random access and removing skip? May be we should 
try with reading one byte at a time with original patch.{quote}
I overlooked that earlier and attributed performance gain to absence of seek 
operation. Th

[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-22 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749180#comment-16749180
 ] 

Ankit Jain edited comment on LUCENE-8635 at 1/22/19 9:41 PM:
-

{quote}Technically we could make things work for existing segments since your 
patch doesn't change the file format.{quote}
[~jpountz] - I'm curious how this can be done. I looked at the code, and it 
seemed that all settings are passed to the segment writer, and the writer should 
put those settings in the codec for the reader to consume. Do you have any 
pointers on this?

{quote}I agree it's a bit unlikely that the terms index gets paged out, but you 
can still end up with a cold FS cache eg. when the host restarts?{quote}
There could be an option for preloading the terms index during index open. 
Lucene already provides an option for preloading mapped buffers 
[here|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L95],
 but it is done at the directory level, not the file level. Elasticsearch worked 
around that to provide a [file-level 
setting|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

{quote}For the record, Lucene also performs implicit PK lookups when indexing 
with updateDocument. So this might have an impact on indexing speed as 
well.{quote}
If the customer workload is updateDocument-heavy, the impact should be minimal: 
the terms index gets loaded into memory after the first fault on each page, and 
after that there should not be any further page faults. Customers who are 
sensitive to latency can use the preload option for the terms index.

{quote}Wondering whether avoiding 'array reversal' in the second patch is what 
helped rather than moving to random access and removing skip? May be we should 
try with reading one byte at a time with original patch.{quote}
I overlooked that earlier and attributed the performance gain to the absence of 
the seek operation. This makes a lot more sense; I'll try some runs after 
changing readBytes as below:
{code:title=ReverseIndexInputReader.java|borderStyle=solid}  
public byte readByte() throws IOException {
final byte b = this.in.readByte();
this.skipBytes(2);
return b;
}

public void readBytes(byte[] b, int offset, int len) throws IOException {
for (int i=offset+len-1; i>=offset; i--) {
b[i] = this.readByte();
}
}
{code}

{quote}I uploaded a patch that combines these three things: off-heap FST + 
random-access reader + reversal of the FST so it is forward-read. Unit tests 
are passing; I'm running some benchmarks to see what the impact is on 
performance{quote}
That's great, Mike. If this works, we don't need the reverse reader. We don't 
even need the random-access reader, as we can simply change readBytes to the 
following:
{code:title=ReverseIndexInputReader.java|borderStyle=solid}  
public void readBytes(byte[] b, int offset, int len) throws IOException {
this.in.readBytes(b, offset, len);
}
{code}


was (Author: akjain):
bq. {quote}Technically we could make things work for existing segments since 
your patch doesn't change the file format.{quote}
[~jpountz] - I'm curious on how this can be done. I looked at the code and it 
seemed that all settings are passed to the segment writer and writer should put 
those settings in codec for reader to consume. Do you have any pointers on this?

{quote}I agree it's a bit unlikely that the terms index gets paged out, but you 
can still end up with a cold FS cache eg. when the host restarts?{quote}
There can be option for preloading terms index during index open. Even though, 
lucene already provides option for preloading mapped buffer 
[here|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/store/MMapDirectory.java#L95],
 it is done at directory level and not file level. Though, elasticsearch worked 
around that to provide [file level 
setting|https://www.elastic.co/guide/en/elasticsearch/reference/master/_pre_loading_data_into_the_file_system_cache.html]

{quote}For the record, Lucene also performs implicit PK lookups when indexing 
with updateDocument. So this might have an impact on indexing speed as 
well.{quote}
If customer workload is updateDocument heavy, the impact should be minimal, as 
terms index will get loaded into memory after first fault for every page and 
then there should not be any page faults. If customers are sensitive to 
latency, they can use the preload option for terms index.

{quote}Wondering whether avoiding 'array reversal' in the second patch is what 
helped rather than moving to random access and removing skip? May be we should 
try with reading one byte at a time with original patch.{quote}
I overlooked that earlier and attributed performance gain to absence of seek 
operation. This makes

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 434 - Still Failing

2019-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/434/

No tests ran.

Build Log:
[...truncated 23460 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2459 links (2010 relative) to 3224 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.7.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:c

[jira] [Commented] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Jeremy Branham (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749211#comment-16749211
 ] 

Jeremy Branham commented on SOLR-13162:
---

It definitely saved me some time, even on a small modification I was working on 
[drag-n-drop replica moves] 

I've started working on a new admin ui from the ground up, but it will be a 
while before I can get all the current features implemented. 

[https://github.com/savantly-net/solr-admin] 

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin UI, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread Jeremy Branham (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749211#comment-16749211
 ] 

Jeremy Branham edited comment on SOLR-13162 at 1/22/19 10:28 PM:
-

It definitely saved me some time, even on a small modification I was working on 
[drag-n-drop replica moves] 

I've started working on a new admin ui from the ground up, but it will be a 
while before I can get all the current features implemented. 

[https://github.com/savantly-net/solr-admin] 

It will work as a standalone tool.


was (Author: jdbranham):
It definitely saved me some time, even on a small modification I was working on 
[drag-n-drop replica moves] 

I've started working on a new admin ui from the ground up, but it will be a 
while before I can get all the current features implemented. 

[https://github.com/savantly-net/solr-admin] 

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin UI, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] ctargett commented on issue #547: SOLR-13161: Admin UI - drag/drop replicas

2019-01-22 Thread GitBox
ctargett commented on issue #547: SOLR-13161: Admin UI - drag/drop replicas
URL: https://github.com/apache/lucene-solr/pull/547#issuecomment-456591248
 
 
   Added Jira issue ID to title for Jira integration.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13162) Admin UI development-test cycle is slow

2019-01-22 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749257#comment-16749257
 ] 

Jan Høydahl commented on SOLR-13162:


I have been using the webapp {{ant dist}} workflow for some time: just let Solr 
run, and after the ant dist, refresh the browser and the new UI loads. Quite 
happy with that.

Cool that you have started to play with a rewrite of the Admin UI. I suppose 
you have seen SOLR-12276 too. Do you plan for the UI app to be deployed on just 
one or some of the servers in a cluster, or could it be hosted from within Solr 
as well? Running the UI as a separate app would certainly allow some interesting 
deployments where the UI could reside in a different network than the nodes.

I guess the design/layout of the existing UI is sub-optimal due to being 
incrementally developed; Cloud is just bolted on top of things, etc. So I'd love 
it if we could start with a vision and have a UX designer whip up a brand-new UI 
design before we start re-implementing what we have. Just my 2¢.

> Admin UI development-test cycle is slow
> ---
>
> Key: SOLR-13162
> URL: https://issues.apache.org/jira/browse/SOLR-13162
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jeremy Branham
>Priority: Minor
>
> When developing the admin user interface, it takes a long time to rebuild the 
> server to do testing.
> It would be nice to have a small test harness for the admin ui, so that 'ant 
> server' doesn't need to be executed before testing changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 970 - Unstable!

2019-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/970/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas
Timeout waiting to see state for collection=MissingSegmentRecoveryTest
:DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:60146/solr",
          "node_name":"127.0.0.1:60146_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node4":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:60149/solr",
          "node_name":"127.0.0.1:60149_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:60146_solr, 127.0.0.1:60149_solr]
Last available state:
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
          "base_url":"http://127.0.0.1:60146/solr",
          "node_name":"127.0.0.1:60146_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node4":{
          "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
          "base_url":"http://127.0.0.1:60149/solr",
          "node_name":"127.0.0.1:60149_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
Timeout waiting to see state for collection=MissingSegmentRecoveryTest 
:DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
  "base_url":"http://127.0.0.1:60146/solr";,
  "node_name":"127.0.0.1:60146_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
  "base_url":"http://127.0.0.1:60149/solr";,
  "node_name":"127.0.0.1:60149_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:60146_solr, 127.0.0.1:60149_solr]
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"MissingSegmentRecoveryTest_shard1_replica_n1",
  "base_url":"http://127.0.0.1:60146/solr";,
  "node_name":"127.0.0.1:60146_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node4":{
  "core":"MissingSegmentRecoveryTest_shard1_replica_n2",
  "base_url":"http://127.0.0.1:60149/solr";,
  "node_name":"127.0.0.1:60149_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([DCFF0BB3E1ED4FE4:8CAA93B0B8CCF9F9]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:289)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:267)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecovery

[GitHub] MighTguY commented on a change in pull request #538: LUCENE-8640: added changes for the validation of valid dateString

2019-01-22 Thread GitBox
MighTguY commented on a change in pull request #538: LUCENE-8640: added changes 
for the validation of valid dateString
URL: https://github.com/apache/lucene-solr/pull/538#discussion_r250034905
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/schema/DateRangeFieldTest.java
 ##
 @@ -20,7 +20,7 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-public class DateRangeFieldTest extends SolrTestCaseJ4 {
+public class  DateRangeFieldTest extends SolrTestCaseJ4 {
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] MighTguY commented on a change in pull request #538: LUCENE-8640: added changes for the validation of valid dateString

2019-01-22 Thread GitBox
MighTguY commented on a change in pull request #538: LUCENE-8640: added changes 
for the validation of valid dateString
URL: https://github.com/apache/lucene-solr/pull/538#discussion_r250034976
 
 

 ##
 File path: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTree.java
 ##
 @@ -500,4 +516,18 @@ public Calendar parseCalendar(String str) throws ParseException {
     throw new ParseException("Improperly formatted date: "+str, offset);
   }
 
+  private void isValidDateDelimeter(String str, int offset, char delim) {
+    if (str.charAt(offset) != delim) {
+      throw new IllegalArgumentException("Not the valid delimeter for Position" + offset);
+    }
+  }
+
+  private int parseIntegerAndValidate(String str, int offset, int min, int max) {
+    int val = Integer.parseInt(str.substring(offset, offset + 2));
+    if ((val < min) || (val > max)) {
+      throw new IllegalArgumentException("Not valid date.");
 
 Review comment:
   I have incorporated the same.
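
For context, a self-contained sketch of how the two helpers from this diff 
compose when walking a date string (the helper bodies mirror the patch above; 
the surrounding parse flow in main() is a simplified illustration, not the 
actual {{DateRangePrefixTree.parseCalendar}} code):

{code:java}
// Simplified illustration of the validation approach in this patch.
// Helper bodies mirror the diff above (including its message wording);
// main() is a sketch, not the real parseCalendar implementation.
public class DateDelimiterCheckDemo {

  static void isValidDateDelimeter(String str, int offset, char delim) {
    if (str.charAt(offset) != delim) {
      throw new IllegalArgumentException("Not the valid delimeter for Position" + offset);
    }
  }

  static int parseIntegerAndValidate(String str, int offset, int min, int max) {
    int val = Integer.parseInt(str.substring(offset, offset + 2));
    if ((val < min) || (val > max)) {
      throw new IllegalArgumentException("Not valid date.");
    }
    return val;
  }

  public static void main(String[] args) {
    String str = "2000-11-22";
    isValidDateDelimeter(str, 4, '-');                  // rejects "2000x11-22"
    int month = parseIntegerAndValidate(str, 5, 1, 12); // rejects month "13"
    isValidDateDelimeter(str, 7, '-');                  // rejects "2000-11T13"
    int day = parseIntegerAndValidate(str, 8, 1, 31);
    System.out.println("month=" + month + " day=" + day);
  }
}
{code}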


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] MighTguY commented on a change in pull request #538: LUCENE-8640: added changes for the validation of valid dateString

2019-01-22 Thread GitBox
MighTguY commented on a change in pull request #538: LUCENE-8640: added changes 
for the validation of valid dateString
URL: https://github.com/apache/lucene-solr/pull/538#discussion_r250035191
 
 

 ##
 File path: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTree.java
 ##
 @@ -500,4 +516,18 @@ public Calendar parseCalendar(String str) throws ParseException {
     throw new ParseException("Improperly formatted date: "+str, offset);
   }
 
+  private void isValidDateDelimeter(String str, int offset, char delim) {
+    if (str.charAt(offset) != delim) {
+      throw new IllegalArgumentException("Not the valid delimeter for Position" + offset);
+    }
+  }
+
+  private int parseIntegerAndValidate(String str, int offset, int min, int max) {
+    int val = Integer.parseInt(str.substring(offset, offset + 2));
+    if ((val < min) || (val > max)) {
+      throw new IllegalArgumentException("Not valid date.");
 
 Review comment:
   @gd-spb-e1m I have incorporated the same, please review


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8640) validate delimiters when parsing date ranges

2019-01-22 Thread Lucky Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749394#comment-16749394
 ] 

Lucky Sharma commented on LUCENE-8640:
--

I have incorporated the comments :) Please review.

> validate delimiters when parsing date ranges
> 
>
> Key: LUCENE-8640
> URL: https://issues.apache.org/jira/browse/LUCENE-8640
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: LUCENE-8640.patch, LUCENE-8640.patch, LUCENE-8640.patch, 
> mypatch.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> {{DateRangePrefixTree.parseCalendar()}} should validate delimiters to reject 
> dates like {{2000-11T13}} 
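
For illustration, the intended behaviour after the patch, as a hedged sketch 
({{DateRangePrefixTree.INSTANCE}} and {{parseCalendar(String)}} are the entry 
points shown in the PR #538 diff; the exact exception type raised for a bad 
delimiter follows the patch, not a released API):

{code:java}
// Hedged sketch: after LUCENE-8640, parseCalendar should reject strings
// whose delimiters are wrong, e.g. "2000-11T13" ('T' where '-' belongs).
import java.text.ParseException;
import org.apache.lucene.spatial.prefix.tree.DateRangePrefixTree;

public class ParseCalendarDemo {
  public static void main(String[] args) throws ParseException {
    DateRangePrefixTree tree = DateRangePrefixTree.INSTANCE;

    // Well-formed input still parses as before.
    System.out.println(tree.parseCalendar("2000-11-13").getTime());

    try {
      tree.parseCalendar("2000-11T13"); // bad delimiter at position 7
    } catch (IllegalArgumentException | ParseException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
{code}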



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-10.0.1) - Build # 22 - Unstable!

2019-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/22/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testClassifyStream

Error Message:
expected:<0.0> but was:<0.9998245650830389>

Stack Trace:
java.lang.AssertionError: expected:<0.0> but was:<0.9998245650830389>
at 
__randomizedtesting.SeedInfo.seed([E732023CF11F988A:427A9804C847811E]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:553)
at org.junit.Assert.assertEquals(Assert.java:683)
at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testClassifyStream(StreamDecoratorTest.java:3406)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testClassifyStream

Error Message:
expected:<0.0> but was:<0.9998245650

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23561 - Unstable!

2019-01-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23561/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:343)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:517)
at 
org.apache.solr.cloud.TestCloudSearcherWarming.tearDown(TestCloudSearcherWarming.java:78)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotse

[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 9 - Unstable

2019-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/9/

3 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
acoll: 1548201759155 bcoll: 1548201759228

Stack Trace:
java.lang.AssertionError: acoll: 1548201759155 bcoll: 1548201759228
at 
__randomizedtesting.SeedInfo.seed([DAFE72BE73FE450A:52AA4D64DD0228F2]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:116)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at j

[JENKINS] Lucene-Solr-repro - Build # 2715 - Unstable

2019-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2715/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1756/consoleText

[repro] Revision: 01dfe7bf4b2bd05326c66cc6297300f3dd321547

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=CdcrReplicationHandlerTest 
-Dtests.method=testReplicationWithBufferedUpdates -Dtests.seed=2F43141E50A6522E 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=tr-TR -Dtests.timezone=Asia/Brunei -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=HdfsTlogReplayBufferedWhileIndexingTest -Dtests.method=test 
-Dtests.seed=2F43141E50A6522E -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=Africa/Lome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
15e5ca999ff7e912653db897781b21642d5307f0
[repro] git fetch
[repro] git checkout 01dfe7bf4b2bd05326c66cc6297300f3dd321547

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CdcrReplicationHandlerTest
[repro]   HdfsTlogReplayBufferedWhileIndexingTest
[repro] ant compile-test

[...truncated 3568 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.CdcrReplicationHandlerTest|*.HdfsTlogReplayBufferedWhileIndexingTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=2F43141E50A6522E -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=tr-TR -Dtests.timezone=Asia/Brunei -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 5419 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest
[repro]   1/5 failed: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest
[repro] git checkout 15e5ca999ff7e912653db897781b21642d5307f0

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1757 - Still Unstable

2019-01-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1757/

3 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([B90AF6AFC6BAD54B:322D257E87BC7ECF]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:308)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:394)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:544)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:483)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:451)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:499)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowin

[jira] [Commented] (LUCENE-8654) Polygon2D#relateTriangle returns the wrong answer if polygon is inside the triangle

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749609#comment-16749609
 ] 

ASF subversion and git services commented on LUCENE-8654:
-

Commit ea06ecf6b3ff9d49d26b914077cfe9c48b82bd99 in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ea06ecf ]

LUCENE-8654: Polygon2D#relateTriangle returns the wrong answer if polygon is 
inside the triangle


> Polygon2D#relateTriangle returns the wrong answer if polygon is inside the 
> triangle
> ---
>
> Key: LUCENE-8654
> URL: https://issues.apache.org/jira/browse/LUCENE-8654
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8654.patch
>
>
> The method returns CELL_OUTSIDE_QUERY but the right answer should be 
> CELL_CROSSES_QUERY.
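
As a hedged illustration of the reported case (the {{Polygon2D.create}} and 
{{relateTriangle}} names, the {{(ax, ay, bx, by, cx, cy)}} argument order, and 
{{PointValues.Relation}} are assumptions based on the Lucene 7.x/8.x geo 
package, not taken from this issue):

{code:java}
// Hedged sketch of the bug: a polygon fully contained in the query triangle.
// API names/signatures are assumptions from the Lucene 7.x/8.x geo package.
import org.apache.lucene.geo.Polygon;
import org.apache.lucene.geo.Polygon2D;
import org.apache.lucene.index.PointValues.Relation;

public class RelateTriangleDemo {
  public static void main(String[] args) {
    // A small closed square ring (latitudes, longitudes) around the origin...
    Polygon square = new Polygon(
        new double[] {-1, -1, 1, 1, -1},   // latitudes
        new double[] {-1, 1, 1, -1, -1});  // longitudes
    Polygon2D impl = Polygon2D.create(square);

    // ...tested against a much larger triangle that contains it entirely.
    Relation r = impl.relateTriangle(-10, -10, 10, -10, 0, 10);

    // The polygon lies inside the triangle, so the triangle crosses the
    // query; before this fix the method wrongly returned CELL_OUTSIDE_QUERY.
    System.out.println(r); // expected after the fix: CELL_CROSSES_QUERY
  }
}
{code}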



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8654) Polygon2D#relateTriangle returns the wrong answer if polygon is inside the triangle

2019-01-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749610#comment-16749610
 ] 

ASF subversion and git services commented on LUCENE-8654:
-

Commit 5ac3bbc539620bf8ba09f66f13c73b210715be68 in lucene-solr's branch 
refs/heads/branch_7x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5ac3bbc ]

LUCENE-8654: Polygon2D#relateTriangle returns the wrong answer if polygon is 
inside the triangle


> Polygon2D#relateTriangle returns the wrong answer if polygon is inside the 
> triangle
> ---
>
> Key: LUCENE-8654
> URL: https://issues.apache.org/jira/browse/LUCENE-8654
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8654.patch
>
>
> The method returns CELL_OUTSIDE_QUERY but the right answer should be 
> CELL_CROSSES_QUERY.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


