[jira] [Reopened] (SOLR-10442) ExtendedDismaxQParser (edismax) makes pf* require search term exactly

2018-04-06 Thread Nikolay Martynov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Martynov reopened SOLR-10442:
-

Sorry for the misinformation; this is still happening on 6.6.1.

> ExtendedDismaxQParser (edismax) makes pf* require search term exactly
> 
>
> Key: SOLR-10442
> URL: https://issues.apache.org/jira/browse/SOLR-10442
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: query parsers
>Affects Versions: 6.5
>Reporter: Nikolay Martynov
>Priority: Major
>
> Request like:
> {code}
> "params":{
>   "q": "cat AND dog",
>   "q.op": "AND",
>   "defType":"edismax",
>   "qf":"description",
>   "pf2":"description"
> }
> {code}
> produces query like this:
> {code}
> "parsedquery_toString":"+(+(description.en:cat) +(description.en:dog)) 
> (+(description.en:\"cat dog\"))"
> {code}
> Solr 4.6.1 produces different parsing of this query:
> {code}
> "parsedquery_toString": "+(+(description.en:cat) +(description.en:dog)) 
> (description.en:\"cat dog\")",
> {code}
> Replacing {{q.op=AND}} with {{q.op=OR}} in newer Solr produces the same 
> query as old Solr, even though this change would not be expected to make a 
> difference.
> This issue is probably related to SOLR-8812 - it looks like just one more 
> case of the same problem. That would also mean the change occurred in the 
> version range specified there - unfortunately I am not able to test that.
> This change in behaviour does not seem intended: introducing pf2 now 
> searches for documents that must contain the 'cat dog' phrase instead of 
> just boosting such documents.
> Please let me know if more information is required.
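
For anyone trying to reproduce this, a minimal SolrJ sketch that issues the 
request above with {{debugQuery=true}} so the parsed query can be compared 
across versions. The base URL, collection name, and field name are 
assumptions, not from the report:

{code}
// Minimal SolrJ reproduction sketch for SOLR-10442. The base URL, collection
// name, and field name are assumptions, not from the report.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EdismaxPf2Repro {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("cat AND dog");
      q.set("q.op", "AND");         // switching this to OR changes the pf2 clause
      q.set("defType", "edismax");
      q.set("qf", "description");
      q.set("pf2", "description");
      q.set("debugQuery", "true");  // exposes parsedquery_toString in the response
      QueryResponse rsp = client.query("test", q);
      System.out.println(rsp.getDebugMap().get("parsedquery_toString"));
    }
  }
}
{code}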



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10442) ExtendedDismaxQParser (edismax) makes pf* require search term exactly

2018-03-20 Thread Nikolay Martynov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Martynov resolved SOLR-10442.
-
Resolution: Cannot Reproduce

> ExtendedDismaxQParser (edismax) makes pf* require search term exactly
> 
>
> Key: SOLR-10442
> URL: https://issues.apache.org/jira/browse/SOLR-10442
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: query parsers
>Affects Versions: 6.5
>Reporter: Nikolay Martynov
>Priority: Major
>
> Request like:
> {code}
> "params":{
>   "q": "cat AND dog",
>   "q.op": "AND",
>   "defType":"edismax",
>   "qf":"description",
>   "pf2":"description"
> }
> {code}
> produces query like this:
> {code}
> "parsedquery_toString":"+(+(description.en:cat) +(description.en:dog)) 
> (+(description.en:\"cat dog\"))"
> {code}
> Solr 4.6.1 produces different parsing of this query:
> {code}
> "parsedquery_toString": "+(+(description.en:cat) +(description.en:dog)) 
> (description.en:\"cat dog\")",
> {code}
> Replacing {{q.op=AND}} with {{q.op=OR}} in newer Solr produces the same 
> query as old Solr, even though this change would not be expected to make a 
> difference.
> This issue is probably related to SOLR-8812 - it looks like just one more 
> case of the same problem. That would also mean the change occurred in the 
> version range specified there - unfortunately I am not able to test that.
> This change in behaviour does not seem intended: introducing pf2 now 
> searches for documents that must contain the 'cat dog' phrase instead of 
> just boosting such documents.
> Please let me know if more information is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10442) ExtendedDismaxQParser (edismax) makes pf* require search term exactly

2018-03-20 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406405#comment-16406405
 ] 

Nikolay Martynov commented on SOLR-10442:
-

Looks like we cannot reproduce it any longer. Closing.

> ExtendedDismaxQParser (edismax) makes pf* require search term exactly
> 
>
> Key: SOLR-10442
> URL: https://issues.apache.org/jira/browse/SOLR-10442
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: query parsers
>Affects Versions: 6.5
>Reporter: Nikolay Martynov
>Priority: Major
>
> Request like:
> {code}
> "params":{
>   "q": "cat AND dog",
>   "q.op": "AND",
>   "defType":"edismax",
>   "qf":"description",
>   "pf2":"description"
> }
> {code}
> produces query like this:
> {code}
> "parsedquery_toString":"+(+(description.en:cat) +(description.en:dog)) 
> (+(description.en:\"cat dog\"))"
> {code}
> Solr 4.6.1 produces different parsing of this query:
> {code}
> "parsedquery_toString": "+(+(description.en:cat) +(description.en:dog)) 
> (description.en:\"cat dog\")",
> {code}
> Replacing {{q.op=AND}} with {{q.op=OR}} in newer Solr produces the same 
> query as old Solr, even though this change would not be expected to make a 
> difference.
> This issue is probably related to SOLR-8812 - it looks like just one more 
> case of the same problem. That would also mean the change occurred in the 
> version range specified there - unfortunately I am not able to test that.
> This change in behaviour does not seem intended: introducing pf2 now 
> searches for documents that must contain the 'cat dog' phrase instead of 
> just boosting such documents.
> Please let me know if more information is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12049) SolrJ doesn't pass basic auth for delete requests

2018-03-01 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-12049:
---

 Summary: SolrJ doesn't pass basic auth for delete requests
 Key: SOLR-12049
 URL: https://issues.apache.org/jira/browse/SOLR-12049
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 7.1
Reporter: Nikolay Martynov


If basic authentication is used, then delete-by-id requests do not work 
because the authentication parameters are not passed.

For updates there is this line in the code: 
https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/request/UpdateRequest.java#L280

For deletes there is no corresponding logic further down in the same file.
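
Until this is fixed, attaching the credentials to the request object works 
with a plain {{HttpSolrClient}} (the routed {{CloudSolrClient}} path is 
exactly where the report says they get dropped for deletes). A hedged sketch; 
URL, credentials, and collection name are placeholders:

{code}
// Hedged workaround sketch for SOLR-12049: set basic-auth credentials on the
// request itself. User, password, URL, and collection name are placeholders.
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;

public class DeleteWithAuth {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      UpdateRequest req = new UpdateRequest();
      req.deleteById("123");
      req.setBasicAuthCredentials("solr", "SolrRocks");
      // Commit within the same authenticated request rather than via
      // client.commit(), which would not carry the credentials.
      req.setAction(UpdateRequest.ACTION.COMMIT, true, true);
      req.process(client, "test");
    }
  }
}
{code}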



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12042) Authorization rules do not work as expected.

2018-02-27 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-12042:
---

 Summary: Authorization rules do not work as expected.
 Key: SOLR-12042
 URL: https://issues.apache.org/jira/browse/SOLR-12042
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authentication
Affects Versions: 6.6.2
 Environment: SolrCloud, Linux.
Reporter: Nikolay Martynov


Authorization rules do not work as expected: more permissions are granted than 
desired.

This is an example of security.json:
{noformat}
{
 "authentication":{
   "blockUnknown":false,
   "class":"solr.BasicAuthPlugin",
   "credentials":{"admin":"XvyR9ddaDk/kVNBrhJHkeWhqTFQ2uAsv8tDOmkSDwkg= 
3EiRiSQVKYnGDgOwBoY6NJNlOcoRuYZOoUMYB9hgpGw="},
   "":{"v":56}},
 "authorization":{
   "class":"solr.RuleBasedAuthorizationPlugin",
   "user-role":{"admin":["admin"]},
   "":{"v":66},
   "permissions":[
 {
   "name":"read",
   "role":null,
   "index":1},
 {
   "path":"/admin/info/system",
   "collection":null,
   "role":null,
   "index":2},
 {
   "name":"all",
   "role":"admin",
   "index":3}]}}
{noformat}

With this, no authentication is required to create or delete a collection.
If one removes the second rule (the one with the path), then authentication is 
required to create or destroy a collection.
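
For reference, a hedged SolrJ reproduction sketch (the base URL and the 
collection/config names are placeholders, not from the report): with the 
security.json above, the unauthenticated CREATE below succeeds, and it starts 
requiring credentials only once the second, path-based permission is removed:

{code}
// Hedged reproduction sketch for SOLR-12042. With the security.json above,
// this unauthenticated CREATE is expected to fail but succeeds. The URL,
// collection, and config names are placeholders.
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class UnauthenticatedCreate {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // No setBasicAuthCredentials() on purpose: the point is that this succeeds.
      CollectionAdminRequest.createCollection("probe", "config", 1, 1)
          .process(client);
    }
  }
}
{code}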



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors

2018-01-11 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322508#comment-16322508
 ] 

Nikolay Martynov commented on SOLR-11770:
-

There is a related story for 'tvrh needs stored unique key': 
https://issues.apache.org/jira/browse/SOLR-11792

> NPE in tvrh if no field is specified and document doesn't contain any fields 
> with term vectors
> --
>
> Key: SOLR-11770
> URL: https://issues.apache.org/jira/browse/SOLR-11770
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>
> It looks like if a {{tvrh}} request doesn't contain the {{fl}} parameter and 
> the document doesn't have any fields with term vectors, then Solr returns an 
> NPE.
> Request: 
> {{tvrh?shards.qt=/tvrh&q=field%3Avalue&wt=json&fq=id%3A123&tv.all=true}}.
> On our 'old' schema we had some fields with {{termVectors}} and even more 
> fields with position data. In our new schema we tried to remove unused data 
> so we dropped a lot of position data and some term vectors.
> Our documents are 'sparsely' populated - not all documents contain all fields.
> The above request returned fine for our 'old' schema and returns 500 for our 
> 'new' schema - on exactly the same Solr (6.6.2).
> Stack trace:
> {code}
> 2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 
> r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>at 
> org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324)
>at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
>at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
>at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482)
>at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
>at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>at org.eclipse.jetty.server.Server.handle(Server.java:534)
>at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> 

[jira] [Updated] (SOLR-11792) tvrh component doesn't work if unique key has stored="false"

2017-12-22 Thread Nikolay Martynov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Martynov updated SOLR-11792:

Summary: tvrh component doesn't work if unique key has stored="false"  
(was: tvrh component requires unique key to be stored and doesn't work with 
docValues)

> tvrh component doesn't work if unique key has stored="false"
> 
>
> Key: SOLR-11792
> URL: https://issues.apache.org/jira/browse/SOLR-11792
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>
> If I create index with unique key defined like
> {code}
> <field name="id" type="string" indexed="true" stored="false" docValues="true"/>
> {code}
> then searches seem to work, but {{tvrh}} doesn't return any vectors for 
> fields that have them stored.
> From a cursory look at the code, it seems the {{tvrh}} component requires 
> the unique key to be specifically stored.
> Ideally {{tvrh}} should work fine with docValues. And at the very least this 
> gotcha should be documented, probably here: 
> https://lucene.apache.org/solr/guide/6_6/field-properties-by-use-case.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11792) tvrh component requires unique key to be stored and doesn't work with docValues

2017-12-22 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301862#comment-16301862
 ] 

Nikolay Martynov commented on SOLR-11792:
-

> WDYT about changing the title to reflect that the problem is with the 
> uniqueKey field having stored="false"?

Fine by me :)

> tvrh component requires unique key to be stored and doesn't work with 
> docValues
> ---
>
> Key: SOLR-11792
> URL: https://issues.apache.org/jira/browse/SOLR-11792
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6.2
>Reporter: Nikolay Martynov
>
> If I create index with unique key defined like
> {code}
> <field name="id" type="string" indexed="true" stored="false" docValues="true"/>
> {code}
> then searches seem to work, but {{tvrh}} doesn't return any vectors for 
> fields that have them stored.
> From a cursory look at the code, it seems the {{tvrh}} component requires 
> the unique key to be specifically stored.
> Ideally {{tvrh}} should work fine with docValues. And at the very least this 
> gotcha should be documented, probably here: 
> https://lucene.apache.org/solr/guide/6_6/field-properties-by-use-case.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11792) tvrh component requires unique key to be stored and doesn't work with docValues

2017-12-22 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-11792:
---

 Summary: tvrh component requires unique key to be stored and 
doesn't work with docValues
 Key: SOLR-11792
 URL: https://issues.apache.org/jira/browse/SOLR-11792
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.2
Reporter: Nikolay Martynov


If I create index with unique key defined like
{code}
<field name="id" type="string" indexed="true" stored="false" docValues="true"/>
{code}

then searches seem to work, but {{tvrh}} doesn't return any vectors for 
fields that have them stored.

From a cursory look at the code, it seems the {{tvrh}} component requires the 
unique key to be specifically stored.

Ideally {{tvrh}} should work fine with docValues. And at the very least this 
gotcha should be documented, probably here: 
https://lucene.apache.org/solr/guide/6_6/field-properties-by-use-case.html
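
Presumably the component could fall back to docValues when the field is not 
stored. A hedged Lucene 6.x sketch of that idea (names assumed; this is not 
the actual TermVectorComponent code):

{code}
// Hedged sketch (Lucene 6.x API) of falling back to docValues for the unique
// key when it is not stored. Not the actual TermVectorComponent code.
import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedDocValues;

public class UniqueKeyFromDocValues {
  static String readUniqueKey(LeafReader reader, String uniqueKeyField, int docId)
      throws IOException {
    SortedDocValues dv = reader.getSortedDocValues(uniqueKeyField);
    if (dv == null) {
      return null; // neither stored nor docValues: nothing to fall back to
    }
    return dv.get(docId).utf8ToString(); // Lucene 6.x positional access
  }
}
{code}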



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11770) NPE in tvrh if no field is specified and document doesn't contain any fields with term vectors

2017-12-17 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-11770:
---

 Summary: NPE in tvrh if no field is specified and document doesn't 
contain any fields with term vectors
 Key: SOLR-11770
 URL: https://issues.apache.org/jira/browse/SOLR-11770
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.2
Reporter: Nikolay Martynov


It looks like if a {{tvrh}} request doesn't contain the {{fl}} parameter and 
the document doesn't have any fields with term vectors, then Solr returns an 
NPE.

Request: 
{{tvrh?shards.qt=/tvrh&q=field%3Avalue&wt=json&fq=id%3A123&tv.all=true}}.

On our 'old' schema we had some fields with {{termVectors}} and even more 
fields with position data. In our new schema we tried to remove unused data so 
we dropped a lot of position data and some term vectors.

Our documents are 'sparsely' populated - not all documents contain all fields.

The above request returned fine for our 'old' schema and returns 500 for our 
'new' schema - on exactly the same Solr (6.6.2).

Stack trace:
{code}
2017-12-18 01:15:00.958 ERROR (qtp255041198-46697) [c:test s:shard3 
r:core_node11 x:test_shard3_replica1] o.a.s.h.RequestHandlerBase 
java.lang.NullPointerException
   at 
org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:324)
   at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)
   at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2482)
   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
   at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
   at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
   at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
   at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
   at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
   at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
   at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
   at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
   at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
   at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
   at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
   at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
   at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
   at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
   at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
   at org.eclipse.jetty.server.Server.handle(Server.java:534)
   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
   at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
   at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
   at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
   at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
   at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
   at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
   at java.lang.Thread.run(Thread.java:748)
{code}
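
For reference, the same request expressed as a hedged SolrJ sketch; the 
collection name is a placeholder and the parameter names are a best-effort 
reading of the mangled URL above:

{code}
// Hedged sketch of the request that triggers the NPE in SOLR-11770. The
// collection name is a placeholder; parameter names are reconstructed from
// the garbled request URL above.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class TvrhNpeRepro {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("field:value");
      q.setRequestHandler("/tvrh");   // term vector request handler
      q.set("shards.qt", "/tvrh");
      q.set("fq", "id:123");
      q.set("tv.all", "true");
      // Note: no fl parameter. If the matched document has no fields with
      // term vectors, this returns a 500 instead of an empty tv section.
      client.query("test", q);
    }
  }
}
{code}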



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11634) Create collection doesn't respect `maxShardsPerNode`

2017-12-11 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285989#comment-16285989
 ] 

Nikolay Martynov commented on SOLR-11634:
-

To clarify: we have 1 JVM per box.

> Create collection doesn't respect `maxShardsPerNode`
> 
>
> Key: SOLR-11634
> URL: https://issues.apache.org/jira/browse/SOLR-11634
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>
> Command
> {noformat}
> curl 
> 'http://host:8983/solr/admin/collections?action=CREATE&name=xxx&numShards=16&replicationFactor=3&collection.configName=config&maxShardsPerNode=2&rule=shard:*,replica:<2,node:*&rule=shard:*,replica:<2,sysprop.aws.az:*'
> {noformat}
> creates a collection with 1, 2, and 3 shards per node - it looks like 
> {{maxShardsPerNode}} is being ignored.
> Adding {{rule=replica:<{},node:*}} seems to help, but I'm not sure if this is 
> correct and it doesn't seem to match the documented behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247854#comment-16247854
 ] 

Nikolay Martynov edited comment on LUCENE-8048 at 11/10/17 6:03 PM:


Just to clarify:
* On Linux, directory {{fsync}} does work, but it doesn't currently solve the 
problem because this {{fsync}} happens after the segment list has been 
created. Essentially, the guarantees fsync gives (weak as they are) concern 
the case of failure: before the fsync you can see the changes in any 
combination; after the fsync you are guaranteed to see exactly what was 
written.
* This can potentially affect any FS that uses non-trivial storage for 
directories (which is pretty much everything these days). Word on the internet 
is that btrfs is capable of doing out-of-order directory writes.
* 'kernel automatically detects rename pattern' - I think this only works on 
some FSs (ext4) and only if certain mount options are present (auto_da_alloc). 
And I think this is generally about syncing file data with the directory, not 
syncing the directory as a whole on rename.


was (Author: mar-kolya):
Just to clarify:
* On Linux directory {{fsync}} does work, but it doesn't solve the problem 
because this {{fsync}} happens after segment list has been created. Essentially 
guarantees about fsync (weak as they are) are in case of failure: before fsync 
you can see changes in any combination, after fsync you are guaranteed to see 
exactly what was written.
* This can potentially affect any FS that uses non trivial storage for 
directories (which is pretty much everything these days). Word on the internet 
is that btrfs is capable of doing out of order directory writes.
* 'kernel automatically detects rename pattern' - I think this only works on 
some FSs (ext4) and only if certain mount options are present (auto_da_alloc). 
And I think generally this is about syncing file data with directory, not 
syncing directory as a whole on rename.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently, when the index is written to disk, the following sequence of 
> events takes place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leaves a window in which the system can crash after 'rename 
> list of segments' but before 'sync index directory'; depending on the exact 
> filesystem implementation, the 'list of segments' may then be visible in the 
> directory while some of the segments are not.
> The solution is to sync the index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows the idea implemented. I'm fairly certain that I didn't find all the 
> places where this may be happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247854#comment-16247854
 ] 

Nikolay Martynov commented on LUCENE-8048:
--

Just to clarify:
* On Linux, directory {{fsync}} does work, but it doesn't solve the problem 
because this {{fsync}} happens after the segment list has been created. 
Essentially, the guarantees fsync gives (weak as they are) concern the case of 
failure: before the fsync you can see the changes in any combination; after 
the fsync you are guaranteed to see exactly what was written.
* This can potentially affect any FS that uses non-trivial storage for 
directories (which is pretty much everything these days). Word on the internet 
is that btrfs is capable of doing out-of-order directory writes.
* 'kernel automatically detects rename pattern' - I think this only works on 
some FSs (ext4) and only if certain mount options are present (auto_da_alloc). 
And I think this is generally about syncing file data with the directory, not 
syncing the directory as a whole on rename.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently, when the index is written to disk, the following sequence of 
> events takes place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leaves a window in which the system can crash after 'rename 
> list of segments' but before 'sync index directory'; depending on the exact 
> filesystem implementation, the 'list of segments' may then be visible in the 
> directory while some of the segments are not.
> The solution is to sync the index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows the idea implemented. I'm fairly certain that I didn't find all the 
> places where this may be happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11634) Create collection doesn't respect `maxShardsPerNode`

2017-11-09 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-11634:
---

 Summary: Create collection doesn't respect `maxShardsPerNode`
 Key: SOLR-11634
 URL: https://issues.apache.org/jira/browse/SOLR-11634
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.1
Reporter: Nikolay Martynov


Command
{noformat}
curl 
'http://host:8983/solr/admin/collections?action=CREATE&name=xxx&numShards=16&replicationFactor=3&collection.configName=config&maxShardsPerNode=2&rule=shard:*,replica:<2,node:*&rule=shard:*,replica:<2,sysprop.aws.az:*'
{noformat}

creates a collection with 1, 2, and 3 shards per node - it looks like 
{{maxShardsPerNode}} is being ignored.

Adding {{rule=replica:<{},node:*}} seems to help, but I'm not sure if this is 
correct and it doesn't seem to match the documented behaviour.
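
The same request via SolrJ, as a hedged sketch; names are placeholders and 
the rule parameters are omitted, since the point is that {{maxShardsPerNode}} 
alone is not honoured:

{code}
// Hedged SolrJ equivalent of the curl command above (rule parameters
// omitted). The base URL and names are placeholders.
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreateCollectionRepro {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://host:8983/solr").build()) {
      CollectionAdminRequest.createCollection("xxx", "config", 16, 3)
          .setMaxShardsPerNode(2)   // the limit that ends up being ignored
          .process(client);
    }
  }
}
{code}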



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246718#comment-16246718
 ] 

Nikolay Martynov commented on SOLR-11625:
-

Hi.

We are using c4.8xlarge instances; we have 24 nodes, 3 replicas, and 16 shards 
- 2 cores per node.
The exact indexing rate is hard to estimate, but probably 10-20 threads 
sending batches of 20 documents each.

We have a script to roll these boxes one by one: roll one, wait for the 
cluster to become 'green', then roll the next one. This script rarely finishes 
because of this problem.

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
> {noformat}
> After this, Solr cannot start, claiming that some 

[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16245958#comment-16245958
 ] 

Nikolay Martynov commented on SOLR-11625:
-

Yes.

So the setup is fairly easy:
* Create a cluster.
* Start sending a lot of updates to the cluster.
* Start rebooting nodes in that cluster - 'graceful' shutdown is important.

From time to time Solr doesn't come back up, complaining that it cannot find 
an index file.

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
> {noformat}
> After this, Solr cannot start, claiming that some files that are supposed to 
> exist in the index do not exist. On one occasion we observed 

[jira] [Created] (SOLR-11626) Filesystems do not guarantee order of directories updates

2017-11-08 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-11626:
---

 Summary: Filesystems do not guarantee order of directories updates
 Key: SOLR-11626
 URL: https://issues.apache.org/jira/browse/SOLR-11626
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Nikolay Martynov


Currently, when the index is written to disk, the following sequence of events 
takes place:
* write segment file
* sync segment file
* write segment file
* sync segment file
...
* write list of segments
* sync list of segments
* rename list of segments
* sync index directory

This sequence leaves a window in which the system can crash after 'rename list 
of segments' but before 'sync index directory'; depending on the exact 
filesystem implementation, the 'list of segments' may then be visible in the 
directory while some of the segments are not.

The solution is to sync the index directory after all segments have been 
written. [This 
commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
 shows the idea implemented. I'm fairly certain that I didn't find all the 
places where this may be happening.
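
As a hedged illustration of the proposed ordering (the real change is in the 
commit linked above; paths here are placeholders), Lucene's {{IOUtils}} 
already exposes a directory-capable fsync:

{code}
// Hedged sketch of the intended ordering, not the actual patch: after the
// atomic rename publishes the new segments file, fsync the directory so the
// rename itself is durable. Paths are placeholders.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.apache.lucene.util.IOUtils;

public class DurableRename {
  public static void main(String[] args) throws Exception {
    Path dir = Paths.get("/tmp/index-example");
    Files.createDirectories(dir);
    Path pending = dir.resolve("pending_segments_1");
    Files.write(pending, "segment list".getBytes(StandardCharsets.UTF_8));
    IOUtils.fsync(pending, false);                      // sync the file contents
    Files.move(pending, dir.resolve("segments_1"),
        StandardCopyOption.ATOMIC_MOVE);                // publish atomically
    IOUtils.fsync(dir, true);                           // sync the directory entry
  }
}
{code}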



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-08 Thread Nikolay Martynov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Martynov updated SOLR-11625:

Description: 
This has been observed in the wild:

{noformat}
2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
:java.nio.channels.ClosedByInterruptException
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
at 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
at 
org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)

2017-11-07 02:35:46.912 INFO  
(OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
index directories to clean-up under 
/opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
{noformat}

After this, Solr cannot start, claiming that some files that are supposed to 
exist in the index do not exist. On one occasion we observed the segments file 
not being present.

We were able to trace this problem to {{SolrCore.cleanupOldIndexDirectories}} 
using the wrong index directory as the current index, because 
{{SolrCore.getNewIndexDir}} could not read the proper index directory: the 
reading code received an interruption exception.

[This 
change|https://github.com/mar-kolya/lucene-solr/commit/8967367edd2b8b5ed072876f27051613e3425100]
 seems to address the problem. But it should be said that this is more of a 
hot patch than a proper fix.

  was:
This has been observed in the wild:

{noformat}
2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
:java.nio.channels.ClosedByInterruptException
at 

[jira] [Created] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-08 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-11625:
---

 Summary: Solr may remove live index on Solr shutdown
 Key: SOLR-11625
 URL: https://issues.apache.org/jira/browse/SOLR-11625
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.1
Reporter: Nikolay Martynov


This has been observed in the wild:

{noformat}
2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
:java.nio.channels.ClosedByInterruptException
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
at 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
at 
org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)

2017-11-07 02:35:46.912 INFO  
(OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
index directories to clean-up under 
/opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false
{noformat}

After this, Solr cannot start, claiming that some files that are supposed to 
exist in the index do not exist. On one occasion we observed the segments file 
not being present.

We were able to trace this problem to {{SolrCore.cleanupOldIndexDirectories}} 
using the wrong index directory as the current index, because 
{{SolrCore.getNewIndexDir}} could not read the proper index directory: the 
reading code received an interruption exception.

[This 
change|https://github.com/mar-kolya/lucene-solr/commit/8967367edd2b8b5ed072876f27051613e3425100]
 seems to address the problem. But it should be said that this is more of a 
hot patch than a proper fix.
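
A hedged, simplified sketch of the idea behind the linked hot patch (the names 
below are illustrative, not the actual SolrCore internals): if the current 
index directory cannot be determined, skip the cleanup instead of guessing:

{code}
// Hedged, simplified sketch of the hot-patch idea: if resolving the current
// index directory fails (for example because the read was interrupted), skip
// cleanup rather than risk treating a live index as old. Names here are
// illustrative, not the actual SolrCore internals.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class SafeIndexCleanup {
  /** Hypothetical stand-in for SolrCore.getNewIndexDir(). */
  static Path resolveCurrentIndexDir(Path dataDir) throws IOException {
    return dataDir.resolve("index"); // the real code reads index.properties
  }

  static void cleanupOldIndexDirectories(Path dataDir) {
    Path current;
    try {
      current = resolveCurrentIndexDir(dataDir);
    } catch (IOException e) {
      return; // leaving stale dirs behind is safe; deleting a live index is not
    }
    try (Stream<Path> entries = Files.list(dataDir)) {
      entries.filter(Files::isDirectory)
          .filter(p -> p.getFileName().toString().startsWith("index."))
          .filter(p -> !p.equals(current))
          .forEach(p -> System.out.println("would delete old index dir: " + p));
    } catch (IOException ignored) {
      // nothing to clean up
    }
  }
}
{code}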



--
This message was sent by Atlassian JIRA

[jira] [Created] (SOLR-10442) ExtendedDismaxQParser (edismax) makes pf* require search term exactly

2017-04-06 Thread Nikolay Martynov (JIRA)
Nikolay Martynov created SOLR-10442:
---

 Summary: ExtendedDismaxQParser (edismax) makes pf* require search 
term exactly
 Key: SOLR-10442
 URL: https://issues.apache.org/jira/browse/SOLR-10442
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 6.5
Reporter: Nikolay Martynov


Request like:
{code}
"params":{
  "q": "cat AND dog",
  "q.op": "AND",
  "defType":"edismax",
  "qf":"description",
  "pf2":"description"
}
{code}
produces query like this:
{code}
"parsedquery_toString":"+(+(description.en:cat) +(description.en:dog)) 
(+(description.en:\"cat dog\"))"
{code}

Solr 4.6.1 produces different parsing of this query:
{code}
"parsedquery_toString": "+(+(description.en:cat) +(description.en:dog)) 
(description.en:\"cat dog\")",
{code}

Replacing {{q.op=AND}} with {{q.op=OR}} in newer Solr produces the same query 
as old Solr, even though this change would not be expected to make a 
difference.

This issue is probably related to SOLR-8812 - it looks like just one more case 
of the same problem. That would also mean the change occurred in the version 
range specified there - unfortunately I am not able to test that.

This change in behaviour does not seem intended: introducing pf2 now searches 
for documents that must contain the 'cat dog' phrase instead of just boosting 
such documents.

Please let me know if more information is required.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org