[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3944 - Still Failing!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3944/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 3467 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre/bin/java 
-XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/heapdumps 
-ea -esa -Dtests.prefix=tests -Dtests.seed=A38783F3F91753B3 -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/kuromoji/test/temp
 -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/clover/db
 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX 
-Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/kuromoji/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager -classpath 

[jira] [Commented] (SOLR-10424) /update/json/docs is swallowing all fields

2017-04-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956280#comment-15956280
 ] 

Noble Paul commented on SOLR-10424:
---

I guess your {{/update/json/docs}} endpoint is configured with {{mapUniqueKeyOnly=true}}.
SOLR-8240 is the same issue, I think.
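
For reference, this is roughly what such a configuration looks like in {{solrconfig.xml}} (a sketch only; the exact defaults shipped with your configset may differ):
{code:xml}
<!-- Sketch: initParams that would produce the reported behavior, i.e. only
     the unique key is mapped and the raw JSON is kept in a _src_ field. -->
<initParams path="/update/json/docs">
  <lst name="defaults">
    <str name="mapUniqueKeyOnly">true</str>
    <str name="srcField">_src_</str>
  </lst>
</initParams>
{code}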

> /update/json/docs is swallowing all fields
> 
>
> Key: SOLR-10424
> URL: https://issues.apache.org/jira/browse/SOLR-10424
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, master (7.0)
>Reporter: Hoss Man
>
> I'm not sure when/how exactly this broke, but sending a list of documents to 
> {{/update/json/docs}} is currently useless -- regardless of what your 
> documents contain, all you get is 3 fields: {{id}}, {{\_version\_}}, and a 
> {{\_src\_}} field containing your original JSON, but none of the fields you 
> specified are added.
> Steps to reproduce...
> {noformat}
> git co releases/lucene-solr/6.5.0
> ...
> ant clean && cd solr && ant server
> ...
> bin/solr -e techproducts
> ...
> curl 'http://localhost:8983/solr/techproducts/update/json/docs?commit=true' 
> --data-binary @example/exampledocs/books.json -H 
> 'Content-type:application/json'
> ...
> curl 'http://localhost:8983/solr/techproducts/query?q=id:978-1933988177'
> {
>   "responseHeader":{
> "status":0,
> "QTime":5,
> "params":{
>   "q":"id:978-1933988177"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"978-1933988177",
> "_src_":"{\n\"id\" : \"978-1933988177\",\n\"cat\" : 
> [\"book\",\"paperback\"],\n\"name\" : \"Lucene in Action, Second 
> Edition\",\n\"author\" : \"Michael McCandless\",\n\"sequence_i\" : 
> 1,\n\"genre_s\" : \"IT\",\n\"inStock\" : true,\n\"price\" : 
> 30.50,\n\"pages_i\" : 475\n  }",
> "_version_":1563794703530328065}]
>   }}
> {noformat}
> Compare with using {{/update/json}} ...
> {noformat}
> curl 'http://localhost:8983/solr/techproducts/update/json?commit=true' 
> --data-binary @example/exampledocs/books.json -H 
> 'Content-type:application/json'
> ...
> curl 'http://localhost:8983/solr/techproducts/query?q=id:978-1933988177'
> {
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:978-1933988177"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"978-1933988177",
> "cat":["book",
>   "paperback"],
> "name":"Lucene in Action, Second Edition",
> "author":"Michael McCandless",
> "author_s":"Michael McCandless",
> "sequence_i":1,
> "sequence_pi":1,
> "genre_s":"IT",
> "inStock":true,
> "price":30.5,
> "price_c":"30.5,USD",
> "pages_i":475,
> "pages_pi":475,
> "_version_":1563794766373584896}]
>   }}
> {noformat}
> According to the ref-guide, the only diff between these two endpoints should 
> be that {{/update/json/docs}} defaults {{json.command=false}} ... but since 
> the top level JSON structure in books.json is a list ({{"[ ... ]"}}), that 
> shouldn't matter, because that's not the Solr JSON command syntax.
> 
> If you try to send a singular JSON document to {{/update/json/docs}}, you get 
> the same problem...
> {noformat}
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"id":"HOSS","popularity":42}' 
> 'http://localhost:8983/solr/techproducts/update/json/docs?commit=true'
> ...
> curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'{
>   "responseHeader":{
> "status":0,
> "QTime":0,
> "params":{
>   "q":"id:HOSS"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"HOSS",
> "_src_":"{\"id\":\"HOSS\",\"popularity\":42}",
> "_version_":1563795188162232320}]
>   }}
> {noformat}
> ...even though the same JSON works fine to 
> {{/update/json?json.command=false}} ...
> {noformat}
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"id":"HOSS","popularity":42}' 
> 'http://localhost:8983/solr/techproducts/update/json?commit=true&json.command=false'
> ...
> curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'{
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "q":"id:HOSS"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"HOSS",
> "popularity":42,
> "_version_":1563795262581768192}]
>   }}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10423) ShingleFilter causes overly restrictive queries to be produced

2017-04-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10423:
--
Affects Version/s: 6.5

> ShingleFilter causes overly restrictive queries to be produced
> --
>
> Key: SOLR-10423
> URL: https://issues.apache.org/jira/browse/SOLR-10423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.5
>Reporter: Steve Rowe
>
> When {{sow=false}} and {{ShingleFilter}} is included in the query analyzer, 
> {{QueryBuilder}} produces queries that inappropriately require sequential 
> terms.  E.g. the query "A B C" produces {{(+A_B +B_C) A_B_C}} when the query 
> analyzer includes {{<filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="false" tokenSeparator="_"/>}}.
> Aman Deep Singh reported this problem on the solr-user list. From 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3ccanegtx9bwbpwqc-cxieac7qsas7x2tgzovomy5ztiagco1p...@mail.gmail.com%3e]:
> {quote}
> I was trying to use the shingle filter but it was not creating the query as
> desirable.
> my schema is
> {noformat}
> <fieldType name="nameShingle" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>     <filter class="solr.ShingleFilterFactory" maxShingleSize="4"/>
>   </analyzer>
> </fieldType>
> {noformat}
> my solr query is
> {noformat}
> http://localhost:8983/solr/productCollection/select?
>  defType=edismax
> &debugQuery=true
> &q=one%20plus%20one%20four
> &qf=nameShingle
> &sow=false
> &wt=xml
> {noformat}
> and it was creating the parsed query as
> {noformat}
> <str name="parsedquery">
> (+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one
> +nameShingle:one four))) DisjunctionMaxQuery(((+nameShingle:one plus
> +nameShingle:plus one four))) DisjunctionMaxQuery(((+nameShingle:one plus one 
> +nameShingle:one four))) DisjunctionMaxQuery((nameShingle:one plus one 
> four)))~1)/no_coord
> </str>
> <str name="parsedquery_toString">
> *++nameShingle:one plus +nameShingle:plus one +nameShingle:one four))
> ((+nameShingle:one plus +nameShingle:plus one four)) ((+nameShingle:one
> plus one +nameShingle:one four)) (nameShingle:one plus one four))~1)*
> </str>
> {noformat}
> So the token creation is perfect, but the query is using the boolean + 
> operator, which causes the problem: if I have a document with the name 
> "one plus one", according to the shingles it should match, since its tokens 
> will be ("one plus", "one plus one", "plus one").
> I have tried using q.op and played around with mm as well, but nothing 
> gives me the correct response.
> Any idea how I can fetch that document even if the document is missing a 
> token?
> My expected response would be getting the document "one plus one" even if the 
> user query has an additional term like "one plus one two" and so on.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10423) ShingleFilter causes overly restrictive queries to be produced

2017-04-04 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10423:
--
Component/s: query parsers

> ShingleFilter causes overly restrictive queries to be produced
> --
>
> Key: SOLR-10423
> URL: https://issues.apache.org/jira/browse/SOLR-10423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.5
>Reporter: Steve Rowe
>
> When {{sow=false}} and {{ShingleFilter}} is included in the query analyzer, 
> {{QueryBuilder}} produces queries that inappropriately require sequential 
> terms.  E.g. the query "A B C" produces {{(+A_B +B_C) A_B_C}} when the query 
> analyzer includes {{<filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="false" tokenSeparator="_"/>}}.
> Aman Deep Singh reported this problem on the solr-user list. From 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3ccanegtx9bwbpwqc-cxieac7qsas7x2tgzovomy5ztiagco1p...@mail.gmail.com%3e]:
> {quote}
> I was trying to use the shingle filter but it was not creating the query as
> desirable.
> my schema is
> {noformat}
> <fieldType name="nameShingle" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>     <filter class="solr.ShingleFilterFactory" maxShingleSize="4"/>
>   </analyzer>
> </fieldType>
> {noformat}
> my solr query is
> {noformat}
> http://localhost:8983/solr/productCollection/select?
>  defType=edismax
> &debugQuery=true
> &q=one%20plus%20one%20four
> &qf=nameShingle
> &sow=false
> &wt=xml
> {noformat}
> and it was creating the parsed query as
> {noformat}
> <str name="parsedquery">
> (+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one
> +nameShingle:one four))) DisjunctionMaxQuery(((+nameShingle:one plus
> +nameShingle:plus one four))) DisjunctionMaxQuery(((+nameShingle:one plus one 
> +nameShingle:one four))) DisjunctionMaxQuery((nameShingle:one plus one 
> four)))~1)/no_coord
> </str>
> <str name="parsedquery_toString">
> *++nameShingle:one plus +nameShingle:plus one +nameShingle:one four))
> ((+nameShingle:one plus +nameShingle:plus one four)) ((+nameShingle:one
> plus one +nameShingle:one four)) (nameShingle:one plus one four))~1)*
> </str>
> {noformat}
> So the token creation is perfect, but the query is using the boolean + 
> operator, which causes the problem: if I have a document with the name 
> "one plus one", according to the shingles it should match, since its tokens 
> will be ("one plus", "one plus one", "plus one").
> I have tried using q.op and played around with mm as well, but nothing 
> gives me the correct response.
> Any idea how I can fetch that document even if the document is missing a 
> token?
> My expected response would be getting the document "one plus one" even if the 
> user query has an additional term like "one plus one two" and so on.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10425) PointFields ignore indexed="false"

2017-04-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956174#comment-15956174
 ] 

Tomás Fernández Löbbe commented on SOLR-10425:
--

This is not intentional. My intention with PointFields was always to make them 
look exactly the same as TrieFields to the end user, so features should work 
the same way. This needs to be fixed. 
What about moving the {{isFieldUsed()}} check to {{createFields(...)}}, and 
then just checking {{indexed=true}} before calling {{createField(...)}}? That 
way the fix goes only into the superclass, and {{createField}} always creates.
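
A rough sketch of that proposal (hypothetical method body; the actual {{PointField}} signatures differ between 6.x and master):
{code:java}
// Sketch only: PointField.createFields() makes the indexed/docValues/stored
// decisions up front, so createField() can unconditionally create the point.
@Override
public List<IndexableField> createFields(SchemaField sf, Object value) {
  List<IndexableField> fields = new ArrayList<>(3);
  if (sf.indexed()) {
    fields.add(createField(sf, value));           // only when indexed="true"
  }
  if (sf.hasDocValues()) {
    fields.add(createDocValuesField(sf, value));  // hypothetical helper
  }
  if (sf.stored()) {
    fields.add(createStoredField(sf, value));     // hypothetical helper
  }
  return fields;
}
{code}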


> PointFields ignore indexed="false"
> --
>
> Key: SOLR-10425
> URL: https://issues.apache.org/jira/browse/SOLR-10425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> (NOTE: description below focuses on {{IntPointField}}, but problem seems to 
> affect all {{PointField}} subclasses)
> There seems to be a disconnect between {{PointField.createFields}} -> 
> {{IntPointField.createField}} -> {{PointField.isFieldUsed}} that results in 
> an {{org.apache.lucene.document.IntPoint}} being created for each field 
> value, even if field is {{indexed="false"}}
> Steps to reproduce...
> {noformat}
> bin/solr -e techproducts
> ...
> curl -X POST -H 'Content-type:application/json' --data-binary '{
>   "add-field":{
>  "name":"hoss_points_check",
>  "type":"pint",
>  "stored":true,
>  "docValues":false,
>  "indexed":false}
> }' http://localhost:8983/solr/techproducts/schema
> ...
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"HOSS","hoss_points_check":42}]' 
> 'http://localhost:8983/solr/techproducts/update/json?commit=true'
> ...
> curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'
> {
>   "responseHeader":{
> "status":0,
> "QTime":3,
> "params":{
>   "q":"id:HOSS"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"HOSS",
> "hoss_points_check":42,
> "_version_":1563795876337418240}]
>   }}
> curl 'http://localhost:8983/solr/techproducts/query?q=hoss_points_check:42'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"hoss_points_check:42"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"HOSS",
> "hoss_points_check":42,
> "_version_":1563795876337418240}]
>   }}
> {noformat}
> Note that I can search on the doc using the  "hoss_points_check" field even 
> though it is {{docValues="false" indexed="false"}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10374) Implement set-policy and remove-policy APIs

2017-04-04 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat reassigned SOLR-10374:
---

Assignee: Cao Manh Dat

> Implement set-policy and remove-policy APIs
> ---
>
> Key: SOLR-10374
> URL: https://issues.apache.org/jira/browse/SOLR-10374
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Cao Manh Dat
>  Labels: autoscaling
> Fix For: master (7.0)
>
>
> Add {{set-policy}} and {{remove-policy}} APIs for adding, updating and 
> deleting autoscaling policies from Zookeeper.
> {code}
> curl -H 'Content-type:application/json' -d '{
>   "set-policy": {
> "default": {
>   "preferences": [
> {
>   "minimize": "replicas",
>   "precision": 3
> },
> {
>   "maximize": "freedisk",
>   "precision": 100
> },
> {
>   "minimize": "cpu",
>   "precision": 10
> }
>   ]
> }
>   }
> }' http://localhost:8983/solr/admin/autoscaling
> {code}
> This issue is only for the CRUD APIs. The actual implementation of these 
> policies will be done in a separate issue.
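
For completeness, a {{remove-policy}} call might look like the sketch below (the payload shape is an assumption extrapolated from the {{set-policy}} example above; the issue itself doesn't show it):
{code}
curl -H 'Content-type:application/json' -d '{
  "remove-policy": "default"
}' http://localhost:8983/solr/admin/autoscaling
{code}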



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10425) PointFields ignore indexed="false"

2017-04-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956133#comment-15956133
 ] 

Hoss Man commented on SOLR-10425:
-


At first glance, I thought maybe this may have been an intentional design 
choice that Tomás made with the mindset of:
* users should not declare a "Point" field unless they definitely wanted a BKD 
Point structure on disk
* the {{indexed}} attribute should only apply to creating the {{Terms}} based 
inverted index files, not the BKD points files

But this doesn't jibe with the existing test configs (such as 
{{schema-point.xml}}), where many point fields explicitly use {{indexed="true"}}.

It's also really important to ensure we have _some_ way to support users who 
want an "integer" type field that is {{stored="true"}} but don't care about it 
being searchable/sortable, w/o wasting disk space -- even once/if we 
deprecate/remove {{TrieIntField}}.




I believe the fix here is pretty straightforward: 
{{IntPointField.createField}} should ignore {{isFieldUsed()}} and just check 
{{field.indexed()}} -- although some other tweaks/null-checks may be needed in 
{{PointField.createFields}}, & obviously we should get a lot more tests of these 
edge cases to smoke those out.
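
A minimal sketch of that idea (hypothetical; the real {{createField}} signature may differ between 6.x and master):
{code:java}
// Sketch only: skip BKD point creation entirely when the field is not indexed.
@Override
public IndexableField createField(SchemaField field, Object value) {
  if (!field.indexed()) {
    return null;  // callers in createFields() must tolerate a null here
  }
  int intValue = (value instanceof Number)
      ? ((Number) value).intValue()
      : Integer.parseInt(value.toString());
  return new IntPoint(field.getName(), intValue);
}
{code}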

i'm hoping [~tomasflobbe] will chime in with a sanity check that i'm not 
missing some major reason why things work this way before i go too deep down 
the rabbit hole of writing new tests.


> PointFields ignore indexed="false"
> --
>
> Key: SOLR-10425
> URL: https://issues.apache.org/jira/browse/SOLR-10425
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> (NOTE: description below focuses on {{IntPointField}}, but problem seems to 
> affect all {{PointField}} subclasses)
> There seems to be a disconnect between {{PointField.createFields}} -> 
> {{IntPointField.createField}} -> {{PointField.isFieldUsed}} that results in 
> an {{org.apache.lucene.document.IntPoint}} being created for each field 
> value, even if field is {{indexed="false"}}
> Steps to reproduce...
> {noformat}
> bin/solr -e techproducts
> ...
> curl -X POST -H 'Content-type:application/json' --data-binary '{
>   "add-field":{
>  "name":"hoss_points_check",
>  "type":"pint",
>  "stored":true,
>  "docValues":false,
>  "indexed":false}
> }' http://localhost:8983/solr/techproducts/schema
> ...
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '[{"id":"HOSS","hoss_points_check":42}]' 
> 'http://localhost:8983/solr/techproducts/update/json?commit=true'
> ...
> curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'
> {
>   "responseHeader":{
> "status":0,
> "QTime":3,
> "params":{
>   "q":"id:HOSS"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"HOSS",
> "hoss_points_check":42,
> "_version_":1563795876337418240}]
>   }}
> curl 'http://localhost:8983/solr/techproducts/query?q=hoss_points_check:42'
> {
>   "responseHeader":{
> "status":0,
> "QTime":2,
> "params":{
>   "q":"hoss_points_check:42"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "id":"HOSS",
> "hoss_points_check":42,
> "_version_":1563795876337418240}]
>   }}
> {noformat}
> Note that I can search on the doc using the  "hoss_points_check" field even 
> though it is {{docValues="false" indexed="false"}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10425) PointFields ignore indexed="false"

2017-04-04 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10425:
---

 Summary: PointFields ignore indexed="false"
 Key: SOLR-10425
 URL: https://issues.apache.org/jira/browse/SOLR-10425
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man



(NOTE: description below focuses on {{IntPointField}}, but problem seems to 
affect all {{PointField}} subclasses)

There seems to be a disconnect between {{PointField.createFields}} -> 
{{IntPointField.createField}} -> {{PointField.isFieldUsed}} that results in an 
{{org.apache.lucene.document.IntPoint}} being created for each field value, 
even if field is {{indexed="false"}}

Steps to reproduce...

{noformat}
bin/solr -e techproducts
...
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field":{
 "name":"hoss_points_check",
 "type":"pint",
 "stored":true,
 "docValues":false,
 "indexed":false}
}' http://localhost:8983/solr/techproducts/schema
...
curl -X POST -H 'Content-type:application/json' --data-binary 
'[{"id":"HOSS","hoss_points_check":42}]' 
'http://localhost:8983/solr/techproducts/update/json?commit=true'
...
curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'
{
  "responseHeader":{
"status":0,
"QTime":3,
"params":{
  "q":"id:HOSS"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"HOSS",
"hoss_points_check":42,
"_version_":1563795876337418240}]
  }}
curl 'http://localhost:8983/solr/techproducts/query?q=hoss_points_check:42'
{
  "responseHeader":{
"status":0,
"QTime":2,
"params":{
  "q":"hoss_points_check:42"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"HOSS",
"hoss_points_check":42,
"_version_":1563795876337418240}]
  }}
{noformat}

Note that I can search on the doc using the  "hoss_points_check" field even 
though it is {{docValues="false" indexed="false"}}




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10424) /update/json/docs is swallowing all fields

2017-04-04 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10424:
---

 Summary: /update/json/docs is swallowing all fields
 Key: SOLR-10424
 URL: https://issues.apache.org/jira/browse/SOLR-10424
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.5, master (7.0)
Reporter: Hoss Man



I'm not sure when/how exactly this broke, but sending a list of documents to 
{{/update/json/docs}} is currently useless -- regardless of what your documents 
contain, all you get is 3 fields: {{id}}, {{\_version\_}}, and a {{\_src\_}} 
field containing your original JSON, but none of the fields you specified are 
added.

Steps to reproduce...

{noformat}
git co releases/lucene-solr/6.5.0
...
ant clean && cd solr && ant server
...
bin/solr -e techproducts
...
curl 'http://localhost:8983/solr/techproducts/update/json/docs?commit=true' 
--data-binary @example/exampledocs/books.json -H 'Content-type:application/json'
...

curl 'http://localhost:8983/solr/techproducts/query?q=id:978-1933988177'
{
  "responseHeader":{
"status":0,
"QTime":5,
"params":{
  "q":"id:978-1933988177"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"978-1933988177",
"_src_":"{\n\"id\" : \"978-1933988177\",\n\"cat\" : 
[\"book\",\"paperback\"],\n\"name\" : \"Lucene in Action, Second 
Edition\",\n\"author\" : \"Michael McCandless\",\n\"sequence_i\" : 1,\n 
   \"genre_s\" : \"IT\",\n\"inStock\" : true,\n\"price\" : 30.50,\n
\"pages_i\" : 475\n  }",
"_version_":1563794703530328065}]
  }}
{noformat}

Compare with using {{/update/json}} ...

{noformat}
curl 'http://localhost:8983/solr/techproducts/update/json?commit=true' 
--data-binary @example/exampledocs/books.json -H 'Content-type:application/json'
...
curl 'http://localhost:8983/solr/techproducts/query?q=id:978-1933988177'
{
  "responseHeader":{
"status":0,
"QTime":0,
"params":{
  "q":"id:978-1933988177"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"978-1933988177",
"cat":["book",
  "paperback"],
"name":"Lucene in Action, Second Edition",
"author":"Michael McCandless",
"author_s":"Michael McCandless",
"sequence_i":1,
"sequence_pi":1,
"genre_s":"IT",
"inStock":true,
"price":30.5,
"price_c":"30.5,USD",
"pages_i":475,
"pages_pi":475,
"_version_":1563794766373584896}]
  }}
{noformat}

According to the ref-guide, the only diff between these two endpoints should be 
that {{/update/json/docs}} defaults {{json.command=false}} ... but since the 
top level JSON structure in books.json is a list ({{"[ ... ]"}}), that shouldn't 
matter, because that's not the Solr JSON command syntax.



If you try to send a singular JSON document to {{/update/json/docs}}, you get 
the same problem...

{noformat}
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"id":"HOSS","popularity":42}' 
'http://localhost:8983/solr/techproducts/update/json/docs?commit=true'
...
curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'{
  "responseHeader":{
"status":0,
"QTime":0,
"params":{
  "q":"id:HOSS"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"HOSS",
"_src_":"{\"id\":\"HOSS\",\"popularity\":42}",
"_version_":1563795188162232320}]
  }}
{noformat}

...even though the same JSON works fine to {{/update/json?json.command=false}} 
...

{noformat}
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"id":"HOSS","popularity":42}' 
'http://localhost:8983/solr/techproducts/update/json?commit=true&json.command=false'
...
curl 'http://localhost:8983/solr/techproducts/query?q=id:HOSS'{
  "responseHeader":{
"status":0,
"QTime":1,
"params":{
  "q":"id:HOSS"}},
  "response":{"numFound":1,"start":0,"docs":[
  {
"id":"HOSS",
"popularity":42,
"_version_":1563795262581768192}]
  }}
{noformat}





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10423) ShingleFilter causes overly restrictive queries to be produced

2017-04-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955912#comment-15955912
 ] 

Steve Rowe edited comment on SOLR-10423 at 4/4/17 11:46 PM:


I think the fix for this problem is to expose 
{{QueryBuilder.setEnableGraphQueries()}} on Solr field types, in the same way 
that the {{autoGeneratePhraseQueries}} option is now exposed.
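
For illustration, such an option might be declared on a field type like this (a sketch; {{enableGraphQueries}} is a hypothetical attribute name here, mirroring how {{autoGeneratePhraseQueries}} is declared, and the analyzer chain mirrors the one described above):
{code:xml}
<!-- Sketch: a shingled field type that opts out of graph query construction. -->
<fieldType name="shingled_text" class="solr.TextField"
           positionIncrementGap="100" enableGraphQueries="false">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="3"
            outputUnigrams="false" tokenSeparator="_"/>
  </analyzer>
</fieldType>
{code}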

Since 6.5 is the first version of Solr that included the {{sow=false}} option, 
it previously wasn't possible to construct queries using ShingleFilter, because 
Solr's query parser always split on whitespace before performing analysis, one 
term at a time.

The following Lucene unit test (added to the queryparser module's 
{{TestQueryParser.java}}, after adding a test dependency on the analysis-common 
module), which calls {{QueryBuilder.setEnableGraphQueries(false);}}, succeeds 
for me.  When I change the test to call {{assertQueryEquals()}} (which doesn't 
disable graph queries, which are enabled by default), the test fails with this 
assertion error: {{Query /A B C/ yielded /(+A_B +B_C) A_B_C/, expecting  
/Synonym(A_B A_B_C) B_C/}}.

{code:java}
  public void testShinglesSplitOnWhitespace() throws Exception {
Analyzer a = new Analyzer() {
  @Override protected TokenStreamComponents createComponents(String s) {
Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
false);
ShingleFilter tokenStream = new ShingleFilter(tokenizer, 2, 3);
tokenStream.setTokenSeparator("_");
tokenStream.setOutputUnigrams(false);
return new TokenStreamComponents(tokenizer, tokenStream);
  }
};
boolean oldSplitOnWhitespace = splitOnWhitespace;
splitOnWhitespace = false;
assertQueryEqualsNoGraph("A B C", a, "Synonym(A_B A_B_C) B_C");
splitOnWhitespace = oldSplitOnWhitespace;
  }

  public void assertQueryEqualsNoGraph(String query, Analyzer a, String result) 
throws Exception {
QueryParser parser = getParser(a);
parser.setEnableGraphQueries(false);
Query q = parser.parse(query);
String s = q.toString("field");
if (!s.equals(result)) {
  fail("Query /" + query + "/ yielded /" + s + "/, expecting /" + result + 
"/");
}
  }
{code}


was (Author: steve_rowe):
I think the fix for this problem is to expose 
{{QueryBuilder.setEnableGraphQueries()}} on Solr field types, in the same way 
that the {{autoGeneratePhraseQueries}} option is now exposed.

Since 6.5 is the first version of Solr that included the {{sow=false}} option, 
it wasn't possible to construct queries using ShingleFilter, because Solr's 
query parser always split on whitespace before performing analysis, one term at 
a time.

The following Lucene unit test (added to the queryparser module's 
{{TestQueryParser.java}}, after adding a test dependency on the analysis-common 
module), which calls {{QueryBuilder.setEnableGraphQueries(false);}}, succeeds 
for me.  When I change the test to call {{assertQueryEquals()}} (which doesn't 
disable graph queries, which are enabled by default), the test fails with this 
assertion error: {{Query /A B C/ yielded /(+A_B +B_C) A_B_C/, expecting  
/Synonym(A_B A_B_C) B_C/}}.

{code:java}
  public void testShinglesSplitOnWhitespace() throws Exception {
Analyzer a = new Analyzer() {
  @Override protected TokenStreamComponents createComponents(String s) {
Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
false);
ShingleFilter tokenStream = new ShingleFilter(tokenizer, 2, 3);
tokenStream.setTokenSeparator("_");
tokenStream.setOutputUnigrams(false);
return new TokenStreamComponents(tokenizer, tokenStream);
  }
};
boolean oldSplitOnWhitespace = splitOnWhitespace;
splitOnWhitespace = false;
assertQueryEqualsNoGraph("A B C", a, "Synonym(A_B A_B_C) B_C");
splitOnWhitespace = oldSplitOnWhitespace;
  }

  public void assertQueryEqualsNoGraph(String query, Analyzer a, String result) 
throws Exception {
QueryParser parser = getParser(a);
parser.setEnableGraphQueries(false);
Query q = parser.parse(query);
String s = q.toString("field");
if (!s.equals(result)) {
  fail("Query /" + query + "/ yielded /" + s + "/, expecting /" + result + 
"/");
}
  }
{code}

> ShingleFilter causes overly restrictive queries to be produced
> --
>
> Key: SOLR-10423
> URL: https://issues.apache.org/jira/browse/SOLR-10423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>
> When {{sow=false}} and {{ShingleFilter}} is included in the query analyzer, 
> {{QueryBuilder}} produces queries that inappropriately require sequential 
> terms.  E.g. the query "A B C" produces {{(+A_B +B_C) A_B_C}} when the query 
> 

[jira] [Commented] (SOLR-9959) SolrInfoMBean-s category and hierarchy cleanup

2017-04-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956053#comment-15956053
 ] 

Hoss Man commented on SOLR-9959:



(NOTE: comments below are in a mishmash order as i jumped around the code, so 
they can be somewhat redundant as i re-thought about diff concepts while 
looking at diff classes)

* SolrJmxReporter
** should these really be warnings, or just info?
*** {{log.warn("No serviceUrl or agentId was configured, using first 
MBeanServer.", mBeanServer);}}
*** {{log.warn("No JMX server found. Not exposing Solr metrics via JMX.");}}
*** in the prior code, a warning might have made sense -- but in the new 
code, it seems like it should only be a warning if agentId is specified, or if 
the serviceUrl can't be created?
** it didn't occur to me the last time i reviewed this code, but if it's 
possible for people to configure multiple SolrJmxReporter instances in 
solr.xml, then we should almost certainly support {{rootName}} like 
JmxMonitorMap did, otherwise won't the multiple SolrJmxReporter instances 
overwrite each other if they use the same MBeanServer? (see the config sketch 
at the end of these SolrJmxReporter notes)
*** see related comments below regarding JmxObjectNameFactory
** as for *why* a person might want to configure multiple SolrJmxReporter 
instances, that goes back to my previous question about if/why we should 
support {{serviceUrl}} in SolrJmxReporter...
*** since SolrJmxReporter is now at the container level, the only value i see 
is if there's a way to configure Reporters to filter which collections they 
expose
*** so people might configure multiple SolrJmxReporter instances w/diff 
serviceUrls that expose the metrics for diff solr collections to diff end 
consumers
*** is this currently possible?
** NOTE: there is some sort of bug -- i didn't trace down the root cause -- 
causing multiple SolrJmxReporter instances to be inited on startup,
*** run {{bin/solr -e techproducts -Dcom.sun.management.jmxremote}} and very 
early in the logs you'll see...{noformat}
INFO  - 2017-04-04 22:46:40.787; [   ] org.apache.solr.core.SolrXmlConfig; 
Loading container configuration from 
/home/hossman/lucene/dev/solr/example/techproducts/solr/solr.xml
INFO  - 2017-04-04 22:46:40.833; [   ] org.apache.solr.core.SolrXmlConfig; 
MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@66d3c617, but no JMX 
reporters were configured - adding default JMX reporter.
WARN  - 2017-04-04 22:46:41.252; [   ] 
org.apache.solr.metrics.reporters.SolrJmxReporter; No serviceUrl or agentId was 
configured, using first MBeanServer.
INFO  - 2017-04-04 22:46:41.269; [   ] 
org.apache.solr.metrics.reporters.SolrJmxReporter; JMX monitoring enabled at 
server: com.sun.jmx.mbeanserver.JmxMBeanServer@66d3c617
WARN  - 2017-04-04 22:46:41.269; [   ] 
org.apache.solr.metrics.reporters.SolrJmxReporter; No serviceUrl or agentId was 
configured, using first MBeanServer.
INFO  - 2017-04-04 22:46:41.270; [   ] 
org.apache.solr.metrics.reporters.SolrJmxReporter; JMX monitoring enabled at 
server: com.sun.jmx.mbeanserver.JmxMBeanServer@66d3c617
WARN  - 2017-04-04 22:46:41.270; [   ] 
org.apache.solr.metrics.reporters.SolrJmxReporter; No serviceUrl or agentId was 
configured, using first MBeanServer.
INFO  - 2017-04-04 22:46:41.276; [   ] 
org.apache.solr.metrics.reporters.SolrJmxReporter; JMX monitoring enabled at 
server: com.sun.jmx.mbeanserver.JmxMBeanServer@66d3c617
{noformat}
*** and later in the logs, once the techproducts core is added...{noformat}
WARN  - 2017-04-04 22:46:43.608; [   x:techproducts] 
org.apache.solr.metrics.reporters.SolrJmxReporter; No serviceUrl or agentId was 
configured, using first MBeanServer.
INFO  - 2017-04-04 22:46:43.609; [   x:techproducts] 
org.apache.solr.metrics.reporters.SolrJmxReporter; JMX monitoring enabled at 
server: com.sun.jmx.mbeanserver.JmxMBeanServer@66d3c617
{noformat}
*** isn't there only supposed to be *ONE* (implicit) SolrJmxReporter? ... and 
why would a/each new core cause a new SolrJmxReporter to be created/init'ed?
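
For reference, a sketch of the kind of multi-reporter solr.xml config being discussed ({{rootName}} is the proposed, not-yet-existing param; the {{serviceUrl}} values are made up):
{code:xml}
<!-- Sketch: two JMX reporters, each claiming its own prefix/hierarchy. -->
<metrics>
  <reporter name="jmxA" class="org.apache.solr.metrics.reporters.SolrJmxReporter">
    <str name="rootName">solrA</str>
    <str name="serviceUrl">service:jmx:rmi:///jndi/rmi://localhost:18983/solrA</str>
  </reporter>
  <reporter name="jmxB" class="org.apache.solr.metrics.reporters.SolrJmxReporter">
    <str name="rootName">solrB</str>
    <str name="serviceUrl">service:jmx:rmi:///jndi/rmi://localhost:18984/solrB</str>
  </reporter>
</metrics>
{code}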


* JmxObjectNameFactory
** NOTE: i realize some of these comments aren't specific to changes on this 
branch, but i noticed them while reviewing the JMX stuff a bit more...
** I'm confused as to why the "reporter" name is being included in all the 
ObjectNames?
*** it's going to be the same for every bean reported by the (same) 
SolrJmxReporter (so practically speaking, with the default implicit 
SolrJmxReporter instance there's this weird "default" somewhere in the 
hierarchical drill down of every MBean)
*** if we expect there to be multiple SolrJmxReporter instances (thus needing 
to disambiguate the beans), then that's exactly what the point of {{rootName}} 
was in the existing code -- and giving each reporter its own prefix/hierarchy 
in the MBean server seems better than having their beans intermixed and needing 
to look for the "reporter" attribute of the name to disambiguate
 so i would suggest either we should add 

[jira] [Commented] (SOLR-10423) ShingleFilter causes overly restrictive queries to be produced

2017-04-04 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955912#comment-15955912
 ] 

Steve Rowe commented on SOLR-10423:
---

I think the fix for this problem is to expose 
{{QueryBuilder.setEnableGraphQueries()}} on Solr field types, in the same way 
that the {{autoGeneratePhraseQueries}} option is now exposed.

Since 6.5 is the first version of Solr that included the {{sow=false}} option, 
it wasn't possible to construct queries using ShingleFilter, because Solr's 
query parser always split on whitespace before performing analysis, one term at 
a time.

The following Lucene unit test (added to the queryparser module's 
{{TestQueryParser.java}}, after adding a test dependency on the analysis-common 
module), which calls {{QueryBuilder.setEnableGraphQueries(false);}}, succeeds 
for me.  When I change the test to call {{assertQueryEquals()}} (which doesn't 
disable graph queries, which are enabled by default), the test fails with this 
assertion error: {{Query /A B C/ yielded /(+A_B +B_C) A_B_C/, expecting  
/Synonym(A_B A_B_C) B_C/}}.

{code:java}
  public void testShinglesSplitOnWhitespace() throws Exception {
Analyzer a = new Analyzer() {
  @Override protected TokenStreamComponents createComponents(String s) {
Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, 
false);
ShingleFilter tokenStream = new ShingleFilter(tokenizer, 2, 3);
tokenStream.setTokenSeparator("_");
tokenStream.setOutputUnigrams(false);
return new TokenStreamComponents(tokenizer, tokenStream);
  }
};
boolean oldSplitOnWhitespace = splitOnWhitespace;
splitOnWhitespace = false;
assertQueryEqualsNoGraph("A B C", a, "Synonym(A_B A_B_C) B_C");
splitOnWhitespace = oldSplitOnWhitespace;
  }

  public void assertQueryEqualsNoGraph(String query, Analyzer a, String result) 
throws Exception {
QueryParser parser = getParser(a);
parser.setEnableGraphQueries(false);
Query q = parser.parse(query);
String s = q.toString("field");
if (!s.equals(result)) {
  fail("Query /" + query + "/ yielded /" + s + "/, expecting /" + result + 
"/");
}
  }
{code}

> ShingleFilter causes overly restrictive queries to be produced
> --
>
> Key: SOLR-10423
> URL: https://issues.apache.org/jira/browse/SOLR-10423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>
> When {{sow=false}} and {{ShingleFilter}} is included in the query analyzer, 
> {{QueryBuilder}} produces queries that inappropriately require sequential 
> terms.  E.g. the query "A B C" produces {{(+A_B +B_C) A_B_C}} when the query 
> analyzer includes {{<filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="false" tokenSeparator="_"/>}}.
> Aman Deep Singh reported this problem on the solr-user list. From 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3ccanegtx9bwbpwqc-cxieac7qsas7x2tgzovomy5ztiagco1p...@mail.gmail.com%3e]:
> {quote}
> I was trying to use the shingle filter but it was not creating the query as
> desirable.
> my schema is
> {noformat}
> <fieldType name="nameShingle" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>     <filter class="solr.ShingleFilterFactory" maxShingleSize="4"/>
>   </analyzer>
> </fieldType>
> {noformat}
> my solr query is
> {noformat}
> http://localhost:8983/solr/productCollection/select?
>  defType=edismax
> &debugQuery=true
> &q=one%20plus%20one%20four
> &qf=nameShingle
> &sow=false
> &wt=xml
> {noformat}
> and it was creating the parsed query as
> {noformat}
> <str name="parsedquery">
> (+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one
> +nameShingle:one four))) DisjunctionMaxQuery(((+nameShingle:one plus
> +nameShingle:plus one four))) DisjunctionMaxQuery(((+nameShingle:one plus one 
> +nameShingle:one four))) DisjunctionMaxQuery((nameShingle:one plus one 
> four)))~1)/no_coord
> </str>
> <str name="parsedquery_toString">
> *++nameShingle:one plus +nameShingle:plus one +nameShingle:one four))
> ((+nameShingle:one plus +nameShingle:plus one four)) ((+nameShingle:one
> plus one +nameShingle:one four)) (nameShingle:one plus one four))~1)*
> </str>
> {noformat}
> So the token creation is perfect, but the query is using the boolean + 
> operator, which causes the problem: if I have a document with the name 
> "one plus one", according to the shingles it should match, since its tokens 
> will be ("one plus", "one plus one", "plus one").
> I have tried using q.op and played around with mm as well, but nothing 
> gives me the correct response.
> Any idea how I can fetch that document even if the document is missing a 
> token?
> My expected response would be getting the document "one plus one" even if the 
> user query has an additional term like "one plus one two" and so on.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2017-04-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955909#comment-15955909
 ] 

Mikhail Khludnev commented on SOLR-8297:


I've proposed the spec in the ticket description above. Opinions and concerns 
are much appreciated. 

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
> Attachments: SOLR-8297.patch
>
>
> h2. Proposal
> h3. General Idea
> Extend [~shikhasomani]'s range check algorithm to cover most cases
> h3. Join behavior depending on router types of joined collections
> || to\\from ||CompositeId||Implicit||
> ||CompositeId| shard range check, see table below | allow |
> ||Implicit| allow | shard to shard |
> h3. CompositeId to CompositeId join behaviour for certain number of shards
>  
> || to\\from ||single||>1||
> ||single| allow (as is) | allow (range check) |
> ||>1| allow (as is) | per shard range check |
> h3. Rules from the tables above
> * joining from/to CompositeId and Implicit is blindly allowed; it picks up 
> any collocated replica, because users who do that probably understand what 
> they do.
> * when both sides are Implicit, let's join shards by name, i.e. if a request 
> hits collectionTO_shardY_replica2 at a node, the collocated 
> collectionFROM_shardY_replica* is expected.
> * when both sides are CompositeId
> ** from single shard to single shard - a no-brainer, just needs a collocated 
> replica;
> ** from multiple shards to single shard - all "from" shards (any of its 
> replicas) are picked for joining 
> ** from single shard to multiple shards - existing SOLR-4905 functionality
> ** from multiple to multiple - generic range check algorithm
> ### check that fromField and toField are router.keys in these collections
> ### take shard range for the current "to" collection replica (keep in mind 
> that request is distributed across "to" collection shards)   
> ### enumerate "from" collection shards, find the subset which covers the "to" 
> shard range (this allows handling any number of shards on both sides)
> ### pick up collocated replicas of this "from" shard subset 
> h3. Caveat 
> this is quite sensitive to shard allocation (and/or replica placement), i.e. a 
> failed "from" replica cannot be collocated with the required "to" shard.  
> h2. Initial Description
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2017-04-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-8297:
---
Description: 
h2. Proposal

h3. General Idea
Extend [~shikhasomani]'s range check algorithm to cover most cases

h3. Join behavior depending on router types of joined collections
|| to\\from ||CompositeId||Implicit||
||CompositeId| shard range check, see table below | allow |
||Implicit| allow | shard to shard |

h3. CompositeId to CompositeId join behaviour for certain number of shards
 
|| to\\from ||single||>1||
||single| allow (as is) | allow (range check) |
||>1| allow (as is) | per shard range check |

h3. Rules from the tables above
* joining from/to CompositeId and Implicit is blindly allowed; it picks up any 
collocated replica, because users who do that probably understand what they do.
* when both sides are Implicit, let's join shards by name, i.e. if a request hits 
collectionTO_shardY_replica2 at a node, the collocated 
collectionFROM_shardY_replica* is expected.
* when both sides are CompositeId
** from single shard to single shard - a no-brainer, just needs a collocated replica;
** from multiple shards to single shard - all "from" shards (any of its replicas) 
are picked for joining 
** from single shard to multiple shards - existing SOLR-4905 functionality
** from multiple to multiple - generic range check algorithm (see the sketch 
after this list)
### check that fromField and toField are router.keys in these collections
### take the shard range for the current "to" collection replica (keep in mind that 
the request is distributed across "to" collection shards)   
### enumerate "from" collection shards, find the subset which covers the "to" 
shard range (this allows handling any number of shards on both sides)
### pick up collocated replicas of this "from" shard subset 
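
A rough sketch of that range check (a hypothetical helper; real code would go through Solr's {{DocRouter.Range}}/{{Slice}} APIs, whose exact shape may differ):
{code:java}
// Sketch only: pick the "from" shards whose hash ranges overlap the range
// of the "to" shard this request is running against.
static List<Slice> coveringFromShards(Slice toShard, Collection<Slice> fromShards) {
  DocRouter.Range toRange = toShard.getRange();
  List<Slice> covering = new ArrayList<>();
  for (Slice from : fromShards) {
    DocRouter.Range r = from.getRange();
    // two hash ranges overlap iff neither ends before the other starts
    if (r != null && r.min <= toRange.max && toRange.min <= r.max) {
      covering.add(from);  // any collocated replica of this shard will do
    }
  }
  return covering;
}
{code}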

h3. Caveat 
this is quite sensitive to shard allocation (and/or replica placement), i.e. a 
failed "from" replica cannot be collocated with the required "to" shard.  

h2. Initial Description
Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
Khludnev.
A) exception handling:
The exception "SolrCloud join: multiple shards not yet supported" thrown in the 
function findLocalReplicaForFromIndex of JoinQParserPlugin is not triggered 
correctly: In my use-case, I've a join on a facet.query and when my results are 
only found in 1 shard and the facet.query with the join is querying the last 
replica of the last slice, then the exception is not thrown.
I believe it's better to verify the nr of slices when we want to verify the  
"multiple shards not yet supported" exception (so exception is thrown when 
zkController.getClusterState().getSlices(fromIndex).size()>1).

B) functional enhancement:
I would expect that there is no problem to perform a cross-core join over 
sharded collections when the following conditions are met:
1) both collections are sharded with the same replicationFactor and numShards
2) router.field of the collections is set to the same "key-field" (collection 
of "fromindex" has router.field = "from" field and collection joined to has 
router.field = "to" field)

The router.field setup ensures that documents with the same "key-field" are 
routed to the same node. 
So the combination based on the "key-field" should always be available within 
the same node.

From a user perspective, I believe these assumptions seem to be a "normal" 
use-case in the cross-core join in SolrCloud.

Hope this helps

  was:
Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
Khludnev.
A) exception handling:
The exception "SolrCloud join: multiple shards not yet supported" thrown in the 
function findLocalReplicaForFromIndex of JoinQParserPlugin is not triggered 
correctly: In my use-case, I've a join on a facet.query and when my results are 
only found in 1 shard and the facet.query with the join is querying the last 
replica of the last slice, then the exception is not thrown.
I believe it's better to verify the nr of slices when we want to verify the  
"multiple shards not yet supported" exception (so exception is thrown when 
zkController.getClusterState().getSlices(fromIndex).size()>1).

B) functional enhancement:
I would expect that there is no problem to perform a cross-core join over 
sharded collections when the following conditions are met:
1) both collections are sharded with the same replicationFactor and numShards
2) router.field of the collections is set to the same "key-field" (collection 
of "fromindex" has router.field = "from" field and collection joined to has 
router.field = "to" field)

The router.field setup ensures that documents with the same "key-field" are 
routed to the same node. 
So the combination based on the "key-field" should always be available within 
the same node.

From a user perspective, I believe these assumptions seem to be a "normal" 
use-case in the cross-core join in SolrCloud.

Hope this 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 806 - Still Unstable!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/806/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([6173DB88CFD051A6:3423331A63299E56]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1382)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1075)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1054)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10375) Stored text > 716MB retrieval with StoredFieldVisitor causes out of memory error with document cache

2017-04-04 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955878#comment-15955878
 ] 

Michael Braun commented on SOLR-10375:
--

[~arafalov] Yes, this would have been solved by using large fields. 

[~dsmiley] My question now becomes: what size/length should Solr be expected 
to support for stored string values? I'd imagine making that call instead does 
come at some cost overall.
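
For reference, the allocation math from the description (a quick back-of-the-envelope check, not code from Solr):
{code:java}
// String.getBytes(UTF_8) sizes its output buffer as charCount * maxBytesPerChar.
// For UTF-8, CharsetEncoder.maxBytesPerChar() is 3.0, so a ~716MB char payload
// needs a byte[] bigger than any single array the JVM can allocate.
long chars = 716L * 1024 * 1024;                    // ~750 million chars
long worstCase = chars * 3;                         // 2,252,341,248 bytes
System.out.println(worstCase > Integer.MAX_VALUE);  // true -> OutOfMemoryError
{code}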

> Stored text > 716MB retrieval with StoredFieldVisitor causes out of memory 
> error with document cache
> 
>
> Key: SOLR-10375
> URL: https://issues.apache.org/jira/browse/SOLR-10375
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2.1
> Environment: Java 1.8.121, Linux x64
>Reporter: Michael Braun
>
> Using SolrIndexSearcher.doc(int n, StoredFieldVisitor visitor) - 
> If the document cache has the document, will call visitFromCached, will get 
> an out of memory error because of line 752 of SolrIndexSearcher - 
> visitor.stringField(info, f.stringValue().getBytes(StandardCharsets.UTF_8));
> {code}
>  at java.lang.OutOfMemoryError.<init>()V (OutOfMemoryError.java:48)
>   at java.lang.StringCoding.encode(Ljava/nio/charset/Charset;[CII)[B 
> (StringCoding.java:350)
>   at java.lang.String.getBytes(Ljava/nio/charset/Charset;)[B (String.java:941)
>   at 
> org.apache.solr.search.SolrIndexSearcher.visitFromCached(Lorg/apache/lucene/document/Document;Lorg/apache/lucene/index/StoredFieldVisitor;)V
>  (SolrIndexSearcher.java:685)
>   at 
> org.apache.solr.search.SolrIndexSearcher.doc(ILorg/apache/lucene/index/StoredFieldVisitor;)V
>  (SolrIndexSearcher.java:652)
> {code}
> This is due to the current String.getBytes(Charset) implementation, which 
> allocates the underlying byte array as a function of 
> charArrayLength * maxBytesPerCharacter, which for UTF-8 is 3. 3 * 716MB is 
> over Integer.MAX_VALUE, and the JVM cannot allocate an array larger than 
> that, so an out-of-memory exception is thrown because allocating this much 
> memory for a single array is currently impossible.
> The problem is not present when the document cache is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10423) ShingleFilter causes overly restrictive queries to be produced

2017-04-04 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-10423:
-

 Summary: ShingleFilter causes overly restrictive queries to be 
produced
 Key: SOLR-10423
 URL: https://issues.apache.org/jira/browse/SOLR-10423
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


When {{sow=false}} and {{ShingleFilter}} is included in the query analyzer, 
{{QueryBuilder}} produces queries that inappropriately require sequential 
terms. E.g. the query "A B C" produces {{(+A_B +B_C) A_B_C}} when the query 
analyzer includes a {{ShingleFilter}}.
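
For illustration, a minimal sketch (assuming the Lucene 6.x analysis API, 
min/max shingle sizes of 2/3, and unigram output disabled) of the shingles 
produced for "A B C":

{code}
import java.io.StringReader;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ShingleDemo {
    public static void main(String[] args) throws Exception {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader("A B C"));
        ShingleFilter shingles = new ShingleFilter(tokenizer, 2, 3);
        shingles.setOutputUnigrams(false);
        CharTermAttribute term = shingles.addAttribute(CharTermAttribute.class);
        shingles.reset();
        while (shingles.incrementToken()) {
            System.out.println(term.toString()); // prints: A B, A B C, B C
        }
        shingles.end();
        shingles.close();
    }
}
{code}

The + / MUST clauses the query builder wraps around these shingles are what 
make the produced queries overly restrictive.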

Aman Deep Singh reported this problem on the solr-user list. From 
[http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201703.mbox/%3ccanegtx9bwbpwqc-cxieac7qsas7x2tgzovomy5ztiagco1p...@mail.gmail.com%3e]:

{quote}
I was trying to use the shingle filter, but it was not creating the query as
desired.

my schema is

{noformat}

  



  


{noformat}

my solr query is

{noformat}
http://localhost:8983/solr/productCollection/select?
 defType=edismax
=true
=one%20plus%20one%20four
=nameShingle
=false
=xml
{noformat}

and it was creating the parsed query as

{noformat}

(+(DisjunctionMaxQuery(((+nameShingle:one plus +nameShingle:plus one
+nameShingle:one four))) DisjunctionMaxQuery(((+nameShingle:one plus
+nameShingle:plus one four))) DisjunctionMaxQuery(((+nameShingle:one plus one 
+nameShingle:one four))) DisjunctionMaxQuery((nameShingle:one plus one 
four)))~1)/no_coord
{noformat}

So ideally the token creation is perfect, but the query uses the boolean + 
operator, which causes the problem: if I have a document with the name "one 
plus one", according to the shingles it should be matched, as its tokens will 
be ("one plus", "one plus one", "plus one").

I have tried using q.op and played around with mm as well, but nothing is
giving me the correct response.

Any idea how I can fetch that document even if the query is missing a token?

My expected response is to get the document "one plus one" even if the user 
query has an additional term, like "one plus one two" and so on.
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10422) Consolidate font directories

2017-04-04 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-10422:


Assignee: Cassandra Targett

> Consolidate font directories
> 
>
> Key: SOLR-10422
> URL: https://issues.apache.org/jira/browse/SOLR-10422
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
>
> There are 2 font directories, one used for the HTML and another for the PDF. 
> The directory for the fonts is a parameter, so I think we could consolidate 
> and use only one directory for all fonts.
> (There were 3 directories, but I removed one with 
> https://git1-us-west.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=6472b196372b387a43920781d3b2aad1d1d47544)
> It's not quite related, but maybe a little...the HTML uses Proxima Nova, 
> which may not be open licensed, while the PDF is using Noto Sans, which is 
> Apache licensed. We could further consolidate by changing the HTML to use the 
> same base fonts as the PDF.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10422) Consolidate font directories

2017-04-04 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-10422:


 Summary: Consolidate font directories
 Key: SOLR-10422
 URL: https://issues.apache.org/jira/browse/SOLR-10422
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Priority: Minor


There are 2 font directories, one used for the HTML and another for the PDF. 
The directory for the fonts is a parameter, so I think we could consolidate and 
use only one directory for all fonts.

(There were 3 directories, but I removed one with 
https://git1-us-west.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=6472b196372b387a43920781d3b2aad1d1d47544)

It's not quite related, but maybe a little...the HTML uses Proxima Nova, which 
may not be open licensed, while the PDF is using Noto Sans, which is Apache 
licensed. We could further consolidate by changing the HTML to use the same 
base fonts as the PDF.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_121) - Build # 19313 - Unstable!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19313/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([97604CAB66F8F874:1F347371C804958C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.waitTillNodesActive(LeaderFailureAfterFreshStartTest.java:233)
at 
org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.restartNodes(LeaderFailureAfterFreshStartTest.java:173)
at 
org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.test(LeaderFailureAfterFreshStartTest.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-10298) Reduce size of new Ref Guide PDF

2017-04-04 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955762#comment-15955762
 ] 

Cassandra Targett commented on SOLR-10298:
--

I just pushed some stylistic changes to reduce the size and page length 
further. I replaced the Noto Serif font with Noto Sans (Apache licensed), which 
allowed me to reduce the font size and line-height calculations while 
maintaining readability. I also reduced the line height in code boxes. Together 
these changes reduced the page length to 961 pages (nearly a 100-page 
improvement) and reduced the PDF size further as well (now 9.0MB). There is 
more I can do here; I just wanted to push changes early and often.

> Reduce size of new Ref Guide PDF
> 
>
> Key: SOLR-10298
> URL: https://issues.apache.org/jira/browse/SOLR-10298
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> The new Ref Guide PDF is ~31Mb in size, which is more than 2x the current PDF 
> produced by Confluence (which is 14Mb).
> The asciidoctor-pdf project has a script to optimize the PDF, mostly by 
> scaling down images. When I run this tool on the new PDF, the size is reduced 
> to ~18Mb. (More info on this script: 
> https://github.com/asciidoctor/asciidoctor-pdf#optional-scripts).
> Some of the current image files are very large in size, so I believe that by 
> scaling the images down, we can make the size smaller without adding a step 
> in the build to run the optimize script programmatically (it also has a 
> dependency on GhostScript, so it would be nice to not add another dependency 
> if it can be avoided).
> The new PDF is also about 300 pages longer, but this issue is primarily 
> concerned with file size. However, reducing the number of pages will also 
> make it smaller. A few things that could be tried to reduce the # of pages:
> * Reduce font sizes
> * Increase page margins
> * Review options for when a forced page-break is used and modify if possible



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1276 - Unstable

2017-04-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1276/

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:39888/solr;,   
"node_name":"127.0.0.1:39888_solr",   "state":"active",   
"leader":"true"}, "core_node2":{   
"core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:42936/solr;,   
"node_name":"127.0.0.1:42936_solr",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "realtimeReplicas":"-1"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica2",
  "base_url":"http://127.0.0.1:39888/solr;,
  "node_name":"127.0.0.1:39888_solr",
  "state":"active",
  "leader":"true"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica1",
  "base_url":"http://127.0.0.1:42936/solr;,
  "node_name":"127.0.0.1:42936_solr",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "realtimeReplicas":"-1"}
at 
__randomizedtesting.SeedInfo.seed([FC5DD8A3B5EF4583:AC0840A0ECCEF39E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Resolved] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-04-04 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-10347.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

Thanks Amrit!

> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
> Fix For: master (7.0)
>
> Attachments: screenshot-new-UI.png, screenshot-old-UI.png, 
> SOLR-10347.patch, SOLR-10347.patch
>
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955724#comment-15955724
 ] 

ASF subversion and git services commented on SOLR-10347:


Commit f08889f390765c58a7f44f2ff1052484037ce336 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f08889f ]

SOLR-10347: Remove index level boost support from 'documents' section of the 
admin UI


> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
> Attachments: screenshot-new-UI.png, screenshot-old-UI.png, 
> SOLR-10347.patch, SOLR-10347.patch
>
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: Apache Solr Ref Guide for 6.5 RC0

2017-04-04 Thread Cassandra Targett
It looks like this vote has passed, thanks everyone. I'll start the
release process now.

Cassandra

On Mon, Apr 3, 2017 at 6:36 AM, Shalin Shekhar Mangar
 wrote:
> +1
>
> On Sat, Apr 1, 2017 at 1:23 AM, Cassandra Targett  wrote:
>> Please vote for the first release candidate of the Apache Solr
>> Reference Guide for 6.5.
>>
>> Artifacts are available from:
>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.5-RC0/
>>
>> $ more apache-solr-ref-guide-6.5.pdf.sha1
>> a80b3b776b840c59234b6a416b4908c8af7217d1  apache-solr-ref-guide-6.5.pdf
>>
>> Here's my +1.
>>
>> Thanks,
>> Cassandra
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-04 Thread Rondel Ward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955665#comment-15955665
 ] 

Rondel Ward edited comment on SOLR-9120 at 4/4/17 7:32 PM:
---

I'm seeing this same issue on Solr 6.4.2. However, in my case I also saw this 
error pop up when trying to back up a collection using the CollectionsAPI. 

{quote}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from 
server at http://solrendpoint:8983/solr: Failed to backup 
core=itembuckets_commerce_products_web_index because 
java.nio.file.NoSuchFileException: 
/var/solr/data/itembuckets_commerce_products_web_index/data/index/segments_81
{quote}

*Trace*

{quote}
org.apache.solr.common.SolrException: Could not backup all replicas 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:287)
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:218)
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)
 org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664) 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445) 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
org.eclipse.jetty.server.Server.handle(Server.java:534) 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
java.lang.Thread.run(Thread.java:745)
{quote}

Rebuilding the index and then running the backup call again resolved my issue, 
but now I'm concerned that this issue might be more than just noise in the 
logs. 


was (Author: rondelward):
I'm seeing this same issue on Solr 6.4.2. However, in my case I also saw this 
error pop up when trying to back up a collection using the CollectionsAPI. 

{quote}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from 
server at http://solrendpoint:8983/solr: Failed to backup 
core=itembuckets_commerce_products_web_index because 
java.nio.file.NoSuchFileException: 
/var/solr/data/itembuckets_commerce_products_web_index/data/index/segments_81
{quote}

*Trace*

{quote}
org.apache.solr.common.SolrException: Could not backup all replicas\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:287)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:218)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)\n\tat
 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3943 - Failure!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3943/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 12338 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/temp/junit4-J1-20170404_181008_9757259839262745971116.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] [thread 30075 also had an error][thread 109743 also had an error]
   [junit4] 
   [junit4] [thread 122203 also had an error]
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error (sharedRuntime.cpp:873), pid=37859, 
tid=0x000180a3
   [junit4] #  guarantee(nm != NULL) failed: must have containing nmethod for 
implicit division-by-zero exceptions
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_121-b13) (build 
1.8.0_121-b13)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.121-b13 mixed mode 
bsd-amd64 compressed oops)
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/hs_err_pid37859.log
   [junit4] [thread 68995 also had an error]
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 814 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/heapdumps 
-ea -esa -Dtests.prefix=tests -Dtests.seed=1D219A5D14626400 -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/temp
 -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/clover/db
 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX 
-Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=ISO-8859-1 -classpath 

[jira] [Commented] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-04 Thread Rondel Ward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955665#comment-15955665
 ] 

Rondel Ward commented on SOLR-9120:
---

I'm seeing this same issue on Solr 6.4.2. However, in my case I also saw this 
error pop up when trying to back up a collection using the CollectionsAPI. 

{quote}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from 
server at http://solrendpoint:8983/solr: Failed to backup 
core=itembuckets_commerce_products_web_index because 
java.nio.file.NoSuchFileException: 
/var/solr/data/itembuckets_commerce_products_web_index/data/index/segments_81
{quote}

*Trace*

{quote}
org.apache.solr.common.SolrException: Could not backup all replicas\n\tat 
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:287)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:218)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)\n\tat
 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:664)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
 java.lang.Thread.run(Thread.java:745)\n
{quote}

Rebuilding the index and then running the backup call again resolved my issue, 
but now I'm concerned that this issue might be more than just noise in the 
logs. 

> o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- 
> NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at 

[jira] [Comment Edited] (SOLR-10421) solr/contrib/ltr (MinMax|Standard)Normalizer.paramsToMap needs to save float as string

2017-04-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955634#comment-15955634
 ] 

Christine Poerschke edited comment on SOLR-10421 at 4/4/17 7:03 PM:


Minimal fix and test change. Ideally I would like this included in the 
stop-and-restart test case(s) to capture exactly the reported behavior, though 
in practice this might have to do for now, with the 6.5.1 release timeline in 
mind.


was (Author: cpoerschke):
Minimal fix and test change. Ideally I would like this included in the 
stop-and-restart case to capture exactly the reported behavior, though in 
practice this might have to do for now, with the 6.5.1 release timeline in mind.

> solr/contrib/ltr (MinMax|Standard)Normalizer.paramsToMap needs to save float 
> as string
> --
>
> Key: SOLR-10421
> URL: https://issues.apache.org/jira/browse/SOLR-10421
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4.1, 6.5, 6.4.0, 6.4.2
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: SOLR-10421.patch
>
>
> Please see Jianxiong Dong's [solr learning_to_rank (normalizer) unmatched 
> argument type 
> issue|https://lists.apache.org/thread.html/dd7f553b28da5ea6ef55bf0059970be50fb3e2c68348ac95a749163d@%3Csolr-user.lucene.apache.org%3E]
>  email on the user mailing list for details on how this bug manifests.
> Implementation choice background:
> * If the number were to be saved as a number then {{4.2}} could be considered 
> either as a float or as a double and hence the normalizer classes would need 
> setters for both those possibilities. Equally, {{42.0}} could be saved as 
> just {{42}} which then could be either an int or a long and so again setters 
> for both possibilities would be needed. All this complexity is avoided by 
> saving the number as a string. The class has convenience float setters which 
> can be handy for use in tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10421) solr/contrib/ltr (MinMax|Standard)Normalizer.paramsToMap needs to save float as string

2017-04-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-10421:
---
Attachment: SOLR-10421.patch

Minimal fix and test change. Ideally I would like this included in the 
stop-and-restart case to capture exactly the reported behavior, though in 
practice this might have to do for now, with the 6.5.1 release timeline in mind.

> solr/contrib/ltr (MinMax|Standard)Normalizer.paramsToMap needs to save float 
> as string
> --
>
> Key: SOLR-10421
> URL: https://issues.apache.org/jira/browse/SOLR-10421
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4.1, 6.5, 6.4.0, 6.4.2
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: SOLR-10421.patch
>
>
> Please see Jianxiong Dong's [solr learning_to_rank (normalizer) unmatched 
> argument type 
> issue|https://lists.apache.org/thread.html/dd7f553b28da5ea6ef55bf0059970be50fb3e2c68348ac95a749163d@%3Csolr-user.lucene.apache.org%3E]
>  email on the user mailing list for details on how this bug manifests.
> Implementation choice background:
> * If the number were to be saved as a number then {{4.2}} could be considered 
> either as a float or as a double and hence the normalizer classes would need 
> setters for both those possibilities. Equally, {{42.0}} could be saved as 
> just {{42}} which then could be either an int or a long and so again setters 
> for both possibilities would be needed. All this complexity is avoided by 
> saving the number as a string. The class has convenience float setters which 
> can be handy for use in tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-04-04 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10347:

Attachment: SOLR-10347.patch

Thanks [~tomasflobbe] for the review. Removed the necessary pieces, including 
the references to "json-only". Uploaded the updated patch; I think we are good 
to go.

> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
> Attachments: screenshot-new-UI.png, screenshot-old-UI.png, 
> SOLR-10347.patch, SOLR-10347.patch
>
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955575#comment-15955575
 ] 

Markus Jelsma edited comment on SOLR-10420 at 4/4/17 6:36 PM:
--

So it seems. Forced GC does not remove the object instances in >= 6.1.0. In 
6.0.x, regular GC and forced GC do remove the instances from the object count. 
I think almost everyone should be able to see it for themselves; almost all our 
Solr instances show this problem immediately after restart, though some don't 
on some occasions.

Although they don't consume a lot of bytes, the problem appears to cause more 
CPU time to be used up.

Filtering the memory sampler for org.apache.solr.common reveals it right away. 
Also, the number of instances should correspond to the number of seconds the 
instance has been running. A node running for about six days has roughly 500k 
instances; one running for roughly 30 minutes has just below 2k.


was (Author: markus17):
So it seems. Forced GC does not remove the object instances in >= 6.1.0. In 
6.0.x, regular GC and forced GC do remove the instances from the object count. 
I think almost everyone should be able to see it for themselves; almost all our 
Solr instances show this problem immediately after restart, though some don't 
on some occasions.

Although they don't consume a lot of bytes, the problem appears to cause more 
CPU time to be used up.

Filtering the memory sampler for org.apache.solr.common reveals it right away.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about the same number of instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955575#comment-15955575
 ] 

Markus Jelsma commented on SOLR-10420:
--

So it seems. Forced GC does not remove the object instances in >= 6.1.0. In 
6.0.x, regular GC and forced GC do remove the instances from the object count. 
I think almost everyone should be able to see it for themselves; almost all our 
Solr instances show this problem immediately after restart, though some don't 
on some occasions.

Although they don't consume a lot of bytes, the problem appears to cause more 
CPU time to be used up.

Filtering the memory sampler for org.apache.solr.common reveals it right away.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about the same number of instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.5.1 release?

2017-04-04 Thread Christine Poerschke (BLOOMBERG/ LONDON)
I'd also like to include the fix for 
https://issues.apache.org/jira/browse/SOLR-10421, which should be just four 
lines of actual fix, plus ideally a test for it, which will be more than four 
lines ...

- Original Message -
From: dev@lucene.apache.org
To: dev@lucene.apache.org
At: 04/04/17 16:54:50

I'd also like to include SOLR-10277 --- it is a serious problem for
people running large number of collections. I'll review and commit the
patch by tomorrow.

On Tue, Apr 4, 2017 at 2:16 PM, Shalin Shekhar Mangar
 wrote:
> I would like to include https://issues.apache.org/jira/browse/SOLR-10416
>
> It is a trivial fix.
>
> On Mon, Apr 3, 2017 at 11:54 PM, Joel Bernstein  wrote:
>> SOLR-10404 looks like a nice improvement!
>>
>> I'll shoot for Friday morning to create the release candidate. I've never
>> been a release manager before so I may need some guidance along the way.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Mon, Apr 3, 2017 at 12:21 PM, David Smiley 
>> wrote:
>>>
>>> Found & fixed a bug: https://issues.apache.org/jira/browse/SOLR-10404  I'd
>>> like to get this into 6.5.1.  You might be interested in this one Joel.
>>>
>>> On Mon, Apr 3, 2017 at 11:58 AM Steve Rowe  wrote:


 > On Apr 3, 2017, at 11:52 AM, Adrien Grand  wrote:
 >
 > Building the first RC on April 6th sounds good to me! I'm wondering
 > whether the 6.5 Jenkins jobs are still running?

 I disabled the ASF Jenkins 6.5 jobs shortly after the release.  FYI you
 can see which Lucene/Solr jobs are enabled here:
 .  I’ll re-enable the 6.5 jobs
 now.

 --
 Steve
 www.lucidworks.com


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

>>> --
>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>> http://www.solrenterprisesearchserver.com
>>
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Created] (SOLR-10421) solr/contrib/ltr (MinMax|Standard)Normalizer.paramsToMap needs to save float as string

2017-04-04 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-10421:
--

 Summary: solr/contrib/ltr (MinMax|Standard)Normalizer.paramsToMap 
needs to save float as string
 Key: SOLR-10421
 URL: https://issues.apache.org/jira/browse/SOLR-10421
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.4.2, 6.4.0, 6.5, 6.4.1
Reporter: Christine Poerschke
Assignee: Christine Poerschke


Please see Jianxiong Dong's [solr learning_to_rank (normalizer) unmatched 
argument type 
issue|https://lists.apache.org/thread.html/dd7f553b28da5ea6ef55bf0059970be50fb3e2c68348ac95a749163d@%3Csolr-user.lucene.apache.org%3E]
 email on the user mailing list for details on how this bug manifests.

Implementation choice background:
* If the number were to be saved as a number then {{4.2}} could be considered 
either as a float or as a double and hence the normalizer classes would need 
setters for both those possibilities. Equally, {{42.0}} could be saved as just 
{{42}} which then could be either an int or a long and so again setters for 
both possibilities would be needed. All this complexity is avoided by saving 
the number as a string. The class has convenience float setters which can be 
handy for use in tests.
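
For illustration, a minimal sketch (hypothetical class, not the actual patch) 
of the choice described above:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class NormalizerParamsSketch {
    private float min = 0.0f;
    private float max = 1.0f;

    // convenience float setters, handy in tests
    public void setMin(float min) { this.min = min; }
    public void setMax(float max) { this.max = max; }

    // String setters used when the params are loaded back from the map
    public void setMin(String min) { this.min = Float.parseFloat(min); }
    public void setMax(String max) { this.max = Float.parseFloat(max); }

    public Map<String, Object> paramsToMap() {
        Map<String, Object> params = new LinkedHashMap<>();
        params.put("min", Float.toString(min)); // save as String, not Float
        params.put("max", Float.toString(max));
        return params;
    }
}
{code}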



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Walter Underwood (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955538#comment-15955538
 ] 

Walter Underwood commented on SOLR-10420:
-

To be clear, these are uncollectable objects?

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about the same number of instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.
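
For illustration, a minimal sketch (hypothetical names, not the actual Solr 
code) of the shape of such a leak; the instances are uncollectable because a 
long-lived structure still strongly references them:

{code}
import java.util.ArrayList;
import java.util.List;

public class WatcherLeakSketch {
    static class Client { }
    static class ChildWatcher {
        final Client client;                 // each watcher pins a client
        ChildWatcher(Client client) { this.client = client; }
    }

    // strongly reachable for the life of the process, so GC cannot help
    static final List<ChildWatcher> WATCHERS = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            WATCHERS.add(new ChildWatcher(new Client())); // one per second
            Thread.sleep(1000);
        }
    }
}
{code}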



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10415) Within solr-core, debug/trace level logging should use parameterized log messages

2017-04-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955498#comment-15955498
 ] 

Christine Poerschke commented on SOLR-10415:


+1 to the idea.

Also, from memory, parameterized log messages have (or perhaps had) a limit on 
how many args can follow the first arg, but this can be overcome, e.g. like 
this:
{code}
log.debug("calling waitForLeaderToSeeDownState for coreZkNodeName={} 
collection={} shard={}", new Object[]{coreZkNodeName, collection, slice});
{code}

Also wondering, once the code is cleaned up, could something similar to the 
forbidden-apis check be used to prevent the re-introduction of unparameterized 
debug/trace log messages?



In the meantime, there are many debug and trace level logging statements; if 
any particularly stood out in your samplings, perhaps we could start by 
changing those?

> Within solr-core, debug/trace level logging should use parameterized log 
> messages
> -
>
> Key: SOLR-10415
> URL: https://issues.apache.org/jira/browse/SOLR-10415
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
>
> Noticed in several samplings of an active Solr that several debug statements 
> were taking measurable time because of the cost of the .toString, even when 
> the log.debug() statement would produce no output because the effective 
> level was INFO or higher. Using parameterized logging statements, i.e. 
> 'log.debug("Blah {}", o)', will avoid incurring that cost.
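
For illustration, a minimal sketch of the difference (standard SLF4J API):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ParamLoggingSketch {
    private static final Logger log = LoggerFactory.getLogger(ParamLoggingSketch.class);

    void index(Object bigDocument) {
        // Eager: bigDocument.toString() runs even when DEBUG is disabled.
        log.debug("Indexing " + bigDocument);

        // Parameterized: toString() is deferred until the message is logged.
        log.debug("Indexing {}", bigDocument);
    }
}
{code}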



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-04-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955445#comment-15955445
 ] 

Tomás Fernández Löbbe commented on SOLR-10347:
--

Thanks for the patch [~sarkaramr...@gmail.com]. Let's remove the dead code 
instead of commenting it out; if people want to see/review past code they can 
use Git. Also, it looks like "json-only" is used somewhere else in the code; 
maybe remove that as well if it's no longer needed. 

> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
> Attachments: screenshot-new-UI.png, screenshot-old-UI.png, 
> SOLR-10347.patch
>
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10400) refactor "instanceof TrieFooField || instanceof FooPointsField" to use "FooValueFieldType" marker interface

2017-04-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955433#comment-15955433
 ] 

Hoss Man commented on SOLR-10400:
-

bq. In a couple cases I see you're throwing IOException if the type isn't 
supported but I think it should be SolrException with BAD_REQUEST?

That was a straight refactoring of the existing collapse code, which throws 
IOExceptions (which are later caught & rewrapped in SolrException).

In general, cleaning up the collapse component's error reporting/messages would 
probably be a good idea -- but that's orthogonal to the scope of this issue.
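
For illustration, a minimal sketch (hypothetical type names matching the issue 
summary) of the marker-interface refactoring:

{code}
interface FooValueFieldType { }  // marker interface, no methods

class TrieFooField implements FooValueFieldType { }
class FooPointsField implements FooValueFieldType { }

class Example {
    static boolean supportsFoo(Object fieldType) {
        // before: fieldType instanceof TrieFooField || fieldType instanceof FooPointsField
        return fieldType instanceof FooValueFieldType;
    }
}
{code}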

> refactor "instanceof TrieFooField || instanceof FooPointsField" to use 
> "FooValueFieldType" marker interface
> ---
>
> Key: SOLR-10400
> URL: https://issues.apache.org/jira/browse/SOLR-10400
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-10400.patch
>
>
> See previous comment from smiley in SOLR-9994...
> https://issues.apache.org/jira/browse/SOLR-9994?focusedCommentId=15875390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15875390
> ...we already have the NumericValueFieldType marker interface and children.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10414) RecoveryStrategy should be Runnable and not a Thread

2017-04-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955389#comment-15955389
 ] 

Tomás Fernández Löbbe commented on SOLR-10414:
--

If I'm reading the code correctly, the {{setName(...)}} inside 
{{RecoveryStrategy}}'s constructor is changing its own {{Thread#name}}; 
however, that thread is never started, so the name is never really reflected 
anywhere; the Executor just calls {{RecoveryStrategy#run()}}. I thought 
about setting the name of the current thread inside {{run()}}; however, the 
executor this is run from already handles thread names (added with the MDC 
logging changes, as you say).
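
To illustrate the shape of the change, a hedged sketch (the real class takes 
core and ZooKeeper arguments; this is just the structural idea):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A plain Runnable: thread naming is left to the executor (and MDC logging),
// instead of extending Thread and setting a name that is never used.
class RecoveryStrategy implements Runnable {
  @Override
  public void run() {
    // recovery logic
  }
}

class Demo {
  public static void main(String[] args) {
    ExecutorService recoveryExecutor = Executors.newSingleThreadExecutor();
    recoveryExecutor.submit(new RecoveryStrategy()); // submitted, never started as a Thread
    recoveryExecutor.shutdown();
  }
}
{code}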

> RecoveryStrategy should be Runnable and not a Thread
> 
>
> Key: SOLR-10414
> URL: https://issues.apache.org/jira/browse/SOLR-10414
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10414.patch
>
>
> {{RecoveryStrategy}} is currently a {{Thread}} but is never started, it's 
> just used as a {{Runnable}} and submitted to Executors. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9959) SolrInfoMBean-s category and hierarchy cleanup

2017-04-04 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9959:

Attachment: SOLR-9959.patch

Latest patch, which fixes most of the problems identified in the review:
* admin/mbeans and admin/plugins stats are back, and consequently they are 
visible in the UI.
* JMX reporting is turned on only when an MBean server is specified or running 
(see the sketch below).
* metrics reported using {{MetricsMap}} now provide type-specific attributes 
via JMX.
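
Presumably the "specified or running" check looks something like the following 
sketch (an assumption on my part, not the patch itself):

{code}
import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;

class JmxReportingCheck {
  // Report via JMX only when explicitly configured, or when some
  // MBean server is already running in this JVM.
  static boolean jmxEnabled(String agentId, String serviceUrl) {
    if (agentId != null || serviceUrl != null) {
      return true;
    }
    List<MBeanServer> servers = MBeanServerFactory.findMBeanServer(null);
    return servers != null && !servers.isEmpty();
  }
}
{code}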

> SolrInfoMBean-s category and hierarchy cleanup
> --
>
> Key: SOLR-9959
> URL: https://issues.apache.org/jira/browse/SOLR-9959
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: SOLR-9959.patch, SOLR-9959.patch
>
>
> SOLR-9947 changed categories of some of {{SolrInfoMBean-s}}, and it also 
> added an alternative view in JMX, similar to the one produced by 
> {{SolrJmxReporter}}.
> Some changes were left out from that issue because they would break the 
> back-compatibility in 6.x, but they should be done before 7.0:
> * remove the old JMX view of {{SolrInfoMBean}}-s and improve the new one so 
> that it's more readable and useful.
> * in many cases {{SolrInfoMBean.getName()}} just returns a FQCN, but it could 
> be more informative - eg. for highlighter or query plugins this could be the 
> symbolic name of a plugin that users know and use in configuration.
> * top-level categories need more thought. On one hand it's best to minimize 
> their number, on the other hand they need to meaningfully represent the 
> functionality of components that use them. SOLR-9947 made some cosmetic 
> changes, but more discussion is necessary (eg. QUERY vs. SEARCHHANDLER)
> * we should consider removing some of the methods in {{SolrInfoMBean}} that 
> usually don't return any useful information, eg. {{getDocs}}, {{getSource()}} 
> and {{getVersion()}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955293#comment-15955293
 ] 

Markus Jelsma commented on SOLR-10420:
--

To note another oddity: some nodes of our regular search cluster (6.5.0) do not 
show increased counts. Some nodes with other roles (but running Solr) showed the 
problem immediately after each restart, every time I restarted them today. So it 
could be that 6.0.1 and 6.0.0 also show the problem, although they didn't when I 
just tested them.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.5.1 release?

2017-04-04 Thread Shalin Shekhar Mangar
I'd also like to include SOLR-10277 --- it is a serious problem for
people running a large number of collections. I'll review and commit the
patch by tomorrow.

On Tue, Apr 4, 2017 at 2:16 PM, Shalin Shekhar Mangar
 wrote:
> I would like to include https://issues.apache.org/jira/browse/SOLR-10416
>
> It is a trivial fix.
>
> On Mon, Apr 3, 2017 at 11:54 PM, Joel Bernstein  wrote:
>> SOLR-10404 looks like a nice improvement!
>>
>> I'll shoot for Friday morning to create the release candidate. I've never
>> been a release manager before so I may need some guidance along the way.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Mon, Apr 3, 2017 at 12:21 PM, David Smiley 
>> wrote:
>>>
>>> Found & fixed a bug: https://issues.apache.org/jira/browse/SOLR-10404  I'd
>>> like to get this into 6.5.1.  You might be interested in this one Joel.
>>>
>>> On Mon, Apr 3, 2017 at 11:58 AM Steve Rowe  wrote:


 > On Apr 3, 2017, at 11:52 AM, Adrien Grand  wrote:
 >
 > Building the first RC on April 6th sounds good to me! I'm wondering
 > whether the 6.5 Jenkins jobs are still running?

 I disabled the ASF Jenkins 6.5 jobs shortly after the release.  FYI you
 can see which Lucene/Solr jobs are enabled here:
 .  I’ll re-enable the 6.5 jobs
 now.

 --
 Steve
 www.lucidworks.com


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

>>> --
>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>> http://www.solrenterprisesearchserver.com
>>
>>
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-04-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955288#comment-15955288
 ] 

Shalin Shekhar Mangar commented on SOLR-10277:
--

No problem, I'll review the patch and commit.

> On 'downnode', lots of wasteful mutations are done to ZK
> 
>
> Key: SOLR-10277
> URL: https://issues.apache.org/jira/browse/SOLR-10277
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.3, 5.5.4, 6.0.1, 6.2.1, 6.3, 6.4.2
>Reporter: Joshua Humphries
>Assignee: Scott Blum
>  Labels: leader, zookeeper
> Attachments: SOLR-10277-5.5.3.patch, SOLR-10277.patch
>
>
> When a node restarts, it submits a single 'downnode' message to the 
> overseer's state update queue.
> When the overseer processes the message, it does way more writes to ZK than 
> necessary. In our cluster of 48 hosts, the majority of collections have only 
> 1 shard and 1 replica. So a single node restarting should only result in 
> ~1/40th of the collections being updated with new replica states (to indicate 
> the node that is no longer active).
> However, the current logic in NodeMutator#downNode always updates *every* 
> collection. So we end up having to do rolling restarts very slowly to avoid 
> having a severe outage due to the overseer having to do way too much work for 
> each host that is restarted. And subsequent shards becoming leader can't get 
> processed until the `downnode` message is fully processed. So a fast rolling 
> restart can result in the overseer queue growing incredibly large and nearly 
> all shards winding up in a leader-less state until that backlog is processed.
> The fix is a trivial logic change to only add a ZkWriteCommand for 
> collections that actually have an impacted replica.
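
A hedged sketch of the fix described above (method shape and names are 
illustrative; the actual NodeMutator code differs in detail, and the replica 
state mutation is omitted here):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.solr.cloud.overseer.ZkWriteCommand;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;

class NodeMutatorSketch {
  // Only emit a ZkWriteCommand for collections that actually host a replica
  // on the downed node; previously every collection got a write.
  static List<ZkWriteCommand> downNode(ClusterState clusterState, String downedNode) {
    List<ZkWriteCommand> writes = new ArrayList<>();
    for (Map.Entry<String, DocCollection> e :
         clusterState.getCollectionsMap().entrySet()) {
      DocCollection coll = e.getValue();
      boolean affected = false;
      for (Replica replica : coll.getReplicas()) {
        if (downedNode.equals(replica.getNodeName())) {
          affected = true; // the replica's state would be flipped to DOWN here
        }
      }
      if (affected) { // the fix: skip collections with no replica on the node
        writes.add(new ZkWriteCommand(e.getKey(), coll));
      }
    }
    return writes;
  }
}
{code}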



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #180: SOLR-8138, added new SQL Query UI.

2017-04-04 Thread michaelsuzukisagi
GitHub user michaelsuzukisagi opened a pull request:

https://github.com/apache/lucene-solr/pull/180

SOLR-8138, added new SQL Query UI.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/michaelsuzukisagi/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #180


commit 11514b16b292f2bc374134492a00425ee54181c7
Author: Michael Suzuki 
Date:   2017-04-04T15:46:22Z

SOLR-8138, added new sql query ui.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955287#comment-15955287
 ] 

Markus Jelsma commented on SOLR-10420:
--

Well, actually DistributedQueue$ChildWatcher is being leaked as well, so the 
leaking of SolrZkClient could be a consequence of that.



> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8138) Simple UI for issuing SQL queries

2017-04-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955286#comment-15955286
 ] 

ASF GitHub Bot commented on SOLR-8138:
--

GitHub user michaelsuzukisagi opened a pull request:

https://github.com/apache/lucene-solr/pull/180

SOLR-8138, added new SQL Query UI.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/michaelsuzukisagi/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/180.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #180


commit 11514b16b292f2bc374134492a00425ee54181c7
Author: Michael Suzuki 
Date:   2017-04-04T15:46:22Z

SOLR-8138, added new sql query ui.




> Simple UI for issuing SQL queries
> -
>
> Key: SOLR-8138
> URL: https://issues.apache.org/jira/browse/SOLR-8138
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8138.patch, SOLR-8138.patch, SOLR-8138.patch
>
>
> It would be great for Solr 6 if we could have an admin screen where we could 
> issue SQL queries using the new SQL interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955283#comment-15955283
 ] 

Scott Blum commented on SOLR-10420:
---

Hard to see how the problem could be localized to 
DistributedQueue$ChildWatcher... it doesn't create any ZkClients; the client is 
passed in from the outside.
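
For readers following the thread, the general shape of this kind of watcher 
leak, as a hedged illustration (this is not the actual DistributedQueue code; 
the names mirror the discussion, and Object stands in for SolrZkClient):

{code}
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Each watcher holds a reference to the client that was passed in from the
// outside. If a fresh watcher is registered on every poll and the old ones
// never fire (and are never deregistered), ZooKeeper keeps them -- and
// everything they reference -- reachable, roughly one instance per poll.
class ChildWatcher implements Watcher {
  private final Object zkClient; // stands in for SolrZkClient

  ChildWatcher(Object zkClient) {
    this.zkClient = zkClient;
  }

  @Override
  public void process(WatchedEvent event) {
    // notify the queue that the children changed
  }
}
{code}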

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955282#comment-15955282
 ] 

Markus Jelsma commented on SOLR-10420:
--

Ah, I found it: the problem appeared in 6.1.0. Versions 6.0.0 and 6.0.1 do not 
show this problem; the instances are eaten by GC.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-04-04 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955274#comment-15955274
 ] 

Scott Blum commented on SOLR-10277:
---

Agreed, [~shalinmangar]. I'm actually OOO all week; if you wanted to take point 
on getting this landed that would be super. I reviewed all the live code 
previously, but not [~varunthacker]'s patch to the test (though to be honest 
I'm not super familiar with the test frameworks anyway).

> On 'downnode', lots of wasteful mutations are done to ZK
> 
>
> Key: SOLR-10277
> URL: https://issues.apache.org/jira/browse/SOLR-10277
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.3, 5.5.4, 6.0.1, 6.2.1, 6.3, 6.4.2
>Reporter: Joshua Humphries
>Assignee: Scott Blum
>  Labels: leader, zookeeper
> Attachments: SOLR-10277-5.5.3.patch, SOLR-10277.patch
>
>
> When a node restarts, it submits a single 'downnode' message to the 
> overseer's state update queue.
> When the overseer processes the message, it does way more writes to ZK than 
> necessary. In our cluster of 48 hosts, the majority of collections have only 
> 1 shard and 1 replica. So a single node restarting should only result in 
> ~1/40th of the collections being updated with new replica states (to indicate 
> the node that is no longer active).
> However, the current logic in NodeMutator#downNode always updates *every* 
> collection. So we end up having to do rolling restarts very slowly to avoid 
> having a severe outage due to the overseer having to do way too much work for 
> each host that is restarted. And subsequent shards becoming leader can't get 
> processed until the `downnode` message is fully processed. So a fast rolling 
> restart can result in the overseer queue growing incredibly large and nearly 
> all shards winding up in a leader-less state until that backlog is processed.
> The fix is a trivial logic change to only add a ZkWriteCommand for 
> collections that actually have an impacted replica.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10394) search.grouping.Command rename: getSortWithinGroup --> getWithinGroupSort

2017-04-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955262#comment-15955262
 ] 

Christine Poerschke commented on SOLR-10394:


Committed to master and branch_6x this afternoon; not sure why the ASF Bot 
didn't add an update here (my git was also hanging when pushing).
* master: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/05749d06
* branch_6x: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/bec07b0a

> search.grouping.Command rename: getSortWithinGroup --> getWithinGroupSort
> -
>
> Key: SOLR-10394
> URL: https://issues.apache.org/jira/browse/SOLR-10394
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10394.patch
>
>
> The class is marked _@lucene.experimental_ and SOLR-9660 previously included 
> sortSpecWithinGroup to withinGroupSortSpec renaming for GroupSpecification; 
> the rename proposed here is in line with that.
> Motivation for the change is to reduce group-sort vs. within-group-sort 
> confusion, generally and specifically in SOLR-6203.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955259#comment-15955259
 ] 

Ishan Chattopadhyaya commented on SOLR-10420:
-

The ant resolve step could hang due to stale lock files. You could try this: 
{{find ~ -name "*lck" | xargs rm}}.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955253#comment-15955253
 ] 

Markus Jelsma commented on SOLR-10420:
--

I only have 6.5.0 and a not-yet-upgraded 6.4.2; both suffer the same problem.

But I just built a 6.3.0 and ran it in cloud mode, without registering a 
collection or core, using the built-in ZooKeeper. After two minutes I had ~120 
client objects; now I have more.

6.0.0 doesn't show increased instance counts. I can't test 6.1 and 6.2; ant 
keeps hanging on resolve for whatever reason.


> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9745) SolrCLI swallows errors from solr.cmd

2017-04-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-9745.

Resolution: Fixed

Thanks, [~gopikannan]

> SolrCLI swallows errors from solr.cmd
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: newbie, newdev
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-9745.patch, SOLR-9745.patch
>
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under Cygwin, yes)
> * a cool hacker worked around it with {{cmd /C solr.cmd start -e ..}}
> * but when SolrCLI runs Solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the 
> exception to the console. 
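
For reference, a minimal sketch of what passing an {{ExecuteResultHandler}} 
could look like with Apache Commons Exec (illustrative, not the committed 
patch):

{code}
import java.io.IOException;
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.ExecuteException;
import org.apache.commons.exec.ExecuteResultHandler;

class StartSolrSketch {
  static void start(CommandLine cmd) throws IOException {
    // execute(cmd, handler) runs asynchronously and reports the outcome
    // through the handler instead of silently dropping it.
    new DefaultExecutor().execute(cmd, new ExecuteResultHandler() {
      @Override
      public void onProcessComplete(int exitValue) {
        // normal exit, nothing to report
      }
      @Override
      public void onProcessFailed(ExecuteException e) {
        e.printStackTrace(); // dump the failure instead of swallowing it
      }
    });
  }
}
{code}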



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8138) Simple UI for issuing SQL queries

2017-04-04 Thread Michael Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Suzuki updated SOLR-8138:
-
Attachment: SOLR-8138.patch

> Simple UI for issuing SQL queries
> -
>
> Key: SOLR-8138
> URL: https://issues.apache.org/jira/browse/SOLR-8138
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8138.patch, SOLR-8138.patch, SOLR-8138.patch
>
>
> It would be great for Solr 6 if we could have an admin screen where we could 
> issue SQL queries using the new SQL interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9745) SolrCLI swallows errors from solr.cmd

2017-04-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955245#comment-15955245
 ] 

Mikhail Khludnev commented on SOLR-9745:


Tests are fixed: 
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6498/console 

> SolrCLI swallows errors from solr.cmd
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: newbie, newdev
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-9745.patch, SOLR-9745.patch
>
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under Cygwin, yes)
> * a cool hacker worked around it with {{cmd /C solr.cmd start -e ..}}
> * but when SolrCLI runs Solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the 
> exception to the console. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9745) SolrCLI swallows errors from solr.cmd

2017-04-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9745:
---
Fix Version/s: 6.6
   master (7.0)

> SolrCLI swallows errors from solr.cmd
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: newbie, newdev
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-9745.patch, SOLR-9745.patch
>
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under Cygwin, yes)
> * a cool hacker worked around it with {{cmd /C solr.cmd start -e ..}}
> * but when SolrCLI runs Solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the 
> exception to the console. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 6.5.0 (rc2) (now with Python 3 support)

2017-04-04 Thread Petrus Hyvönen
Hi, the JCC from the rc2 runs (as expected) fine for my application under
both 2.7 and 3.6.

On Fri, Mar 31, 2017 at 11:56 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> +1 to release; I ran my same "first 100K Wikipedia documents" smoke test,
> on Python 3.5.2, Java 1.8.0_121, Ubuntu 16.04.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Thu, Mar 30, 2017 at 3:27 PM, Andi Vajda  wrote:
>
> >
> > A few fixes were needed in JCC for better Windows support.
> > The PyLucene 6.5.0 rc1 vote is thus cancelled.
> >
> > I'm now calling for a vote on PyLucene 6.5.0 rc2.
> >
> > The PyLucene 6.5.0 (rc2) release tracking the recent release of
> > Apache Lucene 6.5.0 is ready.
> >
> > A release candidate is available from:
> >   https://dist.apache.org/repos/dist/dev/lucene/pylucene/6.5.0-rc2/
> >
> > PyLucene 6.5.0 is built with JCC 3.0 included in these release artifacts.
> >
> > JCC 3.0 now supports Python 3.3+ (in addition to Python 2.3+).
> > PyLucene may be built with Python 2 or Python 3.
> >
> > Please vote to release these artifacts as PyLucene 6.5.0.
> > Anyone interested in this release can and should vote !
> >
> > Thanks !
> >
> > Andi..
> >
> > ps: the KEYS file for PyLucene release signing is at:
> > https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
> > https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS
> >
> > pps: here is my +1
> >
>



-- 
_
Petrus Hyvönen, Uppsala, Sweden
Mobile Phone/SMS:+46 73 803 19 00


[jira] [Commented] (SOLR-10338) Configure SecureRandom non blocking for tests.

2017-04-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955224#comment-15955224
 ] 

Mark Miller commented on SOLR-10338:


Don't sweat it [~mihaly.toth], things like Java 9 issues are what we have 
Jenkins to catch at this point in time. 

> Configure SecureRandom non blocking for tests.
> --
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch, SOLR-10338.patch, 
> SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of random entropy exhaustion issue related to all usages of 
> SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955222#comment-15955222
 ] 

Ishan Chattopadhyaya edited comment on SOLR-10420 at 4/4/17 3:01 PM:
-

Nothing changed in that code in the last few releases. Do you know if this 
worked fine in a prior 6x release?
FYI, [~dragonsinth] and [~shalinmangar] <-- experts in that code.


was (Author: ichattopadhyaya):
Nothing changed in that code in the last few releases. Do you know if this 
worked fine in a prior 6x release?
FYI, [~dragonsinh] and [~shalinmangar] <-- experts in that code.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955222#comment-15955222
 ] 

Ishan Chattopadhyaya commented on SOLR-10420:
-

Nothing changed in that code in the last few releases. Do you know if this 
worked fine in a prior 6x release?
FYI, [~dragonsinh] and [~shalinmangar] <-- experts in that code.

> Solr 6.x leaking one SolrZkClient instance per second
> -
>
> Key: SOLR-10420
> URL: https://issues.apache.org/jira/browse/SOLR-10420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5, 6.4.2
>Reporter: Markus Jelsma
> Fix For: master (7.0), branch_6x
>
>
> One of our nodes went berserk after a restart, Solr went completely nuts! 
> So I opened VisualVM to keep an eye on it and spotted a different problem 
> that occurs in all our Solr 6.4.2 and 6.5.0 nodes.
> It appears Solr is leaking one SolrZkClient instance per second via 
> DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
> all nodes; there are about as many instances as there are seconds 
> since Solr started. I know VisualVM's instance count includes 
> objects-to-be-collected, but the instance count does not drop after a forced 
> garbage collection round.
> It doesn't matter how many cores or collections the nodes carry or how heavy 
> traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10277) On 'downnode', lots of wasteful mutations are done to ZK

2017-04-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955217#comment-15955217
 ] 

Shalin Shekhar Mangar commented on SOLR-10277:
--

It'd be nice to release this fix in 6.5.1 -- looks serious.

> On 'downnode', lots of wasteful mutations are done to ZK
> 
>
> Key: SOLR-10277
> URL: https://issues.apache.org/jira/browse/SOLR-10277
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.3, 5.5.4, 6.0.1, 6.2.1, 6.3, 6.4.2
>Reporter: Joshua Humphries
>Assignee: Scott Blum
>  Labels: leader, zookeeper
> Attachments: SOLR-10277-5.5.3.patch, SOLR-10277.patch
>
>
> When a node restarts, it submits a single 'downnode' message to the 
> overseer's state update queue.
> When the overseer processes the message, it does way more writes to ZK than 
> necessary. In our cluster of 48 hosts, the majority of collections have only 
> 1 shard and 1 replica. So a single node restarting should only result in 
> ~1/40th of the collections being updated with new replica states (to indicate 
> the node that is no longer active).
> However, the current logic in NodeMutator#downNode always updates *every* 
> collection. So we end up having to do rolling restarts very slowly to avoid 
> having a severe outage due to the overseer having to do way too much work for 
> each host that is restarted. And subsequent shards becoming leader can't get 
> processed until the `downnode` message is fully processed. So a fast rolling 
> restart can result in the overseer queue growing incredibly large and nearly 
> all shards winding up in a leader-less state until that backlog is processed.
> The fix is a trivial logic change to only add a ZkWriteCommand for 
> collections that actually have an impacted replica.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8138) Simple UI for issuing SQL queries

2017-04-04 Thread Michael Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Suzuki updated SOLR-8138:
-
Attachment: (was: SOLR-8138.patch)

> Simple UI for issuing SQL queries
> -
>
> Key: SOLR-8138
> URL: https://issues.apache.org/jira/browse/SOLR-8138
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8138.patch, SOLR-8138.patch
>
>
> It would be great for Solr 6 if we could have an admin screen where we could 
> issue SQL queries using the new SQL interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8138) Simple UI for issuing SQL queries

2017-04-04 Thread Michael Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Suzuki updated SOLR-8138:
-
Attachment: SOLR-8138.patch

Attaching the fixed patch for the new SQL query UI.

> Simple UI for issuing SQL queries
> -
>
> Key: SOLR-8138
> URL: https://issues.apache.org/jira/browse/SOLR-8138
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI
>Affects Versions: 6.0
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8138.patch, SOLR-8138.patch, SOLR-8138.patch
>
>
> It would be great for Solr 6 if we could have an admin screen where we could 
> issue SQL queries using the new SQL interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10420) Solr 6.x leaking one SolrZkClient instance per second

2017-04-04 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-10420:


 Summary: Solr 6.x leaking one SolrZkClient instance per second
 Key: SOLR-10420
 URL: https://issues.apache.org/jira/browse/SOLR-10420
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.4.2, 6.5
Reporter: Markus Jelsma
 Fix For: master (7.0), branch_6x


One of our nodes went berserk after a restart, Solr went completely nuts! So 
I opened VisualVM to keep an eye on it and spotted a different problem that 
occurs in all our Solr 6.4.2 and 6.5.0 nodes.

It appears Solr is leaking one SolrZkClient instance per second via 
DistributedQueue$ChildWatcher. That one-per-second rate is quite accurate for 
all nodes; there are about as many instances as there are seconds since 
Solr started. I know VisualVM's instance count includes 
objects-to-be-collected, but the instance count does not drop after a forced 
garbage collection round.

It doesn't matter how many cores or collections the nodes carry or how heavy 
traffic is.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10347) Remove index level boost support from "documents" section of the admin UI

2017-04-04 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955173#comment-15955173
 ] 

Amrit Sarkar commented on SOLR-10347:
-

[~tomasflobbe], will you be able to review the patch? It is very straightforward.

> Remove index level boost support from "documents" section of the admin UI
> -
>
> Key: SOLR-10347
> URL: https://issues.apache.org/jira/browse/SOLR-10347
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
> Attachments: screenshot-new-UI.png, screenshot-old-UI.png, 
> SOLR-10347.patch
>
>
> Index-time boost is deprecated since LUCENE-6819



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10338) Configure SecureRandom non blocking for tests.

2017-04-04 Thread Mihaly Toth (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mihaly Toth updated SOLR-10338:
---
Attachment: SOLR-10338.patch

How about such a fix? In case randomness is blocking, Solr tests will time out.
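
For anyone trying this locally, a common way to make SecureRandom non-blocking 
in tests is to point the entropy source at /dev/urandom; a sketch (this may 
not be exactly what the patch does):

{code}
import java.security.Security;

class NonBlockingEntropyForTests {
  static void install() {
    // Call before any SecureRandom is created. The trailing "/./" works
    // around JDK special-casing of the exact string "file:/dev/urandom".
    System.setProperty("java.security.egd", "file:/dev/./urandom");
    Security.setProperty("securerandom.source", "file:/dev/./urandom");
  }
}
{code}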

> Configure SecureRandom non blocking for tests.
> --
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch, SOLR-10338.patch, 
> SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of random entropy exhaustion issue related to all usages of 
> SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10414) RecoveryStrategy should be Runnable and not a Thread

2017-04-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955150#comment-15955150
 ] 

Christine Poerschke commented on SOLR-10414:


SOLR-6885 added the {{setName("RecoveryThread-"+this.coreName);}} that is being 
removed here, and I'd be curious whether logging from different 
cores in the same JVM would still be distinguishable without it. I think it 
would be, via the {{MDCLoggingContext}} stuff added later under SOLR-7590.

So +1 for RecoveryStrategy not being a Thread, then.

> RecoveryStrategy should be Runnable and not a Thread
> 
>
> Key: SOLR-10414
> URL: https://issues.apache.org/jira/browse/SOLR-10414
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10414.patch
>
>
> {{RecoveryStrategy}} is currently a {{Thread}} but is never started, it's 
> just used as a {{Runnable}} and submitted to Executors. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10203) Remove dist/test-framework from the binary download archive

2017-04-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955081#comment-15955081
 ] 

David Smiley commented on SOLR-10203:
-

+1

> Remove dist/test-framework from the binary download archive
> ---
>
> Key: SOLR-10203
> URL: https://issues.apache.org/jira/browse/SOLR-10203
> Project: Solr
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> Libraries in dist/test-framework are shipped with every copy of the Solr 
> binary, yet they are not used anywhere directly. They take approximately 10 
> MB. 
> Remove the directory and provide guidance in a README file on how to get them 
> for those people who are writing their own testing solutions against Solr.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7730) Better encode length normalization in similarities

2017-04-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7730:
-
Attachment: LUCENE-7730.patch

Here is a new patch that builds upon LUCENE-7756. It is not 100% ready, as some 
tests still fail because I have not yet switched ClassicSimilarity to the new 
encoding, but it is ready for review if anyone wants to have a look.
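
For background, the classic lossy encoding this issue revisits packs the 
length-normalization factor into a single byte via {{SmallFloat}}; a small 
illustration (usage is illustrative, and the new encoding in the patch 
differs):

{code}
import org.apache.lucene.util.SmallFloat;

class NormEncodingDemo {
  public static void main(String[] args) {
    int fieldLength = 42;
    float norm = 1f / (float) Math.sqrt(fieldLength); // classic length norm
    byte encoded = SmallFloat.floatToByte315(norm);   // one lossy byte on disk
    float decoded = SmallFloat.byte315ToFloat(encoded);
    System.out.println(norm + " encodes to " + encoded + ", decodes to " + decoded);
  }
}
{code}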

> Better encode length normalization in similarities
> --
>
> Key: LUCENE-7730
> URL: https://issues.apache.org/jira/browse/LUCENE-7730
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch
>
>
> Now that index-time boosts are gone (LUCENE-6819) and that indices record the 
> version that was used to create them (for backward compatibility, 
> LUCENE-7703), we can look into storing the length normalization factor more 
> efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9217) {!join score=..}.. should delay join to createWeight

2017-04-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955016#comment-15955016
 ] 

Mikhail Khludnev edited comment on SOLR-9217 at 4/4/17 11:31 AM:
-

[~gopikannan], to get the idea you can set showItems=100 on the FastLRUCache and 
LFUCache filterCache, and then execute score and non-score joins under 
{{q=filter()}}. Then you can see in the cache stats that score-join entries use 
'to'-terms lists as cache entry keys. You can also check it with a debugger. 
Also, have a look at {{org.apache.solr.query.FilterQuery}} 


was (Author: mkhludnev):
[~gopikannan], to get the idea you can set showItems=100 on the FastLRUCache and 
LFUCache filterCache, and then execute score and non-score joins under 
{{q=filter()}}. Then you can see in the cache stats that score-join entries use 
'to'-terms lists as cache entry keys. You can also check it with a debugger. 

> {!join score=..}.. should delay join to createWeight
> 
>
> Key: SOLR-9217
> URL: https://issues.apache.org/jira/browse/SOLR-9217
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 6.1
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
>
> {{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
> {{JoinUtil.createJoinQuery}} on {{rewrite()}}, but it's inefficient with the 
> {{filter(...)}} syntax -or fq- (!) I suppose it's a {{filter()}}-only problem, 
> not fq. It's better to do that in {{createWeight()}}, as is done in classic 
> Solr {{JoinQuery}}, {{JoinQParserPlugin}}.
> The existing tests are enough; we just need to assert the rewrite behavior - it 
> should rewrite in an enclosing range query or so, and shouldn't on a plain term 
> query.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9217) {!join score=..}.. should delay join to createWeight

2017-04-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9217:
---
Description: 
{{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
{{JoinUtil.createJoinQuery}} on {{rewrite()}}, but it's inefficient with the 
{{filter(...)}} syntax -or fq- (!) I suppose it's a {{filter()}}-only problem, 
not fq. It's better to do that in {{createWeight()}}, as is done in classic 
Solr {{JoinQuery}}, {{JoinQParserPlugin}}.
The existing tests are enough; we just need to assert the rewrite behavior - it 
should rewrite in an enclosing range query or so, and shouldn't on a plain term 
query. 

  was:
{{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
{{JoinUtil.createJoinQuery}} on {{rewrite()}}, but it's inefficient with the 
{{filter(...)}} syntax or fq. It's better to do that in {{createWeight()}}, as 
is done in classic Solr {{JoinQuery}}, {{JoinQParserPlugin}}.
The existing tests are enough; we just need to assert the rewrite behavior - it 
should rewrite in an enclosing range query or so, and shouldn't on a plain term 
query. 


> {!join score=..}.. should delay join to createWeight
> 
>
> Key: SOLR-9217
> URL: https://issues.apache.org/jira/browse/SOLR-9217
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 6.1
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
>
> {{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
> {{JoinUtil.createJoinQuery}} on {{rewrite()}}, but it's inefficient with the 
> {{filter(...)}} syntax -or fq- (!) I suppose it's a {{filter()}}-only problem, 
> not fq. It's better to do that in {{createWeight()}}, as is done in classic 
> Solr {{JoinQuery}}, {{JoinQParserPlugin}}.
> The existing tests are enough; we just need to assert the rewrite behavior - it 
> should rewrite in an enclosing range query or so, and shouldn't on a plain term 
> query.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9217) {!join score=..}.. should delay join to createWeight

2017-04-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955016#comment-15955016
 ] 

Mikhail Khludnev commented on SOLR-9217:


[~gopikannan], to get the idea you can set showItems=100 on the FastLRUCache and 
LFUCache filterCache, and then execute score and non-score joins under 
{{q=filter()}}. Then you can see in the cache stats that score-join entries use 
'to'-terms lists as cache entry keys. You can also check it with a debugger. 
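
To make the proposal concrete, a rough sketch of delaying the join until 
weight creation (Lucene 6.x signatures; the class shape is illustrative, not 
the actual ScoreJoinQParserPlugin code):

{code}
import java.io.IOException;
import java.util.Objects;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Weight;
import org.apache.lucene.search.join.JoinUtil;
import org.apache.lucene.search.join.ScoreMode;

class DelayedJoinQuery extends Query {
  final String fromField, toField;
  final Query fromQuery;
  final IndexSearcher fromSearcher;
  final ScoreMode scoreMode;

  DelayedJoinQuery(String fromField, String toField, Query fromQuery,
                   IndexSearcher fromSearcher, ScoreMode scoreMode) {
    this.fromField = fromField; this.toField = toField;
    this.fromQuery = fromQuery; this.fromSearcher = fromSearcher;
    this.scoreMode = scoreMode;
  }

  @Override
  public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException {
    // The expensive cross-core join happens here, not in rewrite(), so a
    // filter() cache entry keys on this query rather than the joined terms.
    Query join = JoinUtil.createJoinQuery(fromField, true, toField,
                                          fromQuery, fromSearcher, scoreMode);
    return searcher.rewrite(join).createWeight(searcher, needsScores);
  }

  @Override public String toString(String field) { return "delayedJoin"; }

  // equals/hashCode over the join inputs, so caching works correctly.
  @Override public boolean equals(Object o) {
    return sameClassAs(o)
        && fromField.equals(((DelayedJoinQuery) o).fromField)
        && toField.equals(((DelayedJoinQuery) o).toField)
        && fromQuery.equals(((DelayedJoinQuery) o).fromQuery);
  }
  @Override public int hashCode() {
    return 31 * classHash() + Objects.hash(fromField, toField, fromQuery);
  }
}
{code}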

> {!join score=..}.. should delay join to createWeight
> 
>
> Key: SOLR-9217
> URL: https://issues.apache.org/jira/browse/SOLR-9217
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 6.1
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
>
> {{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
> {{JoinUtil.createJoinQuery}} on {{rewrite()}}, but it's inefficient with the 
> {{filter(...)}} syntax or fq. It's better to do that in {{createWeight()}}, as 
> is done in classic Solr {{JoinQuery}}, {{JoinQParserPlugin}}.
> The existing tests are enough; we just need to assert the rewrite behavior - it 
> should rewrite in an enclosing range query or so, and shouldn't on a plain term 
> query.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10151) TestRecovery.java - use monotonic increasing version number among all the tests to avoid unintentional reordering

2017-04-04 Thread Peter Szantai-Kis (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Szantai-Kis updated SOLR-10151:
-
Attachment: (was: SOLR_10151.0001.patch)

> TestRecovery.java - use monotonic increasing version number among all the 
> tests to avoid unintentional reordering
> -
>
> Key: SOLR-10151
> URL: https://issues.apache.org/jira/browse/SOLR-10151
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>Priority: Minor
>  Labels: newbie
>
> {{TestRecovery}} has several tests inserting updates and deletes into a 
> shared core. The tests use fixed version numbers, which can overlap and 
> cause issues depending on the order of the tests.
> Proposing a monotonically incrementing counter shared across the tests; 
> changing the tests to allocate their versions from it would ensure that 
> later-running tests send updates with higher versions only. That would 
> prevent any unintentional reordering.
> h5. Example:
> Before:
> {noformat}
> ...
> updateJ(jsonAdd(sdoc("id", "RDBQ1_1", "_version_", "1010")), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonDelQ("id:RDBQ1_2"), params(DISTRIB_UPDATE_PARAM, FROM_LEADER, 
> "_version_", "-1017")); // This should've arrived after the 1015th update
> updateJ(jsonAdd(sdoc("id", "RDBQ1_2", "_version_", "1015")), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonAdd(sdoc("id", "RDBQ1_3", "_version_", "1020")), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> ...
> {noformat}
> After:
> {noformat}
> ...
> String insVer1 = getNextVersion();
> String insVer2 = getNextVersion();
> String deleteVer = getNextVersion();
> String insVer3 = getNextVersion();
> updateJ(jsonAdd(sdoc("id", "RDBQ1_1", "_version_",insVer1)), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonDelQ("id:RDBQ1_2"), params(DISTRIB_UPDATE_PARAM, FROM_LEADER, 
> "_version_", "-"+deleteVer)); // This should've arrived after the 1015th 
> update
> updateJ(jsonAdd(sdoc("id", "RDBQ1_2", "_version_", insVer2)), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonAdd(sdoc("id", "RDBQ1_3", "_version_", insVer3)), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> ...
> {noformat}
> It might increase readability, as the generation of the versions happens in 
> the preferred replay order.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10151) TestRecovery.java - use monotonic increasing version number among all the tests to avoid unintentional reordering

2017-04-04 Thread Peter Szantai-Kis (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Szantai-Kis updated SOLR-10151:
-
Attachment: SOLR_10151.0001.patch

> TestRecovery.java - use monotonic increasing version number among all the 
> tests to avoid unintentional reordering
> -
>
> Key: SOLR-10151
> URL: https://issues.apache.org/jira/browse/SOLR-10151
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mano Kovacs
>Priority: Minor
>  Labels: newbie
> Attachments: SOLR_10151.0001.patch
>
>
> {{TestRecovery}} has several tests inserting updates and deletes into a 
> shared core. The tests use fixed version numbers, which can overlap and 
> cause issues depending on the order of the tests.
> Proposing a monotonically incrementing counter shared across the tests; 
> changing the tests to allocate their versions from it would ensure that 
> later-running tests send updates with higher versions only. That would 
> prevent any unintentional reordering.
> h5. Example:
> Before:
> {noformat}
> ...
> updateJ(jsonAdd(sdoc("id", "RDBQ1_1", "_version_", "1010")), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonDelQ("id:RDBQ1_2"), params(DISTRIB_UPDATE_PARAM, FROM_LEADER, 
> "_version_", "-1017")); // This should've arrived after the 1015th update
> updateJ(jsonAdd(sdoc("id", "RDBQ1_2", "_version_", "1015")), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonAdd(sdoc("id", "RDBQ1_3", "_version_", "1020")), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> ...
> {noformat}
> After:
> {noformat}
> ...
> String insVer1 = getNextVersion();
> String insVer2 = getNextVersion();
> String deleteVer = getNextVersion();
> String insVer3 = getNextVersion();
> updateJ(jsonAdd(sdoc("id", "RDBQ1_1", "_version_",insVer1)), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonDelQ("id:RDBQ1_2"), params(DISTRIB_UPDATE_PARAM, FROM_LEADER, 
> "_version_", "-"+deleteVer)); // This should've arrived after the 1015th 
> update
> updateJ(jsonAdd(sdoc("id", "RDBQ1_2", "_version_", insVer2)), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> updateJ(jsonAdd(sdoc("id", "RDBQ1_3", "_version_", insVer3)), 
> params(DISTRIB_UPDATE_PARAM, FROM_LEADER));
> ...
> {noformat}
> It might also increase readability, as the versions are generated in the 
> preferred replay order.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10419) Collection CREATE command to use the new Policy syntax for replica placement

2017-04-04 Thread Noble Paul (JIRA)
Noble Paul created SOLR-10419:
-

 Summary: Collection CREATE command to use the new Policy syntax 
for replica placement
 Key: SOLR-10419
 URL: https://issues.apache.org/jira/browse/SOLR-10419
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10383) NPE on debug query in SOLR UI - LTR OriginalScoreFeature

2017-04-04 Thread Vitezslav Zak (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954999#comment-15954999
 ] 

Vitezslav Zak commented on SOLR-10383:
--

Nice work,

thank you too.

Best

> NPE on debug query in SOLR UI - LTR OriginalScoreFeature
> 
>
> Key: SOLR-10383
> URL: https://issues.apache.org/jira/browse/SOLR-10383
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.4.1, 6.5, 6.4.0, 6.4.2
>Reporter: Vitezslav Zak
>Assignee: Christine Poerschke
> Fix For: master (7.0), 6x, 6.5.1
>
> Attachments: SOLR-10383.patch, SOLR-10383.patch, SOLR-10383-prep.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> Hi,
> there is an NPE if I try to debug a query in the Solr UI.
> I'm using LTR for reranking results.
> My features:
> {code}
> {
>   "initArgs":{},
>   "initializedOn":"2017-03-29T05:32:52.160Z",
>   "updatedSinceInit":"2017-03-29T05:56:28.721Z",
>   "managedList":[
> {
>   "name":"documentRecency",
>   "class":"org.apache.solr.ltr.feature.SolrFeature",
>   "params":{"q":"{!func}recip( ms(NOW,initial_release_date), 3.16e-11, 1, 
> 1)"},
>   "store":"_DEFAULT_"},
> {
>   "name":"niceness",
>   "class":"org.apache.solr.ltr.feature.SolrFeature",
>   "params":{"fq":["{!func}recip(niceness, 0.1, 1, 1)"]},
>   "store":"_DEFAULT_"},
> {
>   "name":"originalScore",
>   "class":"org.apache.solr.ltr.feature.OriginalScoreFeature",
>   "params":null,
>   "store":"_DEFAULT_"}]}
> {code}
> My model:
> {code}
> {
>   "initArgs":{},
>   "initializedOn":"2017-03-29T05:32:52.167Z",
>   "updatedSinceInit":"2017-03-29T05:54:26.100Z",
>   "managedList":[{
>   "name":"myModel",
>   "class":"org.apache.solr.ltr.model.LinearModel",
>   "store":"_DEFAULT_",
>   "features":[
> {
>   "name":"documentRecency",
>   "norm":{"class":"org.apache.solr.ltr.norm.IdentityNormalizer"}},
> {
>   "name":"niceness",
>   "norm":{"class":"org.apache.solr.ltr.norm.IdentityNormalizer"}},
> {
>   "name":"originalScore",
>   "norm":{"class":"org.apache.solr.ltr.norm.IdentityNormalizer"}}],
>   "params":{"weights":{
>   "documentRecency":0.1,
>   "niceness":1.0,
>   "originalScore":0.5}}}]}
> {code}
> NPE occurs in this method, where docInfo is null.
> {code:title=OriginalScoreFeature.java}
> @Override
>   public float score() throws IOException {
> // This is done to improve the speed of feature extraction. Since this
> // was already scored in step 1
> // we shouldn't need to calc original score again.
> final DocInfo docInfo = getDocInfo();
> return (docInfo.hasOriginalDocScore() ? docInfo.getOriginalDocScore() 
> : originalScorer.score());
>   }
> {code}
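
A minimal defensive sketch of a fix, assuming the null {{docInfo}} simply means the document was not scored in the rerank phase (the committed SOLR-10383 patch may take a different approach):

{code}
@Override
public float score() throws IOException {
  final DocInfo docInfo = getDocInfo();
  // docInfo can be null when feature extraction runs outside the rerank
  // phase (e.g. a debug request); fall back to the wrapped scorer then.
  if (docInfo != null && docInfo.hasOriginalDocScore()) {
    return docInfo.getOriginalDocScore();
  }
  return originalScorer.score();
}
{code}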



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 805 - Unstable!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/805/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

11 tests failed.
FAILED:  org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration

Error Message:
invalid API spec: apispec/collections.collection.shards.shard.delete.json

Stack Trace:
java.lang.RuntimeException: invalid API spec: 
apispec/collections.collection.shards.shard.delete.json
at 
__randomizedtesting.SeedInfo.seed([D46FA642843BAD8F:6AE4C0EDFD41A3BA]:0)
at 
org.apache.solr.common.util.ValidatingJsonMap.parse(ValidatingJsonMap.java:318)
at org.apache.solr.api.ApiBag.lambda$getSpec$0(ApiBag.java:229)
at org.apache.solr.api.Api.getSpec(Api.java:64)
at org.apache.solr.api.ApiBag.register(ApiBag.java:72)
at org.apache.solr.core.PluginBag.put(PluginBag.java:215)
at org.apache.solr.core.PluginBag.put(PluginBag.java:186)
at 
org.apache.solr.core.CoreContainer.createHandler(CoreContainer.java:1337)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:490)
at 
org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration(ClusterStateUpdateTest.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-10239) MOVEREPLICA API

2017-04-04 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-10239:

Attachment: SOLR-10239.patch

Updated patch for this ticket; it adds the optimization for the HDFS case.
[~shalinmangar] Can you review this patch?

> MOVEREPLICA API
> ---
>
> Key: SOLR-10239
> URL: https://issues.apache.org/jira/browse/SOLR-10239
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Cao Manh Dat
> Attachments: SOLR-10239.patch, SOLR-10239.patch
>
>
> To move a replica from one node to another, there should be an API 
> command. This would be better than having to do ADDREPLICA and DELETEREPLICA.
> The API will look like this:
> {code}
> /admin/collections?action=MOVEREPLICA&collection=collection&shard=shard&replica=replica&fromNode=nodeName&toNode=nodeName
> {code}
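
For illustration, the equivalent SolrJ call could look roughly like this (the collection/shard/replica values and the fromNode/toNode parameter names follow the proposal above and are assumptions until the API lands):

{code}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class MoveReplicaExample {
  public static void main(String[] args) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("action", "MOVEREPLICA");
    params.set("collection", "mycollection");  // hypothetical values
    params.set("shard", "shard1");
    params.set("replica", "core_node2");
    params.set("fromNode", "host1:8983_solr"); // assumed parameter name
    params.set("toNode", "host2:8983_solr");   // assumed parameter name
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      client.request(new GenericSolrRequest(
          SolrRequest.METHOD.GET, "/admin/collections", params));
    }
  }
}
{code}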



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+162) - Build # 3196 - Unstable!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3196/
Java: 32bit/jdk-9-ea+162 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:43973","node_name":"127.0.0.1:43973_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "state":"down",
          "base_url":"http://127.0.0.1:46531",
          "core":"c8n_1x3_lf_shard1_replica1",
          "node_name":"127.0.0.1:46531_"},
        "core_node2":{
          "core":"c8n_1x3_lf_shard1_replica2",
          "base_url":"http://127.0.0.1:43973",
          "node_name":"127.0.0.1:43973_",
          "state":"active",
          "leader":"true"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:35012",
          "node_name":"127.0.0.1:35012_",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica2","base_url":"http://127.0.0.1:43973","node_name":"127.0.0.1:43973_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:46531;,
  "core":"c8n_1x3_lf_shard1_replica1",
  "node_name":"127.0.0.1:46531_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:43973;,
  "node_name":"127.0.0.1:43973_",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:35012;,
  "node_name":"127.0.0.1:35012_",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([6CDDA21991ED6B66:E4899DC33F11069E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Reopened] (SOLR-10338) Configure SecureRandom non blocking for tests.

2017-04-04 Thread Mihaly Toth (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mihaly Toth reopened SOLR-10338:


Until the tests get fixed, I am reopening the issue.

> Configure SecureRandom non blocking for tests.
> --
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch, SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of the random entropy exhaustion issues related to all usages 
> of SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10416) MetricsHandler JSON output still incorrect

2017-04-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-10416.
--
Resolution: Fixed

> MetricsHandler JSON output still incorrect
> --
>
> Key: SOLR-10416
> URL: https://issues.apache.org/jira/browse/SOLR-10416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.6, 6.5.1
>
> Attachments: SOLR-10416.patch
>
>
> SOLR-10269 fixed the individual groups and metrics to use SimpleOrderedMap 
> but the container for those metrics still uses NamedList.
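
For context, a minimal sketch of why the container type matters (illustrative, not the patch): a {{NamedList}} is written to JSON as a flat array of alternating names and values, while a {{SimpleOrderedMap}} is written as an object.

{code}
import org.apache.solr.common.util.NamedList;
import org.apache.solr.common.util.SimpleOrderedMap;

public class MetricsContainerSketch {
  public static NamedList<Object> buildContainer() {
    // NamedList container: rendered as ["solr.jvm", {...}, "solr.node", {...}]
    // SimpleOrderedMap (a NamedList subclass): rendered as
    // {"solr.jvm": {...}, "solr.node": {...}}
    SimpleOrderedMap<Object> container = new SimpleOrderedMap<>();
    container.add("solr.jvm", new SimpleOrderedMap<>());
    container.add("solr.node", new SimpleOrderedMap<>());
    return container;
  }
}
{code}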



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10338) Configure SecureRandom non blocking for tests.

2017-04-04 Thread Mihaly Toth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954874#comment-15954874
 ] 

Mihaly Toth commented on SOLR-10338:


Sorry, next time I will have a closer look at the test failures ...

The assert is indeed not ideal. In its current state it is sensitive to 
changes rather than to actual failures, and now that JDK 9 implements a new 
recommendation, the assert failed without there being a real bug.

The real test is that enough randomness is available within a reasonable time. 
I am now testing a fix locally that takes a 100-byte seed and 500 random bytes 
and checks that the result is available within a few seconds. On my local 
machine this takes 0.3-0.4 seconds, which is negligible compared to the overall 
execution time of a test case. On low-entropy servers this assert will trip if 
run in blocking mode.

An alternative would be to change the assert to check that the algorithm is not 
{{NativePRNG}}. That would be fast but would still not test the real requirement.
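
A sketch of the check being described (the 5-second bound and the standalone structure are illustrative):

{code}
import java.security.SecureRandom;
import java.util.concurrent.TimeUnit;

public class SecureRandomSmokeTest {
  public static void main(String[] args) {
    long start = System.nanoTime();
    SecureRandom random = new SecureRandom();
    random.setSeed(random.generateSeed(100)); // the 100-byte seed from above
    byte[] data = new byte[500];
    random.nextBytes(data);                   // the 500 random bytes
    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    // A blocking, entropy-starved source can hang here far longer than this.
    if (elapsedMs > 5000) {
      throw new AssertionError("SecureRandom blocked for " + elapsedMs + " ms");
    }
  }
}
{code}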

> Configure SecureRandom non blocking for tests.
> --
>
> Key: SOLR-10338
> URL: https://issues.apache.org/jira/browse/SOLR-10338
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mihaly Toth
>Assignee: Mark Miller
> Fix For: master (7.0), 6.6
>
> Attachments: SOLR-10338.patch, SOLR-10338.patch, SOLR-10338.patch
>
>
> It would be best if SecureRandom could be made non blocking. In that case we 
> could get rid of the random entropy exhaustion issues related to all usages 
> of SecureRandom.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10416) MetricsHandler JSON output still incorrect

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954848#comment-15954848
 ] 

ASF subversion and git services commented on SOLR-10416:


Commit 16f2718f850dde675d211503de8d13d462dd4dcb in lucene-solr's branch 
refs/heads/branch_6_5 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=16f2718 ]

SOLR-10416: The JSON output of /admin/metrics is fixed to write the container 
as a map (SimpleOrderedMap) instead of an array (NamedList)

(cherry picked from commit ee98cdc)

(cherry picked from commit 553d9f8)


> MetricsHandler JSON output still incorrect
> --
>
> Key: SOLR-10416
> URL: https://issues.apache.org/jira/browse/SOLR-10416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.6, 6.5.1
>
> Attachments: SOLR-10416.patch
>
>
> SOLR-10269 fixed the individual groups and metrics to use SimpleOrderedMap 
> but the container for those metrics still uses NamedList.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10416) MetricsHandler JSON output still incorrect

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954846#comment-15954846
 ] 

ASF subversion and git services commented on SOLR-10416:


Commit 553d9f88f0946e2ad8eacb4f92d31438aca9d921 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=553d9f8 ]

SOLR-10416: The JSON output of /admin/metrics is fixed to write the container 
as a map (SimpleOrderedMap) instead of an array (NamedList)

(cherry picked from commit ee98cdc)


> MetricsHandler JSON output still incorrect
> --
>
> Key: SOLR-10416
> URL: https://issues.apache.org/jira/browse/SOLR-10416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.6, 6.5.1
>
> Attachments: SOLR-10416.patch
>
>
> SOLR-10269 fixed the individual groups and metrics to use SimpleOrderedMap 
> but the container for those metrics still uses NamedList.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10416) MetricsHandler JSON output still incorrect

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954844#comment-15954844
 ] 

ASF subversion and git services commented on SOLR-10416:


Commit ee98cdc79014af0bd309ab4298fdbaeb38ee402b in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ee98cdc ]

SOLR-10416: The JSON output of /admin/metrics is fixed to write the container 
as a map (SimpleOrderedMap) instead of an array (NamedList)


> MetricsHandler JSON output still incorrect
> --
>
> Key: SOLR-10416
> URL: https://issues.apache.org/jira/browse/SOLR-10416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.6, 6.5.1
>
> Attachments: SOLR-10416.patch
>
>
> SOLR-10269 fixed the individual groups and metrics to use SimpleOrderedMap 
> but the container for those metrics still uses NamedList.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10416) MetricsHandler JSON output still incorrect

2017-04-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-10416:
-
Description: SOLR-10269 fixed the individual groups and metrics to use 
SimpleOrderedMap but the container for those metrics still uses NamedList.  
(was: SOLR-10269 introduced the compact=true param which fixed the individual 
groups and metrics to use SimpleOrderedMap but the container for those metrics 
still uses NamedList.)

> MetricsHandler JSON output still incorrect
> --
>
> Key: SOLR-10416
> URL: https://issues.apache.org/jira/browse/SOLR-10416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.6, 6.5.1
>
> Attachments: SOLR-10416.patch
>
>
> SOLR-10269 fixed the individual groups and metrics to use SimpleOrderedMap 
> but the container for those metrics still uses NamedList.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 6.5.1 release?

2017-04-04 Thread Shalin Shekhar Mangar
I would like to include https://issues.apache.org/jira/browse/SOLR-10416

It is a trivial fix.

On Mon, Apr 3, 2017 at 11:54 PM, Joel Bernstein  wrote:
> SOLR-10404 looks like a nice improvement!
>
> I'll shoot for Friday morning to create the release candidate. I've never
> been a release manager before so I may need some guidance along the way.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Mon, Apr 3, 2017 at 12:21 PM, David Smiley 
> wrote:
>>
>> Found & fixed a bug: https://issues.apache.org/jira/browse/SOLR-10404  I'd
>> like to get this into 6.5.1.  You might be interested in this one Joel.
>>
>> On Mon, Apr 3, 2017 at 11:58 AM Steve Rowe  wrote:
>>>
>>>
>>> > On Apr 3, 2017, at 11:52 AM, Adrien Grand  wrote:
>>> >
>>> > Building the first RC on April 6th sounds good to me! I'm wondering
>>> > whether the 6.5 Jenkins jobs are still running?
>>>
>>> I disabled the ASF Jenkins 6.5 jobs shortly after the release.  FYI you
>>> can see which Lucene/Solr jobs are enabled here:
>>> .  I’ll re-enable the 6.5 jobs
>>> now.
>>>
>>> --
>>> Steve
>>> www.lucidworks.com
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>
>



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3942 - Unstable!

2017-04-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3942/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls

Error Message:
Shard split did not complete. Last recorded state: RUNNING 
expected:<COMPLETED> but was:<RUNNING>

Stack Trace:
java.lang.AssertionError: Shard split did not complete. Last recorded state: 
RUNNING expected:<COMPLETED> but was:<RUNNING>
at 
__randomizedtesting.SeedInfo.seed([C6ED2D39FB6532CA:9E89A158FD0F9A1E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testSolrJAPICalls(CollectionsAPIAsyncDistributedZkTest.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.TestReplicaProperties.test

Error Message:

[jira] [Updated] (SOLR-10416) MetricsHandler JSON output still incorrect

2017-04-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-10416:
-
Attachment: SOLR-10416.patch

Trivial fix with a test that fails without it.

> MetricsHandler JSON output still incorrect
> --
>
> Key: SOLR-10416
> URL: https://issues.apache.org/jira/browse/SOLR-10416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.5
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (7.0), 6.6, 6.5.1
>
> Attachments: SOLR-10416.patch
>
>
> SOLR-10269 introduced the compact=true param which fixed the individual 
> groups and metrics to use SimpleOrderedMap but the container for those 
> metrics still uses NamedList.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7756) Only record the major that was used to create the index rather than the full version

2017-04-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7756.
--
   Resolution: Fixed
Fix Version/s: master (7.0)

Thanks Mike for looking at this large-ish patch!

> Only record the major that was used to create the index rather than the full 
> version
> 
>
> Key: LUCENE-7756
> URL: https://issues.apache.org/jira/browse/LUCENE-7756
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7756.patch, LUCENE-7756.patch, LUCENE-7756.patch
>
>
> LUCENE-7703 added information about the Lucene version that was used to 
> create the index to the segment infos. But since there is a single creation 
> version, it means we need to reject calls to addIndexes that can mix indices 
> that have different creation versions, which might be seen as an important 
> regression by some users. So I have been thinking about only recording the 
> major version that was used to create the index, which is still very valuable 
> information and would allow us to accept calls to addIndexes when all merged 
> indices have the same major version. This looks like a better trade-off to me.
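
Illustratively, the compatibility check in {{addIndexes}} then reduces to comparing majors (a sketch, assuming {{SegmentInfos}} exposes the recorded major version; not the committed code):

{code}
import org.apache.lucene.index.SegmentInfos;

final class MajorVersionCheck {
  // With only the creating major recorded, addIndexes can accept any mix of
  // indices created by the same major release.
  static void checkCompatible(SegmentInfos target, SegmentInfos source) {
    if (target.getIndexCreatedVersionMajor() != source.getIndexCreatedVersionMajor()) {
      throw new IllegalArgumentException("Cannot addIndexes: index created by Lucene "
          + source.getIndexCreatedVersionMajor() + " into index created by Lucene "
          + target.getIndexCreatedVersionMajor());
    }
  }
}
{code}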



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9745) SolrCLI swallows errors from solr.cmd

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954778#comment-15954778
 ] 

ASF subversion and git services commented on SOLR-9745:
---

Commit 4c737b8df9b130cf530d17271576730e21d5b4cc in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c737b8 ]

SOLR-9745: check exit code only if process has finished


> SolrCLI swallows errors from solr.cmd
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: newbie, newdev
> Attachments: SOLR-9745.patch, SOLR-9745.patch
>
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under cygwin, yes)
> * a coolhacker worked around it with cmd /C solr.cmd start -e ..
> * but when SolrCLI runs solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the 
> exception to the console. 
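
A minimal sketch of that idea with Commons Exec (illustrative; the committed SolrCLI change may differ, and the command line is just an example):

{code}
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecuteResultHandler;
import org.apache.commons.exec.DefaultExecutor;

public class RunSolrCmdSketch {
  public static void main(String[] args) throws Exception {
    DefaultExecutor executor = new DefaultExecutor();
    DefaultExecuteResultHandler handler = new DefaultExecuteResultHandler();
    executor.execute(CommandLine.parse("cmd /C solr.cmd start -e techproducts"), handler);
    handler.waitFor();
    // Inspect the exit code only once the process has actually finished,
    // and surface any exception instead of silently swallowing it.
    if (handler.getException() != null) {
      handler.getException().printStackTrace();
    } else if (handler.getExitValue() != 0) {
      System.err.println("solr.cmd exited with code " + handler.getExitValue());
    }
  }
}
{code}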



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9745) SolrCLI swallows errors from solr.cmd

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954761#comment-15954761
 ] 

ASF subversion and git services commented on SOLR-9745:
---

Commit 8b87a474cbf6873935975302dbd856c3cbef53ec in lucene-solr's branch 
refs/heads/branch_6x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8b87a47 ]

SOLR-9745: check exit code only if process has finished


> SolrCLI swallows errors from solr.cmd
> -
>
> Key: SOLR-9745
> URL: https://issues.apache.org/jira/browse/SOLR-9745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.3, master (7.0)
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: newbie, newdev
> Attachments: SOLR-9745.patch, SOLR-9745.patch
>
>
> It occurs in a mad scenario in LUCENE-7534:
> * solr.cmd wasn't granted +x (it happens under cygwin, yes)
> * a coolhacker worked around it with cmd /C solr.cmd start -e ..
> * but when SolrCLI runs solr instances with the same solr.cmd, it just 
> silently fails
> I think we can just pass an ExecuteResultHandler which will dump the 
> exception to the console. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7756) Only record the major that was used to create the index rather than the full version

2017-04-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954749#comment-15954749
 ] 

ASF subversion and git services commented on LUCENE-7756:
-

Commit 23b002a0fdf2f6025f1eb026c0afca247fb21ed0 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=23b002a ]

LUCENE-7756: Only record the major Lucene version that created the index, and 
record the minimum Lucene version that contributed to segments.


> Only record the major that was used to create the index rather than the full 
> version
> 
>
> Key: LUCENE-7756
> URL: https://issues.apache.org/jira/browse/LUCENE-7756
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7756.patch, LUCENE-7756.patch, LUCENE-7756.patch
>
>
> LUCENE-7703 added information about the Lucene version that was used to 
> create the index to the segment infos. But since there is a single creation 
> version, it means we need to reject calls to addIndexes that can mix indices 
> that have different creation versions, which might be seen as an important 
> regression by some users. So I have been thinking about only recording the 
> major version that was used to create the index, which is still very valuable 
> information and would allow us to accept calls to addIndexes when all merged 
> indices have the same major version. This looks like a better trade-off to me.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10418) metrics should return JVM system properties

2017-04-04 Thread Noble Paul (JIRA)
Noble Paul created SOLR-10418:
-

 Summary: metrics should return JVM system properties
 Key: SOLR-10418
 URL: https://issues.apache.org/jira/browse/SOLR-10418
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


We need to stop using the custom solution used in rules and start using metrics 
for everything.
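
For illustration, since Solr's metrics are backed by the Dropwizard library, exposing system properties could be as simple as registering one gauge per property (a sketch; the metric names are assumptions):

{code}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

public class SystemPropertyMetricsSketch {
  public static void registerSystemProperties(MetricRegistry registry) {
    for (String name : System.getProperties().stringPropertyNames()) {
      // Hypothetical key layout; the real naming is up to the patch.
      registry.register("system.properties." + name,
          (Gauge<String>) () -> System.getProperty(name));
    }
  }
}
{code}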




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7639) Use Suffix Arrays for fast search with leading asterisks

2017-04-04 Thread Yakov Sirotkin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954727#comment-15954727
 ] 

Yakov Sirotkin edited comment on LUCENE-7639 at 4/4/17 7:41 AM:


Maybe I have an explanation for why search with a leading asterisk is not easy. 
Let's assume that you have a traditional address book on paper and you are 
looking for someone with the compound surname _Zeta-Jones_. If you forget the 
second part, you can search by _Zeta_ without any problems.
But if you forget the first part, you need to check the whole address book 
looking for _Jones_; in fact, the index is useless in such a case.


was (Author: yasha):
Maybe I have explanation why search with leading asterisk is not easy. Let's 
assume that you have traditional address book on paper and 
your are looking for someone with compound surname Zeta-Jones. If you forget 
the second part you can search by Zeta without any problems.
But if you forget the first part, you need to check the whole address book 
looking for Jones, in fact, index is useless in such case.

> Use Suffix Arrays for fast search with leading asterisks
> 
>
> Key: LUCENE-7639
> URL: https://issues.apache.org/jira/browse/LUCENE-7639
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Yakov Sirotkin
> Attachments: suffix-array-2.patch, suffix-array.patch
>
>
> If a query term starts with an asterisk, the FST checks all words in the 
> dictionary, so request processing speed drops. This problem can be solved 
> with a Suffix Array approach. Luckily, the Suffix Array can be constructed 
> from the existing index after Lucene starts. Unfortunately, Suffix Arrays 
> require a lot of RAM, so we use them only when a special flag is set:
> -Dsolr.suffixArray.enable=true
> It is possible to speed up Suffix Array initialization using several 
> threads, so we can control the number of threads with 
> -Dsolr.suffixArray.initialization_treads_count=5
> This system property can be omitted; the default value is 5.
> The attached patch is the suggested implementation of SuffixArray support; it 
> works for all terms starting with an asterisk that have at least 3 consecutive 
> non-wildcard characters. This patch does not change search results and affects 
> only performance.
> *Update*
> suffix-array-2.patch is an improved version of the first patch; its system 
> properties are the following:
> {{lucene.suffixArray.enable}} - {{true}} if you want to enable Suffix Array 
> support. Default value - {{false}}.
> {{lucene.suffixArray.initializationThreadsCount}} - the number of threads for 
> Suffix Array initialization; if you set {{0}}, no additional threads are used. 
> Default value - {{5}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7639) Use Suffix Arrays for fast search with leading asterisks

2017-04-04 Thread Yakov Sirotkin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954727#comment-15954727
 ] 

Yakov Sirotkin commented on LUCENE-7639:


Maybe I have an explanation for why search with a leading asterisk is not easy. 
Let's assume that you have a traditional address book on paper and you are 
looking for someone with the compound surname Zeta-Jones. If you forget the 
second part, you can search by Zeta without any problems.
But if you forget the first part, you need to check the whole address book 
looking for Jones; in fact, the index is useless in such a case.

> Use Suffix Arrays for fast search with leading asterisks
> 
>
> Key: LUCENE-7639
> URL: https://issues.apache.org/jira/browse/LUCENE-7639
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Yakov Sirotkin
> Attachments: suffix-array-2.patch, suffix-array.patch
>
>
> If a query term starts with an asterisk, the FST checks all words in the 
> dictionary, so request processing speed drops. This problem can be solved 
> with a Suffix Array approach. Luckily, the Suffix Array can be constructed 
> from the existing index after Lucene starts. Unfortunately, Suffix Arrays 
> require a lot of RAM, so we use them only when a special flag is set:
> -Dsolr.suffixArray.enable=true
> It is possible to speed up Suffix Array initialization using several 
> threads, so we can control the number of threads with 
> -Dsolr.suffixArray.initialization_treads_count=5
> This system property can be omitted; the default value is 5.
> The attached patch is the suggested implementation of SuffixArray support; it 
> works for all terms starting with an asterisk that have at least 3 consecutive 
> non-wildcard characters. This patch does not change search results and affects 
> only performance.
> *Update*
> suffix-array-2.patch is an improved version of the first patch; its system 
> properties are the following:
> {{lucene.suffixArray.enable}} - {{true}} if you want to enable Suffix Array 
> support. Default value - {{false}}.
> {{lucene.suffixArray.initializationThreadsCount}} - the number of threads for 
> Suffix Array initialization; if you set {{0}}, no additional threads are used. 
> Default value - {{5}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7639) Use Suffix Arrays for fast search with leading asterisks

2017-04-04 Thread Yakov Sirotkin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954722#comment-15954722
 ] 

Yakov Sirotkin edited comment on LUCENE-7639 at 4/4/17 7:39 AM:


Many thanks to all for the feedback; here is the list of changes in 
suffix-array-2.patch:

1. Suffix Array construction implemented without recursion; this fixes a major 
bug discovered by the {{TestIndexWriter.testWickedLongTerm}} test.
2. Sort wordIds instead of words - the words are already sorted in the index.
3. {{SegmentTermsEnum}} used inside {{ListTermsEnum}}.
4. The entire Suffix Array construction moved to a dedicated thread to avoid 
startup delays.
5. Properties renamed to {{lucene.suffixArray.enable}} and 
{{lucene.suffixArray.initializationThreadsCount}}.
6. If {{lucene.suffixArray.initializationThreadsCount}} is set to {{0}}, 
initialization is synchronous and no additional {{ExecutorService}} is created.
7. {{CompiledAutomaton}} used instead of Java's {{Pattern}}.
8. An additional flag, {{lucene.suffixArray.optimizeForUTF}}, with default value 
{{true}} was added. If it is set to {{false}}, we assume that the index can 
contain any bytes, not necessarily representing UTF-8 characters. In this case 
the code starts to pass some additional tests, but for a real application it 
doubles memory consumption and reduces performance.


was (Author: yasha):
Many thanks to all for feedback, here is the list of changes in 
suffix-array-2.patch:

1. Suffix Array construction implemented without recursion, it fixes major bug 
discovered by {{TestIndexWriter.testWickedLongTerm}} test.
2. Sort wordIds instead of words  - words are already sorted in index. 
3. {{SegmentTermsEnum}} used inside {{ListTermsEnum}}.
4. Entire Suffix Array construction moved to special thread to avoid startup 
delays.
5. Properties renamed to {{lucene.suffixArray.enable}} and 
{{lucene.suffixArray.initializationThreadsCount}}.
6. If {{lucene.suffixArray.initializationThreadsCount}} set to {{0}}, 
initialization is synchronous, additional {{ExecutorService}} is not created.  
7. {{CompiledAutomaton}} used instead of Java's {{Pattern}}.
8. Additional flag {{lucene.suffixArray.optimizeForUTF}} with default value 
{{true}} was added. If it is set to {{false}}, we assume that index can contain 
any bytes,
not necessary representing UTF characters. In this case code starts to pass 
some tests, but for real application it increase memory consumption 
twice and reduce performance. 

> Use Suffix Arrays for fast search with leading asterisks
> 
>
> Key: LUCENE-7639
> URL: https://issues.apache.org/jira/browse/LUCENE-7639
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Yakov Sirotkin
> Attachments: suffix-array-2.patch, suffix-array.patch
>
>
> If a query term starts with an asterisk, the FST checks all words in the 
> dictionary, so request processing speed drops. This problem can be solved 
> with a Suffix Array approach. Luckily, the Suffix Array can be constructed 
> from the existing index after Lucene starts. Unfortunately, Suffix Arrays 
> require a lot of RAM, so we use them only when a special flag is set:
> -Dsolr.suffixArray.enable=true
> It is possible to speed up Suffix Array initialization using several 
> threads, so we can control the number of threads with 
> -Dsolr.suffixArray.initialization_treads_count=5
> This system property can be omitted; the default value is 5.
> The attached patch is the suggested implementation of SuffixArray support; it 
> works for all terms starting with an asterisk that have at least 3 consecutive 
> non-wildcard characters. This patch does not change search results and affects 
> only performance.
> *Update*
> suffix-array-2.patch is an improved version of the first patch; its system 
> properties are the following:
> {{lucene.suffixArray.enable}} - {{true}} if you want to enable Suffix Array 
> support. Default value - {{false}}.
> {{lucene.suffixArray.initializationThreadsCount}} - the number of threads for 
> Suffix Array initialization; if you set {{0}}, no additional threads are used. 
> Default value - {{5}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7639) Use Suffix Arrays for fast search with leading asterisks

2017-04-04 Thread Yakov Sirotkin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954722#comment-15954722
 ] 

Yakov Sirotkin commented on LUCENE-7639:


Many thanks to all for the feedback; here is the list of changes in 
suffix-array-2.patch:

1. Suffix Array construction implemented without recursion; this fixes a major 
bug discovered by the {{TestIndexWriter.testWickedLongTerm}} test.
2. Sort wordIds instead of words - the words are already sorted in the index.
3. {{SegmentTermsEnum}} used inside {{ListTermsEnum}}.
4. The entire Suffix Array construction moved to a dedicated thread to avoid 
startup delays.
5. Properties renamed to {{lucene.suffixArray.enable}} and 
{{lucene.suffixArray.initializationThreadsCount}}.
6. If {{lucene.suffixArray.initializationThreadsCount}} is set to {{0}}, 
initialization is synchronous and no additional {{ExecutorService}} is created.
7. {{CompiledAutomaton}} used instead of Java's {{Pattern}}.
8. An additional flag, {{lucene.suffixArray.optimizeForUTF}}, with default value 
{{true}} was added. If it is set to {{false}}, we assume that the index can 
contain any bytes, not necessarily representing UTF-8 characters. In this case 
the code starts to pass some additional tests, but for a real application it 
doubles memory consumption and reduces performance.
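
To make the suffix-array idea concrete, here is a toy sketch (illustrative only, not the attached patch): every suffix of every term is indexed, so the non-wildcard core of a leading-wildcard query becomes a binary search plus a bounded scan instead of a pass over the whole term dictionary.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class SuffixArraySketch {
  private final String[] terms; // dictionary terms (already sorted in the index)
  private final int[] termId;   // termId[i]: term owning the i-th smallest suffix
  private final int[] offset;   // offset[i]: start of that suffix within its term

  public SuffixArraySketch(String[] sortedTerms) {
    this.terms = sortedTerms;
    List<int[]> suffixes = new ArrayList<>();
    for (int t = 0; t < sortedTerms.length; t++) {
      for (int o = 0; o < sortedTerms[t].length(); o++) {
        suffixes.add(new int[] {t, o});
      }
    }
    suffixes.sort((a, b) -> suffix(a).compareTo(suffix(b)));
    termId = new int[suffixes.size()];
    offset = new int[suffixes.size()];
    for (int i = 0; i < suffixes.size(); i++) {
      termId[i] = suffixes.get(i)[0];
      offset[i] = suffixes.get(i)[1];
    }
  }

  private String suffix(int[] s) {
    return terms[s[0]].substring(s[1]);
  }

  /** Ids of all terms containing {@code infix}, e.g. the core of a *infix* query. */
  public SortedSet<Integer> termsContaining(String infix) {
    SortedSet<Integer> hits = new TreeSet<>();
    for (int i = lowerBound(infix); i < termId.length; i++) {
      if (!terms[termId[i]].startsWith(infix, offset[i])) {
        break; // suffixes are sorted, so the matches form one contiguous run
      }
      hits.add(termId[i]);
    }
    return hits;
  }

  private int lowerBound(String q) {
    int lo = 0, hi = termId.length;
    while (lo < hi) {
      int mid = (lo + hi) >>> 1;
      if (terms[termId[mid]].substring(offset[mid]).compareTo(q) < 0) lo = mid + 1;
      else hi = mid;
    }
    return lo;
  }
}
{code}

A real implementation packs this far more compactly, which is why the patch gates the feature behind the RAM-related flag described in the issue.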

> Use Suffix Arrays for fast search with leading asterisks
> 
>
> Key: LUCENE-7639
> URL: https://issues.apache.org/jira/browse/LUCENE-7639
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Yakov Sirotkin
> Attachments: suffix-array-2.patch, suffix-array.patch
>
>
> If a query term starts with an asterisk, the FST checks all words in the 
> dictionary, so request processing speed drops. This problem can be solved 
> with a Suffix Array approach. Luckily, the Suffix Array can be constructed 
> from the existing index after Lucene starts. Unfortunately, Suffix Arrays 
> require a lot of RAM, so we use them only when a special flag is set:
> -Dsolr.suffixArray.enable=true
> It is possible to speed up Suffix Array initialization using several 
> threads, so we can control the number of threads with 
> -Dsolr.suffixArray.initialization_treads_count=5
> This system property can be omitted; the default value is 5.
> The attached patch is the suggested implementation of SuffixArray support; it 
> works for all terms starting with an asterisk that have at least 3 consecutive 
> non-wildcard characters. This patch does not change search results and affects 
> only performance.
> *Update*
> suffix-array-2.patch is an improved version of the first patch; its system 
> properties are the following:
> {{lucene.suffixArray.enable}} - {{true}} if you want to enable Suffix Array 
> support. Default value - {{false}}.
> {{lucene.suffixArray.initializationThreadsCount}} - the number of threads for 
> Suffix Array initialization; if you set {{0}}, no additional threads are used. 
> Default value - {{5}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7639) Use Suffix Arrays for fast search with leading asterisks

2017-04-04 Thread Yakov Sirotkin (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakov Sirotkin updated LUCENE-7639:
---
Description: 
If a query term starts with an asterisk, the FST checks all words in the 
dictionary, so request processing speed drops. This problem can be solved with 
a Suffix Array approach. Luckily, the Suffix Array can be constructed from the 
existing index after Lucene starts. Unfortunately, Suffix Arrays require a lot 
of RAM, so we use them only when a special flag is set:

-Dsolr.suffixArray.enable=true

It is possible to speed up Suffix Array initialization using several threads, 
so we can control the number of threads with 

-Dsolr.suffixArray.initialization_treads_count=5

This system property can be omitted; the default value is 5.

The attached patch is the suggested implementation of SuffixArray support; it 
works for all terms starting with an asterisk that have at least 3 consecutive 
non-wildcard characters. This patch does not change search results and affects 
only performance.

*Update*
suffix-array-2.patch is an improved version of the first patch; its system 
properties are the following:

{{lucene.suffixArray.enable}} - {{true}} if you want to enable Suffix Array 
support. Default value - {{false}}.
{{lucene.suffixArray.initializationThreadsCount}} - the number of threads for 
Suffix Array initialization; if you set {{0}}, no additional threads are used. 
Default value - {{5}}.

  was:
If query term starts with asterisks FST checks all words in the dictionary so 
request processing speed falls down. This problem can be solved with Suffix 
Array approach. Luckily, Suffix Array can be constructed after Lucene start 
from existing index. Unfortunately, Suffix Arrays requires a lot of RAM so we 
can use it only when special flag is set:

-Dsolr.suffixArray.enable=true

It is possible to  speed up Suffix Array initialization using several threads, 
so we can control number of threads with 

-Dsolr.suffixArray.initialization_treads_count=5

This system property can be omitted, the default value is 5.  

Attached patch is the suggested implementation for SuffixArray support, it 
works for all terms starting with asterisks with at least 3 consequent 
non-wildcard characters. This patch do not change search results and  affects 
only performance issues.

*Update*
suffix-arra-2.patch is improved version of patch, system properties for it are 
following::

{{lucene.suffixArray.enable}} - {{true}}, if you want to enable Suffix Array 
support. Default value - {{false}}.
{{lucene.suffixArray.initializationThreadsCount}} - number of threads for 
Suffix Array initialization, if you set {{0}} - no additional threads used. 
Default value - {{5}}.


> Use Suffix Arrays for fast search with leading asterisks
> 
>
> Key: LUCENE-7639
> URL: https://issues.apache.org/jira/browse/LUCENE-7639
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Yakov Sirotkin
> Attachments: suffix-array-2.patch, suffix-array.patch
>
>
> If a query term starts with an asterisk, the FST checks all words in the 
> dictionary, so request processing speed drops. This problem can be solved 
> with a Suffix Array approach. Luckily, the Suffix Array can be constructed 
> from the existing index after Lucene starts. Unfortunately, Suffix Arrays 
> require a lot of RAM, so we use them only when a special flag is set:
> -Dsolr.suffixArray.enable=true
> It is possible to speed up Suffix Array initialization using several 
> threads, so we can control the number of threads with 
> -Dsolr.suffixArray.initialization_treads_count=5
> This system property can be omitted; the default value is 5.
> The attached patch is the suggested implementation of SuffixArray support; it 
> works for all terms starting with an asterisk that have at least 3 consecutive 
> non-wildcard characters. This patch does not change search results and affects 
> only performance.
> *Update*
> suffix-array-2.patch is an improved version of the first patch; its system 
> properties are the following:
> {{lucene.suffixArray.enable}} - {{true}} if you want to enable Suffix Array 
> support. Default value - {{false}}.
> {{lucene.suffixArray.initializationThreadsCount}} - the number of threads for 
> Suffix Array initialization; if you set {{0}}, no additional threads are used. 
> Default value - {{5}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



  1   2   >