[jira] [Commented] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283175#comment-16283175
 ] 

ASF subversion and git services commented on LUCENE-8081:
-

Commit b32739428be0a357a61b7506ca36af3c85b6f236 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b327394 ]

LUCENE-8081: Fix javadoc tag.


> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8081.patch, LUCENE-8081.patch, LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.
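The proposal above could look roughly like the following stand-alone sketch. All names here (FlushCoordinator, maxQueuedFlushes) are hypothetical illustrations, not the actual IndexWriterConfig API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the LUCENE-8081 idea: indexing threads normally help
// flush, but an expert opt-out flag lets them skip unless flushing falls behind.
class FlushCoordinator {
    private final boolean indexingThreadsHelpFlush; // the proposed IWC expert flag
    private final int maxQueuedFlushes;             // "falling behind" threshold
    private final AtomicInteger queuedFlushes = new AtomicInteger();

    FlushCoordinator(boolean indexingThreadsHelpFlush, int maxQueuedFlushes) {
        this.indexingThreadsHelpFlush = indexingThreadsHelpFlush;
        this.maxQueuedFlushes = maxQueuedFlushes;
    }

    /** A DWPT filled up and was queued for flushing. */
    void onSegmentFull() {
        queuedFlushes.incrementAndGet();
    }

    /** Called by an indexing thread: should it help flush right now? */
    boolean shouldHelpFlush() {
        if (indexingThreadsHelpFlush) {
            // today's behavior: always help whenever anything is queued
            return queuedFlushes.get() > 0;
        }
        // opted out: only help once flushing has fallen behind
        return queuedFlushes.get() > maxQueuedFlushes;
    }
}
```

With the flag disabled, an indexing thread only pitches in once the flush queue exceeds the threshold; otherwise flushing stays on dedicated threads.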



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283174#comment-16283174
 ] 

ASF subversion and git services commented on LUCENE-8081:
-

Commit 027a6edb59fde11ee1704e1df57a37c4b7fb0f94 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=027a6ed ]

LUCENE-8081: Fix javadoc tag.


> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8081.patch, LUCENE-8081.patch, LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.






Re: [JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 98 - Still Failing

2017-12-07 Thread Adrien Grand
I pushed a fix for this typo in the javadoc tag.
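For context, `@lucene.experimental` is Lucene's custom javadoc tag for unstable APIs; the build below failed because of the misspelling `@lucene.eperimental`, which javadoc rejects as an unknown tag. A minimal illustration (not the actual LiveIndexWriterConfig source):

```java
// Illustrative only: Lucene marks unstable APIs with a custom javadoc tag.
// The smoke-test failure came from the misspelling "eperimental", which the
// javadoc tool reports as "unknown tag" and which fails the build.
public class TagDemo {
    /**
     * Expert: example of a method carrying the custom tag.
     *
     * @lucene.experimental
     */
    public int answer() {
        return 42;
    }
}
```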

On Fri, Dec 8, 2017 at 06:02, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/98/
>
> No tests ran.
>
> Build Log:
> [...truncated 7100 lines...]
>   [javadoc] Generating Javadoc
>   [javadoc] Javadoc execution
>   [javadoc] Loading source files for package org.apache.lucene...
>   [javadoc] Loading source files for package org.apache.lucene.analysis...
>   [javadoc] Loading source files for package
> org.apache.lucene.analysis.standard...
>   [javadoc] Loading source files for package
> org.apache.lucene.analysis.tokenattributes...
>   [javadoc] Loading source files for package org.apache.lucene.codecs...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.blocktree...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.compressing...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.lucene50...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.lucene60...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.lucene62...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.lucene70...
>   [javadoc] Loading source files for package
> org.apache.lucene.codecs.perfield...
>   [javadoc] Loading source files for package org.apache.lucene.document...
>   [javadoc] Loading source files for package org.apache.lucene.geo...
>   [javadoc] Loading source files for package org.apache.lucene.index...
>   [javadoc] Loading source files for package org.apache.lucene.search...
>   [javadoc] Loading source files for package
> org.apache.lucene.search.similarities...
>   [javadoc] Loading source files for package
> org.apache.lucene.search.spans...
>   [javadoc] Loading source files for package org.apache.lucene.store...
>   [javadoc] Loading source files for package org.apache.lucene.util...
>   [javadoc] Loading source files for package
> org.apache.lucene.util.automaton...
>   [javadoc] Loading source files for package org.apache.lucene.util.bkd...
>   [javadoc] Loading source files for package org.apache.lucene.util.fst...
>   [javadoc] Loading source files for package
> org.apache.lucene.util.graph...
>   [javadoc] Loading source files for package
> org.apache.lucene.util.mutable...
>   [javadoc] Loading source files for package
> org.apache.lucene.util.packed...
>   [javadoc] Constructing Javadoc information...
>   [javadoc] Standard Doclet version 1.8.0_144
>   [javadoc] Building tree for all the packages and classes...
>   [javadoc] /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java:435: error: unknown tag: lucene.eperimental
>   [javadoc]    * @lucene.eperimental
>   [javadoc]      ^
>   [javadoc] /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java:448: error: unknown tag: lucene.eperimental
>   [javadoc]    * @lucene.eperimental
>   [javadoc]      ^
>   [javadoc] Building index for all the packages and classes...
>   [javadoc] Building index for all classes...
>   [javadoc] Generating
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/docs/core/help-doc.html...
>   [javadoc] 2 errors
>
> BUILD FAILED
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:615:
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:793:
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/core/build.xml:54:
> The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:2213:
> Javadoc returned 1
>
> Total time: 6 minutes 5 seconds
> Build step 'Invoke Ant' marked build as failure
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>


Re: [JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 332 - Failure!

2017-12-07 Thread Adrien Grand
I removed the unused import.

On Fri, Dec 8, 2017 at 07:56, Policeman Jenkins Server wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/332/
> Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC
>
> All tests passed
>
> Build Log:
> [...truncated 48586 lines...]
> -ecj-javadoc-lint-src:
> [mkdir] Created dir:
> /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1611069326
>  [ecj-lint] Compiling 188 source files to
> /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1611069326
>  [ecj-lint] --
>  [ecj-lint] 1. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/analysis/BaseTokenStreamTestCase.java
> (at line 801)
>  [ecj-lint] ts = a.tokenStream("dummy", useCharFilter ? new
> MockCharFilter(reader, remainder) : reader);
>  [ecj-lint]
>  
> ^^^
>  [ecj-lint] Resource leak: 'ts' is not closed at this location
>  [ecj-lint] --
>  [ecj-lint] 2. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/analysis/BaseTokenStreamTestCase.java
> (at line 837)
>  [ecj-lint] reader = new MockReaderWrapper(random, reader);
>  [ecj-lint] ^^
>  [ecj-lint] Resource leak: 'reader' is not closed at this location
>  [ecj-lint] --
>  [ecj-lint] 3. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/analysis/BaseTokenStreamTestCase.java
> (at line 913)
>  [ecj-lint] reader = new MockReaderWrapper(random, reader);
>  [ecj-lint] ^^
>  [ecj-lint] Resource leak: 'reader' is not closed at this location
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 4. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/codecs/mockrandom/MockRandomPostingsFormat.java
> (at line 260)
>  [ecj-lint] throw new AssertionError();
>  [ecj-lint] ^^^
>  [ecj-lint] Resource leak: 'postingsWriter' is not closed at this location
>  [ecj-lint] --
>  [ecj-lint] 5. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/codecs/mockrandom/MockRandomPostingsFormat.java
> (at line 393)
>  [ecj-lint] throw new AssertionError();
>  [ecj-lint] ^^^
>  [ecj-lint] Resource leak: 'postingsReader' is not closed at this location
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 6. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/geo/BaseGeoPointTestCase.java
> (at line 1271)
>  [ecj-lint] RandomIndexWriter writer = new RandomIndexWriter(random(),
> dir, iwc);
>  [ecj-lint]   ^^
>  [ecj-lint] Resource leak: 'writer' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 7. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BaseCompressingDocValuesFormatTestCase.java
> (at line 47)
>  [ecj-lint] final IndexWriter iwriter = new IndexWriter(dir, iwc);
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'iwriter' is never closed
>  [ecj-lint] --
>  [ecj-lint] 8. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BaseCompressingDocValuesFormatTestCase.java
> (at line 81)
>  [ecj-lint] final IndexWriter iwriter = new IndexWriter(dir, iwc);
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'iwriter' is never closed
>  [ecj-lint] --
>  [ecj-lint] 9. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BaseCompressingDocValuesFormatTestCase.java
> (at line 108)
>  [ecj-lint] final IndexWriter iwriter = new IndexWriter(dir, iwc);
>  [ecj-lint]   ^^^
>  [ecj-lint] Resource leak: 'iwriter' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 10. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BasePointsFormatTestCase.java
> (at line 622)
>  [ecj-lint] w = new RandomIndexWriter(random(), dir, iwc);
>  [ecj-lint] ^
>  [ecj-lint] Resource leak: 'w' is not closed at this location
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 11. WARNING in
> /Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BasePostingsFormatTestCase.java
> (at line 314)
>  [ecj-lint] Analyzer analyzer = new MockAnalyzer(random());
>  [ecj-lint]  
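The ecj-lint warnings quoted above all report one pattern: a Closeable created on a path where it may escape without being closed. The standard remedy is try-with-resources, which closes the resource on every exit path. A minimal self-contained sketch (StubStream is a stand-in, not a Lucene class):

```java
// Stand-in for a TokenStream/Reader-like resource that must be closed.
class StubStream implements AutoCloseable {
    boolean closed = false;
    String next() { return "token"; }
    @Override public void close() { closed = true; }
}

class LeakDemo {
    static StubStream lastStream; // exposed so callers can verify it was closed

    static String consume() {
        // try-with-resources guarantees close() runs even if next() throws,
        // which is exactly what the "Resource leak" warnings ask for.
        try (StubStream ts = new StubStream()) {
            lastStream = ts;
            return ts.next();
        }
    }
}
```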

[jira] [Commented] (LUCENE-4100) Maxscore - Efficient Scoring

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283172#comment-16283172
 ] 

ASF subversion and git services commented on LUCENE-4100:
-

Commit 0e1d6682d6ca66590e279ee0c4ccce745f2accd6 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0e1d668 ]

LUCENE-4100: Fix more queries to implement the new updated createWeight API.


> Maxscore - Efficient Scoring
> 
>
> Key: LUCENE-4100
> URL: https://issues.apache.org/jira/browse/LUCENE-4100
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs, core/query/scoring, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Stefan Pohl
>  Labels: api-change, gsoc2014, patch, performance
> Fix For: master (8.0)
>
> Attachments: LUCENE-4100.patch, LUCENE-4100.patch, LUCENE-4100.patch, 
> LUCENE-4100.patch, contrib_maxscore.tgz, maxscore.patch
>
>
> At Berlin Buzzwords 2012, I will be presenting 'maxscore', an efficient 
> algorithm first published in the IR domain in 1995 by H. Turtle & J. Flood, 
> which I find deserves more attention among Lucene users (and developers).
> I implemented a proof of concept and ran performance measurements with 
> example queries and lucenebench, Mike McCandless's benchmarking package, 
> which showed very significant speedups.
> This ticket is to start the discussion on including the implementation in 
> Lucene's codebase. Because the technique requires awareness from the Lucene 
> user/developer, it seems best shipped as a contrib/module package so that it 
> can be chosen consciously.
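One ingredient of the MAXSCORE technique (Turtle & Flood, 1995) can be sketched in a few lines: with per-term score upper bounds sorted ascending, a prefix of terms is "non-essential" once the sum of their bounds stays below the current top-k threshold, so documents matching only those terms can be skipped without scoring. This toy partition function is illustrative only and unrelated to the attached patches:

```java
class MaxScoreSketch {
    /**
     * Given per-term maximum score contributions sorted ascending and the
     * current top-k score threshold, return the index of the first
     * "essential" term. Documents that match only terms before this index
     * cannot reach the threshold and need not be scored.
     */
    static int firstEssential(double[] sortedMaxScores, double threshold) {
        double prefixSum = 0;
        for (int i = 0; i < sortedMaxScores.length; i++) {
            prefixSum += sortedMaxScores[i];
            if (prefixSum >= threshold) {
                return i; // terms [0, i) are non-essential
            }
        }
        return sortedMaxScores.length; // no new document can beat the threshold
    }
}
```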






[jira] [Commented] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283167#comment-16283167
 ] 

ASF subversion and git services commented on LUCENE-8081:
-

Commit fb80264e42c8dfc7e6d2127fe945e8d14b182971 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fb80264 ]

LUCENE-8081: Remove unused import.


> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8081.patch, LUCENE-8081.patch, LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.






[jira] [Commented] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283168#comment-16283168
 ] 

ASF subversion and git services commented on LUCENE-8081:
-

Commit d5c72eb5887fe3d399908c4accf453b7a7b339ab in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d5c72eb ]

LUCENE-8081: Remove unused import.


> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8081.patch, LUCENE-8081.patch, LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.






[jira] [Resolved] (SOLR-11520) Suggestions for cores violations

2017-12-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11520.
---
Resolution: Fixed

> Suggestions for cores violations
> 
>
> Key: SOLR-11520
> URL: https://issues.apache.org/jira/browse/SOLR-11520
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>







[jira] [Resolved] (SOLR-11538) Implement suggestions for port,ip_*/, nodeRole,sysprop.*, metrics:*

2017-12-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11538.
---
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> Implement suggestions for port,ip_*/, nodeRole,sysprop.*, metrics:*
> ---
>
> Key: SOLR-11538
> URL: https://issues.apache.org/jira/browse/SOLR-11538
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>







[jira] [Resolved] (SOLR-11524) Create a autoscaling/suggestions API end-point

2017-12-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11524.
---
   Resolution: Fixed
Fix Version/s: master (8.0)

> Create a autoscaling/suggestions API end-point
> --
>
> Key: SOLR-11524
> URL: https://issues.apache.org/jira/browse/SOLR-11524
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>







[jira] [Resolved] (SOLR-11519) Suggestions for replica count violations

2017-12-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11519.
---
   Resolution: Fixed
Fix Version/s: 7.2

> Suggestions for replica count violations
> 
>
> Key: SOLR-11519
> URL: https://issues.apache.org/jira/browse/SOLR-11519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>







[jira] [Updated] (SOLR-11518) Create suggestions for freedisk violations

2017-12-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11518:
--
Fix Version/s: 7.2

> Create suggestions for freedisk violations
> --
>
> Key: SOLR-11518
> URL: https://issues.apache.org/jira/browse/SOLR-11518
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>







[jira] [Resolved] (SOLR-11518) Create suggestions for freedisk violations

2017-12-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11518.
---
Resolution: Fixed

> Create suggestions for freedisk violations
> --
>
> Key: SOLR-11518
> URL: https://issues.apache.org/jira/browse/SOLR-11518
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>







[jira] [Commented] (SOLR-11359) An autoscaling/suggestions endpoint to recommend operations

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283151#comment-16283151
 ] 

ASF subversion and git services commented on SOLR-11359:


Commit 3431f3a9f3aed7c421d49baec65cb3eb816a4dd8 in lucene-solr's branch 
refs/heads/branch_7_2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3431f3a ]

SOLR-11359: added documentation


> An autoscaling/suggestions endpoint to recommend operations
> ---
>
> Key: SOLR-11359
> URL: https://issues.apache.org/jira/browse/SOLR-11359
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-11359.patch
>
>
> Autoscaling can make suggestions to users on what operations they can perform 
> to improve the health of the cluster
> The suggestions will have the following information
> * http end point
> * http method (POST,DELETE)
> * command payload
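The three fields listed above might serialize as something like the following; the field names and command shown are purely illustrative, not the actual API response:

```json
{
  "suggestion": {
    "endpoint": "/api/cluster/command",
    "method": "POST",
    "payload": {
      "move-replica": {
        "replica": "core_node3",
        "targetNode": "node2:8983_solr"
      }
    }
  }
}
```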






[jira] [Commented] (SOLR-11359) An autoscaling/suggestions endpoint to recommend operations

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283149#comment-16283149
 ] 

ASF subversion and git services commented on SOLR-11359:


Commit 48c0798b395f32632f7ac88ab31c6132c13c6dc3 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=48c0798 ]

SOLR-11359: added documentation


> An autoscaling/suggestions endpoint to recommend operations
> ---
>
> Key: SOLR-11359
> URL: https://issues.apache.org/jira/browse/SOLR-11359
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-11359.patch
>
>
> Autoscaling can make suggestions to users on what operations they can perform 
> to improve the health of the cluster
> The suggestions will have the following information
> * http end point
> * http method (POST,DELETE)
> * command payload






[jira] [Comment Edited] (SOLR-11685) CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with leader mismatch

2017-12-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283140#comment-16283140
 ] 

Varun Thacker edited comment on SOLR-11685 at 12/8/17 7:19 AM:
---

Analysis from jenkins_master_7045.log

L20542: Test CollectionsAPIDistributedZkTest.testCollectionsAPI starts at line 
20542
Question: Why is halfcollectionblocker being deleted after this test has 
started and not before?
L20563: create collection awhollynewcollection_0 with 4 shards and 1 replica
L20746: ChaosMonkey monkey: stop jetty! 49379
L20774: This shows that the jetty that was shut down has 
core=awhollynewcollection_0_shard3_replica_n4
L20781: ChaosMonkey monkey: starting jetty! 49379
L20859: Exception causing close of session 0x1603371da360011 due to 
java.io.IOException: ZooKeeperServer not running / Watch limit violations
Question: Why are we hitting a watcher limit?
L20889: Restarting zookeeper
L20915: An add request comes in "ClusterState says we are the leader, but 
locally we don't think so" for awhollynewcollection_0_shard3_replica_n4


Presumably, when CloudSolrClient sent the request, 
awhollynewcollection_0_shard3_replica_n4 was still the leader of shard3. After 
the restart it hasn't become leader again yet, but there are no other replicas. 

CloudSolrClient should catch this exception, since its local cache might be 
stale, refresh its state, and retry the add operation. Today CloudSolrClient 
only retries in {{requestWithRetryOnStaleState}} when {{wasCommError}} is true, 
but DistributedUpdateProcessor#doDefensiveChecks throws this as a plain 
SolrException. We should throw a distinct exception on which the client can 
retry the operation.




> CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with 
> leader mismatch
> ---
>
> Key: SOLR-11685
> URL: https://issues.apache.org/jira/browse/SOLR-11685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: jenkins_7x_257.log, jenkins_master_7045.log
>
>
> I've been noticing lots of failures on Jenkins where the document add gets 
> rejected because of a leader conflict and throws an error like 
> {code}
> ClusterState says we are the leader 
> (https://127.0.0.1:38715/solr/awhollynewcollection_0_shard2_replica_n2), but 
> locally we don't think so. Request came from null
> {code}
> Scanning Jenkins logs I see that these failures have increased since Sept 
> 28th and have been failing daily.






[jira] [Commented] (SOLR-11685) CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with leader mismatch

2017-12-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283148#comment-16283148
 ] 

Varun Thacker commented on SOLR-11685:
--

{code}
if ((isLeader && !localIsLeader) || (isSubShardLeader && !localIsLeader)) {
  log.error("ClusterState says we are the leader, but locally we don't think so");
+  if (from == null) {
+    // from == null means the client itself sent the request. If we aren't the
+    // leader (the client has stale info), return an exception on which the
+    // client can retry.
+  }
  throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE,
      "ClusterState says we are the leader (" + zkController.getBaseUrl()
      + "/" + req.getCore().getName() + "), but locally we don't think so. Request came from " + from);
}
{code}

> CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with 
> leader mismatch
> ---
>
> Key: SOLR-11685
> URL: https://issues.apache.org/jira/browse/SOLR-11685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: jenkins_7x_257.log, jenkins_master_7045.log
>
>
> I've been noticing lots of failures on Jenkins where the document add gets 
> rejected because of a leader conflict and throws an error like 
> {code}
> ClusterState says we are the leader 
> (https://127.0.0.1:38715/solr/awhollynewcollection_0_shard2_replica_n2), but 
> locally we don't think so. Request came from null
> {code}
> Scanning Jenkins logs I see that these failures have increased since Sept 
> 28th and have been failing daily.






[jira] [Updated] (SOLR-11685) CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with leader mismatch

2017-12-07 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11685:
-
Attachment: jenkins_master_7045.log

Analysis from jenkins_master_7045.log

L20542: Test CollectionsAPIDistributedZkTest.testCollectionsAPI starts at line 
20542
Question: Why is halfcollectionblocker being deleted after this test has 
started and not before?
L20563: create collection awhollynewcollection_0 with 4 shards and 1 replica
L20746: ChaosMonkey monkey: stop jetty! 49379
L20774: This shows that the jetty that was shut down has 
core=awhollynewcollection_0_shard3_replica_n4
L20781: ChaosMonkey monkey: starting jetty! 49379
L20859: Exception causing close of session 0x1603371da360011 due to 
java.io.IOException: ZooKeeperServer not running/Watch limit violations
L20889: Restarting zookeeper
L20915: An add request comes in "ClusterState says we are the leader, but 
locally we don't think so" for awhollynewcollection_0_shard3_replica_n4


Presumably when CloudSolrClient sent the request, 
awhollynewcollection_0_shard3_replica_n4 was the leader of shard3. After the 
restart it hasn't become leader again yet, but there are no other replicas. 

CloudSolrClient should catch this exception, as its local cache might not be 
the most up to date, refresh its state, and retry the add operation. Today 
CloudSolrClient retries in {{requestWithRetryOnStaleState}} when 
{{wasCommError}} is true. DistributedUpdateProcessor#doDefensiveChecks throws 
this as a plain SolrException. We should throw it as a distinct exception on 
which we can retry the operation.
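The retry idea can be sketched as follows. This is not the actual CloudSolrClient code: {{StaleStateException}}, {{withRetry}}, and the refresh hook are hypothetical placeholders for the distinct exception type and the cluster-state refresh proposed above.

```java
import java.util.function.Supplier;

public class StaleStateRetrySketch {

    /** Hypothetical distinct exception for "ClusterState says we are the
     *  leader, but locally we don't think so", instead of the plain
     *  SolrException thrown by DistributedUpdateProcessor#doDefensiveChecks. */
    static class StaleStateException extends RuntimeException {}

    /** Run the request; on a stale-state rejection, refresh the cached
     *  cluster state and retry, analogous to what
     *  requestWithRetryOnStaleState already does when wasCommError is true. */
    static <T> T withRetry(Supplier<T> request, Runnable refreshClusterState,
                           int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            try {
                return request.get();
            } catch (StaleStateException e) {
                if (attempt >= maxRetries) throw e;
                refreshClusterState.run(); // local cache may be outdated
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // First two attempts hit the stale-state rejection, third succeeds.
        String result = withRetry(() -> {
            if (calls[0]++ < 2) throw new StaleStateException();
            return "added";
        }, () -> {}, 5);
        System.out.println(result); // prints "added"
    }
}
```

In the real client the refresh hook would invalidate the cached collection state so the next attempt re-reads leadership from ZooKeeper rather than the stale cache.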


> CollectionsAPIDistributedZkTest.testCollectionsAPI fails regularly with 
> leader mismatch
> ---
>
> Key: SOLR-11685
> URL: https://issues.apache.org/jira/browse/SOLR-11685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: jenkins_7x_257.log, jenkins_master_7045.log
>
>
> I've been noticing lots of failures on Jenkins where the document add gets 
> rejected because of a leader conflict and throws an error like 
> {code}
> ClusterState says we are the leader 
> (https://127.0.0.1:38715/solr/awhollynewcollection_0_shard2_replica_n2), but 
> locally we don't think so. Request came from null
> {code}
> Scanning the Jenkins logs I see that these failures have increased since Sept 
> 28th and have been failing daily.






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 332 - Failure!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/332/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 48586 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1611069326
 [ecj-lint] Compiling 188 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1611069326
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/analysis/BaseTokenStreamTestCase.java
 (at line 801)
 [ecj-lint] ts = a.tokenStream("dummy", useCharFilter ? new 
MockCharFilter(reader, remainder) : reader);
 [ecj-lint] 
^^^
 [ecj-lint] Resource leak: 'ts' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/analysis/BaseTokenStreamTestCase.java
 (at line 837)
 [ecj-lint] reader = new MockReaderWrapper(random, reader);
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/analysis/BaseTokenStreamTestCase.java
 (at line 913)
 [ecj-lint] reader = new MockReaderWrapper(random, reader);
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/codecs/mockrandom/MockRandomPostingsFormat.java
 (at line 260)
 [ecj-lint] throw new AssertionError();
 [ecj-lint] ^^^
 [ecj-lint] Resource leak: 'postingsWriter' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/codecs/mockrandom/MockRandomPostingsFormat.java
 (at line 393)
 [ecj-lint] throw new AssertionError();
 [ecj-lint] ^^^
 [ecj-lint] Resource leak: 'postingsReader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/geo/BaseGeoPointTestCase.java
 (at line 1271)
 [ecj-lint] RandomIndexWriter writer = new RandomIndexWriter(random(), dir, 
iwc);
 [ecj-lint]   ^^
 [ecj-lint] Resource leak: 'writer' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BaseCompressingDocValuesFormatTestCase.java
 (at line 47)
 [ecj-lint] final IndexWriter iwriter = new IndexWriter(dir, iwc);
 [ecj-lint]   ^^^
 [ecj-lint] Resource leak: 'iwriter' is never closed
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BaseCompressingDocValuesFormatTestCase.java
 (at line 81)
 [ecj-lint] final IndexWriter iwriter = new IndexWriter(dir, iwc);
 [ecj-lint]   ^^^
 [ecj-lint] Resource leak: 'iwriter' is never closed
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BaseCompressingDocValuesFormatTestCase.java
 (at line 108)
 [ecj-lint] final IndexWriter iwriter = new IndexWriter(dir, iwc);
 [ecj-lint]   ^^^
 [ecj-lint] Resource leak: 'iwriter' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BasePointsFormatTestCase.java
 (at line 622)
 [ecj-lint] w = new RandomIndexWriter(random(), dir, iwc);
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BasePostingsFormatTestCase.java
 (at line 314)
 [ecj-lint] Analyzer analyzer = new MockAnalyzer(random());
 [ecj-lint]  
 [ecj-lint] Resource leak: 'analyzer' is never closed
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/lucene/test-framework/src/java/org/apache/lucene/index/BasePostingsFormatTestCase.java
 (at line 539)
 [ecj-lint] 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21051 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21051/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseSerialGC

33 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamingTest.testZeroParallelReducerStream

Error Message:
java.util.concurrent.ExecutionException: java.io.IOException: --> 
http://127.0.0.1:33495/solr/streams_shard2_replica_n3/:java.util.concurrent.ExecutionException:
 java.io.IOException: --> 
http://127.0.0.1:33495/solr/streams_shard2_replica_n3/:Query  does not 
implement createWeight

Stack Trace:
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
http://127.0.0.1:33495/solr/streams_shard2_replica_n3/:java.util.concurrent.ExecutionException:
 java.io.IOException: --> 
http://127.0.0.1:33495/solr/streams_shard2_replica_n3/:Query  does not 
implement createWeight
at 
__randomizedtesting.SeedInfo.seed([53875C7BFCC2EF91:1D7AB1F9E064BAFE]:0)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:400)
at 
org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:275)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.getTuples(StreamingTest.java:2359)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.testZeroParallelReducerStream(StreamingTest.java:1933)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Why IndexFetcher commits after deleting obsolete index?

2017-12-07 Thread Mikhail Khludnev
I mean this row
https://github.com/apache/lucene-solr/blame/master/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java#L458

Sure, "New index in Master. Deleting mine..." makes sense. But committing
right after that is not really deterministic. E.g. if this empty slave commit is
delayed, it's assigned a late timestamp, and the master commit is then ignored
as old.

Don't you think it's a problem (at least for test stability)?

The possible solutions are
 - pass an earlier timestamp to the commit explicitly; I'm not sure if that's
even possible;
 - or always set skipCommitOnMasterVersionZero=true, or at least make it the default.

More details at https://issues.apache.org/jira/browse/SOLR-11673
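The race can be made concrete with a minimal sketch. The version values and the newest-wins comparison below are simplified, illustrative stand-ins, not the actual IndexFetcher logic:

```java
public class StaleEmptyCommitSketch {

    /** Newest-wins check: replication from the master is skipped when its
     *  commit timestamp is not newer than the local one. */
    static boolean shouldFetchFromMaster(long masterCommitTime, long localCommitTime) {
        return masterCommitTime > localCommitTime;
    }

    public static void main(String[] args) {
        long masterCommitTime = 1000;     // master committed at t=1000
        long slaveEmptyCommitTime = 1500; // delayed empty commit after "Deleting mine..."
        // The delayed empty commit outruns the master's timestamp, so the
        // master's index is wrongly skipped as "old":
        System.out.println(shouldFetchFromMaster(masterCommitTime, slaveEmptyCommitTime)); // prints false
    }
}
```

Skipping the empty commit (the skipCommitOnMasterVersionZero option) avoids ever stamping the slave with that late timestamp.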
-- 
Sincerely yours
Mikhail Khludnev


[jira] [Commented] (SOLR-11359) An autoscaling/suggestions endpoint to recommend operations

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283124#comment-16283124
 ] 

ASF subversion and git services commented on SOLR-11359:


Commit 24a0708d3c65138ecdee77edd7ce7e08e7e19c75 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=24a0708 ]

SOLR-11359: added documentation


> An autoscaling/suggestions endpoint to recommend operations
> ---
>
> Key: SOLR-11359
> URL: https://issues.apache.org/jira/browse/SOLR-11359
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-11359.patch
>
>
> Autoscaling can make suggestions to users on what operations they can perform 
> to improve the health of the cluster
> The suggestions will have the following information
> * http end point
> * http method (POST,DELETE)
> * command payload






[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs

2017-12-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283114#comment-16283114
 ] 

Varun Thacker commented on SOLR-11718:
--

We should do the following here:

- remove the buffer from the sample solrconfig for both source and target. Add a 
deprecation warning in the ref guide for the enable-buffer and disable-buffer 
APIs
- change the default from enabled to disabled in the code
- leave a comment in CdcrRequestHandler 
handleEnableBufferAction/handleDisableBufferAction to remove it in 8.0. 
We could perhaps remove it earlier as well (not sure), but we don't need to 
tackle that in this Jira

> Deprecate CDCR Buffer APIs
> --
>
> Key: SOLR-11718
> URL: https://issues.apache.org/jira/browse/SOLR-11718
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.1
>Reporter: Amrit Sarkar
> Fix For: 7.2
>
> Attachments: SOLR-11652.patch
>
>
> Kindly see the discussion on SOLR-11652.
> Today, if we look at the current CDCR documentation page, buffering is "disabled" 
> by default in both source and target. We don't see any purpose served by CDCR 
> buffering, and it is quite an overhead considering it can take a lot of heap 
> space (tlog pointers) and cause forever retention of tlogs on disk when enabled. 
> Also today, even if we disable the buffer via the API on the source, considering 
> it was enabled at startup, tlogs are never purged on the leader node of the 
> shards of the source; refer to SOLR-11652






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4317 - Still unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4317/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

15 tests failed.
FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:
--> http://127.0.0.1:54804/mx_zek/a/collection1_shard2_replica_n41:Failed to 
execute sqlQuery 'select id, field_i, str_s from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc' against 
JDBC connection 'jdbc:calcitesolr:'. Error while executing SQL "select id, 
field_i, str_s from collection1 where (text='()' OR text='') AND 
text='' order by field_i desc": java.io.IOException: 
java.util.concurrent.ExecutionException: java.io.IOException: --> 
http://127.0.0.1:54823/mx_zek/a/collection1_shard2_replica_n45/:Query  does not 
implement createWeight

Stack Trace:
java.io.IOException: --> 
http://127.0.0.1:54804/mx_zek/a/collection1_shard2_replica_n41:Failed to 
execute sqlQuery 'select id, field_i, str_s from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc' against 
JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id, field_i, str_s from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc": 
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
http://127.0.0.1:54823/mx_zek/a/collection1_shard2_replica_n45/:Query  does not 
implement createWeight
at 
__randomizedtesting.SeedInfo.seed([1C00DDBC4E31B23C:BB446518238AA185]:0)
at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:222)
at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2522)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:124)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:82)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 28 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/28/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.TestCollectionAPI.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:43451/eo/gc

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:43451/eo/gc
at 
__randomizedtesting.SeedInfo.seed([E99EC0F5751D6FB6:61CAFF2FDBE1024E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 98 - Still Failing

2017-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/98/

No tests ran.

Build Log:
[...truncated 7100 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene...
  [javadoc] Loading source files for package org.apache.lucene.analysis...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.standard...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.blocktree...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.compressing...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene50...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene60...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene62...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene70...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.geo...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.bkd...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.graph...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.8.0_144
  [javadoc] Building tree for all the packages and classes...
  [javadoc] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java:435:
 error: unknown tag: lucene.eperimental
  [javadoc]* @lucene.eperimental
  [javadoc]  ^
  [javadoc] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java:448:
 error: unknown tag: lucene.eperimental
  [javadoc]* @lucene.eperimental
  [javadoc]  ^
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/docs/core/help-doc.html...
  [javadoc] 2 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:615: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:793:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/core/build.xml:54:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:2213:
 Javadoc returned 1

Total time: 6 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-Tests-7.x - Build # 275 - Failure

2017-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/275/

11 tests failed.
FAILED:  org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddShard

Error Message:
Could not load collection from ZK: TooManyReplicasWhenAddingShards

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
TooManyReplicasWhenAddingShards
at 
__randomizedtesting.SeedInfo.seed([CB2D73DD88063B72:EF68157CD013E539]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1123)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:648)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:110)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:247)
at 
org.apache.solr.cloud.CollectionTooManyReplicasTest.getAllNodeNames(CollectionTooManyReplicasTest.java:217)
at 
org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddShard(CollectionTooManyReplicasTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 956 - Still Failing!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/956/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=20473, name=searcherExecutor-5945-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=20473, name=searcherExecutor-5945-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([52E42445FD98FFF3]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=20473, name=searcherExecutor-5945-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=20473, name=searcherExecutor-5945-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([52E42445FD98FFF3]:0)


FAILED:  org.apache.solr.core.TestLazyCores.testNoCommit

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([52E42445FD98FFF3:8D84859436BF9C56]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:901)
at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:847)
at 
org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:829)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 

[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk-9.0.1) - Build # 6 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/6/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_E116A0ADA16C2345-001\4.0.0.2-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_E116A0ADA16C2345-001\4.0.0.2-cfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_E116A0ADA16C2345-001\4.0.0.2-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_E116A0ADA16C2345-001\4.0.0.2-cfs-001

at __randomizedtesting.SeedInfo.seed([E116A0ADA16C2345]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.lucene50.TestLucene60FieldInfoFormat

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001\justSoYouGetSomeChannelErrors-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001\justSoYouGetSomeChannelErrors-001

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001\justSoYouGetSomeChannelErrors-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001\justSoYouGetSomeChannelErrors-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestLucene60FieldInfoFormat_500FECA5C0BAB3AA-001

at __randomizedtesting.SeedInfo.seed([500FECA5C0BAB3AA]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-07 Thread Karthik Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283005#comment-16283005
 ] 

Karthik Ramachandran commented on SOLR-11331:
-

Yes, you will see the changes without running any ant tasks. Even while 
debugging, the changes are hot-swapped and you can see them take effect. The 
only time you need to run "ant eclipse" again is when you change or upgrade a 
dependency version.

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
> Attachments: SOLR-11331.patch
>
>
> Ability to Debug Solr With Eclipse IDE






[jira] [Commented] (SOLR-11702) Redesign current LIR implementation

2017-12-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282999#comment-16282999
 ] 

Cao Manh Dat commented on SOLR-11702:
-

[~mdrob] That's right. I borrowed the idea of terms from Raft. Any replica can 
update its own term to match the leader's term, but only the leader can 
increase the terms of other replicas.
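As a rough illustration of the scheme described above (the class and method 
names below are hypothetical, not the actual SOLR-11702 patch, where the terms 
live in ZooKeeper), the term bookkeeping might look like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of Raft-style per-replica terms; purely illustrative,
// not Solr's implementation.
class ShardTerms {
    private final Map<String, Long> terms = new ConcurrentHashMap<>();

    ShardTerms(String... replicas) {
        for (String r : replicas) terms.put(r, 0L);
    }

    // Any replica may raise its own term up to the leader's term
    // (e.g. after finishing recovery) -- but never beyond it.
    void catchUpWithLeader(String replica, String leader) {
        terms.merge(replica, terms.getOrDefault(leader, 0L), Long::max);
    }

    // Only the leader increases the terms of other replicas: when an update
    // fails on one replica, everyone else moves ahead of it.
    void leaderMarksBehind(String failedReplica) {
        terms.replaceAll((r, t) -> r.equals(failedReplica) ? t : t + 1);
    }

    // A replica is in sync (and eligible to lead) only if no other
    // replica holds a higher term.
    boolean isInSync(String replica) {
        long t = terms.getOrDefault(replica, 0L);
        return terms.values().stream().allMatch(v -> v <= t);
    }
}
```

A failed replica keeps its old, smaller term, so it remains visibly behind 
until it recovers and catches up with the leader's term.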

> Redesign current LIR implementation
> ---
>
> Key: SOLR-11702
> URL: https://issues.apache.org/jira/browse/SOLR-11702
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11702.patch
>
>
> I recently looked into a problem related to racing between LIR and 
> Recovering. I would like to propose a totally new approach to solving the 
> SOLR-5495 problem, because fixing the current implementation with a bandage 
> will lead us to other problems (we cannot prove the correctness of the implementation).
> Feel free to give comments/thoughts about this new scheme.
> https://docs.google.com/document/d/1dM2GKMULsS45ZMuvtztVnM2m3fdUeRYNCyJorIIisEo/edit?usp=sharing






[jira] [Commented] (SOLR-11702) Redesign current LIR implementation

2017-12-07 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282992#comment-16282992
 ] 

Mike Drob commented on SOLR-11702:
--

Ooooh, good approach. This is similar in concept to how Raft works, I think.

One thing that is unclear from the design doc (I haven't looked at the code 
yet) is who updates the ZK terms when a replica joins recovery. Is that the 
result of the leader acknowledging the PrepRecoveryCmd?

> Redesign current LIR implementation
> ---
>
> Key: SOLR-11702
> URL: https://issues.apache.org/jira/browse/SOLR-11702
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11702.patch
>
>
> I recently looked into a problem related to racing between LIR and 
> Recovering. I would like to propose a totally new approach to solving the 
> SOLR-5495 problem, because fixing the current implementation with a bandage 
> will lead us to other problems (we cannot prove the correctness of the implementation).
> Feel free to give comments/thoughts about this new scheme.
> https://docs.google.com/document/d/1dM2GKMULsS45ZMuvtztVnM2m3fdUeRYNCyJorIIisEo/edit?usp=sharing






[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-07 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282967#comment-16282967
 ] 

David Smiley commented on SOLR-11331:
-

I want to be sure I get this. Let's say sometime after "ant eclipse" runs, you 
go and edit some code, perhaps to print something to stdout. Of course it 
needs to be compiled, and I assume Eclipse takes care of that automatically. 
Can you then run Solr in Eclipse and observe the effects of the code change you 
just made, without executing any further ant tasks? If so, I'd love to improve 
the IntelliJ config similarly.

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
> Attachments: SOLR-11331.patch
>
>
> Ability to Debug Solr With Eclipse IDE






[jira] [Comment Edited] (SOLR-11702) Redesign current LIR implementation

2017-12-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282963#comment-16282963
 ] 

Cao Manh Dat edited comment on SOLR-11702 at 12/8/17 3:13 AM:
--

A patch for this ticket, implemented according to the design and including the 
fix for SOLR-10525 by [~mdrob]. It will need more tests, but all current tests 
pass.


was (Author: caomanhdat):
A patch for this ticket, implemented according to the design. It will need 
more tests, but all current tests pass.

> Redesign current LIR implementation
> ---
>
> Key: SOLR-11702
> URL: https://issues.apache.org/jira/browse/SOLR-11702
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11702.patch
>
>
> I recently looked into a problem related to racing between LIR and 
> Recovering. I would like to propose a totally new approach to solving the 
> SOLR-5495 problem, because fixing the current implementation with a bandage 
> will lead us to other problems (we cannot prove the correctness of the implementation).
> Feel free to give comments/thoughts about this new scheme.
> https://docs.google.com/document/d/1dM2GKMULsS45ZMuvtztVnM2m3fdUeRYNCyJorIIisEo/edit?usp=sharing






[jira] [Updated] (SOLR-11702) Redesign current LIR implementation

2017-12-07 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11702:

Attachment: SOLR-11702.patch

A patch for this ticket, implemented according to the design. It will need 
more tests, but all current tests pass.

> Redesign current LIR implementation
> ---
>
> Key: SOLR-11702
> URL: https://issues.apache.org/jira/browse/SOLR-11702
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11702.patch
>
>
> I recently looked into a problem related to racing between LIR and 
> Recovering. I would like to propose a totally new approach to solving the 
> SOLR-5495 problem, because fixing the current implementation with a bandage 
> will lead us to other problems (we cannot prove the correctness of the implementation).
> Feel free to give comments/thoughts about this new scheme.
> https://docs.google.com/document/d/1dM2GKMULsS45ZMuvtztVnM2m3fdUeRYNCyJorIIisEo/edit?usp=sharing






[jira] [Commented] (SOLR-9555) Leader incorrectly publishes state for replica when it puts replica into LIR.

2017-12-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282965#comment-16282965
 ] 

Cao Manh Dat commented on SOLR-9555:


[~mdrob] [~markrmil...@gmail.com]  please take a look at my recent patch for 
SOLR-11702. I think that is the safer/better way to solve this problem.

> Leader incorrectly publishes state for replica when it puts replica into LIR.
> -
>
> Key: SOLR-9555
> URL: https://issues.apache.org/jira/browse/SOLR-9555
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9555-WIP-2.patch, SOLR-9555-WIP-3.patch, 
> SOLR-9555-WIP.patch, SOLR-9555.patch, SOLR-9555.patch, SOLR-9555.patch, 
> lir-9555-problem.png
>
>
> See 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17888/consoleFull 
> for an example






[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 27 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/27/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=16814, name=jetty-launcher-3891-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=16804, name=jetty-launcher-3891-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=16814, name=jetty-launcher-3891-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21050 - Still unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21050/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

29 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.graph.GraphExpressionTest

Error Message:
37 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.graph.GraphExpressionTest: 1) 
Thread[id=1473, 
name=GatherNodesStream-356-thread-1-SendThread(127.0.0.1:42787), 
state=TIMED_WAITING, group=TGRP-GraphExpressionTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)2) 
Thread[id=1479, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=1452, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=1478, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=1447, name=zkConnectionManagerCallback-341-thread-1, state=WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074)
 at 
java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)6) 
Thread[id=1465, name=GatherNodesStream-349-thread-1-EventThread, state=WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074)
 at 
java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)7) 
Thread[id=1463, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)8) 
Thread[id=1499, name=zkCallback-359-thread-2, state=TIMED_WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10-ea/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)9) 
Thread[id=1450, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-GraphExpressionTest] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)   10) 
Thread[id=1495, name=zkCallback-340-thread-2, state=TIMED_WAITING, 

[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-12-07 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282829#comment-16282829
 ] 

Anshum Gupta commented on SOLR-11126:
-

FYI, the test doesn't work because of the changed path. I'll fix that and 
update the test.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)
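A minimal sketch of what such a node-level health response could look like. 
This is purely illustrative: the class name, method, and JSON-like response 
shape below are assumptions, not the attached patch.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical node-level health check: reports healthy only while the
// node can reach ZooKeeper. Illustrative only; not Solr's actual handler.
class NodeHealthReport {
    static Map<String, Object> build(boolean connectedToZk) {
        Map<String, Object> response = new LinkedHashMap<>();
        response.put("status", connectedToZk ? "healthy" : "unhealthy");
        response.put("timestamp", System.currentTimeMillis());
        return response;
    }
}
```

A load balancer could poll the endpoint and take the node out of rotation 
whenever the status is anything but "healthy".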






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 333 - Still Failing!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/333/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001\benchmark-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001\benchmark-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001\benchmark-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001\benchmark-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\benchmark\test\J0\temp\lucene.benchmark.byTask.tasks.WriteEnwikiLineDocTaskTest_3023BBCD49E2BFA0-001

at __randomizedtesting.SeedInfo.seed([3023BBCD49E2BFA0]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportInnerEntity

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_1DB77C1F2BD7A2FE-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_1DB77C1F2BD7A2FE-001\tempDir-001\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_1DB77C1F2BD7A2FE-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_1DB77C1F2BD7A2FE-001\tempDir-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_1DB77C1F2BD7A2FE-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_1DB77C1F2BD7A2FE-001\tempDir-001\collection1
   

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7045 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7045/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

18 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestDemoParallelLeafReader

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_62E9F4E449DFF8E1-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_62E9F4E449DFF8E1-001\tempDir-004
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_62E9F4E449DFF8E1-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_62E9F4E449DFF8E1-001\tempDir-004

at __randomizedtesting.SeedInfo.seed([62E9F4E449DFF8E1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestSimpleFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestSimpleFSDirectory_62E9F4E449DFF8E1-001\testInts-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestSimpleFSDirectory_62E9F4E449DFF8E1-001\testInts-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestSimpleFSDirectory_62E9F4E449DFF8E1-001\testInts-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestSimpleFSDirectory_62E9F4E449DFF8E1-001\testInts-001

at __randomizedtesting.SeedInfo.seed([62E9F4E449DFF8E1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:49379/solr/awhollynewcollection_0_shard3_replica_n4: 
ClusterState says we are the leader 
(http://127.0.0.1:49379/solr/awhollynewcollection_0_shard3_replica_n4), but 
locally we don't 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21049 - Failure!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21049/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

33 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.graph.GraphTest

Error Message:
9 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.graph.GraphTest:
   1) Thread[id=590, name=Connection evictor, state=TIMED_WAITING, group=TGRP-GraphTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:748)
   2) Thread[id=584, name=ShortestPathStream-94-thread-1-SendThread(127.0.0.1:37323), state=TIMED_WAITING, group=TGRP-GraphTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
        at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
   3) Thread[id=583, name=Connection evictor, state=TIMED_WAITING, group=TGRP-GraphTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:748)
   4) Thread[id=589, name=Connection evictor, state=TIMED_WAITING, group=TGRP-GraphTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.lang.Thread.run(Thread.java:748)
   5) Thread[id=595, name=zkCallback-97-thread-1, state=TIMED_WAITING, group=TGRP-GraphTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   6) Thread[id=602, name=zkCallback-97-thread-3, state=TIMED_WAITING, group=TGRP-GraphTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   7) Thread[id=585, name=ShortestPathStream-94-thread-1-EventThread, state=WAITING, group=TGRP-GraphTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
   8) Thread[id=601, name=zkCallback-97-thread-2, state=TIMED_WAITING, group=TGRP-GraphTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   9) Thread[id=586, name=zkConnectionManagerCallback-98-thread-1, state=WAITING, group=TGRP-GraphTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at

[jira] [Commented] (LUCENE-4100) Maxscore - Efficient Scoring

2017-12-07 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282738#comment-16282738
 ] 

Steve Rowe commented on LUCENE-4100:


More Solr tests than usual are failing after the {{4fc5a872d}} commit on this 
issue, e.g. {{TestHashQParserPlugin.testHashPartition}}, which fails for me 
with any seed.  From 
[https://builds.apache.org/job/Lucene-Solr-Tests-master/2210/]:

{noformat}
Checking out Revision 5448274f26191a9882aa5c3020e3cbdcbf93551c 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestHashQParserPlugin -Dtests.method=testHashPartition 
-Dtests.seed=B372ECC8951DB18F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=cs -Dtests.timezone=Asia/Jayapura -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   3.01s J0 | TestHashQParserPlugin.testHashPartition <<<
   [junit4]> Throwable #1: java.lang.UnsupportedOperationException: Query  
does not implement createWeight
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B372ECC8951DB18F:62CFCD2DAE7708E4]:0)
   [junit4]>at 
org.apache.lucene.search.Query.createWeight(Query.java:66)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:734)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:724)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1062)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1540)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1416)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:583)
   [junit4]>at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
   [junit4]>at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:375)
   [junit4]>at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
   [junit4]>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
   [junit4]>at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
   [junit4]>at 
org.apache.solr.util.TestHarness.query(TestHarness.java:337)
   [junit4]>at 
org.apache.solr.util.TestHarness.query(TestHarness.java:319)
   [junit4]>at 
org.apache.solr.search.TestHashQParserPlugin.testHashPartition(TestHashQParserPlugin.java:89)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [junit4]   2> NOTE: test params are: codec=Lucene70, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@68e611ad),
 locale=cs, timezone=Asia/Jayapura
   [junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
1.8.0_144 (64-bit)/cpus=4,threads=1,free=284221600,total=403177472
{noformat} 

My browser says that {{does not implement createWeight}} occurs 48 times in the 
[https://builds.apache.org/job/Lucene-Solr-Tests-master/2210/consoleText], so 
this is a problem for several tests.

> Maxscore - Efficient Scoring
> 
>
> Key: LUCENE-4100
> URL: https://issues.apache.org/jira/browse/LUCENE-4100
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs, core/query/scoring, core/search
>Affects Versions: 4.0-ALPHA
>Reporter: Stefan Pohl
>  Labels: api-change, gsoc2014, patch, performance
> Fix For: master (8.0)
>
> Attachments: LUCENE-4100.patch, LUCENE-4100.patch, LUCENE-4100.patch, 
> LUCENE-4100.patch, contrib_maxscore.tgz, maxscore.patch
>
>
> At Berlin Buzzwords 2012, I will be presenting 'maxscore', an efficient 
> algorithm first published in the IR domain in 1995 by H. Turtle & J. Flood, 
> that I find deserves more attention among Lucene users (and developers).
> I implemented a proof of concept and did some performance measurements with 
> example queries and lucenebench, the package of Mike McCandless, resulting in 
> very significant speedups.
> This ticket is to start the discussion on including the implementation in 
> Lucene's codebase. Because the technique requires awareness from the Lucene 
> user/developer, it seems best suited to a contrib/module package, so that 
> using it is a conscious choice.






[jira] [Created] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-07 Thread Emerson Castaneda (JIRA)
Emerson Castaneda created LUCENE-8085:
-

 Summary: Extend Ant / Ivy configuration to retrieve sources and 
javadoc for dependencies in order to be accessible during development
 Key: LUCENE-8085
 URL: https://issues.apache.org/jira/browse/LUCENE-8085
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/other, general/build
Affects Versions: 7.1
Reporter: Emerson Castaneda
Priority: Minor



Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes






[jira] [Updated] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-07 Thread Emerson Castaneda (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emerson Castaneda updated LUCENE-8085:
--
Description: 
It would be useful to set up the required configuration for Ant / Ivy to 
automatically retrieve javadocs and sources for dependencies, so that they do 
not have to be attached manually, avoiding this situation:



!lucene_dependencies.jpg|thumbnail!

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes

  was:

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes


> Extend Ant / Ivy configuration to retrieve sources and javadoc for 
> dependencies in order to be accessible during development
> 
>
> Key: LUCENE-8085
> URL: https://issues.apache.org/jira/browse/LUCENE-8085
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Affects Versions: 7.1
>Reporter: Emerson Castaneda
>Priority: Minor
> Attachments: lucene_dependencies.PNG
>
>
> It would be useful to set up the required configuration for Ant / Ivy to 
> automatically retrieve javadocs and sources for dependencies, so that they 
> do not have to be attached manually, avoiding this situation:
> !lucene_dependencies.jpg|thumbnail!
> Start point:
> Ref: Ivy: How to Retrieve Source Codes of Dependencies 
> https://dzone.com/articles/ivy-how-retrieve-source-codes






[jira] [Updated] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-07 Thread Emerson Castaneda (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emerson Castaneda updated LUCENE-8085:
--
Description: 
It would be useful to set up the required configuration for Ant / Ivy to 
automatically retrieve javadocs and sources for dependencies, so that they do 
not have to be attached manually, avoiding this situation:


!lucene_dependencies.PNG!


*Start point:*

Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes

  was:
it would be useful seting the required configuration, for Ant / Ivy in order to 
recover automatically javadocs and sources for dependencies, so you have no to 
attach those manually avoiding this situation:



!lucene_dependencies.PNG!

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes


> Extend Ant / Ivy configuration to retrieve sources and javadoc for 
> dependencies in order to be accessible during development
> 
>
> Key: LUCENE-8085
> URL: https://issues.apache.org/jira/browse/LUCENE-8085
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Affects Versions: 7.1
>Reporter: Emerson Castaneda
>Priority: Minor
> Attachments: lucene_dependencies.PNG
>
>
> It would be useful to set up the required configuration for Ant / Ivy to 
> automatically retrieve javadocs and sources for dependencies, so that they 
> do not have to be attached manually, avoiding this situation:
> !lucene_dependencies.PNG!
> *Start point:*
> Ref: Ivy: How to Retrieve Source Codes of Dependencies 
> https://dzone.com/articles/ivy-how-retrieve-source-codes






[jira] [Updated] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-07 Thread Emerson Castaneda (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emerson Castaneda updated LUCENE-8085:
--
Description: 
It would be useful to set up the required configuration for Ant / Ivy to 
automatically retrieve javadocs and sources for dependencies, so that they do 
not have to be attached manually, avoiding this situation:



!lucene_dependencies.PNG|thumbnail!

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes

  was:
it would be useful seting the required configuration, for Ant / Ivy in order to 
recover automatically javadocs and sources for dependencies, so you have no to 
attach those manually avoiding this situation:



!lucene_dependencies.jpg|thumbnail!

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes


> Extend Ant / Ivy configuration to retrieve sources and javadoc for 
> dependencies in order to be accessible during development
> 
>
> Key: LUCENE-8085
> URL: https://issues.apache.org/jira/browse/LUCENE-8085
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Affects Versions: 7.1
>Reporter: Emerson Castaneda
>Priority: Minor
> Attachments: lucene_dependencies.PNG
>
>
> It would be useful to set up the required configuration for Ant / Ivy to 
> automatically retrieve javadocs and sources for dependencies, so that they 
> do not have to be attached manually, avoiding this situation:
> !lucene_dependencies.PNG|thumbnail!
> Start point:
> Ref: Ivy: How to Retrieve Source Codes of Dependencies 
> https://dzone.com/articles/ivy-how-retrieve-source-codes
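
For reference, retrieving `sources` and `javadoc` artifacts through Ivy is 
typically done with Maven classifiers. Below is a minimal, hypothetical 
`ivy.xml` sketch along the lines of the dzone article linked above; the 
`commons-io` dependency is only an illustration, not one of Lucene's actual 
dependencies:

```xml
<!-- Hypothetical ivy.xml sketch: pull the "sources" and "javadoc" classifier
     jars alongside the main jar.  The Maven namespace (xmlns:m) must be
     declared on the root element for m:classifier to work. -->
<ivy-module version="2.0" xmlns:m="http://ant.apache.org/ivy/maven">
  <info organisation="org.example" module="dev-deps"/>
  <dependencies>
    <dependency org="commons-io" name="commons-io" rev="2.5">
      <artifact name="commons-io" type="jar" ext="jar"/>
      <artifact name="commons-io" type="source" ext="jar" m:classifier="sources"/>
      <artifact name="commons-io" type="javadoc" ext="jar" m:classifier="javadoc"/>
    </dependency>
  </dependencies>
</ivy-module>
```

An `<ivy:retrieve/>` would then place all three jars side by side in the lib 
directory, where IDEs can attach the `-sources` and `-javadoc` jars by naming 
convention.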






[JENKINS] Solr-Artifacts-7.x - Build # 105 - Failure

2017-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-7.x/105/

No tests ran.

Build Log:
[...truncated 7959 lines...]
  [javadoc] Generating Javadoc
  [javadoc] Javadoc execution
  [javadoc] Loading source files for package org.apache.lucene...
  [javadoc] Loading source files for package org.apache.lucene.analysis...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.standard...
  [javadoc] Loading source files for package 
org.apache.lucene.analysis.tokenattributes...
  [javadoc] Loading source files for package org.apache.lucene.codecs...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.blocktree...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.compressing...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene50...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene60...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene62...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.lucene70...
  [javadoc] Loading source files for package 
org.apache.lucene.codecs.perfield...
  [javadoc] Loading source files for package org.apache.lucene.document...
  [javadoc] Loading source files for package org.apache.lucene.geo...
  [javadoc] Loading source files for package org.apache.lucene.index...
  [javadoc] Loading source files for package org.apache.lucene.search...
  [javadoc] Loading source files for package 
org.apache.lucene.search.similarities...
  [javadoc] Loading source files for package org.apache.lucene.search.spans...
  [javadoc] Loading source files for package org.apache.lucene.store...
  [javadoc] Loading source files for package org.apache.lucene.util...
  [javadoc] Loading source files for package org.apache.lucene.util.automaton...
  [javadoc] Loading source files for package org.apache.lucene.util.bkd...
  [javadoc] Loading source files for package org.apache.lucene.util.fst...
  [javadoc] Loading source files for package org.apache.lucene.util.graph...
  [javadoc] Loading source files for package org.apache.lucene.util.mutable...
  [javadoc] Loading source files for package org.apache.lucene.util.packed...
  [javadoc] Constructing Javadoc information...
  [javadoc] Standard Doclet version 1.8.0_152
  [javadoc] Building tree for all the packages and classes...
  [javadoc] 
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java:435:
 error: unknown tag: lucene.eperimental
  [javadoc]* @lucene.eperimental
  [javadoc]  ^
  [javadoc] 
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/core/src/java/org/apache/lucene/index/LiveIndexWriterConfig.java:448:
 error: unknown tag: lucene.eperimental
  [javadoc]* @lucene.eperimental
  [javadoc]  ^
  [javadoc] Building index for all the packages and classes...
  [javadoc] Building index for all classes...
  [javadoc] Generating 
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/build/docs/core/help-doc.html...
  [javadoc] 2 errors

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:549: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/build.xml:451: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/solr/test-framework/build.xml:97:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:567:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:562:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:793:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/core/build.xml:54:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Solr-Artifacts-7.x/lucene/common-build.xml:2213:
 Javadoc returned 1

Total time: 8 minutes 42 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
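
The root cause of the two javadoc errors above is a misspelled custom tag: 
`lucene.eperimental` instead of `lucene.experimental` (this is what the 
LUCENE-8081 follow-up commit "Fix javadoc tag" at the top of this digest 
corrects; javadoc reports any tag not registered via its `-tag` option as 
unknown). A sketch of the corrected doc comment follows; the class wrapper and 
method body are illustrative, not the actual LiveIndexWriterConfig source:

```java
// Illustrative stand-in for the LiveIndexWriterConfig accessor whose doc
// comment carried the misspelled tag; only the tag spelling is the point here.
public class LiveIndexWriterConfigSketch {
  /**
   * Expert: returns whether indexing threads also check for pending flushes
   * on update (see LUCENE-8081).
   *
   * @lucene.experimental
   */
  public boolean isCheckPendingFlushOnUpdate() {
    return true; // placeholder body for this sketch
  }
}
```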


[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-07 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282711#comment-16282711
 ] 

Scott Blum commented on SOLR-11423:
---

I didn't resolve due to Noble's #comment-16203208 but I have no objection to 
resolving.  I dropped it on master because I wasn't sure what branches we'd 
want to backport to.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue a item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?
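
The proposed hard cap amounts to a size check at enqueue time that rejects new 
items instead of letting the queue grow without bound. A minimal, hypothetical 
sketch of that behavior (class and method names are made up for illustration 
and are not Solr's actual overseer queue API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical sketch of a hard-capped state queue: offer() throws once the
 *  queue holds maxSize items, so clients fail fast instead of flooding ZK. */
class BoundedStateQueue {
    private final Deque<String> items = new ArrayDeque<>();
    private final int maxSize;

    BoundedStateQueue(int maxSize) {
        this.maxSize = maxSize;
    }

    /** Enqueues an item, or throws if the queue is already at capacity. */
    void offer(String item) {
        if (items.size() >= maxSize) {
            throw new IllegalStateException(
                "Queue is full (" + maxSize + " items); rejecting enqueue");
        }
        items.addLast(item);
    }

    int size() {
        return items.size();
    }
}
```

With a cap of 10,000 as suggested, a client hitting the limit would get an 
immediate exception rather than silently piling up duplicate state updates.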






[JENKINS] Lucene-Solr-Tests-7.2 - Build # 3 - Unstable

2017-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.2/3/

6 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Could not load collection from ZK: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
withShardField
at 
__randomizedtesting.SeedInfo.seed([91C5CAEB023EC8A7:C4952279AEC70757]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1123)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:648)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:848)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Comment Edited] (SOLR-11730) Test NodeLost / NodeAdded dynamics

2017-12-07 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282027#comment-16282027
 ] 

Andrzej Bialecki  edited comment on SOLR-11730 at 12/7/17 6:44 PM:
---

Simulations indicate that even with significant flakiness (with outages lasting 
up to {{waitFor + cooldown}}) the framework may not take any actions if other 
events are happening too: even if a {{nodeLost}} trigger creates an event, that 
event may still be discarded due to the cooldown period, and by the time the 
cooldown period has passed the flaky node may be back up again, so the event 
would not be generated again.


was (Author: ab):
Simulations indicate that even with significant flakiness the framework may not 
take any actions if there are other events happening too, because even if a 
node lost trigger creates an event that event may be discarded due to the 
cooldown period. And after the cooldown period has passed the flaky node may be 
back up again, so the event would not be generated again.

> Test NodeLost / NodeAdded dynamics
> --
>
> Key: SOLR-11730
> URL: https://issues.apache.org/jira/browse/SOLR-11730
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>
> Let's consider a "flaky node" scenario.
> A node is going up and down at short intervals (e.g. due to a flaky network 
> cable). If the frequency of these events coincides with the {{waitFor}} 
> interval in the {{nodeLost}} trigger configuration, the node may never be 
> reported to the autoscaling framework as lost. Similarly, it may never be 
> reported as added back if it's lost again within the {{waitFor}} period of 
> the {{nodeAdded}} trigger.
> Other scenarios are possible here too, depending on timing:
> * the node being constantly reported as lost
> * the node being constantly reported as added
> One possible solution for the autoscaling triggers is for the framework to 
> keep a short-term ({{waitFor * 2}} long?) memory of the node state that the 
> trigger is tracking, in order to eliminate flaky nodes (i.e. those that 
> transitioned between states more than once within the period).
> A situation like this is detrimental to SolrCloud behavior regardless of 
> autoscaling actions, so it should probably also be addressed at the node 
> level, e.g. by shutting down the Solr node after the number of disconnects 
> in a time window reaches a certain threshold.
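The short-term memory idea sketched in the description could look roughly like 
the following. This is an illustrative standalone class, not actual trigger 
code; the class and method names (NodeFlapDetector, recordTransition, isFlaky) 
are invented for the example, and the window of {{waitFor * 2}} is taken 
directly from the proposal above.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: remember recent up/down transitions for one node and
// treat it as flaky (suppress nodeLost/nodeAdded events) if it changed state
// more than once within a window of 2 * waitFor.
public class NodeFlapDetector {
    private final long windowMs;                       // 2 * waitFor, per the proposal
    private final Deque<Long> transitions = new ArrayDeque<>();

    public NodeFlapDetector(long waitForMs) {
        this.windowMs = 2 * waitForMs;
    }

    /** Record a state transition (lost or added) observed at the given time. */
    public void recordTransition(long nowMs) {
        transitions.addLast(nowMs);
        prune(nowMs);
    }

    /** True if the node transitioned more than once within the window. */
    public boolean isFlaky(long nowMs) {
        prune(nowMs);
        return transitions.size() > 1;
    }

    // Drop transitions that fell out of the sliding window.
    private void prune(long nowMs) {
        while (!transitions.isEmpty() && nowMs - transitions.peekFirst() > windowMs) {
            transitions.removeFirst();
        }
    }
}
```

A trigger using such a detector would only fire once {{isFlaky}} returns false 
for the node at event time.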






[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282306#comment-16282306
 ] 

Adrien Grand commented on SOLR-11423:
-

The situation is quite weird as this change has been pushed to master only but 
has a CHANGES entry under 7.1.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?
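The hard cap proposed above amounts to a size check before enqueue. A minimal 
standalone sketch follows; it is an in-memory illustration only, not Solr's 
actual distributed-queue API, and the class name and the 10,000 limit are 
taken from the proposal rather than from any existing code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a client-respected hard cap: enqueue fails fast
// instead of letting a thrashing client flood the queue.
public class BoundedOverseerQueue {
    private static final int MAX_QUEUED_ITEMS = 10_000;  // proposed limit
    private final Deque<byte[]> queue = new ArrayDeque<>();

    public synchronized void offer(byte[] data) {
        if (queue.size() >= MAX_QUEUED_ITEMS) {
            // The client gets an immediate exception rather than growing
            // the backlog toward millions of items.
            throw new IllegalStateException(
                "Overseer queue is full (" + MAX_QUEUED_ITEMS + " items); rejecting enqueue");
        }
        queue.addLast(data);
    }

    public synchronized int size() {
        return queue.size();
    }
}
```

In the real system the check would have to happen against the ZK children 
count (or a cached approximation of it) on the enqueuing client.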






[jira] [Updated] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-07 Thread Emerson Castaneda (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emerson Castaneda updated LUCENE-8085:
--
Description: 
It would be useful to set up the required Ant / Ivy configuration to 
automatically retrieve javadocs and sources for dependencies, so that you do 
not have to attach them manually, avoiding this situation:



!lucene_dependencies.PNG!

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes

  was:
it would be useful seting the required configuration, for Ant / Ivy in order to 
recover automatically javadocs and sources for dependencies, so you have no to 
attach those manually avoiding this situation:



!lucene_dependencies.PNG|thumbnail!

Start point:
Ref: Ivy: How to Retrieve Source Codes of Dependencies 
https://dzone.com/articles/ivy-how-retrieve-source-codes


> Extend Ant / Ivy configuration to retrieve sources and javadoc for 
> dependencies in order to be accessible during development
> 
>
> Key: LUCENE-8085
> URL: https://issues.apache.org/jira/browse/LUCENE-8085
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Affects Versions: 7.1
>Reporter: Emerson Castaneda
>Priority: Minor
> Attachments: lucene_dependencies.PNG
>
>
> It would be useful to set up the required Ant / Ivy configuration to 
> automatically retrieve javadocs and sources for dependencies, so that you do 
> not have to attach them manually, avoiding this situation:
> !lucene_dependencies.PNG!
> Start point:
> Ref: Ivy: How to Retrieve Source Codes of Dependencies 
> https://dzone.com/articles/ivy-how-retrieve-source-codes






[jira] [Commented] (SOLR-11714) AddReplicaSuggester endless loop

2017-12-07 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282301#comment-16282301
 ] 

Andrzej Bialecki  commented on SOLR-11714:
--

This still needs to be properly fixed on all other affected branches (7x and 
master); the commit above is just a quick fix that disables this functionality 
in 7.2.

> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.






[jira] [Updated] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-07 Thread Emerson Castaneda (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emerson Castaneda updated LUCENE-8085:
--
Attachment: lucene_dependencies.PNG

> Extend Ant / Ivy configuration to retrieve sources and javadoc for 
> dependencies in order to be accessible during development
> 
>
> Key: LUCENE-8085
> URL: https://issues.apache.org/jira/browse/LUCENE-8085
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Affects Versions: 7.1
>Reporter: Emerson Castaneda
>Priority: Minor
> Attachments: lucene_dependencies.PNG
>
>
> Start point:
> Ref: Ivy: How to Retrieve Source Codes of Dependencies 
> https://dzone.com/articles/ivy-how-retrieve-source-codes






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21048 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21048/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseParallelGC

32 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestTolerantUpdateProcessorCloud

Error Message:
Error from server at http://127.0.0.1:32955/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:32955/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([DFF589DA48DC2017]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestTolerantUpdateProcessorCloud.createMiniSolrCloudCluster(TestTolerantUpdateProcessorCloud.java:121)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=31791, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@10-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10-ea/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=31791, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@10-ea/java.lang.Thread.sleep(Native Method)
at 

[jira] [Commented] (LUCENE-8010) fix or sandbox similarities in core with problems

2017-12-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282650#comment-16282650
 ] 

Robert Muir commented on LUCENE-8010:
-

Thanks for hacking on these! I can run some relevance tests on the proposed 
changes, if you can wait a few days, so we have a better idea of the impact. 
Obviously not concerned about nextUp/nextDown-type fixes.

> fix or sandbox similarities in core with problems
> -
>
> Key: LUCENE-8010
> URL: https://issues.apache.org/jira/browse/LUCENE-8010
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-8010.patch
>
>
> We want to support scoring optimizations such as LUCENE-4100 and LUCENE-7993, 
> which put very minimal requirements on the similarity impl. Today 
> similarities of various quality are in core and tests. 
> The ones with problems currently have warnings in the javadocs about their 
> bugs, and if the problems are severe enough, then they are also disabled in 
> randomized testing too.
> IMO lucene core should only have practical functions that won't return 
> {{NaN}} scores at times or cause relevance to go backwards if the user's 
> stopfilter isn't configured perfectly. Also it is important for unit tests to 
> not deal with broken or semi-broken sims, and the ones in core should pass 
> all unit tests.
> I propose we move the buggy ones to sandbox and deprecate them. If they can 
> be fixed we can put them back in core, otherwise bye-bye.
> FWIW tests developed in LUCENE-7997 document the following requirements:
>* scores are non-negative and finite.
>* score matches the explanation exactly.
>* internal explanations calculations are sane (e.g. sum of: and so on 
> actually compute sums)
>* scores don't decrease as term frequencies increase: e.g. score(freq=N + 
> 1) >= score(freq=N)
>* scores don't decrease as documents get shorter, e.g. score(len=M) >= 
> score(len=M+1)
>* scores don't decrease as terms get rarer, e.g. score(term=N) >= 
> score(term=N+1)
>* scoring works for floating point frequencies (e.g. sloppy phrase and 
> span queries will work)
>* scoring works for reasonably large 64-bit statistic values (e.g. 
> distributed search will work)
>* scoring works for reasonably large boost values (0 .. Integer.MAX_VALUE, 
> e.g. query boosts will work)
>* scoring works for parameters randomized within valid ranges
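The monotonicity requirements above (score must not decrease as term frequency 
grows or as documents get shorter) can be illustrated with a toy check. The 
scorer below is a BM25-style saturation function written for the example, not 
Lucene's actual Similarity API; the class, method names, and parameter values 
are all hypothetical.

```java
// Hypothetical sketch of the monotonicity checks from LUCENE-7997's
// requirements, applied to a toy BM25-style term-frequency function.
public class SimMonotonicityCheck {
    // Toy scorer with the usual BM25 k1/b defaults; purely illustrative.
    static double score(double freq, double docLen, double avgLen) {
        double k1 = 1.2, b = 0.75;
        return (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * docLen / avgLen));
    }

    // score(freq=N + 1) >= score(freq=N) over a sampled range.
    static boolean freqMonotone(double docLen, double avgLen) {
        for (int f = 1; f < 100; f++) {
            if (score(f + 1, docLen, avgLen) < score(f, docLen, avgLen)) return false;
        }
        return true;
    }

    // score(len=M) >= score(len=M + 1) over a sampled range.
    static boolean lenMonotone(double freq, double avgLen) {
        for (int len = 1; len < 100; len++) {
            if (score(freq, len, avgLen) < score(freq, len + 1, avgLen)) return false;
        }
        return true;
    }
}
```

A buggy similarity (e.g. one that returns NaN or dips negative for some 
parameter combinations) would fail checks of exactly this shape.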






[JENKINS] Lucene-Solr-Tests-master - Build # 2210 - Failure

2017-12-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2210/

20 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.DistributedVersionInfoTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B372ECC8951DB18F]:0)


FAILED:  
org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([B372ECC8951DB18F:ED0F4E363100F872]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1123)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:648)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.getTotalReplicas(AbstractFullDistribZkTestBase.java:487)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:440)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:333)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-07 Thread Karthik Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282622#comment-16282622
 ] 

Karthik Ramachandran commented on SOLR-11331:
-

No, with this patch we only need to run "ant eclipse". I was able to launch 
the webapps with it.

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
> Attachments: SOLR-11331.patch
>
>
> Ability to Debug Solr With Eclipse IDE






[jira] [Updated] (SOLR-11126) Node-level health check handler

2017-12-07 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-11126:

Attachment: SOLR-11126-v2.patch

Patch that changes the API to be only along the lines of the new v2 APIs. The 
health check handler can be invoked using:
{code}
/api/node/health
{code}

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency

2017-12-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282284#comment-16282284
 ] 

Samuel García Martínez commented on SOLR-10181:
---

This was fixed as part of the work done in SOLR-11444, which implements 
changes as functions applied to the data (see 
`ZkStateReader.AliasesManager.applyModificationAndExportToZk`).

Reviewing the test code for `OverseerCollectionConfigSetProcessorTest`, there 
is no test case for the alias management commands. Should I mark this as 
resolved anyway?

> CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
> ---
>
> Key: SOLR-10181
> URL: https://issues.apache.org/jira/browse/SOLR-10181
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.3, 5.4, 5.5, 6.4.1
>Reporter: Samuel García Martínez
>Assignee: Erick Erickson
> Attachments: SOLR-10181_testcase.patch
>
>
> When several CREATEALIAS are run at the same time by the OCP it can happen 
> that, even though the API response is OK, some of those CREATEALIAS request 
> changes are lost.
> h3. The problem
> The problem happens because the CREATEALIAS cmd implementation relies on 
> _zkStateReader.getAliases()_ to create the map that will be stored in ZK. If 
> several threads reach that line at the same time it will happen that only one 
> will be stored correctly and the others will be overridden.
> The code I'm referencing is [this 
> piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65].
>  As an example, let's say that the current aliases map has {a:colA, b:colB}. 
> If two CREATEALIAS (one adding c:colC and other creating d:colD) are 
> submitted to the _tpe_ and reach that line at the same time, the resulting 
> maps will look like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and 
> only one of them will be stored correctly in ZK, resulting in "data loss", 
> meaning that API is returning OK despite that it didn't work as expected.
> On top of this, another concurrency problem can happen when the command 
> checks whether the alias has been set using the _checkForAlias_ method. If 
> these two CREATEALIAS ZK writes ran at the same time, the alias check for one 
> of the threads can time out, since only one of the writes has "survived" and 
> been "committed" to the _zkStateReader.getAliases()_ map.
> h3. How to fix it
> I can post a patch to this if someone gives me directions on how it should be 
> fixed. As I see this, there are two places where the issue can be fixed: in 
> the processor (OverseerCollectionMessageHandler) in a generic way or inside 
> the command itself.
> h5. The processor fix
> The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) should be 
> the place to fix this inside the processor. I thought that adding the 
> operation name instead of only "collection" or "name" to the locking key 
> would fix the issue, but I realized that the problem will happen anyway if 
> the concurrency happens between different operations modifying the same 
> resource (like CREATEALIAS and DELETEALIAS do). So, if this should be the 
> path to follow I don't know what should be used as a locking key.
> h5. The command fix
> Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would 
> be relatively easy using optimistic locking, i.e. using the aliases.json ZK 
> version in the keeper.setData call. To do that, the Aliases class should 
> expose the aliases version so the commands can forward that version with the 
> update and retry when it fails.
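The command-level fix described above boils down to a read-modify-write loop 
guarded by a version, so a concurrent writer forces a re-read instead of being 
silently overwritten. The sketch below is a standalone illustration: an 
AtomicReference compare-and-set stands in for ZooKeeper's versioned 
setData(path, data, version), and all class and method names here are 
invented, not Solr's actual code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of optimistic locking for alias updates: each writer
// rebuilds the map from the latest state and only commits if the version it
// read is still current, retrying otherwise.
public class OptimisticAliasStore {
    private static final class Versioned {
        final Map<String, String> aliases;
        final int version;
        Versioned(Map<String, String> aliases, int version) {
            this.aliases = aliases;
            this.version = version;
        }
    }

    // Stand-in for aliases.json plus its ZK version.
    private final AtomicReference<Versioned> state =
        new AtomicReference<>(new Versioned(new HashMap<String, String>(), 0));

    public void createAlias(String alias, String collection) {
        while (true) {
            Versioned cur = state.get();
            Map<String, String> next = new HashMap<>(cur.aliases);
            next.put(alias, collection);
            // compareAndSet plays the role of setData(path, data, cur.version):
            // it fails if another writer committed first.
            if (state.compareAndSet(cur, new Versioned(next, cur.version + 1))) {
                return;  // our write survived
            }
            // Version mismatch: re-read and retry with the merged state.
        }
    }

    public Map<String, String> aliases() { return state.get().aliases; }
    public int version() { return state.get().version; }
}
```

With this scheme the {a:colA, b:colB} example from the description cannot lose 
an update: the second CREATEALIAS fails its versioned write and retries on top 
of the first one's result.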






[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 26 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/26/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionReloadTest.testReloadedLeaderStateAfterZkSessionLoss

Error Message:
Error from server at http://127.0.0.1:41435/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41435/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([D64989B3FCAF0201:2D6732EE5A427154]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.CollectionReloadTest.testReloadedLeaderStateAfterZkSessionLoss(CollectionReloadTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282512#comment-16282512
 ] 

Christine Poerschke commented on SOLR-9743:
---

Thanks [~jpountz]! I've reversed two relocating edits from the 
[original|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/CHANGES.txt;h=38ed4ba5c9ade014ab4db33d2850a28f49acc98e;hp=d5b953dad5359e3bdfc285bbd3ebf07d10553ee1;hb=c62d538;hpb=c51e34905037a44347530304d2be5b23e7095348]
 commit, and as far as I can tell we're good here now.

> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are:
>  * node: (required && multi-valued): The nodes to be deployed
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections.
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[jira] [Commented] (SOLR-9587) Script for creating a redacted tech support package

2017-12-07 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282504#comment-16282504
 ] 

Shawn Heisey commented on SOLR-9587:


I've begun some initial work on a script for this.  For now, I'm ignoring the 
fact that we don't have any kind of infrastructure for collecting packages from 
users.  I will write up a wiki page with some suggestions about file sharing 
sites like Dropbox, and have the script output the URL for that page.

Despite my personal opinion that I could do a better job with the script if I 
wrote it in perl, I have decided to write it as a shell script for better 
portability.  I'm trying very hard to NOT use any bashisms, so the script will 
work on the original Bourne shell (/bin/sh), but since all of the systems I 
have easy access to use either bash or dash for /bin/sh, I can't be 100 percent 
certain that I've succeeded.

Once I've finished the first draft, a review with suggestions for any mistakes 
I've made would be appreciated.  Additional ideas for information that can 
easily be gathered are also appreciated.


> Script for creating a redacted tech support package
> ---
>
> Key: SOLR-9587
> URL: https://issues.apache.org/jira/browse/SOLR-9587
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Shawn Heisey
>Priority: Minor
>
> When users need help with Solr from the project, relevant config and log 
> information from their install is usually the best way that we can help them. 
>  If we had an easy way to gather all useful information into a single 
> compressed file that a user could place onto a file sharing site, we'd be in 
> awesome shape.
> If the script was smart enough to try and keep the final size of the package 
> under 1MB, which might be challenging until we have a better handle on 
> logfile rotation, most users would be able to share the package using 
> Apache's own pastebin, which can be found here: http://apaste.info
> If we pass control to the SolrCLI Java class as part of the script 
> functionality, we can also automatically redact any *config* information that 
> we *know* to be sensitive, like passwords.  Automatic redaction of logfiles 
> might be nearly impossible, but we should be making efforts in our standard 
> INFO-level logging to never log anything most users would consider to be 
> sensitive.  If the user bumps to DEBUG or TRACE, all bets are off.






[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282484#comment-16282484
 ] 

ASF subversion and git services commented on SOLR-9743:
---

Commit de84430fcd1f024c5be8c698f5fc055c8f24573a in lucene-solr's branch 
refs/heads/branch_7_2 from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=de84430 ]

SOLR-9743: Reverse inadvertent relocating edits in 7.1.0 and 7.2.0 sections.


> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are:
>  * node: (required && multi-valued): The nodes to be deployed
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections.
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282479#comment-16282479
 ] 

ASF subversion and git services commented on SOLR-9743:
---

Commit 2f0d5bef2e9f4bece683033276b6f552c0ebed30 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f0d5be ]

SOLR-9743: Reverse inadvertent relocating edits in 7.1.0 and 7.2.0 sections.


> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are:
>  * node: (required && multi-valued): The nodes to be deployed
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections.
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 332 - Failure!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/332/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_1723217F3C4726D4-001\6.2.0-cfs-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_1723217F3C4726D4-001\6.2.0-cfs-002
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_1723217F3C4726D4-001\6.2.0-cfs-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_1723217F3C4726D4-001\6.2.0-cfs-002

at __randomizedtesting.SeedInfo.seed([1723217F3C4726D4]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestBoolean2

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.search.TestBoolean2_6F0B8F30FC7DF6A0-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.search.TestBoolean2_6F0B8F30FC7DF6A0-001\tempDir-004
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.search.TestBoolean2_6F0B8F30FC7DF6A0-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.search.TestBoolean2_6F0B8F30FC7DF6A0-001\tempDir-004

at __randomizedtesting.SeedInfo.seed([6F0B8F30FC7DF6A0]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.TestDistributedMissingSort

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedMissingSort_162E5A1B42F6CFAC-001\tempDir-001\shard0\cores\collection1\data\spellchecker3:
 

Re: Lucene/Solr 7.2

2017-12-07 Thread Adrien Grand
OK, it looks like all changes that we wanted to be included are now in?
Please let me know if there is still something left to include in 7.2
before building an RC.

I noticed SOLR-11423 is in a weird state: it is included in the changelog
in 7.1 but has only been committed to master. Did we forget to backport it?

On Wed, Dec 6, 2017 at 21:13, Andrzej Białecki <
andrzej.biale...@lucidworks.com> wrote:

> On 6 Dec 2017, at 18:45, Andrzej Białecki 
> wrote:
>
> I attached the patch to SOLR-11714, which disables the ‘searchRate’
> trigger - if there are no objections I’ll commit it shortly to branch_7.2.
>
>
>
> This has been committed now to branch_7_2 and I don’t have any other open
> issues for 7.2. Thanks!
>
>
>
> On 6 Dec 2017, at 15:51, Andrzej Białecki 
> wrote:
>
>
> On 6 Dec 2017, at 15:35, Andrzej Białecki 
> wrote:
>
> SOLR-11458 is committed and resolved - thanks for the patience.
>
>
>
> Actually, one more thing … ;) SOLR-11714 is a more serious bug in a new
> feature (searchRate autoscaling trigger). It’s probably best to disable
> this feature in 7.2 rather than releasing a broken version, so I’d like to
> commit a patch that disables it (plus a note in CHANGES.txt).
>
>
>
>
> On 6 Dec 2017, at 14:02, Adrien Grand  wrote:
>
> Thanks for the heads up, Anshum.
>
> This leaves us with only SOLR-11458 to wait for before building a RC
> (which might be ready but just not marked as resolved).
>
>
>
> On Wed, Dec 6, 2017 at 13:47, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Hi Adrien,
>> I'm planning to skip SOLR-11624 for this release (as per my last comment
>> https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121).
>> If someone has an objection, please let me know; otherwise, please feel
>> free to proceed with the release.
>> I'll continue working on it anyway, and shall try to have it ready for
>> the next release.
>> Thanks,
>> Ishan
>>
>> On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand  wrote:
>>
>>> FYI I created the new branch for 7.2, so you will have to backport to
>>> this branch. No hurry though, I mostly created the branch so that it's fine
>>> to cherry-pick changes that may wait for 7.3 to be released.
>>>
>>> On Wed, Dec 6, 2017 at 08:53, Adrien Grand wrote:
>>>
 Sorry to hear that Ishan, I hope you are doing better now. +1 to get
 SOLR-11624 in.

 On Wed, Dec 6, 2017 at 07:57, Ishan Chattopadhyaya <
 ichattopadhy...@gmail.com> wrote:

> I was a bit unwell over the weekend and yesterday; I'm working on a
> very targeted fix for SOLR-11624 right now; I expect it to take another 
> 5-6
> hours.
> Is that fine with you, Adrien? If not, please go ahead with the
> release, and I'll volunteer later for a bugfix release for this after 7.2
> is out.
>
> On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand 
> wrote:
>
>> Fine with me.
>>
>> On Tue, Dec 5, 2017 at 22:34, Varun Thacker wrote:
>>
>>> Hi Adrien,
>>>
>>> I'd like to commit SOLR-11590 . The issue had a patch couple of
>>> weeks ago and has been reviewed but never got committed. I've run all 
>>> the
>>> tests twice as well to verify.
>>>
>>> On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki <
>>> andrzej.biale...@lucidworks.com> wrote:
>>>

 On 5 Dec 2017, at 18:05, Adrien Grand  wrote:

 Andrzej, ok to merge since it is a bug fix. Since we're close to
 the RC build, maybe try to get someone familiar with the code to 
 review it
 to make sure it doesn't have unexpected side-effects?


 Sure I’ll do this - thanks!


 On Tue, Dec 5, 2017 at 17:57, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:

> Adrien,
>
> If it’s ok I would also like to merge SOLR-11458, this
> significantly reduces the chance of accidental data loss when using
> MoveReplicaCmd.
>
> On 5 Dec 2017, at 14:44, Adrien Grand  wrote:
>
> Quick update:
>
> LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been
> merged, they will be in 7.2.
>
> LUCENE-8048 and SOLR-11624 are still open.
>
> LUCENE-8048 looks like it could make things better in some cases
> but I don't think it is required for 7.2, so I don't plan to hold the
> release on it.
>
> SOLR-11624 looks bad, I'll wait for it.
>
> On Tue, Dec 5, 2017 at 07:45, Noble Paul wrote:
>

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 954 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/954/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseG1GC

13 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:33785/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(http://127.0.0.1:33785/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:33785/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(http://127.0.0.1:33785/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([7368C9D7D08C1C9E:3B1DBD63D6BF330B]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:549)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11285) Support simulations at scale in the autoscaling framework

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282341#comment-16282341
 ] 

Adrien Grand commented on SOLR-11285:
-

[~a...@getopt.org] Is this issue good to be resolved?

> Support simulations at scale in the autoscaling framework
> -
>
> Key: SOLR-11285
> URL: https://issues.apache.org/jira/browse/SOLR-11285
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Attachments: SOLR-11285.patch
>
>
> This is a spike to investigate how difficult it would be to modify the 
> autoscaling framework so that it's possible to run simulated large-scale 
> experiments and test its dynamic behavior without actually spinning up a 
> large cluster.
> Currently many components rely heavily on actual Solr, ZK and behavior of ZK 
> watches, or insist on making actual HTTP calls. Notable exception is the core 
> Policy framework where most of the ZK / Solr details are abstracted.
> As the algorithms for autoscaling that we implement become more and more 
> complex the ability to effectively run multiple large simulations will be 
> crucial - it's very easy to unknowingly introduce catastrophic instabilities 
> that don't manifest themselves in regular unit tests.






[jira] [Commented] (SOLR-11725) json.facet's stddev() function should be changed to use the "Corrected sample stddev" formula

2017-12-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282354#comment-16282354
 ] 

Hoss Man commented on SOLR-11725:
-



bq. This does bring up the question of what to do when N=1 (or N=0 for that 
matter).

I omitted them from my original description for brevity to focus on the bigger 
picture of the equations, but for the record the full implementation of stddev in 
each of the two classes mentioned is...

* {{StddevAgg.java}}: {code}
double val = count == 0 ? 0.0d : Math.sqrt((sumSq/count) - Math.pow(sum/count, 2));
return val;
{code}
* {{StatsValuesFactory.java}}: {code}
if (count <= 1.0D) {
  return 0.0D;
}

return Math.sqrt(((count * sumOfSquares) - (sum * sum)) / (count * (count - 1.0D)));
{code}
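Since the thread hinges on how these two formulas diverge, here is a minimal self-contained sketch of both; the class and method names are mine, not Solr's, and unlike the quoted StatsValuesFactory code I return NaN rather than 0.0D for the undefined N<=1 cases, matching the "undefined" proposal in this thread.

```java
// Compares the "uncorrected" sample stddev (json.facet's StddevAgg) with the
// "corrected" (Bessel's) sample stddev (StatsComponent's StatsValuesFactory).
public class StddevComparison {
    // Uncorrected: sqrt(sumSq/N - (sum/N)^2), as in StddevAgg.java.
    static double uncorrected(double[] v) {
        if (v.length == 0) return Double.NaN; // undefined for an empty set
        double sum = 0, sumSq = 0;
        for (double x : v) { sum += x; sumSq += x * x; }
        double n = v.length;
        return Math.sqrt(sumSq / n - Math.pow(sum / n, 2));
    }

    // Corrected: sqrt((N*sumSq - sum^2) / (N*(N-1))), as in StatsValuesFactory.java.
    static double corrected(double[] v) {
        if (v.length <= 1) return Double.NaN; // undefined for N <= 1
        double sum = 0, sumSq = 0;
        for (double x : v) { sum += x; sumSq += x * x; }
        double n = v.length;
        return Math.sqrt((n * sumSq - sum * sum) / (n * (n - 1)));
    }

    public static void main(String[] args) {
        double[] small = {1, 2, 3, 4};
        System.out.println("uncorrected = " + uncorrected(small)); // ~1.118
        System.out.println("corrected   = " + corrected(small));   // ~1.291
        // Singleton set: the uncorrected formula gives exactly 0,
        // while the corrected one would divide by zero.
        System.out.println(uncorrected(new double[]{42})); // 0.0
    }
}
```

With {1, 2, 3, 4} the two results differ noticeably (sqrt(1.25) vs sqrt(5/3)), illustrating why the discrepancy is measurable for small counts and shrinks as N grows.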


bq. When N=0, the current code produces 0, but I don't think that's the best 
choice. ...

Agreed, it should really be 'null' (or 'NaN')

(I'm not sure why {{StatsValuesFactory.java}} currently returns {{0.0D}} when 
{{count==0}} ... other {{StatsValuesFactory.java}} stats like min/max correctly 
return 'null' ... it's weird)

bq. ...In general we've been moving toward omitting undefined functions. Stats 
like min() and max() already do this.

Whoa... really? ... that seems like it would make the client parsing really 
hard...

You're saying users can't expect that every "facet" key they specify in the 
request will be included in the response? (in the event it's 'null' or 'NaN' or 
whatever makes sense given its data type)  Why???

bq. I'd be tempted to treat N=0 and N=1 as undefined

As I said, for N=0 I agree with you that the result should be 
"undefined/null/NaN" (and if that means that it's excluded from the response to 
be consistent with the existing behavior in {{json.facet}} then so be it) ... 
but I'm a big "-1" (vote, I mean, not math) on treating stddev(N=1) as 
"undefined" ... that makes no sense to me.  

For a singleton set, the stddev() should *absolutely* be "0" -- all of the 
value(s) in the set are identical, the amount of deviation between the value(s) 
in the set is "none".  For the purpose of comparing the "consistency" of this set 
to any other sets, you know that this set is as consistent as it can possibly 
be.

Why should the {{stddev(\[42])}} be any different than the 
{{stddev(\[42,42,42,42,42])}}?

bq. Oh, and whatever treatment we give stddev(), we should presumably give to 
variance()?

I would assume so, but first I'd have to go refresh my memory on how exactly 
variance differs from stddev :)




> json.facet's stddev() function should be changed to use the "Corrected sample 
> stddev" formula
> -
>
> Key: SOLR-11725
> URL: https://issues.apache.org/jira/browse/SOLR-11725
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-11725.patch
>
>
> While working on some equivalence tests/demonstrations for 
> {{facet.pivot+stats.field}} vs {{json.facet}} I noticed that the {{stddev}} 
> calculations done between the two code paths can be measurably different, and 
> realized this is due to them using very different code...
> * {{json.facet=foo:stddev(foo)}}
> ** {{StddevAgg.java}}
> ** {{Math.sqrt((sumSq/count)-Math.pow(sum/count, 2))}}
> * {{stats.field=\{!stddev=true\}foo}}
> ** {{StatsValuesFactory.java}}
> ** {{Math.sqrt(((count * sumOfSquares) - (sum * sum)) / (count * (count - 
> 1.0D)))}}
> Since I'm not really a math guy, I consulted with a bunch of smart math/stat 
> nerds I know online to help me sanity check if these equations (somehow) 
> reduce to each other (in which case the discrepancies I was seeing in my 
> results might have just been due to the order of intermediate operation 
> execution & floating point rounding differences).
> They confirmed that the two bits of code are _not_ equivalent to each other, 
> and explained that the code JSON Faceting is using is equivalent to the 
> "Uncorrected sample stddev" formula, while StatsComponent's code is 
> equivalent to the "Corrected sample stddev" formula...
> https://en.wikipedia.org/wiki/Standard_deviation#Uncorrected_sample_standard_deviation
> When I told them that stuff like this is why no one likes mathematicians and 
> pressed them to explain which one was the "most canonical" (or "most 
> generally applicable" or "best") definition of stddev, I was told that:
> # This is something statisticians frequently disagree on
> # Practically speaking the diff between the calculations doesn't tend to 
> differ significantly when count is "very large"
> # _"Corrected sample stddev" is more appropriate when comparing two 
> distributions_
> Given that:
> * the primary usage of computing the stddev of a field/function against a 
> Solr 

[jira] [Commented] (SOLR-11714) AddReplicaSuggester endless loop

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282352#comment-16282352
 ] 

Adrien Grand commented on SOLR-11714:
-

OK, thanks for the explanation.

> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282333#comment-16282333
 ] 

Adrien Grand commented on SOLR-9743:


OK I cross-checked with JIRA to make sure the deleted entries had actually been 
pushed to 7.2. We should be good now. [~cpoerschke] Can you confirm it looks 
good to you now?

> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are
>  * node: (required && multi-valued) : The nodes to be deployed 
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[jira] [Commented] (LUCENE-8075) Possible null pointer dereference in core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java

2017-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282310#comment-16282310
 ] 

ASF GitHub Bot commented on LUCENE-8075:


Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155607967
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

@jpountz Any update here?


> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java
> -
>
> Key: LUCENE-8075
> URL: https://issues.apache.org/jira/browse/LUCENE-8075
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 7.1
>Reporter: Xiaoshan Sun
>  Labels: easyfix
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java.
> at line 119. The fr.index may be NULL. This result is based on static 
> analysis tools and the details are shown below:
> *
> {code:java}
> 106: if (fr.index == null) {
> 107:  fstReader = null;  // fr.index is Known NULL here.
> } else {
>   fstReader = fr.index.getBytesReader();
> }
> // TODO: if the automaton is "smallish" we really
> // should use the terms index to seek at least to
> // the initial term and likely to subsequent terms
> // (or, maybe just fallback to ATE for such cases).
> // Else the seek cost of loading the frames will be
> // too costly.
> 119:final FST.Arc arc = fr.index.getFirstArc(arcs[0]); 
> //  fr.index is dereferenced here and fr.index can be NULL if 107 is arrived.
> {code}
> *
> It is not certain whether fr.index can be NULL at runtime.
> We think it is reasonable to fix it by testing whether fr.index is NULL and 
> adding error handling.
> --
> Please Refer to "Trusted Operating System and System Assurance Working Group, 
> TCA, Institute of Software, Chinese Academy of Sciences" in the 
> acknowledgement if applicable.






[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282302#comment-16282302
 ] 

Adrien Grand commented on SOLR-9743:


Oh I see what you mean. I will fix those as well.

> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are
>  * node: (required && multi-valued) : The nodes to be deployed 
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[jira] [Updated] (SOLR-11713) CdcrUpdateLogTest.testSubReader() failure

2017-12-07 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated SOLR-11713:

Fix Version/s: master (8.0)

> CdcrUpdateLogTest.testSubReader() failure
> -
>
> Key: SOLR-11713
> URL: https://issues.apache.org/jira/browse/SOLR-11713
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Varun Thacker
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11713.patch
>
>
> Reproduces for me, from 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1430/]:
> {noformat}
> Checking out Revision ebdaa44182cf4e017efc418134821291dc40ea46 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=CdcrUpdateLogTest 
> -Dtests.method=testSubReader -Dtests.seed=1A5FD357C74335A5 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=vi -Dtests.timezone=America/Toronto -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 6.59s J1 | CdcrUpdateLogTest.testSubReader <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([1A5FD357C74335A5:57875934B794E477]:0)
>[junit4]>  at 
> org.apache.solr.update.CdcrUpdateLogTest.testSubReader(CdcrUpdateLogTest.java:583)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: 
> codec=DummyCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=DUMMY,
>  chunkSize=2, maxDocsPerChunk=982, blockSize=6), 
> termVectorsFormat=CompressingTermVectorsFormat(compressionMode=DUMMY, 
> chunkSize=2, blockSize=6)), 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@1e1386ea),
>  locale=vi, timezone=America/Toronto
>[junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
> 1.8.0_144 (64-bit)/cpus=4,threads=1,free=211037008,total=384827392
> {noformat}






[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282332#comment-16282332
 ] 

ASF subversion and git services commented on SOLR-9743:
---

Commit 80bbe6392786e6ac122b72866b3d1c2def5e4ec2 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=80bbe63 ]

SOLR-9743: Recover changelog entries that had been removed by error.


> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are
>  * node: (required && multi-valued) : The nodes to be deployed 
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282331#comment-16282331
 ] 

ASF subversion and git services commented on SOLR-9743:
---

Commit 8803fecbdb8a959eb390323e842b7131206b4d51 in lucene-solr's branch 
refs/heads/branch_7_2 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8803fec ]

SOLR-9743: Recover changelog entries that had been removed by error.


> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are
>  * node: (required && multi-valued) : The nodes to be deployed 
>  * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections
> example:
> {code}
> action=UTILIZENODE=gettingstarted
> {code}






[jira] [Commented] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency

2017-12-07 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282320#comment-16282320
 ] 

David Smiley commented on SOLR-10181:
-

Glad to have helped fix this with [~gus_heck] :-)

> CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
> ---
>
> Key: SOLR-10181
> URL: https://issues.apache.org/jira/browse/SOLR-10181
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.3, 5.4, 5.5, 6.4.1
>Reporter: Samuel García Martínez
>Assignee: Erick Erickson
> Attachments: SOLR-10181_testcase.patch
>
>
> When several CREATEALIAS commands are run at the same time by the OCP it can 
> happen that, even though the API response is OK, some of those CREATEALIAS 
> request changes are lost.
> h3. The problem
> The problem happens because the CREATEALIAS cmd implementation relies on 
> _zkStateReader.getAliases()_ to create the map that will be stored in ZK. If 
> several threads reach that line at the same time it will happen that only one 
> will be stored correctly and the others will be overridden.
> The code I'm referencing is [this 
> piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65].
>  As an example, let's say that the current aliases map has {a:colA, b:colB}. 
> If two CREATEALIAS (one adding c:colC and other creating d:colD) are 
> submitted to the _tpe_ and reach that line at the same time, the resulting 
> maps will look like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and 
> only one of them will be stored correctly in ZK, resulting in "data loss": 
> the API returns OK even though the update didn't work as expected.
> On top of this, another concurrency problem could happen when the command 
> checks if the alias has been set using _checkForAlias_ method. if these two 
> CREATEALIAS zk writes had run at the same time, the alias check for one of 
> the threads can time out since only one of the writes has "survived" and has 
> been "committed" to the _zkStateReader.getAliases()_ map.
> h3. How to fix it
> I can post a patch to this if someone gives me directions on how it should be 
> fixed. As I see this, there are two places where the issue can be fixed: in 
> the processor (OverseerCollectionMessageHandler) in a generic way or inside 
> the command itself.
> h5. The processor fix
> The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) should be 
> the place to fix this inside the processor. I thought that adding the 
> operation name instead of only "collection" or "name" to the locking key 
> would fix the issue, but I realized that the problem will happen anyway if 
> the concurrency happens between different operations modifying the same 
> resource (like CREATEALIAS and DELETEALIAS do). So, if this should be the 
> path to follow I don't know what should be used as a locking key.
> h5. The command fix
> Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would 
> be relatively easy. Using optimistic locking, i.e, using the aliases.json zk 
> version in the keeper.setData. To do that, Aliases class should offer the 
> aliases version so the commands can forward that version with the update and 
> retry when it fails.
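A minimal self-contained sketch of that optimistic-locking idea, using an in-memory compare-and-set to stand in for ZooKeeper's versioned setData (all names here are hypothetical illustrations, not the actual SolrCloud API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the proposed "command fix": read the aliases map together with its
// version, apply only this command's change, and retry when a concurrent writer
// won the race, instead of blindly overwriting (the current bug).
public class AliasCas {
  static final class Versioned {
    final Map<String, String> aliases;
    final int version;
    Versioned(Map<String, String> aliases, int version) {
      this.aliases = aliases;
      this.version = version;
    }
  }

  private final AtomicReference<Versioned> store =
      new AtomicReference<>(new Versioned(new HashMap<>(), 0));

  /** Read-modify-write with optimistic locking instead of last-writer-wins. */
  public void createAlias(String alias, String collection) {
    while (true) {
      Versioned current = store.get();                 // read map + version together
      Map<String, String> copy = new HashMap<>(current.aliases);
      copy.put(alias, collection);                     // apply just this command's change
      Versioned next = new Versioned(copy, current.version + 1);
      if (store.compareAndSet(current, next)) {        // "setData" with expected version
        return;                                        // our write survived
      }
      // else: a concurrent CREATEALIAS/DELETEALIAS committed first; re-read and retry
    }
  }

  public Map<String, String> aliases() {
    return store.get().aliases;
  }
}
```

With ZooKeeper the same loop would call setData with the Stat version from the preceding getData and retry on BadVersionException; the structure of the retry loop is the same.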






[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282305#comment-16282305
 ] 

Adrien Grand commented on SOLR-11423:
-

[~dragonsinth] Is this issue good to resolve? I see commits have been pushed 
but it is still open?

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?
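A minimal standalone sketch of such a cap (the class and method names are hypothetical, not the Overseer API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the proposed hard cap: enqueue fails fast once the queue holds
// maxItems entries, instead of letting a thrashing client pile up millions
// of queued state changes.
public class CappedStateQueue {
  private final BlockingQueue<String> queue;

  public CappedStateQueue(int maxItems) {
    this.queue = new ArrayBlockingQueue<>(maxItems); // bounded, offer() fails when full
  }

  /** Throws instead of growing without bound. */
  public void enqueue(String stateChange) {
    if (!queue.offer(stateChange)) {
      throw new IllegalStateException(
          "overseer queue full (" + queue.size() + " items); rejecting update");
    }
  }

  public String poll() {
    return queue.poll();
  }
}
```

Clients would then need to handle the rejection (back off, coalesce duplicate up/down updates, or fail the request), which is the part the proposal asks for feedback on.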






[jira] [Updated] (SOLR-11616) Backup failing on a constantly changing index with NoSuchFileException

2017-12-07 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated SOLR-11616:

Fix Version/s: master (8.0)

> Backup failing on a constantly changing index with NoSuchFileException
> --
>
> Key: SOLR-11616
> URL: https://issues.apache.org/jira/browse/SOLR-11616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11616.patch, SOLR-11616.patch, solr-6.3.log, 
> solr-7.1.log
>
>
> As reported by several users on SOLR-9120 , Solr backups fail with 
> NoSuchFileException on a constantly changing index. 
> Users linked SOLR-9120 as the root cause since the stack trace is the same, but 
> the fix proposed there won't stop backups from failing.
> We need to implement a similar fix in {{SnapShooter#createSnapshot}} to fix 
> the problem






[jira] [Updated] (SOLR-11687) SolrCore.getNewIndexDir falsely returns {dataDir}/index on any IOException reading index.properties

2017-12-07 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated SOLR-11687:

Fix Version/s: master (8.0)

> SolrCore.getNewIndexDir falsely returns {dataDir}/index on any IOException 
> reading index.properties
> ---
>
> Key: SOLR-11687
> URL: https://issues.apache.org/jira/browse/SOLR-11687
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11687.patch, SOLR-11687.patch, SOLR-11687.patch, 
> SOLR-11687_alt.patch
>
>
> I'll link the originating Solr JIRA in a minute (many thanks Nikolay). 
> right at the top of this method we have this:
> {code}
> String result = dataDir + "index/";
> {code}
> If, for any reason, the method doesn't complete properly, the "result" is 
> still returned. Now for instance, down in SolrCore.cleanupOldIndexDirectories 
> the "old" directory is dataDir/index which may point to the current index.
> This seems particularly dangerous:
> {code}
>try {
>   p.load(new InputStreamReader(is, StandardCharsets.UTF_8));
>   String s = p.getProperty("index");
>   if (s != null && s.trim().length() > 0) {
>   result = dataDir + s;
>   }
> } catch (Exception e) {
>   log.error("Unable to load " + IndexFetcher.INDEX_PROPERTIES, e);
> } finally {
>   IOUtils.closeQuietly(is);
> }
> {code}
> Should "p.load" fail for any reason whatsoever, we'll still return 
> dataDir/index.
> Anyone want to chime in on what the expectations are here before I dive in?
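One possible direction, purely as an illustration and not the committed fix: propagate the failure instead of silently falling back to {dataDir}/index. A standalone sketch (not the actual SolrCore code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

// Sketch: a read of index.properties that surfaces failures to the caller,
// so dataDir + "index/" is only returned when the file genuinely has no
// "index" entry, never because the load itself failed.
public class IndexDirSketch {
  static String newIndexDir(String dataDir, InputStream indexProps) {
    Properties p = new Properties();
    try (Reader r = new InputStreamReader(indexProps, StandardCharsets.UTF_8)) {
      p.load(r);
    } catch (IOException e) {
      // propagate instead of logging and falling back to the default
      throw new UncheckedIOException(e);
    }
    String s = p.getProperty("index");
    if (s != null && !s.trim().isEmpty()) {
      return dataDir + s;
    }
    return dataDir + "index/"; // only when index.properties lacks an "index" entry
  }

  public static void main(String[] args) {
    String props = "index=index.20171207";
    System.out.println(newIndexDir("/var/solr/data/",
        new ByteArrayInputStream(props.getBytes(StandardCharsets.UTF_8))));
  }
}
```

Whether failing fast is the right expectation here is exactly the open question in the issue; code like cleanupOldIndexDirectories would then never see a bogus default.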






[GitHub] lucene-solr pull request #286: [LUCENE-8075] Possible null pointer dereferen...

2017-12-07 Thread imgpulak
Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155607967
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

@jpountz Any update here?


---




[jira] [Commented] (SOLR-11714) AddReplicaSuggester endless loop

2017-12-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282295#comment-16282295
 ] 

Adrien Grand commented on SOLR-11714:
-

[~a...@getopt.org] Can this issue be resolved or is there work left to do?

> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.






[jira] [Commented] (SOLR-11719) Committing to restored collection does nothing

2017-12-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282269#comment-16282269
 ] 

Varun Thacker commented on SOLR-11719:
--

Hi Vitaly,

I have been very busy this week so I haven't had a chance to look at it. Maybe 
this weekend. If you have been able to reproduce this, patches with a solution 
are always welcome :)

> Committing to restored collection does nothing
> --
>
> Key: SOLR-11719
> URL: https://issues.apache.org/jira/browse/SOLR-11719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.1
>Reporter: Vitaly Lavrov
> Attachments: script.sh
>
>
> Scenario that was reproduced many times:
> 1. Restore collection
> 2. Send updates
> 3. Send commit
> Commit request returns instantly and the index stays intact. After collection 
> reload or cluster reboot updates are visible and commits will work.






[jira] [Updated] (LUCENE-8010) fix or sandbox similarities in core with problems

2017-12-07 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8010:
-
Attachment: LUCENE-8010.patch

I could get all similarities to pass current tests with some tweaks:
 - Axiomatic similarities add 1 to the freq; is that ok? Otherwise we'll need to 
take freq = max(freq, 1), but this means sloppy phrase queries will produce the 
same score on all documents whose sloppy freq is less than 1
 - AxiomaticF3* Similarities have their score truncated to 0 when the gamma 
component would cause scores to be less than 0. This means they could produce 
low-quality scores, but I don't have any ideas for how to fix it otherwise.
 - Lambda impls use a nextUp/nextDown to make sure they never produce lambda=1, 
which doesn't work with DistributionSPL
 - DistributionSPL also makes use of some calls to nextUp/nextDown to avoid 
producing infinite/NaN scores while still guaranteeing that scores do not 
decrease when tfn increases

> fix or sandbox similarities in core with problems
> -
>
> Key: LUCENE-8010
> URL: https://issues.apache.org/jira/browse/LUCENE-8010
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-8010.patch
>
>
> We want to support scoring optimizations such as LUCENE-4100 and LUCENE-7993, 
> which put very minimal requirements on the similarity impl. Today 
> similarities of various quality are in core and tests. 
> The ones with problems currently have warnings in the javadocs about their 
> bugs, and if the problems are severe enough, they are also disabled in 
> randomized testing.
> IMO lucene core should only have practical functions that won't return 
> {{NaN}} scores at times or cause relevance to go backwards if the user's 
> stopfilter isn't configured perfectly. Also it is important for unit tests to 
> not deal with broken or semi-broken sims, and the ones in core should pass 
> all unit tests.
> I propose we move the buggy ones to sandbox and deprecate them. If they can 
> be fixed we can put them back in core, otherwise bye-bye.
> FWIW tests developed in LUCENE-7997 document the following requirements:
>* scores are non-negative and finite.
>* score matches the explanation exactly.
>* internal explanations calculations are sane (e.g. sum of: and so on 
> actually compute sums)
>* scores don't decrease as term frequencies increase: e.g. score(freq=N + 
> 1) >= score(freq=N)
>* scores don't decrease as documents get shorter, e.g. score(len=M) >= 
> score(len=M+1)
>* scores don't decrease as terms get rarer, e.g. score(term=N) >= 
> score(term=N+1)
>* scoring works for floating point frequencies (e.g. sloppy phrase and 
> span queries will work)
>* scoring works for reasonably large 64-bit statistic values (e.g. 
> distributed search will work)
>* scoring works for reasonably large boost values (0 .. Integer.MAX_VALUE, 
> e.g. query boosts will work)
>* scoring works for parameters randomized within valid ranges
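Requirements like these lend themselves to mechanical spot-checks. As an illustration only, here is a minimal sketch with a toy saturation-style scoring function standing in for a real Similarity (this is not the LUCENE-7997 test code):

```java
import java.util.function.DoubleUnaryOperator;

// Sketch: property checks in the spirit of the list above, run against a toy
// BM25-style frequency-saturation score k * freq / (freq + k) rather than a
// real Similarity implementation.
public class SimilarityProperties {
  static double score(double freq) {
    final double k = 1.2; // toy saturation constant, chosen for illustration
    return k * freq / (freq + k);
  }

  // Verifies: scores are non-negative and finite, and never decrease as freq grows.
  static void checkMonotonicAndFinite(DoubleUnaryOperator score, int maxFreq) {
    double prev = 0;
    for (int freq = 1; freq <= maxFreq; freq++) {
      double s = score.applyAsDouble(freq);
      if (!(s >= 0) || !Double.isFinite(s)) {
        throw new AssertionError("non-finite or negative score at freq=" + freq);
      }
      if (s < prev) {
        throw new AssertionError("score decreased at freq=" + freq);
      }
      prev = s;
    }
  }

  public static void main(String[] args) {
    checkMonotonicAndFinite(SimilarityProperties::score, 100_000);
    System.out.println("properties hold");
  }
}
```

A broken similarity, e.g. one returning NaN for some parameter range, would fail this style of check immediately, which is the point of pushing such sims out of core.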






[jira] [Resolved] (SOLR-11729) Increase default overrequest ratio/count in json.facet to match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?

2017-12-07 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-11729.
-
Resolution: Invalid

Resolving as Invalid since the basic premise I was operating under for assuming 
these defaults should be the same is flawed.

(It may make sense to change/increase the default overrequesting in json.facet, 
but that should be considered on its own merits, based on the refinement algo 
used -- not because of any question of equivalence with facet.field)

> Increase default overrequest ratio/count in json.facet to match existing 
> defaults for facet.overrequest.ratio & facet.overrequest.count ?
> -
>
> Key: SOLR-11729
> URL: https://issues.apache.org/jira/browse/SOLR-11729
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> When FacetComponent first got support for distributed search, the default 
> "effective shard limit" done on shards followed the formula...
> {code}
> limit = (int)(dff.initialLimit * 1.5) + 10;
> {code}
> ...over time, this became configurable with the introduction of some expert 
> level tuning options: {{facet.overrequest.ratio}} & 
> {{facet.overrequest.count}} -- but the defaults (and basic formula) remain 
> the same to this day...
> {code}
>   this.overrequestRatio
> = params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
> 1.5);
>   this.overrequestCount 
> = params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
> ...
>   private int doOverRequestMath(int limit, double ratio, int count) {
> // NOTE: normally, "1.0F < ratio"
> //
> // if the user chooses a ratio < 1, we allow it and don't "bottom out" at
> // the original limit until *after* we've also added the count.
> int adjustedLimit = (int) (limit * ratio) + count;
> return Math.max(limit, adjustedLimit);
>   }
> {code}
> However...
> When {{json.facet}} multi-shard refinement was added, the code was written 
> slightly differently:
> * there is an explicit {{overrequest:N}} (count) option
> * if {{-1 == overrequest}} (which is the default) then an "effective shard 
> limit" is computed using the same basic formula as in FacetComponent -- _*but 
> the constants are different*_...
> ** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
> * For any (non "-1") user specified {{overrequest}} value, it's added 
> verbatim to the {{limit}} (which may have been user specified, or may just be 
> the default)
> ** {{effectiveLimit += freq.overrequest;}}
> Given the design of the {{json.facet}} syntax, I can understand why the code 
> path for an "advanced" user specified {{overrequest:N}} option avoids using 
> any (implicit) ratio calculation and just does the straight addition of 
> {{limit += overrequest}}.
> What I'm not clear on is the choice of the constants {{1.1}} and {{4}} in the 
> common (default) case, and why those differ from the historically used 
> {{1.5}} and {{10}}.
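A quick standalone sketch putting the two default formulas side by side (constants copied from the snippets above):

```java
// Sketch comparing the two default over-request formulas quoted above.
public class OverrequestMath {
  // FacetComponent default: ratio 1.5, count 10
  static int facetComponentLimit(int limit) {
    int adjusted = (int) (limit * 1.5) + 10;
    return Math.max(limit, adjusted);
  }

  // json.facet default (overrequest == -1): ratio 1.1, count 4
  static long jsonFacetLimit(long limit) {
    return (long) (limit * 1.1 + 4);
  }

  public static void main(String[] args) {
    for (int limit : new int[] {10, 100, 1000}) {
      System.out.println(limit + " -> facet.field: " + facetComponentLimit(limit)
          + ", json.facet: " + jsonFacetLimit(limit));
    }
  }
}
```

For limit=10 the shards are asked for 25 vs 15 buckets, so facet.field over-requests noticeably more than json.facet at small limits, which is the source of the migration discrepancies described below.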
> 
> It may seem like a small thing to worry about, but it can/will cause odd 
> inconsistencies when people try to migrate simple {{facet.field=foo}} (or 
> {{facet.pivot=foo,bar}}) queries to {{json.facet}} -- I have also seen it 
> give people attempting these types of migrations the (mistaken) impression 
> that discrepancies they are seeing are because {{refine:true}} is not 
> working.
> For this reason, I propose we change the (default) {{overrequest:-1}} 
> behavior to use the same constants as the equivalent FacetComponent code...
> {code}
> if (fcontext.isShard()) {
>   if (freq.overrequest == -1) {
> // add over-request if this is a shard request and if we have a small 
> offset (large offsets will already be gathering many more buckets than needed)
> if (freq.offset < 10) {
>   effectiveLimit = (long) (effectiveLimit * 1.5 + 10);
> }
> ...
> {code}






[jira] [Commented] (SOLR-11729) Increase default overrequest ratio/count in json.facet to match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?

2017-12-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282268#comment-16282268
 ] 

Hoss Man commented on SOLR-11729:
-

bq. Aligning over-request limits still wouldn't prevent some differences... the 
refinement algorithm is currently different.

Yeah, I hadn't realized how diff the algos are (linking SOLR-11733).

In light of which, I'm not sure if it makes any sense to even remotely worry 
about making the default overrequesting consistent.
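For concreteness, here is a standalone sketch (not the Solr source; the class name and {{main}} are mine) that applies the same {{limit * ratio + count}} shape to both sets of default constants, showing how far apart the effective shard limits end up for a typical limit of 10:

```java
// Standalone sketch (not the Solr source): the effective per-shard limit under
// the FacetComponent defaults (ratio=1.5, count=10) vs the json.facet defaults
// (ratio=1.1, count=4), using the limit * ratio + count shape from both code paths.
public class OverrequestDefaultsSketch {
    // Mirrors FacetComponent.doOverRequestMath; json.facet skips the Math.max,
    // which makes no difference for ratios >= 1.
    static int doOverRequestMath(int limit, double ratio, int count) {
        int adjustedLimit = (int) (limit * ratio) + count;
        return Math.max(limit, adjustedLimit);
    }

    public static void main(String[] args) {
        int limit = 10;
        // facet.field asks each shard for 25 buckets; json.facet for only 15.
        System.out.println(doOverRequestMath(limit, 1.5, 10)); // 25
        System.out.println(doOverRequestMath(limit, 1.1, 4));  // 15
    }
}
```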

> Increase default overrequest ratio/count in json.facet to match existing 
> defaults for facet.overrequest.ratio & facet.overrequest.count ?
> -
>
> Key: SOLR-11729
> URL: https://issues.apache.org/jira/browse/SOLR-11729
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> When FacetComponent first got support for distributed search, the default 
> "effective shard limit" done on shards followed the formula...
> {code}
> limit = (int)(dff.initialLimit * 1.5) + 10;
> {code}
> ...over time, this became configurable with the introduction of some expert 
> level tuning options: {{facet.overrequest.ratio}} & 
> {{facet.overrequest.count}} -- but the defaults (and basic formula) remain 
> the same to this day...
> {code}
>   this.overrequestRatio
> = params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
> 1.5);
>   this.overrequestCount 
> = params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
> ...
>   private int doOverRequestMath(int limit, double ratio, int count) {
> // NOTE: normally, "1.0F < ratio"
> //
> // if the user chooses a ratio < 1, we allow it and don't "bottom out" at
> // the original limit until *after* we've also added the count.
> int adjustedLimit = (int) (limit * ratio) + count;
> return Math.max(limit, adjustedLimit);
>   }
> {code}
> However...
> When {{json.facet}} multi-shard refinement was added, the code was written 
> slightly diff:
> * there is an explicit {{overrequest:N}} (count) option
> * if {{-1 == overrequest}} (which is the default) then an "effective shard 
> limit" is computed using the same basic formula as in FacetComponet -- _*but 
> the constants are different*_...
> ** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
> * For any (non "-1") user specified {{overrequest}} value, it's added 
> verbatim to the {{limit}} (which may have been user specified, or may just be 
> the default)
> ** {{effectiveLimit += freq.overrequest;}}
> Given the design of the {{json.facet}} syntax, I can understand why the code 
> path for an "advanced" user specified {{overrequest:N}} option avoids using 
> any (implicit) ratio calculation and just does the straight addition of 
> {{limit += overrequest}}.
> What I'm not clear on is the choice of the constants {{1.1}} and {{4}} in the 
> common (default) case, and why those differ from the historically used 
> {{1.5}} and {{10}}.
> 
> It may seem like a small thing to worry about, but it can/will cause odd 
> inconsistencies when people try to migrate simple {{facet.field=foo}} (or 
> {{facet.pivot=foo,bar}}) queries to {{json.facet}} -- I have also seen it 
> give people attempting these types of migrations the (mistaken) impression 
> that discrepancies they are seeing are because {{refine:true}} is not 
> working.
> For this reason, I propose we change the (default) {{overrequest:-1}} 
> behavior to use the same constants as the equivalent FacetComponent code...
> {code}
> if (fcontext.isShard()) {
>   if (freq.overrequest == -1) {
> // add over-request if this is a shard request and if we have a small 
> offset (large offsets will already be gathering many more buckets than needed)
> if (freq.offset < 10) {
>   effectiveLimit = (long) (effectiveLimit * 1.5 + 10);
> }
> ...
> {code}






[jira] [Commented] (SOLR-11733) json.facet refinement fails to bubble up some long tail (overrequested) terms?

2017-12-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282264#comment-16282264
 ] 

Hoss Man commented on SOLR-11733:
-



bq. I mentioned in SOLR-11729 the refinement algorithm being different (and for 
a single-level facet field, simpler).

FWIW, here's yonik's comment from SOLR-11729 which seems to specifically be on 
point for this issue (emphasis mine)...

bq. It seems like there are many logical ways to refine results - I originally 
thought about using refine:simple because I imagined we would have other 
implementations in the future.  Anyway, this one is the simplest one to think 
about and implement: *the top buckets to return for all facets are determined 
in the first phase.* The second phase only gets contributions from other shards 
for those buckets.

bq. i.e. simple refinement doesn't change the buckets you get back.

Ah ... ok.  I didn't realize the refinement approach in {{json.facet}} wasn't 
as sophisticated as {{facet.field}}.

To summarize again (in my own words to ensure I'm understanding you correctly):

# do a first pass, requesting "#limit + #overrequest" buckets from each shard
#* use the accumulated results of the first pass to determine the "top #limit 
buckets"
# do a second pass, in which we back-fill the "top #limit buckets" with data 
from any shards that have not yet contributed.

In which case, in my example above, the reason {{yyy}} isn't refined, even 
though it has the same "first pass" total as {{x1}}, is because during the 
first pass {{x1}} sorts higher (due to a secondary tie breaker sort on the 
terms) pushing {{yyy}} out of the "top 6".  (likewise {{x2}} and {{tail}} are 
never considered because they were never part of the "top 6" even w/o a tie 
breaker sort)

Do I have that correct?
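A toy simulation of the two-phase flow summarized above (hypothetical data and names; not Solr internals) makes the tie-breaker effect concrete:

```java
import java.util.*;

// Toy sketch of "simple" refinement: phase one fixes the top buckets from the
// merged per-shard responses (count descending, term ascending as tie-breaker);
// phase two would then only back-fill counts for those already-fixed buckets.
public class SimpleRefineSketch {
    public static List<String> topBuckets(List<Map<String, Integer>> shardResponses, int limit) {
        // Phase one: merge the (limit + overrequest) buckets each shard returned.
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> shard : shardResponses) {
            shard.forEach((term, count) -> merged.merge(term, count, Integer::sum));
        }
        // Sort by count descending, then term ascending (the tie-breaker that
        // pushes "yyy" out of the top buckets in the example above).
        List<String> terms = new ArrayList<>(merged.keySet());
        terms.sort(Comparator.comparingInt((String t) -> -merged.get(t))
                             .thenComparing(Comparator.naturalOrder()));
        // These buckets are now final; a term outside them is never refined.
        return terms.subList(0, Math.min(limit, terms.size()));
    }

    public static void main(String[] args) {
        Map<String, Integer> shardA = new HashMap<>();
        shardA.put("x1", 3); shardA.put("yyy", 2);
        Map<String, Integer> shardB = new HashMap<>();
        shardB.put("x1", 2); shardB.put("yyy", 3);
        // x1 and yyy tie at 5 overall, but x1 wins the term-order tie-breaker,
        // so with limit=1 only x1 survives into the refinement phase.
        System.out.println(topBuckets(Arrays.asList(shardA, shardB), 1));
    }
}
```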



The bottom line, even if I don't fully grasp the current refinement mechanism 
you've described, is that you're saying the behavior I described with the above 
sample documents is *not* a bug: it's the intended/expected behavior of 
{{refine:true}} (aka {{refine:simple}} )

If so I'll edit this jira into an "Improvement" & update the 
summary/description to clarify how {{facet.pivot}} refinement differs from 
{{json.facet}} + {{refine:simple}} & leave open for future improvement




As far as discussion of potential improvements goes...


bq. From a correctness POV, smarter faceting is equivalent to increasing the 
overrequest amount... we still can't make guarantees.

Hmmm... I'm not sure that i agree with that assessment.  I guess 
"mathematically" speaking it's true that compared to a "smarter" refinement 
method, this "simple" refine method can produce equally "correct" top terms 
solely by increasing the overrequest amount -- but that's like saying we don't 
even need any refinement method at all as long as we specify an infinite amount 
of overrequest.

With the refinement approach used by {{facet.field}} (and {{facet.pivot}}) we 
*can* make guarantees about the correctness of the top terms -- regardless of 
if/how-much overrequesting is used -- _for any term that is in the "top 
buckets" of at least one shard_.

IIUC the current {{json.facet}} refinement method can't make _any_ similar 
guarantees at all, regardless of what (finite) overrequest value is specified 
... but {{facet.field}} certainly can:

In {{facet.field}} today, If:
* A term is in the "top buckets" (limit + overrequest) returned by at least one 
shard
* And the sort value (ie: count) returned by that shard (along with the lowest 
sort-value/count returned by all other shards) indicates that the term _might_ 
be competitive relative to the other terms returned by other shards
...then that term is refined. That's a guarantee we can make.
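The per-term check described above can be sketched as follows (a simplification with hypothetical names, not the actual FacetComponent code):

```java
// Sketch of the "might be competitive" test: a term some shards returned is
// refined when its known count, plus each non-reporting shard's lowest returned
// count (an upper bound on what that shard could still contribute), could reach
// the count of the current limit-th term. Names are illustrative only.
public class RefineCandidateSketch {
    static boolean mightBeCompetitive(long knownCount,
                                      long[] lowestCountPerMissingShard,
                                      long currentCutoffCount) {
        long upperBound = knownCount;
        for (long shardFloor : lowestCountPerMissingShard) {
            upperBound += shardFloor; // that shard could add at most its floor
        }
        return upperBound >= currentCutoffCount;
    }

    public static void main(String[] args) {
        // Known count 5, two missing shards whose lowest returned count is 3:
        // 5 + 3 + 3 = 11 >= 10, so the term must be refined.
        System.out.println(mightBeCompetitive(5, new long[]{3, 3}, 10));
        // Known count 5, one missing shard with floor 1: 6 < 10, safe to skip.
        System.out.println(mightBeCompetitive(5, new long[]{1}, 10));
    }
}
```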

Meaning that even if you have shards with widely diff term stats (ie: time 
partitioned shards, or docs co-located due to multi-level compositeId, or block 
join, etc..) we can/will refine the top terms from each shard.

In {{facet.field}} the overrequest helps to:
* increase the scope of how deep we look to find the "top (candidate) terms" 
from each shard
* decrease the amount of data we have to request when refining

...but the *distribution* of terms across shards has very little (none? ... not 
certain) impact on the "correctness" of the "top N" in the aggregate.  Even if 
the first pass "top terms" from each shard are 100% unique, the *relative* 
"bottom" counts from each shard are considered before assuming that the "higher" 
counts should win -- meaning that if the shards have very different sizes, "top 
terms" from the smaller shards still have a chance of being considered as an 
"aggregated top term" as long as the "bottom count" from the (larger) shards is 
high enough to indicate that those (missing) terms might still be competitive.

But in the {{json.facet}} approach to refinement, IIUC: A term returned by only 
one shard won't be considered unless the 

[JENKINS] Lucene-Solr-7.2-Windows (32bit/jdk1.8.0_144) - Build # 5 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/5/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:56849/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:56849/collection1
at 
__randomizedtesting.SeedInfo.seed([692993E6D86F4A55:E17DAC3C769327AD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1582)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:212)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7044 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7044/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

27 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.benchmark.byTask.TestPerfTasksParse

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001\TestPerfTasksParse-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001\TestPerfTasksParse-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001\TestPerfTasksParse-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001\TestPerfTasksParse-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.byTask.TestPerfTasksParse_33B0F956E57E9DF3-001

at __randomizedtesting.SeedInfo.seed([33B0F956E57E9DF3]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.lucene.mockfile.TestVerboseFS

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001\tempDir-009:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001\tempDir-009

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001\tempDir-009:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001\tempDir-009
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J0\temp\lucene.mockfile.TestVerboseFS_32256ECEFFF4B6B2-001

at __randomizedtesting.SeedInfo.seed([32256ECEFFF4B6B2]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)

[jira] [Commented] (SOLR-11719) Committing to restored collection does nothing

2017-12-07 Thread Vitaly Lavrov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282110#comment-16282110
 ] 

Vitaly Lavrov commented on SOLR-11719:
--

Any comment on this issue?

> Committing to restored collection does nothing
> --
>
> Key: SOLR-11719
> URL: https://issues.apache.org/jira/browse/SOLR-11719
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.1
>Reporter: Vitaly Lavrov
> Attachments: script.sh
>
>
> Scenario that was reproduced many times:
> 1. Restore collection
> 2. Send updates
> 3. Send commit
> Commit request returns instantly and the index stays intact. After collection 
> reload or cluster reboot updates are visible and commits will work.






[jira] [Commented] (LUCENE-8083) Give similarities better values for maxScore

2017-12-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282054#comment-16282054
 ] 

Robert Muir commented on LUCENE-8083:
-

{quote}
IBSimilarity with DistributionSPL, AxiomaticF2EXP and AxiomaticF2LOG
{quote}

These similarities can't work with maxScore. We should remove or move them out 
(LUCENE-8010) so it wont be confusing. They should also not be rotated in tests 
or they will cause confusion for e.g. booleanquery or phrasequery tests that 
try to test these optimizations.

> Give similarities better values for maxScore
> 
>
> Key: LUCENE-8083
> URL: https://issues.apache.org/jira/browse/LUCENE-8083
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8083.patch
>
>
> The benefits of LUCENE-4100 largely depend on the quality of the upper bound 
> of the scores that is provided by the similarity.






[jira] [Resolved] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-11691.
-
   Resolution: Fixed
Fix Version/s: 7.2

> v2 api for CREATEALIAS fails if given a list with more than one element
> ---
>
> Key: SOLR-11691
> URL: https://issues.apache.org/jira/browse/SOLR-11691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: SOLR-11691.patch, SOLR-11691.patch, SOLR-11691.patch, 
> repro.sh
>
>
> Successful, correct:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1"]
>   }
> }
> {code}
> Successful, but wrong:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1,collection2"]
>   }
> }
> {code}
> Fails, but should work based on details in _introspect:
> {code}
> {
>   "create-alias" : {
> "name": "testalias2",
> "collections":["collection1","collection2"]
>   }
> }
> {code}
> The error returned is:
> {code}
> {
> "responseHeader": {
> "status": 400,
> "QTime": 25
> },
> "Operation createalias caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Can't create collection alias for collections='[collection1, collection2]', 
> '[collection1' is not an existing collection or alias",
> "exception": {
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "rspCode": 400
> },
> "error": {
> "metadata": [
> "error-class",
> "org.apache.solr.common.SolrException",
> "root-error-class",
> "org.apache.solr.common.SolrException"
> ],
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "code": 400
> }
> }
> {code}
> whereas 
> {code}
> GET localhost:8981/api/c
> {code}
> yields
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "collections": [
> "collection2",
> "collection1"
> ]
> }
> {code}
> Introspection shows:
> {code}
>  "collections": {
>  "type": "array",
>  "description": "The list of collections to be known as this alias.",
>   "items": {
>   "type": "string"
>}
>   },
> {code}
> Basically the property is documented as an array, but parsed as a string (I 
> suspect it's parsed as a list but then the toString value of the list is 
> used, but haven't checked). We have a conflict between what is natural for 
> expressing a list in JSON (an array) and what is natural for expressing a 
> list as a parameter (comma separation). I'm unsure how best to resolve this, 
> as it's a question of making "direct translation" to v2 work vs making v2 
> more natural. I tend to favor accepting an array and therefore making v2 more 
> natural which would be more work, but want to know what others think. From a 
> back compatibility perspective, that direction also makes this clearly a bug 
> fix rather than a breaking change since it doesn't match the _introspect 
> documentation. I also haven't tried looking at old versions to find any 
> evidence as to whether the documented form worked previously... so I don't 
> know if this is a regression or if it never worked.
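The suspected {{toString}} failure mode can be reproduced in isolation (this is an assumption based on the error message, not a trace through the actual v2 API code; all names here are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// Reproduces the suspected bug in isolation: if the JSON array is parsed into
// a List but then flattened with toString() rather than joined, the first
// "collection" looked up becomes "[collection1" -- matching the error above.
public class AliasParamSketch {
    static String firstLookedUpCollection(List<String> collections) {
        String flattened = collections.toString();   // "[collection1, collection2]"
        return flattened.split(",\\s*")[0];          // "[collection1"
    }

    public static void main(String[] args) {
        List<String> collections = Arrays.asList("collection1", "collection2");
        System.out.println(firstLookedUpCollection(collections)); // prints "[collection1"
        // A correct flattening would join the elements instead:
        System.out.println(String.join(",", collections));        // collection1,collection2
    }
}
```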






[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 25 - Still Unstable!

2017-12-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/25/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

14 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaByCountForAllShards

Error Message:
Error from server at https://127.0.0.1:35131/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:35131/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([D381AC2986326041:EF10EB7DC9733D14]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaByCountForAllShards(DeleteReplicaTest.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282049#comment-16282049
 ] 

ASF subversion and git services commented on SOLR-11691:


Commit d17d331ec06f31c1abb625f53d3a0450e0c1c83c in lucene-solr's branch 
refs/heads/branch_7_2 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d17d331 ]

SOLR-11691: Bug: V2 requests for create-alias didn't work when the collections 
param was an array.

(cherry picked from commit fe8dca8)


> v2 api for CREATEALIAS fails if given a list with more than one element
> ---
>
> Key: SOLR-11691
> URL: https://issues.apache.org/jira/browse/SOLR-11691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
> Attachments: SOLR-11691.patch, SOLR-11691.patch, SOLR-11691.patch, 
> repro.sh
>
>
> Successful, correct:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1"]
>   }
> }
> {code}
> Successful, but wrong:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1,collection2"]
>   }
> }
> {code}
> Fails, but should work based on details in _introspect:
> {code}
> {
>   "create-alias" : {
> "name": "testalias2",
> "collections":["collection1","collection2"]
>   }
> }
> {code}
> The error returned is:
> {code}
> {
> "responseHeader": {
> "status": 400,
> "QTime": 25
> },
> "Operation createalias caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Can't create collection alias for collections='[collection1, collection2]', 
> '[collection1' is not an existing collection or alias",
> "exception": {
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "rspCode": 400
> },
> "error": {
> "metadata": [
> "error-class",
> "org.apache.solr.common.SolrException",
> "root-error-class",
> "org.apache.solr.common.SolrException"
> ],
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "code": 400
> }
> }
> {code}
> whereas 
> {code}
> GET localhost:8981/api/c
> {code}
> yields
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "collections": [
> "collection2",
> "collection1"
> ]
> }
> {code}
> Introspection shows:
> {code}
>  "collections": {
>  "type": "array",
>  "description": "The list of collections to be known as this alias.",
>   "items": {
>   "type": "string"
>}
>   },
> {code}
> Basically the property is documented as an array, but parsed as a string (I 
> suspect it's parsed as a list but then the toString value of the list is 
> used, but haven't checked). We have a conflict between what is natural for 
> expressing a list in JSON (an array) and what is natural for expressing a 
> list as a parameter (comma separation). I'm unsure how best to resolve this, 
> as it's a question of making "direct translation" to v2 work vs making v2 
> more natural. I tend to favor accepting an array and therefore making v2 more 
> natural which would be more work, but want to know what others think. From a 
> back compatibility perspective, that direction also makes this clearly a bug 
> fix rather than a breaking change since it doesn't match the _introspect 
> documentation. I also haven't tried looking at old versions to find any 
> evidence as to whether the documented form worked previously... so I don't 
> know if this is a regression or if it never worked.
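The mismatch described above (documented as an array in _introspect, but effectively handled as a single string) can be illustrated with a small normalization sketch. This is a hypothetical Python helper, not Solr's actual Java code; it shows how accepting either a JSON array or a comma-separated string yields the same flat list of collection names, which is the direction the fix takes:

```python
def normalize_collections(value):
    """Accept either a JSON array of names or a single
    comma-separated string, and return a flat list of names.
    Hypothetical helper illustrating the normalization the fix
    implies; not the actual Solr implementation.
    """
    if isinstance(value, str):
        items = value.split(",")
    else:
        # also tolerate the mixed form ["collection1,collection2"]
        items = []
        for v in value:
            items.extend(str(v).split(","))
    return [s.strip() for s in items if s.strip()]

# All three request styles from the report yield the same list:
assert normalize_collections("collection1,collection2") == ["collection1", "collection2"]
assert normalize_collections(["collection1", "collection2"]) == ["collection1", "collection2"]
assert normalize_collections(["collection1,collection2"]) == ["collection1", "collection2"]
```

With a normalization like this in place, the "direct translation" v1 form and the natural v2 array form both work, which is why the change reads as a bug fix rather than a breaking change.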



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282047#comment-16282047
 ] 

ASF subversion and git services commented on SOLR-11691:


Commit fe8dca8ea2ef9f58d106e109e2d02c0423e508c4 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe8dca8 ]

SOLR-11691: Bug: V2 requests for create-alias didn't work when the collections 
param was an array.

(cherry picked from commit 5448274)


> v2 api for CREATEALIAS fails if given a list with more than one element
> ---






[jira] [Commented] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282045#comment-16282045
 ] 

ASF subversion and git services commented on SOLR-11691:


Commit 5448274f26191a9882aa5c3020e3cbdcbf93551c in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5448274 ]

SOLR-11691: Bug: V2 requests for create-alias didn't work when the collections 
param was an array.


> v2 api for CREATEALIAS fails if given a list with more than one element
> ---






[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-07 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282034#comment-16282034
 ] 

David Smiley commented on SOLR-11331:
-

Our IntelliJ config already has this.  I've found one annoyance though -- it 
requires that I run "ant server" first, since the running Jetty's Main 
ultimately loads webapps and doesn't care what the current classpath is (or so 
it appears).  Is the same true of your solution?  It's not clear to me.

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
> Attachments: SOLR-11331.patch
>
>
> Ability to Debug Solr With Eclipse IDE






[jira] [Commented] (SOLR-11730) Test NodeLost / NodeAdded dynamics

2017-12-07 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282027#comment-16282027
 ] 

Andrzej Bialecki  commented on SOLR-11730:
--

Simulations indicate that even with significant flakiness the framework may not 
take any action if other events are happening too: even if a nodeLost trigger 
creates an event, that event may be discarded due to the cooldown period, and 
after the cooldown period has passed the flaky node may be back up again, so 
the event would not be regenerated.
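The dynamic can be seen in a toy model. This is a sketch under the assumption that events arriving during cooldown are discarded outright rather than queued; the class and method names are illustrative, not Solr's autoscaling API:

```python
class CooldownTrigger:
    """Toy model of a trigger with a cooldown period. Events that
    arrive while cooldown is active are discarded, not retried, so
    if the node state has flipped back by the time cooldown ends,
    the lost-node event is swallowed entirely. Illustrative only;
    not the actual Solr autoscaling framework.
    """
    def __init__(self, cooldown_ticks):
        self.cooldown_ticks = cooldown_ticks
        self.cooldown_until = -1
        self.fired = []  # ticks at which an event actually fired

    def offer(self, tick, node_lost):
        if not node_lost:
            return  # node is up: nothing to report
        if tick < self.cooldown_until:
            return  # discarded: inside cooldown, never retried
        self.fired.append(tick)
        self.cooldown_until = tick + self.cooldown_ticks

# A node that flaps every other tick while an earlier event has
# started a cooldown: its lost-node events are all swallowed.
t = CooldownTrigger(cooldown_ticks=5)
t.offer(0, node_lost=True)                    # fires; cooldown until tick 5
for tick in range(1, 5):
    t.offer(tick, node_lost=(tick % 2 == 1))  # discarded during cooldown
t.offer(5, node_lost=False)                   # back up when cooldown ends
assert t.fired == [0]                         # the flapping never surfaced
```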

> Test NodeLost / NodeAdded dynamics
> --
>
> Key: SOLR-11730
> URL: https://issues.apache.org/jira/browse/SOLR-11730
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>
> Let's consider a "flaky node" scenario.
> A node is going up and down at short intervals (eg. due to a flaky network 
> cable). If the frequency of these events coincides with {{waitFor}} interval 
> in {{nodeLost}} trigger configuration, the node may never be reported to the 
> autoscaling framework as lost. Similarly it may never be reported as added 
> back if it's lost again within the {{waitFor}} period of {{nodeAdded}} 
> trigger.
> Other scenarios are possible here too, depending on timing:
> * node being constantly reported as lost
> * node being constantly reported as added
> One possible solution for the autoscaling triggers is that the framework 
> should keep a short-term ({{waitFor * 2}} long?) memory of a node state that 
> the trigger is tracking in order to eliminate flaky nodes (ie. those that 
> transitioned between states more than once within the period).
> Situation like this is detrimental to SolrCloud behavior regardless of 
> autoscaling actions, so it should probably be addressed at a node level by 
> eg. shutting down Solr node after the number of disconnects in a time window 
> reaches a certain threshold.
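The short-term memory proposed in the description can be sketched as follows. This is an illustrative Python model assuming a sliding window of roughly {{waitFor * 2}} over each node's state transitions; the names are hypothetical and this is not the actual autoscaling framework code:

```python
from collections import defaultdict, deque

class FlakyNodeFilter:
    """Sketch of the proposed short-term memory: remember each
    node's state transitions for roughly waitFor * 2, and treat a
    node as flaky (so its trigger events can be suppressed) if it
    transitioned more than once inside that window. Illustrative
    only; not the actual Solr autoscaling code.
    """
    def __init__(self, wait_for):
        self.window = wait_for * 2
        self.transitions = defaultdict(deque)  # node -> transition times

    def record(self, node, now):
        times = self.transitions[node]
        times.append(now)
        # age out transitions older than the window
        while times and now - times[0] > self.window:
            times.popleft()

    def is_flaky(self, node):
        return len(self.transitions[node]) > 1

f = FlakyNodeFilter(wait_for=10)
f.record("node1", now=0)   # went down
f.record("node1", now=5)   # came back: two transitions in the window
assert f.is_flaky("node1")

f.record("node2", now=0)   # single clean transition
assert not f.is_flaky("node2")
```

A filter like this would let a nodeLost or nodeAdded trigger skip event generation for nodes that flipped state more than once within the window, independent of any node-level shutdown policy.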





