[JENKINS] Lucene-Solr-6.1-Linux (64bit/jdk1.8.0_92) - Build # 33 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/33/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 53851 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.1-Linux/build.xml:740: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.1-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.1-Linux/lucene/build.xml:138: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.1-Linux/lucene/build.xml:480: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.1-Linux/lucene/common-build.xml:2496: 
Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to 
/home/jenkins/workspace/Lucene-Solr-6.1-Linux/lucene/build/docs/changes/jiraVersionList.json

Total time: 69 minutes 53 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Guidance needed

2016-06-13 Thread Dan Davis
There's an excellent book called "Code Reading" that could help you make
sense of it.

My advice is to first figure out how to:
 (a) build using ant, outside of Eclipse or any IDE
 (b) run the tests using ant, outside of Eclipse or any IDE
 (c) start a server with a custom configuration, outside of Eclipse or any IDE
(rough example commands below)
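
From memory (double-check the README and "ant -p" for the actual target names,
and treat the paths as placeholders):

  cd lucene-solr/solr && ant server            # (a) compile the code and build the Solr server
  ant test -Dtestcase=TestSomeClass            # (b) run a single test suite, from solr/ or lucene/
  bin/solr start -s /path/to/your/solr/home    # (c) start Solr against your own configs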

Then you need to start dividing the code into different components in your
mind - Solr vs. Lucene, indexing, query processing, structures on disk,
structures in memory (caches), etc.

I wish I had time to follow my own advice - I know what to do, but I have a
couple of full-time commitments (work & family).

On Mon, Apr 11, 2016 at 2:58 PM, gor joseph  wrote:

>
> Good morning,
>
> I am a young engineer looking to join and contribute to the project.
> However, I got stuck on the overwhelming docs and thousands of lines of
> code.
>
> Can anyone please give me advice on how to understand the project and
> contribute effectively?
>
> Thanks
>
> Sincerely,
> Joseph.
> LinkedIn : https://fr.linkedin.com/in/josephgor
> Mobile : +33 630733572
> Skype :gor.jos...@outlook.com
> E-mail :gor.jos...@outlook.com


[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 247 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/247/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 61555 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj1294019718
 [ecj-lint] Compiling 932 source files to 
C:\Users\jenkins\AppData\Local\Temp\ecj1294019718
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java
 (at line 34)
 [ecj-lint] import org.apache.hadoop.fs.FsStatus;
 [ecj-lint]^
 [ecj-lint] The import org.apache.hadoop.fs.FsStatus is never used
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\handler\AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\handler\AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\handler\ReplicationHandler.java
 (at line 79)
 [ecj-lint] import org.apache.solr.core.DirectoryFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.core.DirectoryFactory is never used
 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 16984 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16984/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 12455 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J1-20160614_025417_997.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps/java_pid18509.hprof 
...
   [junit4] Heap dump file created [301756317 bytes in 0.778 secs]
   [junit4] <<< JVM J1: EOF 

[...truncated 10290 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:740: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:692: Some of the 
tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid18509.hprof

Total time: 62 minutes 51 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-6.1-Windows (64bit/jdk1.8.0_92) - Build # 10 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Windows/10/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/ksom/sd", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/ksom/sd",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([E64F5A8119B24C82:3E0277D6EE6FE922]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 889 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/889/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 61552 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1852036018
 [ecj-lint] Compiling 932 source files to /tmp/ecj1852036018
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 34)
 [ecj-lint] import org.apache.hadoop.fs.FsStatus;
 [ecj-lint]^
 [ecj-lint] The import org.apache.hadoop.fs.FsStatus is never used
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 79)
 [ecj-lint] import org.apache.solr.core.DirectoryFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.core.DirectoryFactory is never used
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5909 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5909/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 63144 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj26987694
 [ecj-lint] Compiling 932 source files to 
C:\Users\jenkins\AppData\Local\Temp\ecj26987694
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\lib\org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java
 (at line 34)
 [ecj-lint] import org.apache.hadoop.fs.FsStatus;
 [ecj-lint]^
 [ecj-lint] The import org.apache.hadoop.fs.FsStatus is never used
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\handler\AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\handler\AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\core\src\java\org\apache\solr\handler\ReplicationHandler.java
 (at line 79)
 [ecj-lint] import org.apache.solr.core.DirectoryFactory;
 [ecj-lint]^
 [ecj-lint] The import 

Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-13 Thread Steve Rowe
I’ve committed fixes for all three problems.

--
Steve
www.lucidworks.com

> On Jun 13, 2016, at 2:46 PM, Steve Rowe  wrote:
> 
> Smoke tester was happy: SUCCESS! [0:23:40.900240]
> 
> Except for the below-described minor issues: changes, docs and javadocs look 
> good:
> 
> * Broken description section links from documentation to javadocs 
> 
> * Solr’s CHANGES.txt is missing a “Versions of Major Components” section.
> * Solr’s Changes.html has a section "Upgrading from Solr any prior release” 
> that is not formatted properly (the hyphens are put into a bullet item below)
> 
> +0 to release.  I’ll work on the above and backport to the 6.1 branch, in 
> case there is another RC.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Jun 13, 2016, at 5:15 AM, Adrien Grand  wrote:
>> 
>> Please vote for release candidate 1 for Lucene/Solr 6.1.0
>> 
>> 
>> The artifacts can be downloaded from:
>> 
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>> 
>> You can run the smoke tester directly with this command:
>> 
>> 
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>> 
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>> Here is my +1.
>> SUCCESS! [0:36:57.750669]
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.1-Linux (64bit/jdk1.8.0_92) - Build # 32 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/32/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([78ABA6732D8FE0C4:E95B9006CB84DEB]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:384)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:327)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:377)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 92 - Still Failing

2016-06-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/92/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=17183, 
name=searcherExecutor-7588-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=17205, 
name=searcherExecutor-7595-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=17185, 
name=searcherExecutor-7587-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=17183, name=searcherExecutor-7588-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=17205, name=searcherExecutor-7595-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=17185, name=searcherExecutor-7587-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 

[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.8.0_92) - Build # 278 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/278/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 68755 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/build.xml:750: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build.xml:632: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build.xml:607: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/lucene/common-build.xml:2606: 
Can't get https://issues.apache.org/jira/rest/api/2/project/SOLR to 
/home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build/docs/changes/jiraVersionList.json

Total time: 71 minutes 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2016-06-13 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328467#comment-15328467
 ] 

Upayavira commented on LUCENE-6590:
---

It seems a previous test I did was flawed (I thought I was pushing updated 
configs, but I suspect I was actually pushing old ones). Scoring is now working 
correctly; the main change was updating the Lucene match version from 4.6 to 5.5.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2016-06-13 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328413#comment-15328413
 ] 

Upayavira commented on LUCENE-6590:
---

It is quite possibly something in my setup. I figured that because someone else 
reported the same issue it might be more widespread, but I now think it is time 
to assume I've done something stupid. Apologies and thanks.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+122) - Build # 16983 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16983/
Java: 32bit/jdk-9-ea+122 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/jtx/lk", "path":"/test1", 
"httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":null},  from 
server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val' for path 'x' 
full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/jtx/lk",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":null},  from server:  null
at 
__randomizedtesting.SeedInfo.seed([80F9B8A8BD1F575E:58B495FF4AC2F2FE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query

2016-06-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328362#comment-15328362
 ] 

Michael McCandless commented on LUCENE-7337:


bq. A simple fix would be to replace the empty boolean query produced by the 
multi term query with a MatchNoDocsQuery but I am not sure that it's the best 
way to fix. 

+1

Or more generally can we have an empty-clause BQ rewrite to 
{{MatchNoDocsQuery}}?  I had folded this into my attempt to fix the 
world's-hardest-toString-issue (LUCENE-7276) but it was too many changes to try 
at once, so breaking it out here is great.

However, before we can do this, we need to fix {{MatchNoDocsQuery}} to not 
rewrite to an empty BQ, or else we get a never-terminating rewrite ;)
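
A rough sketch of that shape (illustrative only, not the actual patch; written
against the 6.x API from memory, so treat names and signatures as approximate):

  import org.apache.lucene.search.BooleanQuery;
  import org.apache.lucene.search.MatchNoDocsQuery;
  import org.apache.lucene.search.Query;

  // Sketch only: after a BooleanQuery has been rewritten, an empty clause list
  // collapses to MatchNoDocsQuery. MatchNoDocsQuery#rewrite must then return
  // the query itself (not an empty BooleanQuery), otherwise rewriting would
  // never terminate.
  final class CollapseEmptyBooleanSketch {
    static Query collapseIfEmpty(Query rewritten) {
      if (rewritten instanceof BooleanQuery
          && ((BooleanQuery) rewritten).clauses().isEmpty()) {
        return new MatchNoDocsQuery();
      }
      return rewritten;
    }
  }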

> MultiTermQuery are sometimes rewritten into an empty boolean query
> --
>
> Key: LUCENE-7337
> URL: https://issues.apache.org/jira/browse/LUCENE-7337
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>
> MultiTermQuery are sometimes rewritten to an empty boolean query (depending 
> on the rewrite method), it can happen when no expansions are found on a fuzzy 
> query for instance.
> It can be problematic when the multi term query is boosted. 
> For instance consider the following query:
> `((title:bar~1)^100 text:bar)`
> This is a boolean query with two optional clauses. The first one is a fuzzy 
> query on the field title with a boost of 100. 
> If there is no expansion for "title:bar~1" the query is rewritten into:
> `(()^100 text:bar)`
> ... and when expansions are found:
> `((title:bars | title:bar)^100 text:bar)`
> The scoring of those two queries will differ because the normalization factor 
> and the norm for the first query will be equal to 1 (the boost is ignored 
> because the empty boolean query is not taken into account for the computation 
> of the normalization factor) whereas the second query will have a 
> normalization factor of 10,000 (100*100) and a norm equal to 0.01. 
> This kind of discrepancy can happen in a single index because the expansions 
> for the fuzzy query are done at the segment level. It can also happen when 
> multiple indices are requested (Solr/ElasticSearch case).
> A simple fix would be to replace the empty boolean query produced by the 
> multi term query with a MatchNoDocsQuery but I am not sure that it's the best 
> way to fix. WDYT ?
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-06-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328351#comment-15328351
 ] 

Michael McCandless commented on LUCENE-7276:


I don't think we have ever, nor should we ever, make a guarantee that 
{{MatchNoDocsQuery.toString}} would somehow round-trip through a query parser 
back to itself, and so I think we are free to improve it here/now.

A right not exercised is soon forgotten.

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten into an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed by toString and when an explanation for a non-match is requested.
> For instance the query:
> new MatchNoDocsQuery("Field not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-06-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328347#comment-15328347
 ] 

Michael McCandless commented on LUCENE-7276:


Wow thank you for digging on this [~jim.ferenczi] ... I've been meaning to get 
back to this issue, and if we can fix that scoring issue (separately) that will 
make it much easier.

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten into an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed by toString and when an explanation for a non-match is requested.
> For instance the query:
> new MatchNoDocsQuery("Field not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-06-13 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328320#comment-15328320
 ] 

Shikha Somani edited comment on SOLR-8297 at 6/13/16 9:33 PM:
--

Below are two proposed solutions to “Allow join query over 2 sharded 
collections”, i.e. fixing the functionality that broke in Solr 5.x. It is not an 
enhancement for supporting a join over multiple shards hosted on the same JVM.

*Proposed solutions*:
*1. Distributed join with range*: This allows the join to be applied with 
greater flexibility by considering the hash range instead of the shard name when 
selecting the fromCollection replica. The current implementation requires 
fromCollection to be singly sharded; with this solution fromCollection can be 
singly sharded, sharded identically to toCollection, or sharded so that its 
ranges overlap toCollection's.

* *Solution details*: A new parameter “joinMode” will be introduced. It governs 
how a replica is selected based on its range. Possible values of joinMode:
** *Exact*: The “fromCollection” shard range must exactly match the 
“toCollection” shard present on that node; only then is the join applied 
between the two collections. This is the _default_ value.
** *Overlap*: The “fromCollection” shard range must overlap the “toCollection” 
range on the given node.
** *Any*: No range check; any replica of fromCollection present on that node is 
picked and the join is applied.

*2. Non-distributed join*: The same way the join worked in Solr 4.x. The client 
names the exact replica of “fromCollection” against which the join is applied, 
and must pass “distrib=false” in the query parameters (see the sketch below).

If either solution is acceptable, I will submit a PR for it.
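
For illustration only, option 2 could look roughly like the following from SolrJ;
every collection, core, and field name here is made up:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;

  // Sketch: send the request to one specific core with distrib=false, and point
  // {!join} at a local replica of the "from" collection by its core name.
  public class NonDistributedJoinSketch {
    public static void main(String[] args) throws Exception {
      try (HttpSolrClient client =
          new HttpSolrClient("http://localhost:8983/solr/orders_shard1_replica1")) {
        SolrQuery q = new SolrQuery(
            "{!join from=id to=customer_id fromIndex=customers_shard1_replica1}city:Paris");
        q.set("distrib", "false");
        System.out.println(client.query(q).getResults().getNumFound());
      }
    }
  }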


was (Author: shikhasomani):
Below are two proposed solutions to “Allow join query over 2 sharded 
collections” i.e. fixing the broken functionality in Solr 5.x. It is not an 
enhancement for supporting join on multiple shards present on same jvm.

*Proposed solution*: Two possible solutions:
# *Distributed join with Range*: This will allow join with greater flexibility 
by considering range instead of shard name (rigid criteria) while selecting 
fromCollection replica. The current implementation requires fromCollection to 
be singly sharded, with this solution fromCollection can be either singly 
sharded, equally sharded (as toCollection) or it can overlap with toCollection 
range.

** *Solution details*: A new parameter “joinMode” will be introduced. This 
parameter will govern on what basis replica will be selected based on range.
Possible values of joinMode:
#**Exact*: The “fromCollection” shard range should exactly match with 
“toCollection” shard present on that node then only join will be applied 
between two collections. This is the _default_ value
#**Overlap*: Shard range of “fromCollection” should overlap with “toCollection” 
on given node. 
#**Any*: This option will not consider range check, it will pick any replica of 
fromCollection that is present on that node and apply join
#*Non-distributed join*: The same way it worked in Solr 4.x. Client will 
mention exact replica of “fromCollection” with which join will be applied. It 
is required to pass  “distrib=false” in query parameters

If this solution is fine will submit a PR for this fix.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 

[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-06-13 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328320#comment-15328320
 ] 

Shikha Somani commented on SOLR-8297:
-

Below are two proposed solutions to “Allow join query over 2 sharded 
collections” i.e. fixing the broken functionality in Solr 5.x. It is not an 
enhancement for supporting join on multiple shards present on same jvm.

*Proposed solution*: Two possible solutions:
# *Distributed join with Range*: This will allow join with greater flexibility 
by considering range instead of shard name (rigid criteria) while selecting 
fromCollection replica. The current implementation requires fromCollection to 
be singly sharded, with this solution fromCollection can be either singly 
sharded, equally sharded (as toCollection) or it can overlap with toCollection 
range.

** *Solution details*: A new parameter “joinMode” will be introduced. This 
parameter will govern on what basis replica will be selected based on range.
Possible values of joinMode:
#**Exact*: The “fromCollection” shard range should exactly match with 
“toCollection” shard present on that node then only join will be applied 
between two collections. This is the _default_ value
#**Overlap*: Shard range of “fromCollection” should overlap with “toCollection” 
on given node. 
#**Any*: This option will not consider range check, it will pick any replica of 
fromCollection that is present on that node and apply join
#*Non-distributed join*: The same way it worked in Solr 4.x. Client will 
mention exact replica of “fromCollection” with which join will be applied. It 
is required to pass  “distrib=false” in query parameters

If this solution is fine will submit a PR for this fix.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I've a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the nr of slices when we want to verify the  
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2016-06-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328281#comment-15328281
 ] 

Adrien Grand commented on LUCENE-6590:
--

Thanks for the ping, I had missed your previous message. The bug is that 
queryNorm should not be 1.0 in the 5.5 explanation. There must be something 
that bypasses query normalization somewhere. I believe your query was a simple 
term query for description:obama, is that correct? Since I ran something 
similar and did not reproduce the bug, there must be something specific to 
your setup that triggers this problem. Could you try to build a reproducible 
test case so that I can dig into what is happening, either an actual test case 
or a sequence of commands that I can run against Solr to reproduce the problem?

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.
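
For reference, a minimal sketch of what the second (wrapper) option looks like 
from a caller's point of view, using an immutable BoostQuery-style wrapper; the 
field and term values are made up:

{code}
// Sketch of applying a boost through an immutable wrapper instead of
// mutating the query with setBoost(). Field and term values are made up.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class BoostWrapperSketch {
  public static Query boostedTitleOrBody() {
    Query title = new TermQuery(new Term("title", "lucene"));
    Query body = new TermQuery(new Term("body", "lucene"));
    // The wrapped queries stay immutable; the boost lives in the wrapper,
    // so hashCode/equals of the inner queries never change.
    return new BooleanQuery.Builder()
        .add(new BoostQuery(title, 2.0f), BooleanClause.Occur.SHOULD)
        .add(body, BooleanClause.Occur.SHOULD)
        .build();
  }
}
{code}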



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7338.

   Resolution: Fixed
Fix Version/s: 6.2
   6.0.2
   master (7.0)
   6.1.1

> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: 6.1.1, master (7.0), 6.0.2, 6.2
>
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8972) Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-06-13 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8972.
--
Resolution: Fixed

> Add GraphHandler and GraphMLResponseWriter to support graph visualizations
> --
>
> Key: SOLR-8972
> URL: https://issues.apache.org/jira/browse/SOLR-8972
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: GraphHandler.java, GraphMLResponseWriter.java, 
> SOLR-8972.patch, SOLR-8972.patch, SOLR-8972.patch
>
>
> SOLR-8925 is shaping up nicely. It would be great if Solr could support 
> outputting graphs in GraphML. This will allow users to visualize their graphs 
> in a number of graph visualization tools (NodeXL, Gephi, Tulip etc...). This 
> ticket will create a new Graph handler which will take a Streaming Expression 
> graph traversal and output GraphML. A new GraphMLResponseWriter will handle 
> the GraphML formatting. In future releases we can consider supporting other 
> graph formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328214#comment-15328214
 ] 

ASF subversion and git services commented on LUCENE-7338:
-

Commit 112b7f308dc867398e0a7a03e813dc361ef488dc in lucene-solr's branch 
refs/heads/branch_6_1 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=112b7f3 ]

LUCENE-7338: Fix javadocs package and overview description section anchor names 
to the Java8 style: s/*_description/*.description/


> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328216#comment-15328216
 ] 

ASF subversion and git services commented on LUCENE-7338:
-

Commit a2a1bd2a4ae91e6b990a3b7f9df62802acddf40e in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a2a1bd2 ]

LUCENE-7338: Fix javadocs package and overview description section anchor names 
to the Java8 style: s/*_description/*.description/


> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328215#comment-15328215
 ] 

ASF subversion and git services commented on LUCENE-7338:
-

Commit 03838732c1a720a4d82ffd7d1433563c16fc9876 in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0383873 ]

LUCENE-7338: Fix javadocs package and overview description section anchor names 
to the Java8 style: s/*_description/*.description/


> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328209#comment-15328209
 ] 

Steve Rowe edited comment on LUCENE-7338 at 6/13/16 8:37 PM:
-

I looked for other anchor names in Lucene/Solr source with regex 
{{#\[a-z0-9]+_}}, but didn't find anything other than the ones I'd already 
spotted ({{#package_description}} and {{#overview_description}}).

This patch fixes anchor names in the Lucene site's per-release {{index.xsl}} as 
well as a few mentions in javadocs.

Committing shortly.


was (Author: steve_rowe):
I looked for other anchor names in Lucene/Solr source with regex 
{{#\[a-z0-9]+_}}, but didn't find anything other than the ones I'd already 
spotted ({{#package_description}} and {{#overview_description}}.

This patch fixes anchor names in the Lucene site's per-release {{index.xsl}} as 
well a few mentions in javadocs.

Committing shortly.

> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-7338:
--

Assignee: Steve Rowe

> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7338:
---
Attachment: LUCENE-7338.patch

I looked for other anchor names in Lucene/Solr source with regex 
{{#\[a-z0-9]+_}}, but didn't find anything other than the ones I'd already 
spotted ({{#package_description}} and {{#overview_description}}).

This patch fixes anchor names in the Lucene site's per-release {{index.xsl}} as 
well as a few mentions in javadocs.

Committing shortly.

> Broken description section links from documentation to javadocs
> ---
>
> Key: LUCENE-7338
> URL: https://issues.apache.org/jira/browse/LUCENE-7338
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Reporter: Steve Rowe
> Attachments: LUCENE-7338.patch
>
>
> In Lucene's top-level documentation, there are links to Description sections 
> in Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
> Introduction to Lucene's APIs; and to the Analysis overview.
> All of these links are anchored at {{#overview_description}} or 
> {{#package_description}}, but it looks like Java8 switched how these anchors 
> are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
> are named with dots rather than underscores: {{#overview.description}} and 
> {{#package.description}}.  As a result, the documentation links go to the 
> right page, but the browser stays at the top of the page because it can't 
> find the now-misnamed anchors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2016-06-13 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328207#comment-15328207
 ] 

Upayavira commented on LUCENE-6590:
---

Any ideas [~jpountz]

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow for applying a boost raises issues since it makes queries bad cache 
> keys since their hashcode can change anytime. We could just document that 
> queries should never be modified after they have gone through IndexSearcher 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)" which would return a clone that has a different boost
>  - or move boost handling outside of Query, for instance we could have a 
> (immutable) query impl that would be dedicated to applying boosts, that 
> queries that need to change boosts at rewrite time (such as BooleanQuery) 
> would use as a wrapper.
> The latter idea is from Robert and I like it a lot given how often I either 
> introduced or found a bug which was due to the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 196 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/196/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([6836BCC222CF2ADE:816C07FABC56BA76]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2&qt=standard&start=0&rows=20&version=2.2
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1041 - Still Failing

2016-06-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1041/

9 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationDistributedZkTest

Error Message:
ObjectTracker found 48 object(s) that were not released!!! [InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 48 object(s) that were not 
released!!! [InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient, InternalHttpClient, InternalHttpClient, InternalHttpClient, 
InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([B653EE7F5BD480E2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor104.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrReplicationDistributedZkTest

Error Message:

[jira] [Comment Edited] (LUCENE-6968) LSH Filter

2016-06-13 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328054#comment-15328054
 ] 

Andy Hind edited comment on LUCENE-6968 at 6/13/16 7:55 PM:


Hi Tommaso, the MinHashFilterTest was running fine. It was JapaneseNumberFilter 
that was failing intermittently. I think on one of the evil test cases.

LongPair should implement equals (and probably hashCode if it will be reused) 
as it goes into a TreeSet. An oversight on my part.

FWIW, as far as I can tell, the change in patch 6 was included in 5.


was (Author: andyhind):
Hi Tommaso, the MinHashFilterTest was running fine. It was JapaneseNumberFilter 
that was failing intermittently. I think on one of the evil test cases.

LongPair should implement equals (and probably hashCode if it will be reused) 
as it goes into a TreeSet. An over sight on my part.

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH. Which support query like this
> {quote}
> Find similar documents that have 0.8 or higher similar score with a given 
> document. Similarity measurement can be cosine, jaccard, euclid..
> {quote}
> For example. Given following corpus
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We wanna find documents that have 0.6 score in jaccard measurement with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)
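
For reference, the Jaccard measure used in the example above can be sketched in 
plain Java; this is only the target similarity measure, not the MinHash/LSH 
filter itself:

{code}
// Illustrative Jaccard similarity between two token sets. This is only the
// target measure from the example above, not the MinHash/LSH filter itself.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardSketch {
  static double jaccard(Set<String> a, Set<String> b) {
    Set<String> intersection = new HashSet<>(a);
    intersection.retainAll(b);
    Set<String> union = new HashSet<>(a);
    union.addAll(b);
    return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
  }

  public static void main(String[] args) {
    Set<String> query = new HashSet<>(Arrays.asList(
        "solr", "is", "an", "open", "source", "search", "engine"));
    Set<String> doc1 = new HashSet<>(Arrays.asList(
        "solr", "is", "an", "open", "source", "search", "engine",
        "based", "on", "lucene"));
    System.out.println(jaccard(query, doc1)); // 0.7, above the 0.6 threshold
  }
}
{code}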



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6968) LSH Filter

2016-06-13 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328054#comment-15328054
 ] 

Andy Hind commented on LUCENE-6968:
---

Hi Tommaso, the MinHashFilterTest was running fine. It was JapaneseNumberFilter 
that was failing intermittently. I think on one of the evil test cases.

LongPair should implement equals (and probably hashCode if it will be reused) 
as it goes into a TreeSet. An oversight on my part.
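
For what it's worth, a minimal sketch of such a value class with equals/hashCode 
added; the class name matches the patch, but the fields and method bodies here 
are assumptions, not the patch's code:

{code}
// Illustrative value class with equals/hashCode added. Field names and the
// compareTo order are assumptions, not copied from the LUCENE-6968 patch.
import java.util.Objects;

public final class LongPair implements Comparable<LongPair> {
  final long hash;
  final long index;

  LongPair(long hash, long index) {
    this.hash = hash;
    this.index = index;
  }

  @Override
  public int compareTo(LongPair other) {
    int cmp = Long.compare(hash, other.hash);
    return cmp != 0 ? cmp : Long.compare(index, other.index);
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof LongPair)) return false;
    LongPair that = (LongPair) o;
    return hash == that.hash && index == that.index;
  }

  @Override
  public int hashCode() {
    return Objects.hash(hash, index);
  }
}
{code}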

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH. Which support query like this
> {quote}
> Find similar documents that have 0.8 or higher similar score with a given 
> document. Similarity measurement can be cosine, jaccard, euclid..
> {quote}
> For example. Given following corpus
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is an popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> We wanna find documents that have 0.6 score in jaccard measurement with this 
> doc
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1,2 and 3 (MoreLikeThis will also return doc 4)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8972) Add GraphHandler and GraphMLResponseWriter to support graph visualizations

2016-06-13 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328032#comment-15328032
 ] 

Cassandra Targett commented on SOLR-8972:
-

[~joel.bernstein]: Is this done? If so, can we mark it as resolved?

> Add GraphHandler and GraphMLResponseWriter to support graph visualizations
> --
>
> Key: SOLR-8972
> URL: https://issues.apache.org/jira/browse/SOLR-8972
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1
>
> Attachments: GraphHandler.java, GraphMLResponseWriter.java, 
> SOLR-8972.patch, SOLR-8972.patch, SOLR-8972.patch
>
>
> SOLR-8925 is shaping up nicely. It would be great if Solr could support 
> outputting graphs in GraphML. This will allow users to visualize their graphs 
> in a number of graph visualization tools (NodeXL, Gephi, Tulip etc...). This 
> ticket will create a new Graph handler which will take a Streaming Expression 
> graph traversal and output GraphML. A new GraphMLResponseWriter will handle 
> the GraphML formatting. In future releases we can consider supporting other 
> graph formats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7336) Move TermRangeQuery to sandbox

2016-06-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328031#comment-15328031
 ] 

Michael McCandless commented on LUCENE-7336:


I think sandbox should also be used for cases where we know a class has 
problems but it is still useful in some rare use cases where users understand 
those problems, e.g. {{SlowFuzzyQuery}}.

I would put {{TermRangeQuery}} in this same category, though it is not as 
extreme a case, so the {{misc}} module is maybe also a good home.

> Move TermRangeQuery to sandbox
> --
>
> Key: LUCENE-7336
> URL: https://issues.apache.org/jira/browse/LUCENE-7336
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
>
> I think, long ago, this class was abused for numeric range searching, if you 
> converted your numeric terms into text terms "carefully", but we now have 
> dimensional points for that, and I think otherwise this query class is quite 
> dangerous: you can easily accidentally make a very costly query.
> Furthermore, the common use cases for multi-term queries are already covered 
> by other classes ({{PrefixQuery}}, {{WildcardQuery}}, {{FuzzyQuery}}).
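
As a concrete illustration of how easily such a query gets costly, the range 
below (field name and bounds are made up) matches every term that sorts between 
the two bounds and can expand to a huge number of terms at rewrite time:

{code}
// A lexicographic term range: every indexed term in "body" sorting between
// "a" and "z" is a candidate, so on a large index this can visit an enormous
// number of terms.
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermRangeQuery;

public class CostlyRangeSketch {
  public static Query wideRange() {
    return TermRangeQuery.newStringRange("body", "a", "z", true, true);
  }
}
{code}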



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8744) Overseer operations need more fine grained mutual exclusion

2016-06-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15328028#comment-15328028
 ] 

ASF GitHub Bot commented on SOLR-8744:
--

Github user dragonsinth closed the pull request at:

https://github.com/apache/lucene-solr/pull/42


> Overseer operations need more fine grained mutual exclusion
> ---
>
> Key: SOLR-8744
> URL: https://issues.apache.org/jira/browse/SOLR-8744
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.4.1
>Reporter: Scott Blum
>Assignee: Noble Paul
>Priority: Blocker
>  Labels: sharding, solrcloud
> Fix For: 6.1
>
> Attachments: SOLR-8744.patch, SOLR-8744.patch, SOLR-8744.patch, 
> SOLR-8744.patch, SOLR-8744.patch, SOLR-8744.patch, SOLR-8744.patch, 
> SOLR-8744.patch, SOLR-8744.patch, SmileyLockTree.java, SmileyLockTree.java
>
>
> SplitShard creates a mutex over the whole collection, but, in practice, this 
> is a big scaling problem.  Multiple split shard operations could happen at 
> the same time, as long as different shards are being split.  In practice, 
> those shards often reside on different machines, so there's no I/O bottleneck 
> in those cases, just the mutex in Overseer forcing the operations to be done 
> serially.
> Given that a single split can take many minutes on a large collection, this 
> is a bottleneck at scale.
> Here is the proposed new design
> There are various Collection operations performed at Overseer. They may need 
> exclusive access at various levels. Each operation must define the Access 
> level at which the access is required. Access level is an enum. 
> CLUSTER(0)
> COLLECTION(1)
> SHARD(2)
> REPLICA(3)
> The Overseer node maintains a tree of these locks. The lock tree would look 
> as follows. The tree can be created lazily as and when tasks come up.
> {code}
> Legend: 
> C1, C2 -> Collections
> S1, S2 -> Shards 
> R1,R2,R3,R4 -> Replicas
>  Cluster
> /   \
>/ \ 
>   C1  C2
>  / \ /   \ 
> /   \   / \  
>S1   S2  S1 S2
> R1, R2  R3.R4  R1,R2   R3,R4
> {code}
> When the overseer receives a message, it tries to acquire the appropriate 
> lock from the tree. For example, if an operation needs a lock at a Collection 
> level and it needs to operate on Collection C1, the node C1 and all child 
> nodes of C1 must be free. 
> h2.Lock acquiring logic
> Each operation would start from the root of the tree (Level 0 -> Cluster) and 
> start moving down depending upon the operation. After it reaches the right 
> node, it checks if all the children are free from a lock.  If it fails to 
> acquire a lock, it remains in the work queue. A scheduler thread waits for 
> notification from the current set of tasks . Every task would do a 
> {{notify()}} on the monitor of  the scheduler thread. The thread would start 
> from the head of the queue and check all tasks to see if that task is able to 
> acquire the right lock. If yes, it is executed, if not, the task is left in 
> the work queue.  
> When a new task arrives in the work queue, the scheduler thread wakes and just 
> try to schedule that task.
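
A rough, self-contained sketch of the hierarchical locking idea described above; 
all class and method names are made up for illustration, and the real 
implementation is in the attached patches:

{code}
// Illustrative lock tree: a node can be acquired only if neither it, any
// ancestor, nor any descendant currently holds a lock. Names are hypothetical.
// Not thread-safe on its own; in the design above a single scheduler thread
// drives lock acquisition.
import java.util.HashMap;
import java.util.Map;

public class LockTreeSketch {
  static final class Node {
    final String name;
    final Node parent;
    final Map<String, Node> children = new HashMap<>();
    boolean locked;

    Node(String name, Node parent) { this.name = name; this.parent = parent; }

    Node child(String childName) {
      return children.computeIfAbsent(childName, n -> new Node(n, this));
    }

    boolean anyAncestorLocked() {
      for (Node n = parent; n != null; n = n.parent) {
        if (n.locked) return true;
      }
      return false;
    }

    boolean anyDescendantLocked() {
      for (Node c : children.values()) {
        if (c.locked || c.anyDescendantLocked()) return true;
      }
      return false;
    }

    boolean tryLock() {
      if (locked || anyAncestorLocked() || anyDescendantLocked()) return false;
      locked = true;
      return true;
    }

    void unlock() { locked = false; }
  }

  public static void main(String[] args) {
    Node cluster = new Node("cluster", null);
    Node s1 = cluster.child("C1").child("S1");
    Node s2 = cluster.child("C1").child("S2");
    System.out.println(s1.tryLock());                  // true
    System.out.println(s2.tryLock());                  // true: splits run in parallel
    System.out.println(cluster.child("C1").tryLock()); // false: a child shard is locked
  }
}
{code}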



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #42: SOLR-8744 blockedTasks

2016-06-13 Thread dragonsinth
Github user dragonsinth closed the pull request at:

https://github.com/apache/lucene-solr/pull/42


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-06-13 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327946#comment-15327946
 ] 

Ferenczi Jim edited comment on LUCENE-7276 at 6/13/16 7:02 PM:
---

??Somehow the test is angry that the rewritten query scores differently from 
the original ... so somehow the fact that we no longer rewrite to an empty BQ 
is changing something ... I'll dig.??

I tried to find a reason and I think I found something interesting. The change 
is related to the normalization factor and the fact that those queries are 
boosted. When you use a boolean query with no clause the normalization factor 
is 0, when the matchnodocs query is used the normalization factor is 1 
(BooleanWeight.getValueForNormalization and 
ConstantScoreWeight.getValueForNormalization).
This part of the query is supposed to return no documents so it should be ok to 
ignore it when the query norm is computed. Though for the distributed case 
where results are merged from different shards there is no guarantee that the 
rewrite will be the same among the shards. 
I think we can get rid of the matchnodocsquery vs empty boolean query 
difference if we change the return value of  
BooleanWeight.getValueForNormalization to be 1 (instead of 0) when there is no 
clause.

https://issues.apache.org/jira/browse/LUCENE-7337


was (Author: jim.ferenczi):
??
Somehow the test is angry that the rewritten query scores differently from the 
original ... so somehow the fact that we no longer rewrite to an empty BQ is 
changing something ... I'll dig.
??

I tried to find a reason and I think I found something interesting. The change 
is related to the normalization factor and the fact that those queries are 
boosted. When you use a boolean query with no clause the normalization factor 
is 0, when the matchnodocs query is used the normalization factor is 1 
(BooleanWeight.getValueForNormalization and 
ConstantScoreWeight.getValueForNormalization).
This part of the query is supposed to return no documents so it should be ok to 
ignore it when the query norm is computed. Though for the distributed case 
where results are merged from different shards there is no guarantee that the 
rewrite will be the same among the shards. 
I think we can get rid of the matchnodocsquery vs empty boolean query 
difference if we change the return value of  
BooleanWeight.getValueForNormalization to be 1 (instead of 0) when there is no 
clause.

https://issues.apache.org/jira/browse/LUCENE-7337

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten in an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed on toString and when an explanation for noMatch is asked.  
> For instance the query:
> new MatchNoDocsQuery("Field not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7276) Add an optional reason to the MatchNoDocsQuery

2016-06-13 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327946#comment-15327946
 ] 

Ferenczi Jim commented on LUCENE-7276:
--

??
Somehow the test is angry that the rewritten query scores differently from the 
original ... so somehow the fact that we no longer rewrite to an empty BQ is 
changing something ... I'll dig.
??

I tried to find a reason and I think I found something interesting. The change 
is related to the normalization factor and the fact that those queries are 
boosted. When you use a boolean query with no clause the normalization factor 
is 0, when the matchnodocs query is used the normalization factor is 1 
(BooleanWeight.getValueForNormalization and 
ConstantScoreWeight.getValueForNormalization).
This part of the query is supposed to return no documents so it should be ok to 
ignore it when the query norm is computed. Though for the distributed case 
where results are merged from different shards there is no guarantee that the 
rewrite will be the same among the shards. 
I think we can get rid of the matchnodocsquery vs empty boolean query 
difference if we change the return value of  
BooleanWeight.getValueForNormalization to be 1 (instead of 0) when there is no 
clause.

https://issues.apache.org/jira/browse/LUCENE-7337
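
A rough sketch of the asymmetry described above, assuming Lucene 6.x APIs and 
an existing IndexSearcher; the printed values are the point, not the search 
results:

{code}
// Rough sketch of the asymmetry (Lucene 6.x APIs, assuming an existing
// IndexSearcher): an empty BooleanQuery reports a value-for-normalization of
// 0 while MatchNoDocsQuery, backed by a ConstantScoreWeight, reports 1, so
// the two rewrites can normalize differently across shards.
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Weight;

public class NoMatchNormSketch {
  static void show(IndexSearcher searcher) throws Exception {
    Weight emptyBool = searcher.createWeight(new BooleanQuery.Builder().build(), true);
    Weight noDocs = searcher.createWeight(new MatchNoDocsQuery(), true);
    System.out.println(emptyBool.getValueForNormalization()); // 0.0
    System.out.println(noDocs.getValueForNormalization());    // 1.0
  }
}
{code}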

> Add an optional reason to the MatchNoDocsQuery
> --
>
> Key: LUCENE-7276
> URL: https://issues.apache.org/jira/browse/LUCENE-7276
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Ferenczi Jim
>Priority: Minor
>  Labels: patch
> Attachments: LUCENE-7276.patch, LUCENE-7276.patch, LUCENE-7276.patch, 
> LUCENE-7276.patch, LUCENE-7276.patch
>
>
> It's sometimes difficult to debug a query that results in a MatchNoDocsQuery. 
> The MatchNoDocsQuery is always rewritten in an empty boolean query.
> This patch adds an optional reason and implements a weight in order to keep 
> track of the reason why the query did not match any document. The reason is 
> printed on toString and when an explanation for noMatch is asked.  
> For instance the query:
> new MatchNoDocsQuery("Field not found").toString()
> => 'MatchNoDocsQuery["field 'title' not found"]'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 646 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/646/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 63079 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /var/tmp/ecj376108784
 [ecj-lint] Compiling 932 source files to /var/tmp/ecj376108784
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 34)
 [ecj-lint] import org.apache.hadoop.fs.FsStatus;
 [ecj-lint]^
 [ecj-lint] The import org.apache.hadoop.fs.FsStatus is never used
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 227)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 79)
 [ecj-lint] import org.apache.solr.core.DirectoryFactory;
 [ecj-lint]^
 [ecj-lint] The import 

Re: Lucene/Solr 5.5.2

2016-06-13 Thread Steve Rowe
Thanks Uwe!

--
Steve
www.lucidworks.com

> On Jun 13, 2016, at 2:39 PM, Uwe Schindler  wrote:
> 
> I enabled 5.5 builds on Policeman (without Java 9 as this breaks).
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
>> -Original Message-
>> From: Steve Rowe [mailto:sar...@gmail.com]
>> Sent: Monday, June 13, 2016 7:26 PM
>> To: Lucene Dev 
>> Subject: Lucene/Solr 5.5.2
>> 
>> I’d like to make a 5.5.2 release, and I volunteer to be RM.
>> 
>> I propose to cut the first RC no sooner than one week from today: Monday
>> June 20th.  I plan on delaying cutting the RC until after 6.1.0 has been
>> released; I’d rather avoid two RMs trying to do release work at the same
>> time.
>> 
>> I’ll start looking now at backporting bugfixes I’ve worked on to the 5.5
>> branch, and I encourage others to do the same.
>> 
>> I’ll go enable the 5.5 branch Jenkins jobs now.
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 5.5.2

2016-06-13 Thread Anshum Gupta
Sounds good to me ! Thanks for doing this Steve :).

On Mon, Jun 13, 2016 at 10:25 AM, Steve Rowe  wrote:

> I’d like to make a 5.5.2 release, and I volunteer to be RM.
>
> I propose to cut the first RC no sooner than one week from today: Monday
> June 20th.  I plan on delaying cutting the RC until after 6.1.0 has been
> released; I’d rather avoid two RMs trying to do release work at the same
> time.
>
> I’ll start looking now at backporting bugfixes I’ve worked on to the 5.5
> branch, and I encourage others to do the same.
>
> I’ll go enable the 5.5 branch Jenkins jobs now.
>
> --
> Steve
> www.lucidworks.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-13 Thread Steve Rowe
Smoke tester was happy: SUCCESS! [0:23:40.900240]

Except for the minor issues described below, the changes, docs and javadocs look 
good:

* Broken description section links from documentation to javadocs 

* Solr’s CHANGES.txt is missing a “Versions of Major Components” section.
* Solr’s Changes.html has a section "Upgrading from Solr any prior release” 
that is not formatted properly (the hyphens are put into a bullet item below)

+0 to release.  I’ll work on the above and backport to the 6.1 branch, in case 
there is another RC.

--
Steve
www.lucidworks.com

> On Jun 13, 2016, at 5:15 AM, Adrien Grand  wrote:
> 
> Please vote for release candidate 1 for Lucene/Solr 6.1.0
> 
> 
> The artifacts can be downloaded from:
> 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
> 
> You can run the smoke tester directly with this command:
> 
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
> Here is my +1.
> SUCCESS! [0:36:57.750669]


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6171) Make lucene completely write-once

2016-06-13 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327898#comment-15327898
 ] 

Michael McCandless commented on LUCENE-6171:


bq.  does each update create a new one, perhaps in parallel-index like manner?

That's what we do ... we always fully write the new doc values to another file, 
and stop referencing the old one.

> Make lucene completely write-once
> -
>
> Key: LUCENE-6171
> URL: https://issues.apache.org/jira/browse/LUCENE-6171
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Attachments: LUCENE-6171.patch
>
>
> Today, lucene is mostly write-once, but not always, and these are just very 
> exceptional cases. 
> This is an invitation for exceptional bugs: (and we have occasional test 
> failures when doing "no-wait close" because of this). 
> I would prefer it if we didn't try to delete files before we open them for 
> write, and if we opened them with the CREATE_NEW option by default to throw 
> an exception, if the file already exists.
> The trickier parts of the change are going to be IndexFileDeleter and 
> exceptions on merge / CFS construction logic.
> Overall for IndexFileDeleter I think the least invasive option might be to 
> only delete files older than the current commit point? This will ensure that 
> inflateGens() always avoids trying to overwrite any files that were from an 
> aborted segment. 
> For CFS construction/exceptions on merge, we really need to remove the custom 
> "sniping" of index files there and let only IndexFileDeleter delete files. My 
> previous failed approach involved always consistently using 
> TrackingDirectoryWrapper, but it failed, and only in backwards compatibility 
> tests, because of LUCENE-6146 (but i could never figure that out). I am 
> hoping this time I will be successful :)
> Longer term we should think about more simplifications, progress has been 
> made on LUCENE-5987, but I think overall we still try to be a superhero for 
> exceptions on merge?
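
For illustration, the "open with CREATE_NEW" behaviour proposed above maps to plain java.nio; a minimal sketch, not Lucene's Directory API (the file name and data below are made up):

{code}
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class CreateNewSketch {
  public static void main(String[] args) throws Exception {
    Path file = Paths.get("_0.dat");     // hypothetical segment file name
    byte[] data = new byte[] {1, 2, 3};  // stand-in for real index bytes
    // CREATE_NEW refuses to open an existing file, so a second write attempt
    // fails with FileAlreadyExistsException instead of silently overwriting.
    try (OutputStream out = Files.newOutputStream(file,
        StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE)) {
      out.write(data);
    }
  }
}
{code}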



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Lucene/Solr 5.5.2

2016-06-13 Thread Uwe Schindler
I enabled 5.5 builds on Policeman (without Java 9, as that currently breaks the build).

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Steve Rowe [mailto:sar...@gmail.com]
> Sent: Monday, June 13, 2016 7:26 PM
> To: Lucene Dev 
> Subject: Lucene/Solr 5.5.2
> 
> I’d like to make a 5.5.2 release, and I volunteer to be RM.
> 
> I propose to cut the first RC no sooner than one week from today: Monday
> June 20th.  I plan on delaying cutting the RC until after 6.1.0 has been
> released; I’d rather avoid two RMs trying to do release work at the same
> time.
> 
> I’ll start looking now at backporting bugfixes I’ve worked on to the 5.5
> branch, and I encourage others to do the same.
> 
> I’ll go enable the 5.5 branch Jenkins jobs now.
> 
> --
> Steve
> www.lucidworks.com
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 16982 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16982/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest: 1) Thread[id=1016, 
name=OverseerHdfsCoreFailoverThread-96065412121427975-127.0.0.1:43010_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.impl.CloudSolrClientTest: 
   1) Thread[id=1016, 
name=OverseerHdfsCoreFailoverThread-96065412121427975-127.0.0.1:43010_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([CBD42A2319254FE6]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1016, 
name=OverseerHdfsCoreFailoverThread-96065412121427975-127.0.0.1:43010_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1016, 
name=OverseerHdfsCoreFailoverThread-96065412121427975-127.0.0.1:43010_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([CBD42A2319254FE6]:0)


FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([CBD42A2319254FE6:A1CF765B8439E4F]:0)
at org.apache.http.util.ByteArrayBuffer.(ByteArrayBuffer.java:56)
at 
org.apache.http.impl.io.SessionOutputBufferImpl.(SessionOutputBufferImpl.java:90)
at 
org.apache.http.impl.BHttpConnectionBase.(BHttpConnectionBase.java:119)
at 
org.apache.http.impl.DefaultBHttpClientConnection.(DefaultBHttpClientConnection.java:97)
at 
org.apache.http.impl.conn.DefaultManagedHttpClientConnection.(DefaultManagedHttpClientConnection.java:76)
at 
org.apache.http.impl.conn.LoggingManagedHttpClientConnection.(LoggingManagedHttpClientConnection.java:68)
at 
org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.create(ManagedHttpClientConnectionFactory.java:126)
at 
org.apache.http.impl.conn.ManagedHttpClientConnectionFactory.create(ManagedHttpClientConnectionFactory.java:56)
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager$InternalConnectionFactory.create(PoolingHttpClientConnectionManager.java:591)
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager$InternalConnectionFactory.create(PoolingHttpClientConnectionManager.java:562)
at 
org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(AbstractConnPool.java:295)
at 
org.apache.http.pool.AbstractConnPool.access$000(AbstractConnPool.java:64)
at 
org.apache.http.pool.AbstractConnPool$2.getPoolEntry(AbstractConnPool.java:192)
at 
org.apache.http.pool.AbstractConnPool$2.getPoolEntry(AbstractConnPool.java:185)
at org.apache.http.pool.PoolEntryFuture.get(PoolEntryFuture.java:107)
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.leaseConnection(PoolingHttpClientConnectionManager.java:276)
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager$1.get(PoolingHttpClientConnectionManager.java:263)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:190)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 

[jira] [Created] (LUCENE-7338) Broken description section links from documentation to javadocs

2016-06-13 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7338:
--

 Summary: Broken description section links from documentation to 
javadocs
 Key: LUCENE-7338
 URL: https://issues.apache.org/jira/browse/LUCENE-7338
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/javadocs
Reporter: Steve Rowe


In Lucene's top-level documentation, there are links to Description sections in 
Javadocs, e.g. in the Getting Started section: to the Lucene demo; to an 
Introduction to Lucene's APIs; and to the Analysis overview.

All of these links are anchored at {{#overview_description}} or 
{{#package_description}}, but it looks like Java 8 switched how these anchors 
are named: in the 6.0.0, 6.0.1 and now the 6.1.0 RC1 javadocs, these anchors 
are named with dots rather than underscores: {{#overview.description}} and 
{{#package.description}}.  As a result, the documentation links go to the right 
page, but the browser stays at the top of the page because it can't find the 
now-misnamed anchors.
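
For illustration only (this is not necessarily the eventual fix), the change amounts to rewriting the two anchor names in the generated pages; a sketch of that rewrite, where the page path is an assumption:

{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FixJavadocAnchors {
  public static void main(String[] args) throws Exception {
    Path page = Paths.get("build/docs/index.html");   // hypothetical page
    String html = new String(Files.readAllBytes(page), StandardCharsets.UTF_8);
    // Java 8 javadoc names these anchors with dots instead of underscores.
    String fixed = html.replace("#overview_description", "#overview.description")
                       .replace("#package_description", "#package.description");
    Files.write(page, fixed.getBytes(StandardCharsets.UTF_8));
  }
}
{code}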



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2016-06-13 Thread Pedro Rosanes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327797#comment-15327797
 ] 

Pedro Rosanes edited comment on SOLR-1093 at 6/13/16 5:36 PM:
--

If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this [patch|^SOLR-1093-1.1.patch], each response has an identifier of the 
corresponding query.
Eg.: {code}{ "1.response" : ..., "2.response" : ... }{code}

And you should use {code}{code}.


was (Author: prosanes):
If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this [patch|^SOLR-1093-1.1.patch], each response has an identifier of the 
corresponding query.
Eg.: { "1.response" : ..., "2.response" : ... }

> A RequestHandler to run multiple queries in a batch
> ---
>
> Key: SOLR-1093
> URL: https://issues.apache.org/jira/browse/SOLR-1093
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Noble Paul
> Attachments: SOLR-1093-1.1.patch, SOLR-1093.patch
>
>
> It is a common requirement that a single page needs to fire multiple 
> queries. In cases where these queries are independent of each other, a 
> handler which can take in multiple queries, run them in parallel and 
> send the response back as one big chunk would be useful.
> Let us say the handler is MultiRequestHandler
> {code}
> 
> {code}
> h2.Query Syntax
> The request must specify the number of queries as count=n
> Each request parameter must be prefixed with a number which denotes the query 
> index. Optionally, it may also specify the handler name.
> example
> {code}
> /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
> {code}
> default handler can be '/select' so the equivalent can be
> {code} 
> /multi?count=2&1.q=a:b&2.q=a:c
> {code}
> h2.The response
> The response will be a List where each NamedList will be a 
> response to a query. 
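
For illustration, a hedged SolrJ sketch of how a client might call such a handler, assuming the {{/multi}} registration and the per-query response keys described above (the handler comes from the attached patch, not stock Solr):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class MultiQueryExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("qt", "/multi");   // assumed handler registration from the patch
      params.set("count", "2");     // number of batched queries
      params.set("1.q", "a:b");     // first query
      params.set("2.q", "a:c");     // second query
      NamedList<Object> rsp = client.request(new QueryRequest(params));
      // Each response is keyed by its query index, per the comment above.
      System.out.println(rsp.get("1.response"));
      System.out.println(rsp.get("2.response"));
    }
  }
}
{code}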



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2016-06-13 Thread Pedro Rosanes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327797#comment-15327797
 ] 

Pedro Rosanes edited comment on SOLR-1093 at 6/13/16 5:32 PM:
--

If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this [patch|^SOLR-1093-1.1.patch], each response has an identifier of the 
corresponding query.
Eg.: { "1.response" : ..., "2.response" : ... }


was (Author: prosanes):
If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this patch, each response has an identifier of the corresponding query.
Eg.: { "1.response" : ..., "2.response" : ... }

> A RequestHandler to run multiple queries in a batch
> ---
>
> Key: SOLR-1093
> URL: https://issues.apache.org/jira/browse/SOLR-1093
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Noble Paul
> Attachments: SOLR-1093-1.1.patch, SOLR-1093.patch
>
>
> It is a common requirement that a single page needs to fire multiple 
> queries. In cases where these queries are independent of each other, a 
> handler which can take in multiple queries, run them in parallel and 
> send the response back as one big chunk would be useful.
> Let us say the handler is MultiRequestHandler
> {code}
> 
> {code}
> h2.Query Syntax
> The request must specify the number of queries as count=n
> Each request parameter must be prefixed with a number which denotes the query 
> index. Optionally, it may also specify the handler name.
> example
> {code}
> /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
> {code}
> default handler can be '/select' so the equivalent can be
> {code} 
> /multi?count=2&1.q=a:b&2.q=a:c
> {code}
> h2.The response
> The response will be a List where each NamedList will be a 
> response to a query. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2016-06-13 Thread Pedro Rosanes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Rosanes updated SOLR-1093:

Attachment: SOLR-1093-1.1.patch

> A RequestHandler to run multiple queries in a batch
> ---
>
> Key: SOLR-1093
> URL: https://issues.apache.org/jira/browse/SOLR-1093
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Noble Paul
> Attachments: SOLR-1093-1.1.patch, SOLR-1093.patch
>
>
> It is a common requirement that a single page needs to fire multiple 
> queries. In cases where these queries are independent of each other, a 
> handler which can take in multiple queries, run them in parallel and 
> send the response back as one big chunk would be useful.
> Let us say the handler is MultiRequestHandler
> {code}
> 
> {code}
> h2.Query Syntax
> The request must specify the number of queries as count=n
> Each request parameter must be prefixed with a number which denotes the query 
> index. Optionally, it may also specify the handler name.
> example
> {code}
> /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
> {code}
> default handler can be '/select' so the equivalent can be
> {code} 
> /multi?count=2&1.q=a:b&2.q=a:c
> {code}
> h2.The response
> The response will be a List where each NamedList will be a 
> response to a query. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2016-06-13 Thread Pedro Rosanes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327797#comment-15327797
 ] 

Pedro Rosanes edited comment on SOLR-1093 at 6/13/16 5:29 PM:
--

If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this patch, each response has an identifier of the corresponding query.
Eg.: { "1.response" : ..., "2.response" : ... }


was (Author: prosanes):
If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this patch, each response has an identifier of the corresponding query.
Eg.: { "1.response" : ..., "2.response" : ... }

> A RequestHandler to run multiple queries in a batch
> ---
>
> Key: SOLR-1093
> URL: https://issues.apache.org/jira/browse/SOLR-1093
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Noble Paul
> Attachments: SOLR-1093.patch
>
>
> It is a common requirement that a single page needs to fire multiple 
> queries. In cases where these queries are independent of each other, a 
> handler which can take in multiple queries, run them in parallel and 
> send the response back as one big chunk would be useful.
> Let us say the handler is MultiRequestHandler
> {code}
> 
> {code}
> h2.Query Syntax
> The request must specify the number of queries as count=n
> Each request parameter must be prefixed with a number which denotes the query 
> index. Optionally, it may also specify the handler name.
> example
> {code}
> /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
> {code}
> default handler can be '/select' so the equivalent can be
> {code} 
> /multi?count=2&1.q=a:b&2.q=a:c
> {code}
> h2.The response
> The response will be a List where each NamedList will be a 
> response to a query. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2016-06-13 Thread Pedro Rosanes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Rosanes updated SOLR-1093:

Attachment: (was: SOLR-1093.patch)

> A RequestHandler to run multiple queries in a batch
> ---
>
> Key: SOLR-1093
> URL: https://issues.apache.org/jira/browse/SOLR-1093
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Noble Paul
> Attachments: SOLR-1093.patch
>
>
> It is a common requirement that a single page needs to fire multiple 
> queries. In cases where these queries are independent of each other, a 
> handler which can take in multiple queries, run them in parallel and 
> send the response back as one big chunk would be useful.
> Let us say the handler is MultiRequestHandler
> {code}
> 
> {code}
> h2.Query Syntax
> The request must specify the number of queries as count=n
> Each request parameter must be prefixed with a number which denotes the query 
> index. Optionally, it may also specify the handler name.
> example
> {code}
> /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
> {code}
> default handler can be '/select' so the equivalent can be
> {code} 
> /multi?count=2&1.q=a:b&2.q=a:c
> {code}
> h2.The response
> The response will be a List where each NamedList will be a 
> response to a query. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1093) A RequestHandler to run multiple queries in a batch

2016-06-13 Thread Pedro Rosanes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Rosanes updated SOLR-1093:

Attachment: SOLR-1093.patch

If multi queries were sent, the resulting json would be invalid, since it'd 
have two or more "response" keys.
In this patch, each response has an identifier of the corresponding query.
Eg.: { "1.response" : ..., "2.response" : ... }

> A RequestHandler to run multiple queries in a batch
> ---
>
> Key: SOLR-1093
> URL: https://issues.apache.org/jira/browse/SOLR-1093
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Noble Paul
> Attachments: SOLR-1093.patch, SOLR-1093.patch
>
>
> It is a common requirement that a single page needs to fire multiple 
> queries. In cases where these queries are independent of each other, a 
> handler which can take in multiple queries, run them in parallel and 
> send the response back as one big chunk would be useful.
> Let us say the handler is MultiRequestHandler
> {code}
> 
> {code}
> h2.Query Syntax
> The request must specify the number of queries as count=n
> Each request parameter must be prefixed with a number which denotes the query 
> index. Optionally, it may also specify the handler name.
> example
> {code}
> /multi?count=2&1.handler=/select&1.q=a:b&2.handler=/select&2.q=a:c
> {code}
> default handler can be '/select' so the equivalent can be
> {code} 
> /multi?count=2&1.q=a:b&2.q=a:c
> {code}
> h2.The response
> The response will be a List where each NamedList will be a 
> response to a query. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Lucene/Solr 5.5.2

2016-06-13 Thread Steve Rowe
I’d like to make a 5.5.2 release, and I volunteer to be RM.

I propose to cut the first RC no sooner than one week from today: Monday June 
20th.  I plan on delaying cutting the RC until after 6.1.0 has been released; 
I’d rather avoid two RMs trying to do release work at the same time.

I’ll start looking now at backporting bugfixes I’ve worked on to the 5.5 
branch, and I encourage others to do the same.

I’ll go enable the 5.5 branch Jenkins jobs now.

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9204) Improve performance of getting directory size with hdfs.

2016-06-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-9204.
---
   Resolution: Fixed
Fix Version/s: 6.2
   master (7.0)

> Improve performance of getting directory size with hdfs.
> 
>
> Key: SOLR-9204
> URL: https://issues.apache.org/jira/browse/SOLR-9204
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: master (7.0), 6.2
>
> Attachments: SOLR-9204.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9204) Improve performance of getting directory size with hdfs.

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327688#comment-15327688
 ] 

ASF subversion and git services commented on SOLR-9204:
---

Commit 90c920d276e8e0115aa708fcd266463b7bba8239 in lucene-solr's branch 
refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=90c920d ]

SOLR-9204: Improve performance of getting directory size with hdfs.


> Improve performance of getting directory size with hdfs.
> 
>
> Key: SOLR-9204
> URL: https://issues.apache.org/jira/browse/SOLR-9204
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9204.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9204) Improve performance of getting directory size with hdfs.

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327685#comment-15327685
 ] 

ASF subversion and git services commented on SOLR-9204:
---

Commit 08c14f135639beddc0c33c0c087962f8b5f88f33 in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=08c14f1 ]

SOLR-9204: Improve performance of getting directory size with hdfs.


> Improve performance of getting directory size with hdfs.
> 
>
> Key: SOLR-9204
> URL: https://issues.apache.org/jira/browse/SOLR-9204
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-9204.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7337) MultiTermQuery are sometimes rewritten into an empty boolean query

2016-06-13 Thread Ferenczi Jim (JIRA)
Ferenczi Jim created LUCENE-7337:


 Summary: MultiTermQuery are sometimes rewritten into an empty 
boolean query
 Key: LUCENE-7337
 URL: https://issues.apache.org/jira/browse/LUCENE-7337
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Reporter: Ferenczi Jim
Priority: Minor


MultiTermQuery instances are sometimes rewritten into an empty boolean query 
(depending on the rewrite method); this can happen when no expansions are found 
for a fuzzy query, for instance.
It can be problematic when the multi term query is boosted. 
For instance consider the following query:

`((title:bar~1)^100 text:bar)`

This is a boolean query with two optional clauses. The first one is a fuzzy 
query on the field title with a boost of 100. 
If there is no expansion for "title:bar~1" the query is rewritten into:

`(()^100 text:bar)`

... and when expansions are found:

`((title:bars | title:bar)^100 text:bar)`

The scoring of those two queries will differ because the normalization factor 
and the norm for the first query will be equal to 1 (the boost is ignored 
because the empty boolean query is not taken into account for the computation 
of the normalization factor) whereas the second query will have a normalization 
factor of 10,000 (100*100) and a norm equal to 0.01. 

This kind of discrepancy can happen in a single index because the expansions 
for the fuzzy query are done at the segment level. It can also happen when 
multiple indices are requested (Solr/ElasticSearch case).

A simple fix would be to replace the empty boolean query produced by the multi 
term query with a MatchNoDocsQuery, but I am not sure that it's the best way to 
fix it. WDYT?
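
As an illustration of that suggestion (a sketch only, not a committed change), the substitution could look like the helper below, applied wherever the rewritten query is consumed:

{code}
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

class EmptyRewriteHelper {
  // If rewriting produced an empty BooleanQuery, substitute MatchNoDocsQuery
  // so that a boost wrapped around the clause still reaches normalization.
  static Query replaceEmptyBoolean(Query rewritten) {
    if (rewritten instanceof BooleanQuery
        && ((BooleanQuery) rewritten).clauses().isEmpty()) {
      return new MatchNoDocsQuery();
    }
    return rewritten;
  }
}
{code}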
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.1-Linux (64bit/jdk-9-ea+122) - Build # 30 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/30/
Java: 64bit/jdk-9-ea+122 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Mon Jun 13 11:30:53 
EDT 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Mon Jun 13 11:30:53 EDT 2016
at 
__randomizedtesting.SeedInfo.seed([C8F067863A5CEF4F:135B67403F7486FC]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)

[jira] [Commented] (LUCENE-6171) Make lucene completely write-once

2016-06-13 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327634#comment-15327634
 ] 

David Smiley commented on LUCENE-6171:
--

Curious -- what does updatable docValues do?  Does it not update a file or does 
each update create a new one, perhaps in parallel-index like manner?

> Make lucene completely write-once
> -
>
> Key: LUCENE-6171
> URL: https://issues.apache.org/jira/browse/LUCENE-6171
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Attachments: LUCENE-6171.patch
>
>
> Today, lucene is mostly write-once, but not always, and these are just very 
> exceptional cases. 
> This is an invitation for exceptional bugs: (and we have occasional test 
> failures when doing "no-wait close" because of this). 
> I would prefer it if we didn't try to delete files before we open them for 
> write, and if we opened them with the CREATE_NEW option by default to throw 
> an exception, if the file already exists.
> The trickier parts of the change are going to be IndexFileDeleter and 
> exceptions on merge / CFS construction logic.
> Overall for IndexFileDeleter I think the least invasive option might be to 
> only delete files older than the current commit point? This will ensure that 
> inflateGens() always avoids trying to overwrite any files that were from an 
> aborted segment. 
> For CFS construction/exceptions on merge, we really need to remove the custom 
> "sniping" of index files there and let only IndexFileDeleter delete files. My 
> previous failed approach involved always consistently using 
> TrackingDirectoryWrapper, but it failed, and only in backwards compatibility 
> tests, because of LUCENE-6146 (but i could never figure that out). I am 
> hoping this time I will be successful :)
> Longer term we should think about more simplifications, progress has been 
> made on LUCENE-5987, but I think overall we still try to be a superhero for 
> exceptions on merge?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-06-13 Thread Pushkar Raste (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pushkar Raste updated SOLR-9207:

Description: 
{{PeerSync}} recovery fails if we request more than ~99K updates. 

If we update solrconfig to retain more {{tlogs}} to leverage 
https://issues.apache.org/jira/browse/SOLR-6359

During our testing we found out that recovery using {{PeerSync}} fails if we 
ask for more than ~99K updates, with the following error

{code}
 WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
exception talking to , failed
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got application/xml. 


application/x-www-form-urlencoded content 
length (4761994 bytes) exceeds upload limit of 2048 KB400

{code}


We arrived at ~99K with the following math:
* max_version_number = Long.MAX_VALUE = 9223372036854775807
* bytes per version number = 20 (on the wire the POST request sends the version 
number as a string)
* additional bytes for the separator ,
* max_versions_in_single_request = 2MB/21 = ~99864

I could think of 2 ways to fix it:
1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}

2. Use application/octet-stream encoding 

  was:
{{PeerSync}} recovery fails if we request more than ~99K updates. 

If update solrconfig to retain more {{tlogs}} to leverage 
https://issues.apache.org/jira/browse/SOLR-6359

During out testing we found out that recovery using {{PeerSync}} fails if we 
ask for more than ~99K updates, with following error

{code}
 WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
exception talking to , failed
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got application/xml. 


application/x-www-form-urlencoded content 
length (4761994 bytes) exceeds upload limit of 2048 KB400

{code}


We arrived at ~99K with following match
* max_version_number = Long.MAX_VALUE = 9223372036854775807  
* bytes per version number =  20 (on the wire as POST requestsends version 
number as string)
* additional bytes for separate ,
* max_versions_in_single_request = 2MB/21 = ~99864

I could think of 2 ways to fix it
1. Ask for about updates in chunks of 90K inside {{PeerSync.requestUpdates()}}

2. Use application/octet-stream encoding 


> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Priority: Minor
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> If we update solrconfig to retain more {{tlogs}} to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found out that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> 
> 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB t name="code">400
> 
> {code}
> We arrived at ~99K with the following math:
> * max_version_number = Long.MAX_VALUE = 9223372036854775807
> * bytes per version number = 20 (on the wire the POST request sends the version 
> number as a string)
> * additional bytes for the separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it:
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 
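
For illustration, the ~99K figure above can be reproduced with a couple of lines (assuming the 2048 KB upload limit from the error and ~20 characters per version string plus a comma separator):

{code}
public class PeerSyncLimitMath {
  public static void main(String[] args) {
    long postLimitBytes = 2L * 1024 * 1024;   // 2048 KB upload limit
    int bytesPerVersion = 20 + 1;             // version string + ',' separator
    System.out.println(postLimitBytes / bytesPerVersion);  // prints 99864
  }
}
{code}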



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-06-13 Thread Pushkar Raste (JIRA)
Pushkar Raste created SOLR-9207:
---

 Summary: PeerSync recovery fails if number of updates requested 
is high
 Key: SOLR-9207
 URL: https://issues.apache.org/jira/browse/SOLR-9207
 Project: Solr
  Issue Type: Bug
Affects Versions: 6.0, 5.1
Reporter: Pushkar Raste
Priority: Minor


{{PeerSync}} recovery fails if we request more than ~99K updates. 

If update solrconfig to retain more {{tlogs}} to leverage 
https://issues.apache.org/jira/browse/SOLR-6359

During out testing we found out that recovery using {{PeerSync}} fails if we 
ask for more than ~99K updates, with following error

{code}
 WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
exception talking to , failed
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got application/xml. 


application/x-www-form-urlencoded content 
length (4761994 bytes) exceeds upload limit of 2048 KB400

{code}


We arrived at ~99K with following match
* max_version_number = Long.MAX_VALUE = 9223372036854775807  
* bytes per version number =  20 (on the wire as POST requestsends version 
number as string)
* additional bytes for separate ,
* max_versions_in_single_request = 2MB/21 = ~99864

I could think of 2 ways to fix it
1. Ask for about updates in chunks of 90K inside {{PeerSync.requestUpdates()}}

2. Use application/octet-stream encoding 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7336) Move TermRangeQuery to sandbox

2016-06-13 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327619#comment-15327619
 ] 

David Smiley commented on LUCENE-7336:
--

Is Sandbox the right place or Misc?  What comes to mind when I think of sandbox 
is stuff that is in development or buggy.  But maybe that's me; I have no 
convictions on the matter.

> Move TermRangeQuery to sandbox
> --
>
> Key: LUCENE-7336
> URL: https://issues.apache.org/jira/browse/LUCENE-7336
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
>
> I think, long ago, this class was abused for numeric range searching, if you 
> converted your numeric terms into text terms "carefully", but we now have 
> dimensional points for that, and I think otherwise this query class is quite 
> dangerous: you can easily accidentally make a very costly query.
> Furthermore, the common use cases for multi-term queries are already covered 
> by other classes ({{PrefixQuery}}, {{WildcardQuery}}, {{FuzzyQuery}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7330) Speed up conjunctions

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327617#comment-15327617
 ] 

ASF subversion and git services commented on LUCENE-7330:
-

Commit 72914198e60dcaa2008f6945e53e36e1c0053078 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7291419 ]

LUCENE-7330: Speed up conjunctions.


> Speed up conjunctions
> -
>
> Key: LUCENE-7330
> URL: https://issues.apache.org/jira/browse/LUCENE-7330
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7330.patch
>
>
> I am digging into some performance regressions between 4.x and 5.x which seem 
> to be due to how we always run conjunctions with ConjunctionDISI now while 
> 4.x had FilteredQuery, which was optimized for the case that there are only 
> two clauses or that one of the clause supports random access. I'd like to 
> explore the former in this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7330) Speed up conjunctions

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327616#comment-15327616
 ] 

ASF subversion and git services commented on LUCENE-7330:
-

Commit 4a02813e2eec9ba5093b0e8f285e14b68b07051b in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a02813 ]

LUCENE-7330: Speed up conjunctions.


> Speed up conjunctions
> -
>
> Key: LUCENE-7330
> URL: https://issues.apache.org/jira/browse/LUCENE-7330
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7330.patch
>
>
> I am digging into some performance regressions between 4.x and 5.x which seem 
> to be due to how we always run conjunctions with ConjunctionDISI now while 
> 4.x had FilteredQuery, which was optimized for the case that there are only 
> two clauses or that one of the clause supports random access. I'd like to 
> explore the former in this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-13 Thread Joel Bernstein
+1

SUCCESS! [0:54:35.971947]

Joel Bernstein
http://joelsolr.blogspot.com/

On Mon, Jun 13, 2016 at 9:09 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:

> SUCCESS! [1:56:13.790113]
> briefly checked [subquery].
>
> On Mon, Jun 13, 2016 at 12:15 PM, Adrien Grand  wrote:
>
>> Please vote for release candidate 1 for Lucene/Solr 6.1.0
>>
>> The artifacts can be downloaded from:
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>>
>> You can run the smoke tester directly with this command:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>>
>> Here is my +1.
>> SUCCESS! [0:36:57.750669]
>>
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
> 
> 
>


[jira] [Commented] (LUCENE-7336) Move TermRangeQuery to sandbox

2016-06-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327582#comment-15327582
 ] 

Adrien Grand commented on LUCENE-7336:
--

+1

> Move TermRangeQuery to sandbox
> --
>
> Key: LUCENE-7336
> URL: https://issues.apache.org/jira/browse/LUCENE-7336
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.2
>
>
> I think, long ago, this class was abused for numeric range searching, if you 
> converted your numeric terms into text terms "carefully", but we now have 
> dimensional points for that, and I think otherwise this query class is quite 
> dangerous: you can easily accidentally make a very costly query.
> Furthermore, the common use cases for multi-term queries are already covered 
> by other classes ({{PrefixQuery}}, {{WildcardQuery}}, {{FuzzyQuery}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7336) Move TermRangeQuery to sandbox

2016-06-13 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7336:
--

 Summary: Move TermRangeQuery to sandbox
 Key: LUCENE-7336
 URL: https://issues.apache.org/jira/browse/LUCENE-7336
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: master (7.0), 6.2


I think, long ago, this class was abused for numeric range searching, if you 
converted your numeric terms into text terms "carefully", but we now have 
dimensional points for that, and I think otherwise this query class is quite 
dangerous: you can easily accidentally make a very costly query.

Furthermore, the common use cases for multi-term queries are already covered by 
other classes ({{PrefixQuery}}, {{WildcardQuery}}, {{FuzzyQuery}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8742) HdfsDirectoryTest fails reliably after changes in LUCENE-6932

2016-06-13 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327571#comment-15327571
 ] 

Steve Rowe commented on SOLR-8742:
--

My Jenkins found another reproducing seed on master:

{noformat}
Checking out Revision 8bd27977dd993d4443be359a6f7ec92c7f012247 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=HdfsDirectoryTest 
-Dtests.method=testEOF -Dtests.seed=F166FBD4C32A0557 -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=es-CU -Dtests.timezone=US/Central -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.05s J9  | HdfsDirectoryTest.testEOF <<<
   [junit4]> Throwable #1: java.lang.NullPointerException
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F166FBD4C32A0557:600DB9DC810EA32B]:0)
   [junit4]>at 
org.apache.lucene.store.RAMInputStream.readByte(RAMInputStream.java:69)
   [junit4]>at 
org.apache.solr.store.hdfs.HdfsDirectoryTest.testEof(HdfsDirectoryTest.java:159)
   [junit4]>at 
org.apache.solr.store.hdfs.HdfsDirectoryTest.testEOF(HdfsDirectoryTest.java:151)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
[...]
   [junit4]   2> 556088 ERROR 
(SUITE-HdfsDirectoryTest-seed#[F166FBD4C32A0557]-worker) [] 
o.a.h.m.l.MethodMetric Error invoking method getBlocksTotal
   [junit4]   2> java.lang.reflect.InvocationTargetException
   [junit4]   2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2>at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2>at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>at java.lang.reflect.Method.invoke(Method.java:498)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:195)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
   [junit4]   2>at 
com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
   [junit4]   2>at 
org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:227)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:212)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:461)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:592)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:145)
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:822)
   [junit4]   2>at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1720)
   [junit4]   2>at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1699)
   [junit4]   2>at 

[jira] [Created] (LUCENE-7335) IndexWriter.setCommitData should be late binding

2016-06-13 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7335:
--

 Summary: IndexWriter.setCommitData should be late binding
 Key: LUCENE-7335
 URL: https://issues.apache.org/jira/browse/LUCENE-7335
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: master (7.0), 6.2


Today, {{IndexWriter.setCommitData}} is early-binding: as soon as you call it, 
it clones the provided map and later on when commit is called, it uses that 
clone.

But this makes it hard for some use cases where the app needs to record more 
timely information based on when specifically the commit actually occurs.  
E.g., with LUCENE-7302, it would be helpful to store the max completed sequence 
number in the commit point: that would be a lower bound of operations that were 
after the commit.

I think the most minimal way to do this would be to upgrade the existing method 
to take an {{Iterable}}, and document that it's now 
late binding, i.e. IW will pull an {{Iterator}} from that when it's time to 
write the segments file.

Or we could also make an explicit interface that you pass (seems like 
overkill), or maybe have a listener or something (or you subclass IW) that's 
invoked when the commit is about to write the segments file, but that also 
seems like overkill.
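
To make the Iterable idea concrete, a hedged sketch of what late-binding usage might look like (this is the proposed shape, not the current API; the application hook is an assumption):

{code}
// Hedged sketch of the proposal (not the current API, where setCommitData takes
// a Map and clones it immediately). 'writer' is an IndexWriter and
// 'maxCompletedSequenceNumber()' is a hypothetical application hook.
Iterable<Map.Entry<String, String>> lateBound = () -> {
  Map<String, String> data = new HashMap<>();
  data.put("maxSeqNo", String.valueOf(maxCompletedSequenceNumber()));
  return data.entrySet().iterator();  // iterator pulled only at commit time
};
writer.setCommitData(lateBound);      // proposed Iterable-taking signature
{code}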




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Commented] (SOLR-8912) SolrJ UpdateRequest does not copy Basic Authentication Credentials

2016-06-13 Thread Erick Erickson
If you follow the link to the duplicate where it was fixed
(SOLR-8640), you'll see it's fixed in 5.5 and 6.0.

Best,
Erick

On Mon, Jun 13, 2016 at 1:31 AM, Rajeshkumar (JIRA)  wrote:
>
> [ 
> https://issues.apache.org/jira/browse/SOLR-8912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15326991#comment-15326991
>  ]
>
> Rajeshkumar commented on SOLR-8912:
> ---
>
> In which Solr version this issue has been fixed?
>
>> SolrJ UpdateRequest does not copy Basic Authentication Credentials
>> --
>>
>> Key: SOLR-8912
>> URL: https://issues.apache.org/jira/browse/SOLR-8912
>> Project: Solr
>>  Issue Type: Bug
>>  Components: clients - java
>>Affects Versions: 5.4.1
>> Environment: all
>>Reporter: harcor
>>
>> SolrJ UpdateRequest.java creates "new" instances of itself but does not copy 
>> credentials.
>> Solution is to add two lines of code to UpdateRequest.java in the getRoutes 
>> method.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 16981 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16981/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 61434 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj2618259
 [ecj-lint] Compiling 932 source files to /tmp/ecj2618259
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
 (at line 22)
 [ecj-lint] import java.util.Collections;
 [ecj-lint]^
 [ecj-lint] The import java.util.Collections is never used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
 (at line 45)
 [ecj-lint] import org.apache.solr.handler.component.ShardHandlerFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.handler.component.ShardHandlerFactory is 
never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/OverseerTaskQueue.java
 (at line 22)
 [ecj-lint] import java.util.Set;
 [ecj-lint]^
 [ecj-lint] The import java.util.Set is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 226)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (LUCENE-7319) remove unused imports

2016-06-13 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327448#comment-15327448
 ] 

Christine Poerschke commented on LUCENE-7319:
-

Thanks [~jpountz]! Sorry about that; I am surprised those unused imports weren't 
caught locally here. Let me try freshly cloning the entire repo, in case there 
are more that working with the existing local copy somehow doesn't catch.
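
(For anyone reproducing this locally: the check that flags these imports is presumably 
the top-level precommit target, which runs the same ecj-javadoc-lint pass visible in the 
Jenkins output above; this note is an assumption based on the build logs, not on the 
ticket itself.)
{code}
# from a lucene-solr checkout (assumed reproduction step)
ant precommit
{code}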

> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7319) remove unused imports

2016-06-13 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327404#comment-15327404
 ] 

Adrien Grand commented on LUCENE-7319:
--

[~cpoerschke] precommit was failing for me so I went ahead and removed a couple 
more imports. Hopefully I did not screw up anything.

> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7319) remove unused imports

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327398#comment-15327398
 ] 

ASF subversion and git services commented on LUCENE-7319:
-

Commit 6d1bb14077b7f5ab4c79b0dbaff2e71ea8dd64e2 in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6d1bb14 ]

LUCENE-7319: Remove more unused imports so that precommit passes.


> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 246 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/246/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 61563 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj708582462
 [ecj-lint] Compiling 932 source files to 
C:\Users\jenkins\AppData\Local\Temp\ecj708582462
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\client\solrj\embedded\JettySolrRunner.java
 (at line 38)
 [ecj-lint] import java.util.SortedMap;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.SortedMap is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\OverseerTaskProcessor.java
 (at line 22)
 [ecj-lint] import java.util.Collections;
 [ecj-lint]^
 [ecj-lint] The import java.util.Collections is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\OverseerTaskProcessor.java
 (at line 45)
 [ecj-lint] import org.apache.solr.handler.component.ShardHandlerFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.handler.component.ShardHandlerFactory is 
never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\OverseerTaskQueue.java
 (at line 22)
 [ecj-lint] import java.util.Set;
 [ecj-lint]^
 [ecj-lint] The import java.util.Set is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java
 (at line 226)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 

Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1212 - Failure

2016-06-13 Thread Adrien Grand
I pushed a fix for this one.

Le lun. 13 juin 2016 à 15:30, Apache Jenkins Server <
jenk...@builds.apache.org> a écrit :

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1212/
>
> All tests passed
>
> Build Log:
> [...truncated 63141 lines...]
> -ecj-javadoc-lint-src:
> [mkdir] Created dir: /tmp/ecj166341476
>  [ecj-lint] Compiling 932 source files to /tmp/ecj166341476
>  [ecj-lint] invalid Class-Path header in manifest of jar file:
> /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
>  [ecj-lint] invalid Class-Path header in manifest of jar file:
> /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
>  [ecj-lint] --
>  [ecj-lint] 1. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/Assign.java
> (at line 101)
>  [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
>  [ecj-lint]^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda
> shape analysis
>  [ecj-lint] --
>  [ecj-lint] 2. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/Assign.java
> (at line 101)
>  [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
>  [ecj-lint]^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda
> shape analysis
>  [ecj-lint] --
>  [ecj-lint] 3. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/Assign.java
> (at line 101)
>  [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
>  [ecj-lint]^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda
> shape analysis
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 4. ERROR in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
> (at line 22)
>  [ecj-lint] import java.util.Collections;
>  [ecj-lint]^
>  [ecj-lint] The import java.util.Collections is never used
>  [ecj-lint] --
>  [ecj-lint] 5. ERROR in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
> (at line 45)
>  [ecj-lint] import
> org.apache.solr.handler.component.ShardHandlerFactory;
>  [ecj-lint]
> ^
>  [ecj-lint] The import
> org.apache.solr.handler.component.ShardHandlerFactory is never used
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 6. ERROR in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/OverseerTaskQueue.java
> (at line 22)
>  [ecj-lint] import java.util.Set;
>  [ecj-lint]^
>  [ecj-lint] The import java.util.Set is never used
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 7. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
> (at line 213)
>  [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
>  [ecj-lint]   ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda
> shape analysis
>  [ecj-lint] --
>  [ecj-lint] 8. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
> (at line 213)
>  [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
>  [ecj-lint]   ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda
> shape analysis
>  [ecj-lint] --
>  [ecj-lint] 9. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
> (at line 213)
>  [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
>  [ecj-lint]   ^^^
>  [ecj-lint] (Recovered) Internal inconsistency detected during lambda
> shape analysis
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 10. WARNING in
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
> (at line 226)
>  [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null,
> blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
>  [ecj-lint]
>  
> ^^
>  [ecj-lint] Resource leak: 'dir' is never closed
>  [ecj-lint] --
>  [ecj-lint] --
>  [ecj-lint] 11. WARNING in
> 

[jira] [Resolved] (LUCENE-7329) Simplify CharacterUtils

2016-06-13 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7329.
--
   Resolution: Fixed
Fix Version/s: 6.2
   master (7.0)

> Simplify CharacterUtils
> ---
>
> Key: LUCENE-7329
> URL: https://issues.apache.org/jira/browse/LUCENE-7329
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7329.patch
>
>
> This class has abstractions for the Java 4 and 5 ways of dealing with 
> characters, but we now only use the Java 5 implementation.
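
(As a rough illustration of the distinction, not code from CharacterUtils itself: the 
"Java 4 way" treated text as a sequence of 16-bit chars, while the "Java 5 way" iterates 
by Unicode code point so that supplementary characters are handled as single characters. 
A minimal sketch using only plain JDK APIs:)
{code}
public class CodePointDemo {
  public static void main(String[] args) {
    // Generic JDK sketch, not the CharacterUtils API: iterate by code point
    // so a supplementary character such as U+1F600 counts as one character.
    String s = "a\uD83D\uDE00b";
    for (int i = 0; i < s.length(); ) {
      int cp = s.codePointAt(i);        // full code point, not just one UTF-16 unit
      System.out.println(Integer.toHexString(cp));
      i += Character.charCount(cp);     // advance by one or two chars
    }
  }
}
{code}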



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7329) Simplify CharacterUtils

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327374#comment-15327374
 ] 

ASF subversion and git services commented on LUCENE-7329:
-

Commit 061f688022debf8db001886bc4e4847cc03c572d in lucene-solr's branch 
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=061f688 ]

LUCENE-7329: Simplify CharacterUtils.


> Simplify CharacterUtils
> ---
>
> Key: LUCENE-7329
> URL: https://issues.apache.org/jira/browse/LUCENE-7329
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7329.patch
>
>
> This class has abstractions for the Java 4 and 5 ways of dealing with 
> characters, but we now only use the Java 5 implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7319) remove unused imports

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327367#comment-15327367
 ] 

ASF subversion and git services commented on LUCENE-7319:
-

Commit 5e2677e0fb357c89408005e49b9d55981f884f73 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5e2677e ]

LUCENE-7319: Remove more unused imports.


> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7329) Simplify CharacterUtils

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327368#comment-15327368
 ] 

ASF subversion and git services commented on LUCENE-7329:
-

Commit af2ae05d6ec158a962731b77478d9cf451d9e00a in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=af2ae05 ]

LUCENE-7329: Simplify CharacterUtils.


> Simplify CharacterUtils
> ---
>
> Key: LUCENE-7329
> URL: https://issues.apache.org/jira/browse/LUCENE-7329
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7329.patch
>
>
> This class has abstractions for the Java 4 and 5 ways of dealing with 
> characters, but we now only use the Java 5 implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1212 - Failure

2016-06-13 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1212/

All tests passed

Build Log:
[...truncated 63141 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj166341476
 [ecj-lint] Compiling 932 source files to /tmp/ecj166341476
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
 (at line 22)
 [ecj-lint] import java.util.Collections;
 [ecj-lint]^
 [ecj-lint] The import java.util.Collections is never used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
 (at line 45)
 [ecj-lint] import org.apache.solr.handler.component.ShardHandlerFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.handler.component.ShardHandlerFactory is 
never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/OverseerTaskQueue.java
 (at line 22)
 [ecj-lint] import java.util.Set;
 [ecj-lint]^
 [ecj-lint] The import java.util.Set is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 226)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this 

[jira] [Commented] (SOLR-445) Update Handlers abort with bad documents

2016-06-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327357#comment-15327357
 ] 

ASF GitHub Bot commented on SOLR-445:
-

GitHub user arafalov opened a pull request:

https://github.com/apache/lucene-solr/pull/43

Trivial name spelling fix for SOLR-445

ToleranteUpdateProcessorFactory -> TolerantUpdateProcessorFactory

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arafalov/lucene-solr-1 patch-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/43.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #43


commit 6742355f93f0d2d03600fe408b542507ee89bf54
Author: Alexandre Rafalovitch 
Date:   2016-06-13T13:19:25Z

Trivial Spelling fix 

ToleranteUpdateProcessorFactory -> TolerantUpdateProcessorFactory

commit ebffa9aa2aebd689db53ba363d5022b893c7eeb0
Author: Alexandre Rafalovitch 
Date:   2016-06-13T13:22:49Z

Trivial Spelling fix

ToleranteUpdateProcessorFactory -> TolerantUpdateProcessorFactory




> Update Handlers abort with bad documents
> 
>
> Key: SOLR-445
> URL: https://issues.apache.org/jira/browse/SOLR-445
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Will Johnson
>Assignee: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-445-3_x.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445-alternative.patch, 
> SOLR-445-alternative.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, SOLR-445.patch, 
> SOLR-445.patch, SOLR-445.patch, SOLR-445_3x.patch, solr-445.xml
>
>
> This issue adds a new {{TolerantUpdateProcessorFactory}} making it possible 
> to configure solr updates so that they are "tolerant" of individual errors in 
> an update request...
> {code}
>   <processor class="solr.TolerantUpdateProcessorFactory">
>     <int name="maxErrors">10</int>
>   </processor>
> {code}
> When a chain with this processor is used, but maxErrors isn't exceeded, 
> here's what the response looks like...
> {code}
> $ curl 
> 'http://localhost:8983/solr/techproducts/update?update.chain=tolerant-chain&wt=json&indent=true&maxErrors=-1'
>  -H "Content-Type: application/json" --data-binary '{"add" : { 
> "doc":{"id":"1","foo_i":"bogus"}}, "delete": {"query":"malformed:["}}'
> {
>   "responseHeader":{
> "errors":[{
> "type":"ADD",
> "id":"1",
> "message":"ERROR: [doc=1] Error adding field 'foo_i'='bogus' msg=For 
> input string: \"bogus\""},
>   {
> "type":"DELQ",
> "id":"malformed:[",
> "message":"org.apache.solr.search.SyntaxError: Cannot parse 
> 'malformed:[': Encountered \"\" at line 1, column 11.\nWas expecting one 
> of:\n ...\n ...\n"}],
> "maxErrors":-1,
> "status":0,
> "QTime":1}}
> {code}
> Note in the above example that:
> * maxErrors can be overridden on a per-request basis
> * an effective {{maxErrors==-1}} (either from config, or request param) means 
> "unlimited" (under the covers it's using {{Integer.MAX_VALUE}})
> If/When maxErrors is reached for a request, then the _first_ exception that 
> the processor caught is propagated back to the user, and metadata is set on 
> that exception with all of the same details about all the tolerated errors.
> This next example is the same as the previous except that instead of 
> {{maxErrors=-1}} the request param is now {{maxErrors=1}}...
> {code}
> $ curl 
> 'http://localhost:8983/solr/techproducts/update?update.chain=tolerant-chain&wt=json&indent=true&maxErrors=1'
>  -H "Content-Type: application/json" --data-binary '{"add" : { 
> "doc":{"id":"1","foo_i":"bogus"}}, "delete": {"query":"malformed:["}}'
> {
>   "responseHeader":{
> "errors":[{
> "type":"ADD",
> "id":"1",
> "message":"ERROR: [doc=1] Error adding field 'foo_i'='bogus' msg=For 
> input string: \"bogus\""},
>   {
> "type":"DELQ",
> "id":"malformed:[",
> "message":"org.apache.solr.search.SyntaxError: Cannot parse 
> 'malformed:[': Encountered \"\" at line 1, column 11.\nWas expecting one 
> of:\n ...\n ...\n"}],
> "maxErrors":1,
> "status":400,
> "QTime":1},
>   "error":{
> "metadata":[
>   "org.apache.solr.common.ToleratedUpdateError--ADD:1","ERROR: [doc=1] 
> Error adding field 'foo_i'='bogus' msg=For input string: \"bogus\"",
>   
> "org.apache.solr.common.ToleratedUpdateError--DELQ:malformed:[","org.apache.solr.search.SyntaxError:
>  Cannot parse 'malformed:[': Encountered \"\" at line 1, column 11.\nWas 
> expecting one of:\n ...\n ...\n",
>   

[GitHub] lucene-solr pull request #43: Trivial name spelling fix for SOLR-445

2016-06-13 Thread arafalov
GitHub user arafalov opened a pull request:

https://github.com/apache/lucene-solr/pull/43

Trivial name spelling fix for SOLR-445

ToleranteUpdateProcessorFactory -> TolerantUpdateProcessorFactory

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arafalov/lucene-solr-1 patch-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/43.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #43


commit 6742355f93f0d2d03600fe408b542507ee89bf54
Author: Alexandre Rafalovitch 
Date:   2016-06-13T13:19:25Z

Trivial Spelling fix 

ToleranteUpdateProcessorFactory -> TolerantUpdateProcessorFactory

commit ebffa9aa2aebd689db53ba363d5022b893c7eeb0
Author: Alexandre Rafalovitch 
Date:   2016-06-13T13:22:49Z

Trivial Spelling fix

ToleranteUpdateProcessorFactory -> TolerantUpdateProcessorFactory




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 887 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/887/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 63240 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1806507459
 [ecj-lint] Compiling 932 source files to /tmp/ecj1806507459
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/client/solrj/embedded/JettySolrRunner.java
 (at line 38)
 [ecj-lint] import java.util.SortedMap;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.SortedMap is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
 (at line 22)
 [ecj-lint] import java.util.Collections;
 [ecj-lint]^
 [ecj-lint] The import java.util.Collections is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/OverseerTaskProcessor.java
 (at line 45)
 [ecj-lint] import org.apache.solr.handler.component.ShardHandlerFactory;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.handler.component.ShardHandlerFactory is 
never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/OverseerTaskQueue.java
 (at line 22)
 [ecj-lint] import java.util.Set;
 [ecj-lint]^
 [ecj-lint] The import java.util.Set is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 213)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 226)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (SOLR-8715) New Admin UI's Schema screen fails for some fields

2016-06-13 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327328#comment-15327328
 ] 

Alexandre Rafalovitch commented on SOLR-8715:
-

Is this one too late for 6.1? We have already figured out the solution; it just 
needs to be committed.

> New Admin UI's Schema screen fails for some fields
> --
>
> Key: SOLR-8715
> URL: https://issues.apache.org/jira/browse/SOLR-8715
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 5.5, 6.0
> Environment: mac, firefox
>Reporter: Alexandre Rafalovitch
>Assignee: Upayavira
>  Labels: admin-interface
> Attachments: Problem shown in the released 5.5 version.png
>
>
> In techproducts example, using new Admin UI and trying to load the Schema for 
> text field causes blank screen and the Javascript error in the developer 
> console:
> {noformat}
> Error: row.flags is undefined
> getFieldProperties@http://localhost:8983/solr/js/angular/controllers/schema.js:482:40
> $scope.refresh/http://localhost:8983/solr/js/angular/controllers/schema.js:76:38
> 
> {noformat}
> Tested with 5.5rc3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 6.1.0 RC1

2016-06-13 Thread Mikhail Khludnev
SUCCESS! [1:56:13.790113]
briefly checked [subquery].

On Mon, Jun 13, 2016 at 12:15 PM, Adrien Grand  wrote:

> Please vote for release candidate 1 for Lucene/Solr 6.1.0.
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.1.0-RC1-rev4726c5b2d2efa9ba160b608d46a977d0a6b83f94/
>
> Here is my +1.
> SUCCESS! [0:36:57.750669]
>



-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





[jira] [Commented] (SOLR-9161) SolrPluginUtils.invokeSetters should accommodate setter variants

2016-06-13 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327302#comment-15327302
 ] 

Christine Poerschke commented on SOLR-9161:
---

Thanks Steve. I also just ran the beasting script and it failed 0/100 
iterations.
{code}
cd solr/core
ant beast -Dbeast.iters=100 -Dtestcase=SolrPluginUtilsTest
{code}

> SolrPluginUtils.invokeSetters should accommodate setter variants
> 
>
> Key: SOLR-9161
> URL: https://issues.apache.org/jira/browse/SOLR-9161
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9161.patch, SOLR-9161.patch
>
>
> The code currently assumes that there is only one setter (or if there are 
> several setters then the first one found is used and it could mismatch on the 
> arg type).
> Context and motivation is that a class with a
> {code}
> void setAFloat(float val) {
>   this.val = val;
> }
> {code}
> setter may wish to also provide a
> {code}
> void setAFloat(String val) {
>   this.val = Float.parseFloat(val);
> }
> {code}
> convenience setter.
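
(To make the intended behaviour concrete, a rough reflection sketch; this is an editor's 
illustration, not the actual SolrPluginUtils.invokeSetters implementation: prefer the 
overload whose parameter type matches the supplied value, rather than blindly using the 
first setter found.)
{code}
import java.lang.reflect.Method;

public class SetterInvoker {
  // Editor's sketch, not SolrPluginUtils code.
  static void invokeSetter(Object bean, String name, Object value) throws Exception {
    String setter = "set" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
    Method fallback = null;
    for (Method m : bean.getClass().getMethods()) {
      if (!m.getName().equals(setter) || m.getParameterTypes().length != 1) continue;
      Class<?> param = m.getParameterTypes()[0];
      if (value != null && param.isAssignableFrom(value.getClass())) {
        m.invoke(bean, value);          // parameter type matches the value: use this overload
        return;
      }
      fallback = m;                     // remember some overload in case nothing matches exactly
    }
    if (fallback == null) throw new NoSuchMethodException(setter);
    fallback.invoke(bean, value);       // may still fail if the argument type is incompatible
  }
}
{code}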



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9161) SolrPluginUtils.invokeSetters should accommodate setter variants

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327303#comment-15327303
 ] 

ASF subversion and git services commented on SOLR-9161:
---

Commit 038fe9378dab18d0e16b34c26dc802c6560e77e7 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=038fe93 ]

SOLR-9161: change SolrPluginUtils.invokeSetters implementation to accommodate 
setter variants


> SolrPluginUtils.invokeSetters should accommodate setter variants
> 
>
> Key: SOLR-9161
> URL: https://issues.apache.org/jira/browse/SOLR-9161
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9161.patch, SOLR-9161.patch
>
>
> The code currently assumes that there is only one setter (or if there are 
> several setters then the first one found is used and it could mismatch on the 
> arg type).
> Context and motivation is that a class with a
> {code}
> void setAFloat(float val) {
>   this.val = val;
> }
> {code}
> setter may wish to also provide a
> {code}
> void setAFloat(String val) {
>   this.val = Float.parseFloat(val);
> }
> {code}
> convenience setter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.1-Linux (64bit/jdk1.8.0_92) - Build # 29 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/29/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'params/c' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{ "a":"A 
val", "b":"B val", "wt":"json", "useParams":""},   "context":{ 
"webapp":"", "path":"/dump1", "httpMethod":"GET"}},  from server:  
http://127.0.0.1:34839/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'params/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{
"a":"A val",
"b":"B val",
"wt":"json",
"useParams":""},
  "context":{
"webapp":"",
"path":"/dump1",
"httpMethod":"GET"}},  from server:  http://127.0.0.1:34839/collection1
at 
__randomizedtesting.SeedInfo.seed([A750870D78669E8:822137AA797A0410]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:172)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.1-Windows (64bit/jdk1.8.0_92) - Build # 9 - Failure!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Windows/9/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, TransactionLog, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, TransactionLog, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog]
at __randomizedtesting.SeedInfo.seed([F7A23C3ACBBC2A21]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_F7A23C3ACBBC2A21-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog\tlog.001:
 java.nio.file.FileSystemException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_F7A23C3ACBBC2A21-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog\tlog.001:
 The process cannot access the file because it is being used by another 
process. 
C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_F7A23C3ACBBC2A21-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_F7A23C3ACBBC2A21-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog

C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_F7A23C3ACBBC2A21-001\tempDir-001\node2\testschemaapi_shard1_replica1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.1-Windows\solr\build\solr-core\test\J0\temp\solr.schema.TestManagedSchemaAPI_F7A23C3ACBBC2A21-001\tempDir-001\node2\testschemaapi_shard1_replica1\data


[jira] [Resolved] (LUCENE-7319) remove unused imports

2016-06-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-7319.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.x

> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7320) fail precommit on unusedImport

2016-06-13 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-7320.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.x

> fail precommit on unusedImport
> --
>
> Key: LUCENE-7320
> URL: https://issues.apache.org/jira/browse/LUCENE-7320
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: LUCENE-7320.patch
>
>
> One-line change to 
> [ecj.javadocs.prefs|https://github.com/apache/lucene-solr/blob/master/lucene/tools/javadoc/ecj.javadocs.prefs]
>  once LUCENE-7319 has taken care of removing unused imports in the existing 
> code base.
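
(For context, the one-line change being described is presumably the stock ECJ compiler 
preference for unused imports switched to error severity in that prefs file; the key 
below is the standard Eclipse option name, stated as an assumption rather than quoted 
from the attached patch.)
{code}
# assumed one-line change in lucene/tools/javadoc/ecj.javadocs.prefs
org.eclipse.jdt.core.compiler.problem.unusedImport=error
{code}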



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7319) remove unused imports

2016-06-13 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327216#comment-15327216
 ] 

Christine Poerschke commented on LUCENE-7319:
-

https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=52f5c502468846138f73ab83837528fa91a54733
 was also for the master branch (not sure why it didn't auto-update the ticket) and 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=21bc7ef104b149ee9b7b09e1fcecd7896ec328c1
 is the equivalent for the branch_6x branch.



> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7320) fail precommit on unusedImport

2016-06-13 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15327207#comment-15327207
 ] 

ASF subversion and git services commented on LUCENE-7320:
-

Commit 7433d60bd105668b17dfe44c55624a5136383893 in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7433d60 ]

LUCENE-7320: fail precommit on unusedImport


> fail precommit on unusedImport
> --
>
> Key: LUCENE-7320
> URL: https://issues.apache.org/jira/browse/LUCENE-7320
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7320.patch
>
>
> One-line change to 
> [ecj.javadocs.prefs|https://github.com/apache/lucene-solr/blob/master/lucene/tools/javadoc/ecj.javadocs.prefs]
>  once LUCENE-7319 has taken care of removing unused imports in the existing 
> code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 16980 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16980/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:33518/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:33518/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([C89CB319CA42D2C1:40C88CC364BEBF39]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5908 - Still Failing!

2016-06-13 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5908/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]
    at __randomizedtesting.SeedInfo.seed([F29067D0C63F3C3]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.junit.Assert.assertNull(Assert.java:551)
    at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
    at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)
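
The teardown assertion above comes from SolrTestCaseJ4's object tracker: every tracked resource created during a test must be released before the suite finishes, and here one Apache HttpClient instance (InternalHttpClient) was not. As a hedged illustration only, not the actual CDCR test code (Solr normally builds clients through its own HttpClientUtil), the usual shape of the fix is to close the client deterministically, for example with try-with-resources:

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    public class ClosedClientSketch {
        public static void main(String[] args) throws Exception {
            // try-with-resources guarantees client.close() runs, releasing the
            // underlying InternalHttpClient and its connection pool -- the object
            // the tracker reported as leaked above.
            try (CloseableHttpClient client = HttpClients.createDefault()) {
                // URL is illustrative only.
                HttpGet get = new HttpGet("http://localhost:8983/solr/admin/info/system");
                try (CloseableHttpResponse response = client.execute(get)) {
                    System.out.println(response.getStatusLine());
                }
            }
        }
    }

In a test the close would typically happen in the teardown method rather than in main(), but the principle is the same: whoever creates the client must close it before SolrTestCaseJ4.teardownTestCases runs its leak check.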




Build Log:
[...truncated 11367 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrVersionReplicationTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.CdcrVersionReplicationTest_F29067D0C63F3C3-001\init-core-data-001
   [junit4]   2> 1216016 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[F29067D0C63F3C3]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1216016 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[F29067D0C63F3C3]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1216021 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[F29067D0C63F3C3]) [  
  ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1216026 INFO  (Thread-3223) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1216026 INFO  (Thread-3223) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1216123 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[F29067D0C63F3C3]) [  
  ] o.a.s.c.ZkTestServer start zk server on port:54239
   [junit4]   2> 1216127 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[F29067D0C63F3C3]) [  
  ] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1216129 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[F29067D0C63F3C3]) [  
  ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1216141 INFO  (zkCallback-1760-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@2d9bff name:ZooKeeperConnection 
Watcher:127.0.0.1:54239 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 1216141 INFO  

[jira] [Commented] (LUCENE-7319) remove unused imports

2016-06-13 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327133#comment-15327133 ] 

ASF subversion and git services commented on LUCENE-7319:
-

Commit 95c7e6d716ae5e96a9fff3b682a383f4c073 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=95c7e6d ]

LUCENE-7319: remove one more unused import


> remove unused imports
> -
>
> Key: LUCENE-7319
> URL: https://issues.apache.org/jira/browse/LUCENE-7319
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7319.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7320) fail precommit on unusedImport

2016-06-13 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327132#comment-15327132 ] 

ASF subversion and git services commented on LUCENE-7320:
-

Commit c8911ccc772cae252c03099dc8711509a1bede34 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c8911cc ]

LUCENE-7320: fail precommit on unusedImport


> fail precommit on unusedImport
> --
>
> Key: LUCENE-7320
> URL: https://issues.apache.org/jira/browse/LUCENE-7320
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7320.patch
>
>
> One-line change to 
> [ecj.javadocs.prefs|https://github.com/apache/lucene-solr/blob/master/lucene/tools/javadoc/ecj.javadocs.prefs]
>  once LUCENE-7319 has taken care of removing unused imports in the existing 
> code base.
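
For context, that one-line change would be expected to raise ECJ's severity for unused imports so that the lint step (and therefore ant precommit) fails on them. A minimal sketch of what the edited line in ecj.javadocs.prefs might look like; the property key is the standard ECJ compiler option, but the previous value shown is an assumption, not taken from the actual file:

    # lucene/tools/javadoc/ecj.javadocs.prefs (illustrative excerpt)
    # assumed previous setting:
    #   org.eclipse.jdt.core.compiler.problem.unusedImport=warning
    org.eclipse.jdt.core.compiler.problem.unusedImport=error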



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6171) Make lucene completely write-once

2016-06-13 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6171:
---
Attachment: LUCENE-6171.patch

Here's a naive initial patch ... all I did was add the {{CREATE_NEW}}
flag in {{FSDirectory.createOutput}}, and removed
{{MockDirectoryWrapper.setPreventDoubleWrite}} and all tests that
invoked that.

Lucene tests passed (once!); not sure about solr tests.  I haven't looked into 
the other fixes [~rcmuir] suggested yet ...
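
As a self-contained sketch of the java.nio behavior the patch leans on (plain JDK code, not the actual FSDirectory.createOutput implementation): opening a file with StandardOpenOption.CREATE_NEW fails fast with FileAlreadyExistsException when the file already exists, instead of silently truncating it, which is exactly the write-once guarantee being discussed.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.FileAlreadyExistsException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class CreateNewSketch {
        public static void main(String[] args) throws IOException {
            Path dir = Files.createTempDirectory("write-once-demo");
            Path file = dir.resolve("_0.si");   // file name is illustrative

            // First creation succeeds and writes the file.
            try (OutputStream out = Files.newOutputStream(
                    file, StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW)) {
                out.write(42);
            }

            // A second attempt to create the same file throws instead of overwriting.
            try (OutputStream out = Files.newOutputStream(
                    file, StandardOpenOption.WRITE, StandardOpenOption.CREATE_NEW)) {
                out.write(43);
            } catch (FileAlreadyExistsException expected) {
                System.out.println("refused to overwrite: " + expected.getFile());
            }
        }
    }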


> Make lucene completely write-once
> -
>
> Key: LUCENE-6171
> URL: https://issues.apache.org/jira/browse/LUCENE-6171
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Assignee: Michael McCandless
> Attachments: LUCENE-6171.patch
>
>
> Today, lucene is mostly write-once, but not always, and these are just very 
> exceptional cases. 
> This is an invitation for exceptional bugs (and we have occasional test 
> failures when doing "no-wait close" because of this). 
> I would prefer it if we didn't try to delete files before we open them for 
> write, and if we opened them with the CREATE_NEW option by default, so that 
> an exception is thrown if the file already exists.
> The trickier parts of the change are going to be IndexFileDeleter and 
> exceptions on merge / CFS construction logic.
> Overall for IndexFileDeleter I think the least invasive option might be to 
> only delete files older than the current commit point? This will ensure that 
> inflateGens() always avoids trying to overwrite any files that were from an 
> aborted segment. 
> For CFS construction/exceptions on merge, we really need to remove the custom 
> "sniping" of index files there and let only IndexFileDeleter delete files. My 
> previous failed approach involved always consistently using 
> TrackingDirectoryWrapper, but it failed, and only in backwards compatibility 
> tests, because of LUCENE-6146 (but i could never figure that out). I am 
> hoping this time I will be successful :)
> Longer term we should think about more simplifications, progress has been 
> made on LUCENE-5987, but I think overall we still try to be a superhero for 
> exceptions on merge?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


