[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.8.0_92) - Build # 295 - Still Failing!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/295/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries02

Error Message:
hits1 doc nrs for hit 0 expected:<4653> but was:<6702>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4653> 
but was:<6702>
at 
__randomizedtesting.SeedInfo.seed([662443CFEE45B337:15254080257997A8]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries02(TestBoolean2.java:236)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries08

Error Message:
hits1 doc nrs for hit 0 expected:<4653> but was:<6702>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4653> 
but was:<6702>
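All of the TestBoolean2 failures in this digest carry the same "doc nrs for hit 0" 
message, which comes from CheckHits.checkDocIds comparing the doc id of each returned 
hit against an expected array. A rough sketch of that comparison (illustrative only, 
not a copy of the Lucene CheckHits source):

import junit.framework.Assert;
import org.apache.lucene.search.ScoreDoc;

class CheckDocIdsSketch {
  // Every hit's doc id must equal the expected doc id at the same position, so
  // "doc nrs for hit 0" means the very first hit already pointed at a different
  // document than the reference search produced.
  static void checkDocIds(String mes, int[] expected, ScoreDoc[] hits) {
    Assert.assertEquals(mes + " nr of hits", expected.length, hits.length);
    for (int i = 0; i < hits.length; i++) {
      Assert.assertEquals(mes + " doc nrs for hit " + i, expected[i], hits[i].doc);
    }
  }
}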

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3344 - Failure!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3344/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:63397","node_name":"127.0.0.1:63397_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:63375;,   "node_name":"127.0.0.1:63375_",  
 "state":"down"}, "core_node2":{   "state":"down",  
 "base_url":"http://127.0.0.1:63391;,   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:63391_"}, 
"core_node3":{   "core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:63397;,   "node_name":"127.0.0.1:63397_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:63397","node_name":"127.0.0.1:63397_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:63375;,
  "node_name":"127.0.0.1:63375_",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:63391;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:63391_"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:63397;,
  "node_name":"127.0.0.1:63397_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([23EB9BFCF38DAAE:8A6A866561C4B756]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
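The assertion that fails above is essentially a timed wait for enough live, ACTIVE 
replicas of c8n_1x3_lf/shard1. A rough sketch of that kind of check against the SolrJ 
cluster-state API (illustrative only, not the test's actual code; the helper name and 
polling interval are made up):

import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

class ActiveReplicaCheck {
  // Poll the cluster state until `expected` replicas of shard1 are ACTIVE and
  // hosted on a live node, or the timeout expires.
  static boolean waitForActiveReplicas(CloudSolrClient client, int expected,
                                       long timeoutMs) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (System.nanoTime() < deadline) {
      ClusterState cs = client.getZkStateReader().getClusterState();
      Slice shard1 = cs.getCollection("c8n_1x3_lf").getSlice("shard1");
      int active = 0;
      for (Replica r : shard1.getReplicas()) {
        if (r.getState() == Replica.State.ACTIVE
            && cs.getLiveNodes().contains(r.getNodeName())) {
          active++;
        }
      }
      if (active >= expected) return true;
      Thread.sleep(500);
    }
    return false;
  }
}

In the failure above, two of the three replicas stayed "down" after the partition 
healed, so a check of this shape times out and the assertion fires.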

[JENKINS] Lucene-Solr-Tests-5.5-Java8 - Build # 32 - Failure

2016-06-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java8/32/

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries01

Error Message:
hits1 doc nrs for hit 0 expected:<4411> but was:<6460>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4411> 
but was:<6460>
at 
__randomizedtesting.SeedInfo.seed([DF5D40551FEA611F:C20306268A968AF7]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries01(TestBoolean2.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries08

Error Message:
hits1 doc nrs for hit 0 expected:<4411> but was:<6460>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4411> 
but was:<6460>
at 

[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 16 - Still Failing

2016-06-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/16/

No tests ran.

Build Log:
[...truncated 39773 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (16.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.5.2-src.tgz...
   [smoker] 28.7 MB in 0.03 sec (1008.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.2.tgz...
   [smoker] 63.4 MB in 0.06 sec (1108.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.5.2.zip...
   [smoker] 73.9 MB in 0.07 sec (1127.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.5.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6190 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.5.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] 
   [smoker] command "export 
JAVA_HOME="/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7" 
PATH="/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/bin:$PATH" 
JAVACMD="/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/bin/java";
 ant clean test -Dtests.slow=false" failed:
   [smoker] Buildfile: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/lucene-5.5.2/build.xml
   [smoker] 
   [smoker] clean:
   [smoker][delete] Deleting directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/lucene-5.5.2/build
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: Apache Ivy 2.3.0 - 20130110142753 :: 
http://ant.apache.org/ivy/ ::
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/tmp/unpack/lucene-5.5.2/ivy-settings.xml
   [smoker] 
   [smoker] -clover.load:
   [smoker] 
   [smoker] resolve-groovy:
   [smoker] [ivy:cachepath] :: resolving dependencies :: 
org.codehaus.groovy#groovy-all-caller;working
   [smoker] [ivy:cachepath] confs: [default]
   [smoker] [ivy:cachepath] found org.codehaus.groovy#groovy-all;2.4.4 in 
public
   [smoker] [ivy:cachepath] :: resolution report :: resolve 182ms :: artifacts 
dl 2ms
   [smoker] ---------------------------------------------------------------------
   [smoker] |                  |            modules            ||   artifacts   |
   [smoker] |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
   [smoker] ---------------------------------------------------------------------
   [smoker] |      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
   [smoker] ---------------------------------------------------------------------
   [smoker] 
   [smoker] -init-totals:
   [smoker] 
   [smoker] test-core:
   [smoker] 
   [smoker] -clover.disable:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: loading settings :: file = 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 518 - Failure

2016-06-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/518/

No tests ran.

Build Log:
[...truncated 40556 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (20.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 28.6 MB in 0.02 sec (1212.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 63.1 MB in 0.05 sec (1208.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 73.7 MB in 0.06 sec (1201.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6019 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6019 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 224 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   6.1.0
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1431, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1375, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1413, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 590, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 736, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1351, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:536:
 exec returned: 1

Total time: 37 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
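The smoker aborts here because its back-compat coverage check treats the list of 
released versions and the versions covered by TestBackwardsCompatibility as sets and 
fails when the difference is non-empty (6.1.0 in this run). The real logic lives in 
dev-tools/scripts/smokeTestRelease.py; the sketch below only shows the shape of the 
check:

import java.util.Set;
import java.util.TreeSet;

class BackCompatCoverage {
  // Releases that exist but have no back-compat test index; a non-empty result
  // corresponds to the "some releases are not tested" RuntimeError above.
  static Set<String> untested(Set<String> allReleases, Set<String> testedReleases) {
    Set<String> missing = new TreeSet<>(allReleases);
    missing.removeAll(testedReleases);   // e.g. [6.1.0] in this build
    return missing;
  }
}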





[JENKINS] Lucene-Solr-5.5-Windows (64bit/jdk1.8.0_92) - Build # 84 - Still Failing!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/84/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries01

Error Message:
hits1 doc nrs for hit 0 expected:<4240> but was:<6289>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4240> 
but was:<6289>
at 
__randomizedtesting.SeedInfo.seed([A7B8C09A76BE9E0:1725CA7A32170208]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries01(TestBoolean2.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries03

Error Message:
hits1 doc nrs for hit 0 expected:<4240> but was:<6289>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4240> 
but 

[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.8.0_92) - Build # 294 - Failure!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/294/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries03

Error Message:
hits1 doc nrs for hit 0 expected:<4381> but was:<6430>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4381> 
but was:<6430>
at 
__randomizedtesting.SeedInfo.seed([55D8DFC8BCFC02AB:103D61FE5C2A432B]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries03(TestBoolean2.java:246)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries02

Error Message:
hits1 doc nrs for hit 0 expected:<4381> but was:<6430>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4381> 
but was:<6430>
at 

Re: [JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 292 - Failure!

2016-06-16 Thread Steve Rowe
Thanks for looking Hoss.

I compared files changed by the commits on branch_6x and on branch_5_5, and I 
don’t see anything consequential, so I don’t think this is a case of a 
misapplied backport.

--
Steve
www.lucidworks.com

> On Jun 16, 2016, at 6:25 PM, Chris Hostetter  wrote:
> 
> 
> : : I ran this test before I committed the backport, but it succeeded then.  
> : : I beasted it on current branch_5_5 and 49/100 seeds succeeded.
> : 
> : one of the things that changed as part of LUCENE-7132 was that I made all 
> : the BQ related tests start randomizing setDisableCoord() ... so you might 
> : be seeing some previously unidentified coord related bug that is only in 
> : the 5.x line of code?
> : 
> : that could probably jive with the roughly 50% failure ratio you're 
> : seeing?
> 
> Hmmm  nope.  Even with the setDisableCoord commented out (but still 
> consuming random().nextBoolean() consistently) the same seeds reliably 
> fail on branch_5_5
> 
> Looks like the "~50%" comes from the "use filler docs or not?" bit of the 
> test?  with the patch below i can't find any seeds to fail -- which makes 
> it seem like the crux of the original bug (results incorrect when docs are 
> in diff blocks) is still relevant even after the backport to branch_5_5.
> 
> Mike -- any idea what might still be the problem here?
> 
> 
> 
> -Hoss
> http://www.lucidworks.com/
> 
> 
> diff --git 
> a/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java 
> b/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
> index d97d8d4..596eb64 100644
> --- a/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
> +++ b/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
> @@ -67,6 +67,7 @@ public class TestBoolean2 extends LuceneTestCase {
>   public static void beforeClass() throws Exception {
> // in some runs, test immediate adjacency of matches - in others, force a 
> full bucket gap betwen docs
> NUM_FILLER_DOCS = random().nextBoolean() ? 0 : BooleanScorer.SIZE;
> +NUM_FILLER_DOCS = 0; // nocommit
> PRE_FILLER_DOCS = TestUtil.nextInt(random(), 0, (NUM_FILLER_DOCS / 2));
> 
> directory = newDirectory();
> 
> 
> 
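For local digging along these lines, the randomizedtesting annotations can pin the 
master seed from one of the Jenkins reports so the @BeforeClass coin flip (filler docs 
on or off via NUM_FILLER_DOCS) is fixed across runs. A throwaway local-only sketch, 
not something to commit; the class and method names here are invented:

import org.apache.lucene.util.LuceneTestCase;
import org.junit.Test;

import com.carrotsearch.randomizedtesting.annotations.Repeat;
import com.carrotsearch.randomizedtesting.annotations.Seed;

// Pin the master seed from the first report above; beforeClass() randomization
// then becomes deterministic, and @Repeat hammers a single method under it.
@Seed("662443CFEE45B337")
public class TestBoolean2SeedPin extends LuceneTestCase {

  @Test
  @Repeat(iterations = 100)
  public void testPinnedQueries() throws Exception {
    // the body of TestBoolean2.testQueries02() would go here in the real local
    // edit; left empty so this sketch stands alone
  }
}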





[JENKINS] Lucene-Solr-Tests-5.5-Java7 - Build # 30 - Failure

2016-06-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5-Java7/30/

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries02

Error Message:
hits1 doc nrs for hit 0 expected:<4132> but was:<6181>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4132> 
but was:<6181>
at 
__randomizedtesting.SeedInfo.seed([CA8508789685CF3B:B9840B375DB9EBA4]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries02(TestBoolean2.java:236)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries03

Error Message:
hits1 doc nrs for hit 0 expected:<4132> but was:<6181>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4132> 
but was:<6181>
at 

[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 13 - Still Failing

2016-06-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/13/

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries03

Error Message:
hits1 doc nrs for hit 0 expected:<4635> but was:<6684>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4635> 
but was:<6684>
at 
__randomizedtesting.SeedInfo.seed([E4F538C28CB310CA:A11086F46C65514A]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries03(TestBoolean2.java:246)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries02

Error Message:
hits1 doc nrs for hit 0 expected:<4635> but was:<6684>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4635> 
but was:<6684>
at 

[jira] [Commented] (SOLR-7065) Let a replica become the leader regardless of it's last published state if all replicas participate in the election process.

2016-06-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335191#comment-15335191
 ] 

Mark Miller commented on SOLR-7065:
---

We are skipping recovery, so we want to return -1 (success).

I think this is a tricky issue. Requires a bit of thought to make sure it's all 
okay. But I think I roughly had what we need in the patch. The main issue was 
that the test exposed some kind of problem where no leader would be elected. I 
think this may now be okay since another issue has been resolved. Most of the 
discussion above is not very related to this patch.

> Let a replica become the leader regardless of it's last published state if 
> all replicas participate in the election process.
> 
>
> Key: SOLR-7065
> URL: https://issues.apache.org/jira/browse/SOLR-7065
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-7065.patch, SOLR-7065.patch
>
>







Re: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 252 - Still Failing!

2016-06-16 Thread Steve Rowe
Thanks Uwe.

I noticed Lucene-Solr-6.1-Linux was still running on Policeman Jenkins, so I 
disabled it as well. 

I also disabled the 6.1 jobs on ASF Jenkins.

--
Steve
www.lucidworks.com

> On Jun 16, 2016, at 12:16 PM, Uwe Schindler  wrote:
> 
> Hi,
> 
> as 6.1 is out I disabled this job and nuked workspace.
> Unfortunately the Windows VMs are a bit limited in space (although they have 
> the largest disk!). If one of the jobs somehow uses much space (randomly) it 
> fcks up :-(
> 
> Uwe
> 
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
>> -Original Message-
>> From: Steve Rowe [mailto:sar...@gmail.com]
>> Sent: Thursday, June 16, 2016 4:23 PM
>> To: Uwe Schindler 
>> Cc: Lucene Dev 
>> Subject: Re: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build #
>> 252 - Still Failing!
>> 
>> Uwe, looks like you have disk space problems on Policeman Jenkins:
>> 
>>> Caused by: java.io.IOException: There is not enough space on the disk
>> 
>> --
>> Steve
>> www.lucidworks.com
>> 
>>> On Jun 16, 2016, at 10:18 AM, Policeman Jenkins Server
>>  wrote:
>>> 
>>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/252/
>>> Java: 32bit/jdk1.8.0_92 -client -XX:+UseParallelGC
>>> 
>>> No tests ran.
>>> 
>>> Build Log:
>>> [...truncated 14 lines...]
>>> FATAL: Exception caught during execution of reset command. {0}
>>> org.eclipse.jgit.api.errors.JGitInternalException: Exception caught during
>> execution of reset command. {0}
>>> at org.eclipse.jgit.api.ResetCommand.call(ResetCommand.java:230)
>>> at
>> org.jenkinsci.plugins.gitclient.JGitAPIImpl.clean(JGitAPIImpl.java:1299)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown
>> Source)
>>> at java.lang.reflect.Method.invoke(Unknown Source)
>>> at
>> hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteI
>> nvocationHandler.java:884)
>>> at
>> hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvoca
>> tionHandler.java:859)
>>> at
>> hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvoca
>> tionHandler.java:818)
>>> at hudson.remoting.UserRequest.perform(UserRequest.java:152)
>>> at hudson.remoting.UserRequest.perform(UserRequest.java:50)
>>> at hudson.remoting.Request$2.run(Request.java:332)
>>> at
>> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorSe
>> rvice.java:68)
>>> at java.util.concurrent.FutureTask.run(Unknown Source)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
>> Source)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
>> Source)
>>> at java.lang.Thread.run(Unknown Source)
>>> at ..remote call to Windows VBOX(Native Method)
>>> at
>> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
>>> at hudson.remoting.UserResponse.retrieve(UserRequest.java:252)
>>> at hudson.remoting.Channel.call(Channel.java:781)
>>> at
>> hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandl
>> er.java:249)
>>> at com.sun.proxy.$Proxy56.clean(Unknown Source)
>>> at
>> org.jenkinsci.plugins.gitclient.RemoteGitImpl.clean(RemoteGitImpl.java:453)
>>> at
>> hudson.plugins.git.extensions.impl.CleanBeforeCheckout.decorateFetchCo
>> mmand(CleanBeforeCheckout.java:32)
>>> at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:806)
>>> at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
>>> at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
>>> at hudson.scm.SCM.checkout(SCM.java:485)
>>> at
>> hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
>>> at
>> hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(Abstr
>> actBuild.java:604)
>>> at
>> jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
>>> at
>> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:
>> 529)
>>> at hudson.model.Run.execute(Run.java:1741)
>>> at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
>>> at
>> hudson.model.ResourceController.execute(ResourceController.java:98)
>>> at hudson.model.Executor.run(Executor.java:410)
>>> Caused by: java.io.IOException: There is not enough space on the disk
>>> at java.io.FileOutputStream.writeBytes(Native Method)
>>> at java.io.FileOutputStream.write(Unknown Source)
>>> at
>> org.eclipse.jgit.internal.storage.file.LockFile$2.write(LockFile.java:327)
>>> at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
>>> at java.io.BufferedOutputStream.write(Unknown Source)
>>> at java.security.DigestOutputStream.write(Unknown Source)
>>> at
>> 

Re: NoHttpResponseException error between leader and replica

2016-06-16 Thread Mark Miller
I'm sorry, you say it's easy to reproduce, but can you explain roughly what
you are doing to reproduce it?

Mark
On Thu, Jun 16, 2016 at 9:20 PM Mark Miller  wrote:

> That's already how things work. It's now part of HttpClient. There are
> some settings you can mess with. Is it easy to reproduce?
>
> Mark
> On Thu, Jun 16, 2016 at 1:15 PM Varun Thacker 
> wrote:
>
>> When running a bulk index process occasionally we see a
>> NoHttpResponseException error when the leader is forwarding docs to the
>> replica. I think this is a known issue and can be reproduced pretty easily.
>>
>> What makes me want to dig more is that because of one such
>> NoHttpResponseException the leader will put the replica into recovery. The
>> replica can never catch up because the indexing throughput is quite high .
>> This can add hours of recovery time for the replica depending on how many
>> documents one is indexing .
>>
>> So from what I can think we have two options here -
>> 1. Implement a thread which removes stale connections. This has been
>> discussed on https://issues.apache.org/jira/browse/SOLR-4509 in the past
>> 2. The above solution is not the right way forward. The main problem here
>> is that replicas can't catch up because Solr doesn't implement backpressure
>> yet and implementing that would be the correct solution here
>>
>> Does anyone have an opinion on how we should we go forward with this
>> issue?
>>
>>
>>
>> --
>>
>>
>> Regards,
>> Varun Thacker
>>
> --
> - Mark
> about.me/markrmiller
>
-- 
- Mark
about.me/markrmiller


Re: NoHttpResponseException error between leader and replica

2016-06-16 Thread Mark Miller
That's already how things work. It's now part of HttpClient. There are some
settings you can mess with. Is it easy to reproduce?

Mark
On Thu, Jun 16, 2016 at 1:15 PM Varun Thacker 
wrote:

> When running a bulk index process occasionally we see a
> NoHttpResponseException error when the leader is forwarding docs to the
> replica. I think this is a known issue and can be reproduced pretty easily.
>
> What makes me want to dig more is that because of one such
> NoHttpResponseException the leader will put the replica into recovery. The
> replica can never catch up because the indexing throughput is quite high .
> This can add hours of recovery time for the replica depending on how many
> documents one is indexing .
>
> So from what I can think we have two options here -
> 1. Implement a thread which removes stale connections. This has been
> discussed on https://issues.apache.org/jira/browse/SOLR-4509 in the past
> 2. The above solution is not the right way forward. The main problem here
> is that replicas can't catch up because Solr doesn't implement backpressure
> yet and implementing that would be the correct solution here
>
> Does anyone have an opinion on how we should we go forward with this issue?
>
>
>
> --
>
>
> Regards,
> Varun Thacker
>
-- 
- Mark
about.me/markrmiller
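For reference, the HttpClient-side knobs Mark is alluding to look roughly like this in 
HttpClient 4.4+ (a sketch of the library's stale-connection handling, not Solr's actual 
HttpClient configuration):

import java.util.concurrent.TimeUnit;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

class StaleConnectionConfigSketch {
  static CloseableHttpClient build() {
    PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
    // Re-validate pooled connections that have been idle for more than 2s before
    // reuse, so half-closed sockets are detected up front rather than mid-request.
    cm.setValidateAfterInactivity(2000);
    return HttpClients.custom()
        .setConnectionManager(cm)
        // Background eviction of expired and long-idle connections from the pool.
        .evictExpiredConnections()
        .evictIdleConnections(30, TimeUnit.SECONDS)
        .build();
  }
}

Whether that is enough under sustained indexing load is exactly the open question in 
this thread; it does not address the backpressure point.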


[JENKINS] Lucene-Solr-6.1-Linux (32bit/jdk1.8.0_92) - Build # 47 - Failure!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.1-Linux/47/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([5279E14ABA72A209:2447FE39FB450F26]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:384)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:327)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:377)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-06-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-8097:
-
Fix Version/s: (was: 6.0)

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.1
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently SolrJ clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection<String> zkHosts, String chroot)
> public CloudSolrClient(Collection<String> zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> This is problematic when introducing additional parameters (since 
> we need to introduce additional constructors). Instead it would be helpful to 
> provide a SolrClient builder which can supply default values and support 
> overriding specific parameters. 
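To make the idea concrete, a builder along these lines replaces the constructor matrix with optional setters. This is only a sketch of the pattern with made-up method names, not the API from the attached patches:

{noformat}
import java.util.Collection;
import java.util.Collections;

import org.apache.http.client.HttpClient;

/** Hypothetical builder sketch; not the API proposed in the SOLR-8097 patches. */
public class CloudClientBuilder {
  private Collection<String> zkHosts;
  private String chroot;                   // optional
  private HttpClient httpClient;           // optional, built internally if absent
  private boolean updatesToLeaders = true; // a sensible default

  public CloudClientBuilder withZkHost(String zkHost) {
    this.zkHosts = Collections.singletonList(zkHost);
    return this;
  }

  public CloudClientBuilder withZkHosts(Collection<String> zkHosts, String chroot) {
    this.zkHosts = zkHosts;
    this.chroot = chroot;
    return this;
  }

  public CloudClientBuilder withHttpClient(HttpClient httpClient) {
    this.httpClient = httpClient;
    return this;
  }

  public CloudClientBuilder sendUpdatesOnlyToLeaders(boolean updatesToLeaders) {
    this.updatesToLeaders = updatesToLeaders;
    return this;
  }

  // build() would pass the collected values to a single package-private
  // constructor, so new options never require new public constructors.
}
{noformat}

Adding a new option then means adding one setter with a default, instead of doubling the constructor count.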



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334985#comment-15334985
 ] 

Hrishikesh Gadre commented on SOLR-7374:


bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?

It looks like we are also creating a snapshot as part of the post-commit/optimize 
operation. I'm not sure which repository we should use for this. Would this require 
adding another config param to ReplicationHandler?

https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L1324




> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-06-16 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334957#comment-15334957
 ] 

Shikha Somani edited comment on SOLR-8297 at 6/16/16 11:30 PM:
---

The *Any* option is introduced to support the existing cloud join scenario, i.e. where 
_fromCollection is singly sharded_. If asserting Any’s behavior is the only 
concern, I will write test cases for thorough verification. Below is a scenario 
which resembles the real world, and I will write a test case based on it.

*Scenario*: 
There are 2 collections in a 2 node cluster:
* product_category: It has values like books, toys, etc. _Singly sharded_
* sale: Holds information about current sale. Sale and product collection are 
related, sale collection contains ‘product key’. _Multi sharded_

*Query*: Find sale information with product information:
{!join from=id to=productKey fromCollection=product_category}

*Cluster information*:

||Node1|| ||Node2|| ||
|Product_category_shard1_replica1|8000-7fff|Product_category_shard1_replica2|8000-7fff|
|Sale_shard1_replica1|0-7fff|Sale_shard2_replica1|8000-|

With this scenario, a join between Sale and Product_category can be applied only 
with the “Any” condition; otherwise the range check will fail, preventing the join 
query.


was (Author: shikhasomani):
*Any* option is introduced to support existing cloud join scenario i.e. where 
fromCollection is singly sharded. If asserting Any’s behavior is the only 
concern, will write test cases for thorough verification. Below is a scenario 
which resembles real world and will write test case according to it.

*Scenario*: 
There are 2 collections in a 2 node cluster:
* product_category: It has values like books, toys, etc. _Singly sharded_
* sale: Holds information about current sale. Sale and product collection are 
related, sale collection contains ‘product key’. _Multi sharded_

*Query*: Find sale information with product information:
{!join from=id to =productKey fromCollection= product_category}

*Cluster information*:

||Node1| ||Node2|| ||
|Product_category_shard1_replica1|8000-7fff|Product_category_shard1_replica2|8000-7fff|
|Sale_shard1_replica1|0-7fff|Sale_shard2_replica1|8000-|

With this scenario join can be applied between Sale and Product_category only 
with “Any” condition only otherwise range check will fail, preventing join 
query.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I have a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the number of slices when we want to verify the 
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-06-16 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334957#comment-15334957
 ] 

Shikha Somani commented on SOLR-8297:
-

The *Any* option is introduced to support the existing cloud join scenario, i.e. where 
fromCollection is singly sharded. If asserting Any’s behavior is the only 
concern, I will write test cases for thorough verification. Below is a scenario 
which resembles the real world, and I will write a test case based on it.

*Scenario*: 
There are 2 collections in a 2 node cluster:
* product_category: It has values like books, toys, etc. _Singly sharded_
* sale: Holds information about current sale. Sale and product collection are 
related, sale collection contains ‘product key’. _Multi sharded_

*Query*: Find sale information with product information:
{!join from=id to=productKey fromCollection=product_category}

*Cluster information*:

||Node1|| ||Node2|| ||
|Product_category_shard1_replica1|8000-7fff|Product_category_shard1_replica2|8000-7fff|
|Sale_shard1_replica1|0-7fff|Sale_shard2_replica1|8000-|

With this scenario, a join between Sale and Product_category can be applied only 
with the “Any” condition; otherwise the range check will fail, preventing the join 
query.
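For reference, issued through SolrJ the query above would look roughly like the sketch below; the host, the inner category:books filter and the class name are illustrative only, while the collection and field names follow the scenario above:

{noformat}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CrossCollectionJoinExample {
  public static void main(String[] args) throws Exception {
    // Query the multi-sharded "sale" collection, joining from the singly
    // sharded "product_category" collection (names from the scenario above).
    try (HttpSolrClient sale = new HttpSolrClient("http://node1:8983/solr/sale")) {
      SolrQuery q = new SolrQuery("*:*");
      // Hypothetical inner query (category:books) against the from collection.
      q.addFilterQuery("{!join from=id to=productKey fromCollection=product_category}category:books");
      QueryResponse rsp = sale.query(q);
      System.out.println("matching sale docs: " + rsp.getResults().getNumFound());
    }
  }
}
{noformat}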

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly: In my use-case, I have a join on a facet.query and when my 
> results are only found in 1 shard and the facet.query with the join is 
> querying the last replica of the last slice, then the exception is not thrown.
> I believe it's better to verify the number of slices when we want to verify the 
> "multiple shards not yet supported" exception (so exception is thrown when 
> zkController.getClusterState().getSlices(fromIndex).size()>1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of the collections is set to the same "key-field" (collection 
> of "fromindex" has router.field = "from" field and collection joined to has 
> router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334905#comment-15334905
 ] 

Hrishikesh Gadre edited comment on SOLR-7374 at 6/16/16 11:14 PM:
--

bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?

Sure, I am working on this. It looks like we may not be able to provide 
identical behavior w.r.t. the core-level backup/restore API. 
Specifically, when the user does not specify the "location" parameter, the existing 
ReplicationHandler implementation uses a directory relative to the "data" 
directory, e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if the user is 
using a different file-system for backup/restore, e.g. consider a case where a 
user configures an HDFS repository without a default location (and uses the local 
file-system for storing index files). 

Note - when only a single repository is configured, we use it as a "default". 
Now consider a case where a user invokes backup/restore without specifying the 
"location" and "repository" parameters: we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if the "repository" parameter is specified, then the location must be 
specified either via the "location" parameter OR via a repository configuration in 
solr.xml.

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to handle the 
use-case mentioned above. It also helps to maintain the backwards compatibility 
with the existing API behavior. 

On the other hand the Core level BACKUP API always fetches the "default" 
repository configuration from solr.xml and require that location be specified 
either via "location" parameter OR via a repository configuration. I hope this 
small difference in API behavior should be OK (since we should aim to retire 
one of the APIs).



was (Author: hgadre):
bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?

Sure I am working on this. It looks like we may not be able to provide 
identical behavior w.r.t. core level backup/restore API. 
Specifically when user does not specify "location" parameter, the existing 
ReplicationHandler implementation uses a directory relative to the "data" 
directory. e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if user is 
using a different file-system for backup/restore. e.g. consider a case when a 
user configures HDFS repository without a default location (and using local 
file-system for storing index files). When only a single repository is 
configured, we use it as a "default". 

Now consider a case when a user invokes backup/restore without specifying 
"location" and "repository" parameters, we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if "repository" parameter is specified then location must be 
specified either via "location" parameter OR via a repository configuration in 
solr.xml

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to handle the 
use-case mentioned above. It also helps to maintain the backwards compatibility 
with the existing API behavior. 

On the other hand the Core level BACKUP API always fetches the "default" 
repository configuration from solr.xml and require that location be specified 
either via "location" parameter OR via a repository configuration. I hope this 
small difference in API behavior should be OK (since we should aim to retire 
one of the APIs).


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently 

[jira] [Comment Edited] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334905#comment-15334905
 ] 

Hrishikesh Gadre edited comment on SOLR-7374 at 6/16/16 11:12 PM:
--

bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?

Sure I am working on this. It looks like we may not be able to provide 
identical behavior w.r.t. core level backup/restore API. 
Specifically when user does not specify "location" parameter, the existing 
ReplicationHandler implementation uses a directory relative to the "data" 
directory. e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if user is 
using a different file-system for backup/restore. e.g. consider a case when a 
user configures HDFS repository without a default location (and using local 
file-system for storing index files). When only a single repository is 
configured, we use it as a "default". 

Now consider a case when a user invokes backup/restore without specifying 
"location" and "repository" parameters, we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if "repository" parameter is specified then location must be 
specified either via "location" parameter OR via a repository configuration in 
solr.xml

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to handle the 
use-case mentioned above. It also helps to maintain the backwards compatibility 
with the existing API behavior. 

On the other hand the Core level BACKUP API always fetches the "default" 
repository configuration from solr.xml and require that location be specified 
either via "location" parameter OR via a repository configuration. I hope this 
small difference in API behavior should be OK (since we should aim to retire 
one of the APIs).



was (Author: hgadre):
bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?


It looks like we may not be able to provide identical behavior w.r.t. core 
level backup/restore API. Specifically when user does not specify "location" 
parameter, the existing implementation uses a directory relative to the "data" 
directory. e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if user is 
using a different file-system for backup/restore. e.g. consider a case when a 
user configures HDFS repository without a default location (and using local 
file-system for storing index files). When only a single repository is 
configured, we consider it as a "default". 

Now consider a case when a user invokes backup/restore without specifying 
"location" and "repository" parameters, we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if "repository" parameter is specified then location must be 
specified either via "location" parameter OR via a repository configuration in 
solr.xml

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to 
handle the use-case mentioned above. It also helps to maintain the backwards 
compatibility with the existing API behavior. On the other hand the Core level 
BACKUP API always fetches the "default" repository configuration from solr.xml 
and require that location be specified either via "location" parameter OR via a 
repository configuration. I hope this small difference in API behavior should 
be OK (since we should aim to retire one of the APIs).


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to 

[jira] [Comment Edited] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334905#comment-15334905
 ] 

Hrishikesh Gadre edited comment on SOLR-7374 at 6/16/16 11:10 PM:
--

bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?


It looks like we may not be able to provide identical behavior w.r.t. core 
level backup/restore API. Specifically when user does not specify "location" 
parameter, the existing implementation uses a directory relative to the "data" 
directory. e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if user is 
using a different file-system for backup/restore. e.g. consider a case when a 
user configures HDFS repository without a default location (and using local 
file-system for storing index files). When only a single repository is 
configured, we consider it as a "default". 

Now consider a case when a user invokes backup/restore without specifying 
"location" and "repository" parameters, we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if "repository" parameter is specified then location must be 
specified either via "location" parameter OR via a repository configuration in 
solr.xml

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to 
handle the use-case mentioned above. It also helps to maintain the backwards 
compatibility with the existing API behavior. On the other hand the Core level 
BACKUP API always fetches the "default" repository configuration from solr.xml 
and require that location be specified either via "location" parameter OR via a 
repository configuration. I hope this small difference in API behavior should 
be OK (since we should aim to retire one of the APIs).



was (Author: hgadre):
bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?


It looks like we may have to break backwards compatibility for this. 
Specifically when user does not specify "location" parameter, the existing 
implementation uses a directory relative to the "data" directory. e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if user is 
using a different file-system for backup/restore. e.g. consider a case when a 
user configures HDFS repository without a default location (and using local 
file-system for storing index files). When only a single repository is 
configured, we consider it as a "default". 

Now consider a case when a user invokes backup/restore without specifying 
"location" and "repository" parameters, we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if "repository" parameter is specified then location must be 
specified either via "location" parameter OR via a repository configuration in 
solr.xml

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to 
handle the use-case mentioned above. It also helps to maintain the backwards 
compatibility with the existing API behavior. On the other hand the Core level 
BACKUP API always fetches the "default" repository configuration from solr.xml 
and require that location be specified either via "location" parameter OR via a 
repository configuration. I hope this small difference in API behavior should 
be OK (since we should aim to retire one of the APIs).


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index 

[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334905#comment-15334905
 ] 

Hrishikesh Gadre commented on SOLR-7374:


bq. For the scope of this Jira can we just support it in ReplicationHandler as 
well ?


It looks like we may have to break backwards compatibility for this. 
Specifically, when the user does not specify the "location" parameter, the existing 
implementation uses a directory relative to the "data" directory, e.g.
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L419
https://github.com/apache/lucene-solr/blob/a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef/solr/core/src/java/org/apache/solr/handler/SnapShooter.java#L67

While this logic is OK on a local file-system, it would not work if the user is 
using a different file-system for backup/restore, e.g. consider a case where a 
user configures an HDFS repository without a default location (and uses the local 
file-system for storing index files). When only a single repository is 
configured, we consider it the "default". 

Now consider a case where a user invokes backup/restore without specifying the 
"location" and "repository" parameters: we don't want to use the "data" 
directory as the location since it may not be valid on HDFS. So I am adding a 
constraint that if the "repository" parameter is specified, then the location must be 
specified either via the "location" parameter OR via a repository configuration in 
solr.xml.

When "repository" parameter is not specified, we default to "LocalFileSystem" 
instead of configured default repository in solr.xml. This is to 
handle the use-case mentioned above. It also helps to maintain the backwards 
compatibility with the existing API behavior. On the other hand the Core level 
BACKUP API always fetches the "default" repository configuration from solr.xml 
and require that location be specified either via "location" parameter OR via a 
repository configuration. I hope this small difference in API behavior should 
be OK (since we should aim to retire one of the APIs).
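For concreteness, a ReplicationHandler backup call under that constraint might look roughly like the request below; the core name, the repository name "hdfs" (assumed to be defined in solr.xml) and the location path are all hypothetical:

http://localhost:8983/solr/techproducts/replication?command=backup&name=nightly&repository=hdfs&location=/backups/solr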


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334893#comment-15334893
 ] 

ASF GitHub Bot commented on SOLR-8981:
--

Github user uschindler commented on the issue:

https://github.com/apache/lucene-solr/pull/44
  
> I think this should work... ant precommit worked in Linux with these 
modifications. I kept getting hangs with ant jar-checksums in Windows.

If you check out with git on Windows using auto-eol, it fails. The reason is 
that git treats sha1 files as text and converts their line endings.


> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Uwe Schindler
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #44: SOLR-8981

2016-06-16 Thread uschindler
Github user uschindler commented on the issue:

https://github.com/apache/lucene-solr/pull/44
  
> I think this should work... ant precommit worked in Linux with these 
modifications. I kept getting hangs with ant jar-checksums in Windows.

If you check out with git on Windows using auto-eol, it fails. The reason is 
that git treats sha1 files as text and converts their line endings.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 292 - Failure!

2016-06-16 Thread Chris Hostetter

: : I ran this test before I committed the backport, but it succeeded then.  
: : I beasted it on current branch_5_5 and 49/100 seeds succeeded.
: 
: one of the things that changed as part of LUCENE-7132 was that I made all 
: the BQ related tests start randomizing setDisableCoord() ... so you might 
: be seeing some previously unidentified coord related bug that is only in 
: the 5.x line of code?
: 
: that could probably jive with the roughly 50% failure ratio you're 
: seeing?

Hmmm  nope.  Even with the setDisableCoord commented out (but still 
consuming random().nextBoolean() consistently) the same seeds reliably 
fail on branch_5_5.

Looks like the "~50%" comes from the "use filler docs or not?" bit of the 
test?  With the patch below I can't find any seeds to fail -- which makes 
it seem like the crux of the original bug (results incorrect when docs are 
in diff blocks) is still relevant even after the backport to branch_5_5.

Mike -- any idea what might still be the problem here?



-Hoss
http://www.lucidworks.com/


diff --git 
a/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java 
b/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
index d97d8d4..596eb64 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestBoolean2.java
@@ -67,6 +67,7 @@ public class TestBoolean2 extends LuceneTestCase {
   public static void beforeClass() throws Exception {
 // in some runs, test immediate adjacency of matches - in others, force a 
full bucket gap betwen docs
 NUM_FILLER_DOCS = random().nextBoolean() ? 0 : BooleanScorer.SIZE;
+NUM_FILLER_DOCS = 0; // nocommit
 PRE_FILLER_DOCS = TestUtil.nextInt(random(), 0, (NUM_FILLER_DOCS / 2));
 
 directory = newDirectory();


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 202 - Still Failing!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/202/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:53307/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:53307/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([2CE8DE1A020A97C5:A4BCE1C0ACF6FA3D]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

Re: [JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 292 - Failure!

2016-06-16 Thread Chris Hostetter

: I ran this test before I committed the backport, but it succeeded then.  
: I beasted it on current branch_5_5 and 49/100 seeds succeeded.

one of the things that changed as part of LUCENE-7132 was that I made all 
the BQ related tests start randomizing setDisableCoord() ... so you might 
be seeing some previously unidentified coord related bug that is only in 
the 5.x line of code?

that could probably jive with the roughly 50% failure ratio you're 
seeing?



: 
: I’ll debug and see if there’s some obvious cause.
: 
: --
: Steve
: www.lucidworks.com
: 
: > On Jun 16, 2016, at 4:25 PM, Steve Rowe  wrote:
: > 
: > I’m investigating - this is very likely caused by my LUCENE-7132 backport.
: > 
: > --
: > Steve
: > www.lucidworks.com
: > 
: >> On Jun 16, 2016, at 3:28 PM, Policeman Jenkins Server 
 wrote:
: >> 
: >> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/292/
: >> Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC
: >> 
: >> 4 tests failed.
: >> FAILED:  org.apache.lucene.search.TestBoolean2.testQueries01
: >> 
: >> Error Message:
: >> hits1 doc nrs for hit 0 expected:<4456> but was:<6505>
: >> 
: >> Stack Trace:
: >> junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 
expected:<4456> but was:<6505>
: >>at 
__randomizedtesting.SeedInfo.seed([5787EE10A58E0A9C:4AD9A86330F2E174]:0)
: >>at junit.framework.Assert.fail(Assert.java:50)
: >>at junit.framework.Assert.failNotEquals(Assert.java:287)
: >>at junit.framework.Assert.assertEquals(Assert.java:67)
: >>at junit.framework.Assert.assertEquals(Assert.java:199)
: >>at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
: >>at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
: >>at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
: >>at 
org.apache.lucene.search.TestBoolean2.testQueries01(TestBoolean2.java:226)
: >>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
: >>at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
: >>at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
: >>at java.lang.reflect.Method.invoke(Method.java:606)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
: >>at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
: >>at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
: >>at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
: >>at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
: >>at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
: >>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: >>at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
: >>at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
: >>at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
: >>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
: >>at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
: >>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: >>at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
: >>at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
: >>at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
: >>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: >>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: >>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
: >>at 

[jira] [Commented] (SOLR-8521) Add documentation for how to use Solr JDBC driver with SQL client like DB Visualizer

2016-06-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334744#comment-15334744
 ] 

Jan Høydahl commented on SOLR-8521:
---

I vote for Kevin's compromise:
{quote}
Reference guide: generic guide on what is required to use SolrJ JDBC (no 
screenshots)
Wiki: screenshot-by-screenshot walkthough with a page per client (DbVisualizer, 
SQuirrel SQL, Apache Zeppelin, etc)?
{quote}

It would also be ok to simply link to the existing blog posts from the 
refguide, we already do that for other features.

> Add documentation for how to use Solr JDBC driver with SQL client like DB 
> Visualizer
> 
>
> Key: SOLR-8521
> URL: https://issues.apache.org/jira/browse/SOLR-8521
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Attachments: dbvisualizer_solrjdbc.zip, 
> solr_jdbc_dbvisualizer_20160203.pdf
>
>
> Currently this requires the following:
> * a JDBC SQL client program (like DBVisualizer or SQuirrelSQL)
> * all jars from solr/dist/solrj-lib/* to be on the SQL client classpath
> * solr/dist/solr-solrj-6.0.0-SNAPSHOT.jar on the SQL client classpath
> * a valid JDBC connection string (like 
> jdbc:solr://SOLR_ZK_CONNECTION_STRING?collection=COLLECTION_NAME)
> * without SOLR-8213, the username/password supplied by the SQL client will be 
> ignored.
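As a rough illustration of what the generic refguide text could show, a minimal JDBC smoke test along these lines works once the jars above are on the client classpath; the ZK connection string, collection name, field and class name are placeholders:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcSmokeTest {
  public static void main(String[] args) throws Exception {
    // Connection string shape as in the description above; hosts/collection are made up.
    String url = "jdbc:solr://zk1:2181,zk2:2181/solr?collection=mycollection";
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT id FROM mycollection LIMIT 10")) {
      while (rs.next()) {
        System.out.println(rs.getString("id"));
      }
    }
  }
}
{noformat}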



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 292 - Failure!

2016-06-16 Thread Steve Rowe
Confirmed that it’s the LUCENE-7132 backport - reverting to the commit just 
before that one causes the seed to stop failing.

The seed doesn’t reproduce on branch_6_0, so the problem appears to be 
exclusive to 5.x.

I ran this test before I committed the backport, but it succeeded then.  I 
beasted it on current branch_5_5 and 49/100 seeds succeeded.

I’ll debug and see if there’s some obvious cause.

--
Steve
www.lucidworks.com

> On Jun 16, 2016, at 4:25 PM, Steve Rowe  wrote:
> 
> I’m investigating - this is very likely caused by my LUCENE-7132 backport.
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Jun 16, 2016, at 3:28 PM, Policeman Jenkins Server  
>> wrote:
>> 
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/292/
>> Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC
>> 
>> 4 tests failed.
>> FAILED:  org.apache.lucene.search.TestBoolean2.testQueries01
>> 
>> Error Message:
>> hits1 doc nrs for hit 0 expected:<4456> but was:<6505>
>> 
>> Stack Trace:
>> junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 
>> expected:<4456> but was:<6505>
>>  at 
>> __randomizedtesting.SeedInfo.seed([5787EE10A58E0A9C:4AD9A86330F2E174]:0)
>>  at junit.framework.Assert.fail(Assert.java:50)
>>  at junit.framework.Assert.failNotEquals(Assert.java:287)
>>  at junit.framework.Assert.assertEquals(Assert.java:67)
>>  at junit.framework.Assert.assertEquals(Assert.java:199)
>>  at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
>>  at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
>>  at 
>> org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
>>  at 
>> org.apache.lucene.search.TestBoolean2.testQueries01(TestBoolean2.java:226)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>  at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>  at java.lang.reflect.Method.invoke(Method.java:606)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>>  at 
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>>  at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>  at 
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>>  at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>>  at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>>  at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>>  at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>>  at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> 

[jira] [Comment Edited] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321376#comment-15321376
 ] 

Jan Høydahl edited comment on SOLR-9194 at 6/16/16 9:07 PM:


bq. Minor quibble: We should require a hyphen before the cp, as:
I hate that hyphen :-) It feels wrong, since it is really not an option but a 
command argument. I think it was technical limitations from SolrCLI.java which 
made it easiest to parse arguments that way?

So I'm more for removing the mandatory dash from up/downconfig (leaving the 
dash variant working for all of 6.x).

We then get this "man page" for {{solr zk}}
{noformat}
Usage: solr zk upconfig|downconfig -d <confdir> -n <configName> [-z zkHost]
       solr zk cp [-r] <src> <dest> [-z zkHost]
       solr zk rm [-r] <path> [-z zkHost]
       solr zk mv <src> <dest> [-z zkHost]

 upconfig uploads a configset from the local machine to Zookeeper. 
(Backcompat: -upconfig)

 downconfig downloads a configset from Zookeeper to the local machine. 
(Backcompat: -downconfig)

 cp copies files or folders to/from Zookeeper
    <src>, <dest> : [file:]/path/to/local/file or zk:/path/to/zk/node
    When <dest> is a zk resource, <src> may be "."
    If <dest> ends with "/", files are copied into that folder
    Wildcards are not supported

 rm removes files or folders on Zookeeper
    <path>: [zk:]/path/to/zk/node

 mv moves and/or renames files internally on Zookeeper

 -z zkHost        Zookeeper connection string. Only needed if not 
                  configured in solr.in.sh

 -r               Recursive copying

 -n configName    Name of the configset in Zookeeper that will be the 
                  destination of 'upconfig' and the source for 'downconfig'.

 -d confdir       The local directory the configuration will be uploaded 
                  from for 'upconfig' or downloaded to for 'downconfig'. For 
                  'upconfig', this can be one of the example configsets 
                  (basic_configs, data_driven_schema_configs or 
                  sample_techproducts_configs) or an arbitrary directory.
{noformat}
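A hypothetical invocation with that syntax (file path and zkHost made up) would then be:
{noformat}
bin/solr zk cp file:/local/conf/security.json zk:/security.json -z localhost:9983
{noformat}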



was (Author: janhoy):
bq. Minor quibble: We should require a hyphen before the cp, as:
I hate that hyphen :-) It feels wrong, since it is really not an option but a 
command argument. I think it was technical limitations from SolrCLI.java which 
made it easiest to parse arguments that way?

So I'm more for removing the mandatory dash from up/downconfig (leaving the 
dash variant working for all of 6.x).

We then get this "man page" for {{solr zk}}
{noformat}
Usage: solr zk upconfig|downconfig -d  -n  [-z zkHost]
   solr zk cp [-r]   [-z zkHost]
   solr zk rm [-r]  [-z zkHost]
   solr zk mv   [-z zkHost]

 upconfig uploads a configset from the local machine to Zookeeper. 
(Backcompat: -upconfig)

 downconfig downloads a configset from Zookeeper to the local machine. 
(Backcompat: -downconfig)

 cp copies files or folders to/from Zookeeper
,  : [file:]/path/to/local/file or zk:/path/to/zk/node
When  is a zk resource,  may be "."
If  ends with "/", files are copied into that 
folder
Wildcards are not supported

 rm removes files or folders on Zookeeper
: [zk:]/path/to/zk/node

 mv moves and/or renames files internally on Zookeeper

 -z zkHostZookeeper connection string.

 -r   Recursive copying

 -n configNameName of the configset in Zookeeper that will be the 
destinatino of
   'upconfig' and the source for 'downconfig'.

 -d confdir   The local directory the configuration will be uploaded 
from for
  'upconfig' or downloaded to for 'downconfig'. For 
'upconfig', this
  can be one of the example configsets, basic_configs, 
data_driven_schema_configs or
  sample_techproducts_configs or an arbitrary directory.
{noformat}


> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but 

[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334682#comment-15334682
 ] 

Jan Høydahl commented on SOLR-9194:
---

bq. Jan Høydahl In your suggested help text, the \[-z zkHost\] indicates an 
optional param to me. Should it be mandatory or is there something I'm missing 
here?

The reason I propose {{-z zkHost}} to be optional, is that {{solr.in.sh}} 
already contains {{ZK_HOST=}}, so the {{-z}} would only be needed if that is 
not configured or you want to talk to another ZK.
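
For reference, this is roughly what that looks like in {{solr.in.sh}} (the host list below is just an example):
{noformat}
ZK_HOST="zk1:2181,zk2:2181,zk3:2181"
{noformat}
With that set, the {{bin/solr zk ...}} commands would not need an explicit {{-z}}.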

> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 652 - Failure!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/652/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.handler.component.SpatialHeatmapFacetsTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([84B7C7CDA2E33448]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.SpatialHeatmapFacetsTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([84B7C7CDA2E33448]:0)




Build Log:
[...truncated 12181 lines...]
   [junit4] Suite: org.apache.solr.handler.component.SpatialHeatmapFacetsTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SpatialHeatmapFacetsTest_84B7C7CDA2E33448-001/init-core-data-001
   [junit4]   2> 343524 INFO  
(SUITE-SpatialHeatmapFacetsTest-seed#[84B7C7CDA2E33448]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 343525 INFO  
(SUITE-SpatialHeatmapFacetsTest-seed#[84B7C7CDA2E33448]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 344189 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SpatialHeatmapFacetsTest_84B7C7CDA2E33448-001/tempDir-001/control/cores/collection1
   [junit4]   2> 344191 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 344194 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@3a75e996{/,null,AVAILABLE}
   [junit4]   2> 344200 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@4daaa0b8{HTTP/1.1,[http/1.1]}{127.0.0.1:55782}
   [junit4]   2> 344200 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.e.j.s.Server Started @347819ms
   [junit4]   2> 344200 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {schema=schema-spatial.xml, 
solrconfig=solrconfig-basic.xml, hostContext=/, hostPort=55782, 
coreRootDirectory=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SpatialHeatmapFacetsTest_84B7C7CDA2E33448-001/tempDir-001/control/cores}
   [junit4]   2> 344201 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
sun.misc.Launcher$AppClassLoader@6d06d69c
   [junit4]   2> 344201 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SpatialHeatmapFacetsTest_84B7C7CDA2E33448-001/tempDir-001/control'
   [junit4]   2> 344201 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 344201 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.SolrResourceLoader solr home defaulted to 'solr/' (could not find 
system property or JNDI)
   [junit4]   2> 344201 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SpatialHeatmapFacetsTest_84B7C7CDA2E33448-001/tempDir-001/control/solr.xml
   [junit4]   2> 344208 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.CorePropertiesLocator Config-defined core root directory: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SpatialHeatmapFacetsTest_84B7C7CDA2E33448-001/tempDir-001/control/cores
   [junit4]   2> 344208 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.CoreContainer New CoreContainer 1597717563
   [junit4]   2> 344208 INFO  
(TEST-SpatialHeatmapFacetsTest.testPng-seed#[84B7C7CDA2E33448]) [] 
o.a.s.c.CoreContainer Loading cores into CoreContainer 

[jira] [Commented] (SOLR-8521) Add documentation for how to use Solr JDBC driver with SQL client like DB Visualizer

2016-06-16 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334646#comment-15334646
 ] 

Cassandra Targett commented on SOLR-8521:
-

Picking up the screenshot conversation again...

In prepping the Ref Guide for the 6.1 release, I noticed that the PDF for 6.1 is 
17.8 MB in size, while the 6.0 version was 9.5 MB. Adding up the sizes of all 
the images added for DBVisualizer, Squirrel, and Zeppelin, they account for 
~6 MB of the added size. There are an additional 42 pages of PDF, bringing the 
whole thing to over 700 pages.

I would like to resize some of the screenshots so they are smaller (when it's 
possible to do so without losing their value), and take a look at omitting some 
to replace them with text (again, where the screenshot doesn't add a lot of 
extra value). I appreciate the visual walk-through, but think we could possibly 
"tell the tale", as it were, without as many screenshots. I won't delete any, 
so if I screw it up at all and we need to add any back, we can do so. Just 
wanted to give you a heads up [~risdenk] in case you had a different point of 
view.

> Add documentation for how to use Solr JDBC driver with SQL client like DB 
> Visualizer
> 
>
> Key: SOLR-8521
> URL: https://issues.apache.org/jira/browse/SOLR-8521
> Project: Solr
>  Issue Type: Sub-task
>  Components: documentation, SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Attachments: dbvisualizer_solrjdbc.zip, 
> solr_jdbc_dbvisualizer_20160203.pdf
>
>
> Currently this requires the following:
> * a JDBC SQL client program (like DBVisualizer or SQuirrelSQL)
> * all jars from solr/dist/solrj-lib/* to be on the SQL client classpath
> * solr/dist/solr-solrj-6.0.0-SNAPSHOT.jar on the SQL client classpath
> * a valid JDBC connection string (like 
> jdbc:solr://SOLR_ZK_CONNECTION_STRING?collection=COLLECTION_NAME)
> * without SOLR-8213, the username/password supplied by the SQL client will be 
> ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9218) prevent filter exclusions in facet from caching main query as a filter

2016-06-16 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9218:
--

 Summary: prevent filter exclusions in facet from caching main 
query as a filter
 Key: SOLR-9218
 URL: https://issues.apache.org/jira/browse/SOLR-9218
 Project: Solr
  Issue Type: Improvement
  Components: Facet Module, faceting
Affects Versions: 6.1
Reporter: Mikhail Khludnev
Priority: Minor


When I specify a filter exclusion for calculating facets, the main query is cached 
as a filter. I'm concerned about the hit ratio. Can both facet implementations, 
[the 
old|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/request/SimpleFacets.java#L260]
 and [the new 
one|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/facet/FacetProcessor.java#L156],
 just use the existing base docset for calculating the exclusion docset?

[~ysee...@gmail.com] please provide your concerns. 
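
For context, a minimal SolrJ sketch (field and tag names are made up) of the kind of request with a tagged filter and a facet exclusion that triggers this:
{noformat}
import org.apache.solr.client.solrj.SolrQuery;

public class ExcludedFilterFacetExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    // tag the filter so it can be excluded when counting the facet
    q.addFilterQuery("{!tag=colorTag}color:red");
    q.setFacet(true);
    // exclude the tagged filter for this facet field
    q.addFacetField("{!ex=colorTag}color");
    System.out.println(q); // would be sent with a SolrClient in a real application
  }
}
{noformat}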



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 292 - Failure!

2016-06-16 Thread Steve Rowe
I’m investigating - this is very likely caused by my LUCENE-7132 backport.

--
Steve
www.lucidworks.com

> On Jun 16, 2016, at 3:28 PM, Policeman Jenkins Server wrote:
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/292/
> Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC
> 
> 4 tests failed.
> FAILED:  org.apache.lucene.search.TestBoolean2.testQueries01
> 
> Error Message:
> hits1 doc nrs for hit 0 expected:<4456> but was:<6505>
> 
> Stack Trace:
> junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4456> 
> but was:<6505>
>   at 
> __randomizedtesting.SeedInfo.seed([5787EE10A58E0A9C:4AD9A86330F2E174]:0)
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.failNotEquals(Assert.java:287)
>   at junit.framework.Assert.assertEquals(Assert.java:67)
>   at junit.framework.Assert.assertEquals(Assert.java:199)
>   at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
>   at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
>   at 
> org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
>   at 
> org.apache.lucene.search.TestBoolean2.testQueries01(TestBoolean2.java:226)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   

Re: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili

2016-06-16 Thread Joel Bernstein
Congratulations Tommaso!

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, Jun 16, 2016 at 3:39 PM, Jan Høydahl  wrote:

> Congrats Tommaso!
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 16 Jun 2016, at 00:36, Michael McCandless <luc...@mikemccandless.com> wrote:
>
> Once a year the Lucene PMC rotates the PMC chair and Apache Vice
> President position.
>
> This year we have nominated and elected Tommaso Teofili as the chair, and
> today the board just approved it, so now it's official.
>
> Congratulations Tommaso!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
>


[jira] [Created] (SOLR-9217) {!join score=..}.. should delay join to createWeight

2016-06-16 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9217:
--

 Summary: {!join score=..}.. should delay join to createWeight
 Key: SOLR-9217
 URL: https://issues.apache.org/jira/browse/SOLR-9217
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 6.1
Reporter: Mikhail Khludnev


{{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
{{JoinUtil.createJoinQuery}} on {{rewrite()}}, but that makes it less effective in 
{{filter(...)}} syntax. It's better to do that in {{createWeight()}}, as is 
done in the classic Solr {{JoinQuery}}.
The existing tests are enough; we just need to assert the rewrite behavior - it 
should rewrite on an enclosing range query or so, and shouldn't on a plain term query. 
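
For context, a minimal SolrJ sketch (field and term names are made up) of the kind of request where the score join ends up inside {{filter(...)}} syntax:
{noformat}
import org.apache.solr.client.solrj.SolrQuery;

public class ScoreJoinInFilterExample {
  public static void main(String[] args) {
    // a score join wrapped in filter(...) inside the main query
    SolrQuery q = new SolrQuery(
        "name:foo AND filter({!join from=parent_id to=id score=max}category:books)");
    System.out.println(q); // would be sent with a SolrClient in a real application
  }
}
{noformat}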
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9217) {!join score=..}.. should delay join to createWeight

2016-06-16 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9217:
---
Priority: Minor  (was: Major)

> {!join score=..}.. should delay join to createWeight
> 
>
> Key: SOLR-9217
> URL: https://issues.apache.org/jira/browse/SOLR-9217
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Affects Versions: 6.1
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
>
> {{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes 
> {{JoinUtil.createJoinQuery}} on {{rewrite()}}, but that makes it less effective in 
> {{filter(...)}} syntax. It's better to do that in {{createWeight()}}, as is 
> done in the classic Solr {{JoinQuery}}.
> The existing tests are enough; we just need to assert the rewrite behavior - it 
> should rewrite on an enclosing range query or so, and shouldn't on a plain term 
> query.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



How to implement BM25 in Lucene

2016-06-16 Thread vitaly bulgakov
Need help in implementing BM25 to replace a standard Lucene TF-IDF scoring. 
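
A minimal sketch of the usual approach (field names and parameter values below are arbitrary): Lucene's scoring model is pluggable via the Similarity API, and BM25Similarity ships with Lucene. Set it on both the IndexWriterConfig (so norms are encoded accordingly) and the IndexSearcher. Note that from Lucene 6.0 onward BM25 is already the default similarity.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.similarities.BM25Similarity;
import org.apache.lucene.store.RAMDirectory;

public class Bm25Example {
  public static void main(String[] args) throws Exception {
    BM25Similarity bm25 = new BM25Similarity(1.2f, 0.75f); // k1, b
    RAMDirectory dir = new RAMDirectory();

    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setSimilarity(bm25);                       // use BM25 at index time
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      Document doc = new Document();
      doc.add(new TextField("body", "hello bm25 world", Field.Store.NO));
      writer.addDocument(doc);
    }

    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      searcher.setSimilarity(bm25);                // and at query time
      System.out.println(
          searcher.search(new TermQuery(new Term("body", "bm25")), 10).totalHits);
    }
  }
}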



--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-implement-BM25-in-Lucene-tp4282727.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7342) WordDelimiterFilter should observe KeywordAttribute to pass these tokens through

2016-06-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334544#comment-15334544
 ] 

David Smiley commented on LUCENE-7342:
--

A separate issue might be to refactor the APIs of TokenFilters that take a 
CharArraySet input to instead take a 
{{java.util.function.Predicate}}.  Advanced users could even 
construct a Predicate instance with access to the AttributeSource to look at 
whatever attributes it wants, provided that the TokenFilters only invoke it 
when the token stream is positioned to the token in question.

> WordDelimiterFilter should observe KeywordAttribute to pass these tokens 
> through
> 
>
> Key: LUCENE-7342
> URL: https://issues.apache.org/jira/browse/LUCENE-7342
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: David Smiley
>
> I have a text analysis requirement in which I want certain tokens to not be 
> processed by WordDelimiterFilter -- i.e. they should pass through that 
> filter.  WDF, like several other TokenFilters, has a configurable word list 
> but this list is static producing a concrete CharArraySet.  Thus, for 
> example, I can't filter by a regexp nor can I filter based on other 
> attributes.
> A simple solution that makes sense to me is to have WDF use KeywordAttribute 
> to know if it should skip the token.  KeywordAttribute seems fairly generic 
> as to how it can be used, although granted today it's only used by the 
> stemmers.  That attribute isn't named "StemmerIgnoreAttribute" or some-such; 
> it's generic so I think it's fine for WDF to use it in a similar way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7342) WordDelimiterFilter should observe KeywordAttribute to pass these tokens through

2016-06-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334528#comment-15334528
 ] 

David Smiley commented on LUCENE-7342:
--

I considered wrapping WDF; it was an interesting thought experiment with a 
possible solution but our TokenStream API makes doing this very complex.  It 
would involve an additional collaborating TokenStream instance to provide as 
the input to the delegated TokenFilter, thus intercepting the input. The input 
intercepting TokenFilter would detect a token should pass through and then 
captureState() in a loop until it finds a token not matching the predicate.  
The wrapping TokenFilter would call delegateTokenFilter.incrementToken() but 
then would see if there are any cached tokens.  If there are, it would 
captureState, replay the cached tokens, then replay the just captured state. 
This is a big mess, awkward to use, and has some overhead.

> WordDelimiterFilter should observe KeywordAttribute to pass these tokens 
> through
> 
>
> Key: LUCENE-7342
> URL: https://issues.apache.org/jira/browse/LUCENE-7342
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: David Smiley
>
> I have a text analysis requirement in which I want certain tokens to not be 
> processed by WordDelimiterFilter -- i.e. they should pass through that 
> filter.  WDF, like several other TokenFilters, has a configurable word list 
> but this list is static producing a concrete CharArraySet.  Thus, for 
> example, I can't filter by a regexp nor can I filter based on other 
> attributes.
> A simple solution that makes sense to me is to have WDF use KeywordAttribute 
> to know if it should skip the token.  KeywordAttribute seems fairly generic 
> as to how it can be used, although granted today it's only used by the 
> stemmers.  That attribute isn't named "StemmerIgnoreAttribute" or some-such; 
> it's generic so I think it's fine for WDF to use it in a similar way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7342) WordDelimiterFilter should observe KeywordAttribute to pass these tokens through

2016-06-16 Thread David Smiley (JIRA)
David Smiley created LUCENE-7342:


 Summary: WordDelimiterFilter should observe KeywordAttribute to 
pass these tokens through
 Key: LUCENE-7342
 URL: https://issues.apache.org/jira/browse/LUCENE-7342
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Reporter: David Smiley


I have a text analysis requirement in which I want certain tokens to not be 
processed by WordDelimiterFilter -- i.e. they should pass through that filter.  
WDF, like several other TokenFilters, has a configurable word list but this 
list is static producing a concrete CharArraySet.  Thus, for example, I can't 
filter by a regexp nor can I filter based on other attributes.

A simple solution that makes sense to me is to have WDF use KeywordAttribute to 
know if it should skip the token.  KeywordAttribute seems fairly generic as to 
how it can be used, although granted today it's only used by the stemmers.  
That attribute isn't named "StemmerIgnoreAttribute" or some-such; it's generic 
so I think it's fine for WDF to use it in a similar way.
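
For illustration only (not a patch for WDF itself), a minimal sketch of the pattern being proposed: a TokenFilter that consults KeywordAttribute and leaves keyword-marked tokens untouched, applying its transformation (here a trivial per-char lowercase) only to the rest. Tokens would typically be marked upstream, e.g. by a KeywordMarkerFilter; the class name is made up.
{noformat}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.KeywordAttribute;

public final class KeywordAwareLowerCaseFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final KeywordAttribute keywordAtt = addAttribute(KeywordAttribute.class);

  public KeywordAwareLowerCaseFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    if (!keywordAtt.isKeyword()) {   // the check WDF would adopt: skip keyword tokens
      char[] buffer = termAtt.buffer();
      for (int i = 0; i < termAtt.length(); i++) {
        buffer[i] = Character.toLowerCase(buffer[i]);
      }
    }
    return true;
  }
}
{noformat}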



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Congratulations to the new Lucene/Solr PMC Chair, Tommaso Teofili

2016-06-16 Thread Jan Høydahl
Congrats Tommaso!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 16 Jun 2016, at 00:36, Michael McCandless wrote:
> 
> Once a year the Lucene PMC rotates the PMC chair and Apache Vice President 
> position.
> 
> This year we have nominated and elected Tommaso Teofili as the chair, and 
> today the board just approved it, so now it's official.
> 
> Congratulations Tommaso!
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com 


[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334520#comment-15334520
 ] 

ASF GitHub Bot commented on SOLR-8981:
--

Github user tballison commented on the issue:

https://github.com/apache/lucene-solr/pull/44
  
I think I got it...  ant precommit worked in Linux with these 
modifications.  I kept getting hangs with ant jar-checksums in Windows.


> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Uwe Schindler
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #44: SOLR-8981

2016-06-16 Thread tballison
Github user tballison commented on the issue:

https://github.com/apache/lucene-solr/pull/44
  
I think I got it...  ant precommit worked in Linux with these 
modifications.  I kept getting hangs with ant jar-checksums in Windows.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 292 - Failure!

2016-06-16 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/292/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.lucene.search.TestBoolean2.testQueries01

Error Message:
hits1 doc nrs for hit 0 expected:<4456> but was:<6505>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4456> 
but was:<6505>
at 
__randomizedtesting.SeedInfo.seed([5787EE10A58E0A9C:4AD9A86330F2E174]:0)
at junit.framework.Assert.fail(Assert.java:50)
at junit.framework.Assert.failNotEquals(Assert.java:287)
at junit.framework.Assert.assertEquals(Assert.java:67)
at junit.framework.Assert.assertEquals(Assert.java:199)
at org.apache.lucene.search.CheckHits.checkDocIds(CheckHits.java:190)
at org.apache.lucene.search.CheckHits.checkHitsQuery(CheckHits.java:203)
at 
org.apache.lucene.search.TestBoolean2.queriesTest(TestBoolean2.java:192)
at 
org.apache.lucene.search.TestBoolean2.testQueries01(TestBoolean2.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.lucene.search.TestBoolean2.testQueries03

Error Message:
hits1 doc nrs for hit 0 expected:<4456> but was:<6505>

Stack Trace:
junit.framework.AssertionFailedError: hits1 doc nrs for hit 0 expected:<4456> 
but was:<6505>
at 

Re: Lucene/Solr 6.1.0

2016-06-16 Thread Jan Høydahl
Here’s a first shot, adjustments welcome!:

Before:
Solr is the popular, blazing fast, open source NoSQL search platform from the 
Apache Lucene project. Its major features include powerful full-text search, 
hit highlighting, faceted search, dynamic clustering, database integration, 
rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly 
scalable, providing fault tolerant distributed search and indexing, and powers 
the search and navigation features of many of the world's largest internet 
sites.

After:
Solr is the popular, blazing fast, open source NoSQL search platform from the 
Apache Lucene project. Its major features include powerful full-text search, 
hit highlighting, faceted search and analytics, rich document parsing and 
geospatial search. Powerful REST APIs as well as parallel SQL. Solr is 
enterprise grade, secure and highly scalable, providing fault tolerant 
distributed search and indexing, and powers the search and navigation features 
of many of the world's largest internet sites.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 16 Jun 2016, at 01:09, Adrien Grand wrote:
> 
> Jan, there seems to be consensus about updating the description. Would you 
> like to give it a try?
> 
> On Tue, 14 Jun 2016 at 17:18, Erick Erickson wrote:
> +1
> 
> On Tue, Jun 14, 2016 at 7:24 AM, David Smiley wrote:
> > +1
> >
> > On Tue, Jun 14, 2016 at 4:55 AM Jan Høydahl wrote:
> >>
> >>  - https://wiki.apache.org/solr/ReleaseNote61 
> >> 
> >>
> >>
> >> The Solr lead-text in the announcement says:
> >>
> >> Solr is the popular, blazing fast, open source NoSQL search platform from
> >> the Apache Lucene project. Its major features include powerful full-text
> >> search, hit highlighting, faceted search, dynamic clustering, database
> >> integration, rich document (e.g., Word, PDF) handling, and geospatial
> >> search. Solr is highly scalable, providing fault tolerant distributed 
> >> search
> >> and indexing, and powers the search and navigation features of many of the
> >> world's largest internet sites.
> >>
> >>
> >> It may be worth to consider flagging some of the newer features such as
> >> ParallellSQL, JDBC, CDCR or Security -- perhaps in place of some more
> >> obvious feature like clustering or highlighting?
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com 
> >
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley 
> >  | Book:
> > http://www.solrenterprisesearchserver.com 
> > 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 



[jira] [Commented] (LUCENE-7291) HeatmapFacetCounter bug with dateline and large non-point shapes

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334404#comment-15334404
 ] 

ASF subversion and git services commented on LUCENE-7291:
-

Commit f6b0fb95dea43f9f508b613cf32f489aaa263c4e in lucene-solr's branch 
refs/heads/branch_5x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f6b0fb9 ]

LUCENE-7291: Fix spatial HeatmapFacetCounter bug with dateline and large 
non-point shapes
(cherry picked from commit 7520d79)


> HeatmapFacetCounter bug with dateline and large non-point shapes
> 
>
> Key: LUCENE-7291
> URL: https://issues.apache.org/jira/browse/LUCENE-7291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1, 5.5.2, 6.0.2, 5.6
>
> Attachments: LUCENE_7291.patch
>
>
> Jenkins found a test failure today.
> This reproduces for me (master, java 8):
> ant test  -Dtestcase=HeatmapFacetCounterTest -Dtests.method=testRandom 
> -Dtests.seed=3EC907D1784B6F23 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=is-IS -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {noformat}
> java.lang.AssertionError: 
> Expected :1
> Actual   :0
>  
>   at 
> __randomizedtesting.SeedInfo.seed([3EC907D1784B6F23:A3439C5F68FEAB94]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:226)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:193)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:206)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7291) HeatmapFacetCounter bug with dateline and large non-point shapes

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334405#comment-15334405
 ] 

ASF subversion and git services commented on LUCENE-7291:
-

Commit a7f2876ec5ce9ca5ef271cad97027a5cb5e43619 in lucene-solr's branch 
refs/heads/branch_6_0 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a7f2876 ]

LUCENE-7291: Fix spatial HeatmapFacetCounter bug with dateline and large 
non-point shapes
(cherry picked from commit 7520d79)


> HeatmapFacetCounter bug with dateline and large non-point shapes
> 
>
> Key: LUCENE-7291
> URL: https://issues.apache.org/jira/browse/LUCENE-7291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1, 5.5.2, 6.0.2, 5.6
>
> Attachments: LUCENE_7291.patch
>
>
> Jenkins found a test failure today.
> This reproduces for me (master, java 8):
> ant test  -Dtestcase=HeatmapFacetCounterTest -Dtests.method=testRandom 
> -Dtests.seed=3EC907D1784B6F23 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=is-IS -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {noformat}
> java.lang.AssertionError: 
> Expected :1
> Actual   :0
>  
>   at 
> __randomizedtesting.SeedInfo.seed([3EC907D1784B6F23:A3439C5F68FEAB94]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:226)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:193)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:206)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7291) HeatmapFacetCounter bug with dateline and large non-point shapes

2016-06-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-7291.

   Resolution: Fixed
Fix Version/s: 5.6
   6.0.2
   5.5.2

> HeatmapFacetCounter bug with dateline and large non-point shapes
> 
>
> Key: LUCENE-7291
> URL: https://issues.apache.org/jira/browse/LUCENE-7291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1, 5.5.2, 6.0.2, 5.6
>
> Attachments: LUCENE_7291.patch
>
>
> Jenkins found a test failure today.
> This reproduces for me (master, java 8):
> ant test  -Dtestcase=HeatmapFacetCounterTest -Dtests.method=testRandom 
> -Dtests.seed=3EC907D1784B6F23 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=is-IS -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {noformat}
> java.lang.AssertionError: 
> Expected :1
> Actual   :0
>  
>   at 
> __randomizedtesting.SeedInfo.seed([3EC907D1784B6F23:A3439C5F68FEAB94]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:226)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:193)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:206)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7291) HeatmapFacetCounter bug with dateline and large non-point shapes

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334403#comment-15334403
 ] 

ASF subversion and git services commented on LUCENE-7291:
-

Commit 1d7ad90947699e103de39fded5b78f76a30e449b in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d7ad90 ]

LUCENE-7291: Add 5.5.2 CHANGES entry


> HeatmapFacetCounter bug with dateline and large non-point shapes
> 
>
> Key: LUCENE-7291
> URL: https://issues.apache.org/jira/browse/LUCENE-7291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1, 5.5.2, 6.0.2, 5.6
>
> Attachments: LUCENE_7291.patch
>
>
> Jenkins found a test failure today.
> This reproduces for me (master, java 8):
> ant test  -Dtestcase=HeatmapFacetCounterTest -Dtests.method=testRandom 
> -Dtests.seed=3EC907D1784B6F23 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=is-IS -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {noformat}
> java.lang.AssertionError: 
> Expected :1
> Actual   :0
>  
>   at 
> __randomizedtesting.SeedInfo.seed([3EC907D1784B6F23:A3439C5F68FEAB94]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:226)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:193)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:206)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7291) HeatmapFacetCounter bug with dateline and large non-point shapes

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334402#comment-15334402
 ] 

ASF subversion and git services commented on LUCENE-7291:
-

Commit 5c546537d7b8130c05263832baff4946260f6a31 in lucene-solr's branch 
refs/heads/branch_5_5 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5c54653 ]

LUCENE-7291: Fix spatial HeatmapFacetCounter bug with dateline and large 
non-point shapes
(cherry picked from commit 7520d79)


> HeatmapFacetCounter bug with dateline and large non-point shapes
> 
>
> Key: LUCENE-7291
> URL: https://issues.apache.org/jira/browse/LUCENE-7291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1, 5.5.2, 6.0.2, 5.6
>
> Attachments: LUCENE_7291.patch
>
>
> Jenkins found a test failure today.
> This reproduces for me (master, java 8):
> ant test  -Dtestcase=HeatmapFacetCounterTest -Dtests.method=testRandom 
> -Dtests.seed=3EC907D1784B6F23 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=is-IS -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {noformat}
> java.lang.AssertionError: 
> Expected :1
> Actual   :0
>  
>   at 
> __randomizedtesting.SeedInfo.seed([3EC907D1784B6F23:A3439C5F68FEAB94]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:226)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:193)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:206)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-7291) HeatmapFacetCounter bug with dateline and large non-point shapes

2016-06-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened LUCENE-7291:


Reopening to backport to 6.0.2, 5.6 and 5.5.2

> HeatmapFacetCounter bug with dateline and large non-point shapes
> 
>
> Key: LUCENE-7291
> URL: https://issues.apache.org/jira/browse/LUCENE-7291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7291.patch
>
>
> Jenkins found a test failure today.
> This reproduces for me (master, java 8):
> ant test  -Dtestcase=HeatmapFacetCounterTest -Dtests.method=testRandom 
> -Dtests.seed=3EC907D1784B6F23 -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=is-IS -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {noformat}
> java.lang.AssertionError: 
> Expected :1
> Actual   :0
>  
>   at 
> __randomizedtesting.SeedInfo.seed([3EC907D1784B6F23:A3439C5F68FEAB94]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.validateHeatmapResult(HeatmapFacetCounterTest.java:226)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:193)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.queryHeatmapRecursive(HeatmapFacetCounterTest.java:206)
>   at 
> org.apache.lucene.spatial.prefix.HeatmapFacetCounterTest.testRandom(HeatmapFacetCounterTest.java:172)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7132) BooleanQuery scores can be diff for same docs+sim when using coord (disagree with Explanation which doesn't change)

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334283#comment-15334283
 ] 

ASF subversion and git services commented on LUCENE-7132:
-

Commit 707bcc9b3bdae7b2bb2b9a7d9e30e1aa348587cb in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=707bcc9b ]

LUCENE-7132: BooleanQuery sometimes assigned the wrong score when ranges of 
documents had only one clause matching while other ranges had more than one 
clause matching

(Cherry-picked from commit 5dfaf0392fcd3b7e4b529dce0cd1035b766880a7)


> BooleanQuery scores can be diff for same docs+sim when using coord (disagree 
> with Explanation which doesn't change)
> ---
>
> Key: LUCENE-7132
> URL: https://issues.apache.org/jira/browse/LUCENE-7132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> SOLR-8884.patch, SOLR-8884.patch, debug.xml
>
>
> Some of the folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> explain's score can be different than the score requested by fields 
> parameter. Interestingly, Explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be re-produced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7132) BooleanQuery scores can be diff for same docs+sim when using coord (disagree with Explanation which doesn't change)

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334284#comment-15334284
 ] 

ASF subversion and git services commented on LUCENE-7132:
-

Commit 9f513d5569db42fe10b6580e69a754b7aa05f596 in lucene-solr's branch 
refs/heads/branch_6_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9f513d5 ]

LUCENE-7132: BooleanQuery sometimes assigned the wrong score when ranges of 
documents had only one clause matching while other ranges had more than one 
clause matching


> BooleanQuery scores can be diff for same docs+sim when using coord (disagree 
> with Explanation which doesn't change)
> ---
>
> Key: LUCENE-7132
> URL: https://issues.apache.org/jira/browse/LUCENE-7132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> SOLR-8884.patch, SOLR-8884.patch, debug.xml
>
>
> Some of the folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> Explain's score can be different from the score requested via the fields 
> parameter. Interestingly, Explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be reproduced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7132) BooleanQuery scores can be diff for same docs+sim when using coord (disagree with Explanation which doesn't change)

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334281#comment-15334281
 ] 

ASF subversion and git services commented on LUCENE-7132:
-

Commit 77844e2591235bfc1944e901922f876c1d43c264 in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=77844e2 ]

LUCENE-7132: BooleanQuery sometimes assigned the wrong score when ranges of 
documents had only one clause matching while other ranges had more than one 
clause matching

(Cherry-picked from commit 5dfaf0392fcd3b7e4b529dce0cd1035b766880a7)


> BooleanQuery scores can be diff for same docs+sim when using coord (disagree 
> with Explanation which doesn't change)
> ---
>
> Key: LUCENE-7132
> URL: https://issues.apache.org/jira/browse/LUCENE-7132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> SOLR-8884.patch, SOLR-8884.patch, debug.xml
>
>
> Some of the folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> Explain's score can be different from the score requested via the fields 
> parameter. Interestingly, Explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be reproduced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7132) BooleanQuery scores can be diff for same docs+sim when using coord (disagree with Explanation which doesn't change)

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334282#comment-15334282
 ] 

ASF subversion and git services commented on LUCENE-7132:
-

Commit 4f6bddefe3310e0361c9b57fd522781d82c89bb8 in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4f6bdde ]

LUCENE-7132: Add 5.5.2 CHANGES entry


> BooleanQuery scores can be diff for same docs+sim when using coord (disagree 
> with Explanation which doesn't change)
> ---
>
> Key: LUCENE-7132
> URL: https://issues.apache.org/jira/browse/LUCENE-7132
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5
>Reporter: Ahmet Arslan
>Assignee: Steve Rowe
>Priority: Blocker
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, LUCENE-7132.patch, 
> SOLR-8884.patch, SOLR-8884.patch, debug.xml
>
>
> Some of the folks 
> [reported|http://find.searchhub.org/document/80666f5c3b86ddda] that sometimes 
> Explain's score can be different from the score requested via the fields 
> parameter. Interestingly, Explain's scores would create a different ranking 
> than the original result list. This is something users experience, but it 
> cannot be reproduced deterministically.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334266#comment-15334266
 ] 

Erick Erickson commented on SOLR-9194:
--

Only if you are ZK-savvy.

My take is that people with their heads in ZK are probably familiar with the 
Unix-style commands, but not vice-versa. Although the open question is whether 
switching context to Unix styles is A Good Thing given that people are, after 
all, dealing with ZK.

[~janhoy] In your suggested help text, the [-z zkHost] indicates an optional 
param to me. Should it be mandatory, or is there something I'm missing here?

> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9216) Support collection.configName in MODIFYCOLLECTION request

2016-06-16 Thread Keith Laban (JIRA)
Keith Laban created SOLR-9216:
-

 Summary: Support collection.configName in MODIFYCOLLECTION request
 Key: SOLR-9216
 URL: https://issues.apache.org/jira/browse/SOLR-9216
 Project: Solr
  Issue Type: Improvement
Reporter: Keith Laban


MODIFYCOLLECTION should support updating the {{/collections/}} 
value of "configName" in zookeeper
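
For illustration only (not an existing API): a minimal SolrJ sketch of what the proposed request could look like. The collection and config names are made up, and accepting {{collection.configName}} on MODIFYCOLLECTION is exactly the enhancement this issue asks for.

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ModifyCollectionConfigName {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr")) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "MODIFYCOLLECTION");
      params.set("collection", "mycollection");            // made-up collection name
      params.set("collection.configName", "newConfigSet"); // the proposed parameter
      GenericSolrRequest req =
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params);
      System.out.println(client.request(req));
    }
  }
}
{code}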



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334235#comment-15334235
 ] 

ASF GitHub Bot commented on SOLR-8981:
--

Github user uschindler commented on the issue:

https://github.com/apache/lucene-solr/pull/44
  
Hello,
please also update all SHA1 hashes of files. Please run "ant precommit" 
from the root folder of Lu/Solr. This will report all missing things.


> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Uwe Schindler
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334233#comment-15334233
 ] 

Varun Thacker commented on SOLR-7374:
-

bq. Also building two parallel implementations for the same functionality 
doesn't quite make sense.

Indeed, it's far from ideal right now. We document the core-level 
backup/restore via the replication handler, as that's where it was supported.

With SOLR-5750, a hook was added to Core Admin to leverage it. It was simply 
for convenience and not meant to be made public. Maybe we should fix it to 
leverage the ReplicationHandler instead, or we could deprecate the usage via 
the ReplicationHandler as it's more of a core admin operation anyway.

But I think let's keep that to a separate Jira/discussion? For the scope of 
this Jira, can we just support it in the ReplicationHandler as well?


> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.
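
For illustration only (not a released API): a minimal SolrJ sketch of a backup call carrying such a parameter. {{command}}, {{name}} and {{location}} are existing ReplicationHandler backup params; the {{repository}} param (or {{directoryImpl}}/{{type}} as proposed above) is the addition under discussion, and the core URL and values are made up.

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class BackupWithRepositoryParam {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore")) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("command", "backup");
      params.set("name", "nightly");       // snapshot name
      params.set("location", "/backups");  // where the snapshot is written
      params.set("repository", "hdfs");    // hypothetical: selects the Directory/repository impl
      QueryRequest req = new QueryRequest(params);
      req.setPath("/replication");
      System.out.println(client.request(req));
    }
  }
}
{code}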



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #44: SOLR-8981

2016-06-16 Thread uschindler
Github user uschindler commented on the issue:

https://github.com/apache/lucene-solr/pull/44
  
Hello,
please also update all SHA1 hashes of files. Please run "ant precommit" 
from the root folder of Lu/Solr. This will report all missing things.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned SOLR-8981:
---

Assignee: Uwe Schindler

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Uwe Schindler
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334228#comment-15334228
 ] 

Uwe Schindler commented on SOLR-8981:
-

To test Tika, please only run the tests inside contrib/extraction!

Solr tests are generally unstable, especially on Windows. See our Jenkins logs.

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334207#comment-15334207
 ] 

Hrishikesh Gadre edited comment on SOLR-7374 at 6/16/16 5:22 PM:
-

[~varunthacker] Thanks for the comments.

bq. In ReplicationHandler#restore why are we always using 
LocalFileSystemRepository and not reading any repository param? 

The reason is that ReplicationHandler is not configured with the CoreContainer 
(which is required to fetch the repository configuration). Also building two 
parallel implementations for the same functionality doesn't quite make sense. 
Can we instead make the Core level Backup/Restore APIs public (i.e. not limited 
to Solrcloud)? This allows users to keep using ReplicationHandler in case of 
local file-system but if they need to integrate with other file-systems they 
can move to these core level APIs. If feasible, we can even deprecate (and 
remove) backup/restore APIs from ReplicationHandler in future.

bq. However TestHdfsBackupRestore added in this patch is a solr cloud test . 
What other work is left for supporting collection level changes? 

I had to implement this test as a "cloud" test so as to enable testing these 
core level operations (since these operations are enabled only in the cloud 
mode). The collection-level changes include,
- Backup/restore collection metadata
- Check the version compatibility during restore
- Strategy interface to define "how" backup operation is performed (e.g. 
copying the index files vs. a file-system snapshot etc.)

bq. I only briefly looked at SOLR-9055 and couldn't tell why we need 
ShardRequestProcessor etc. 

The main reason is to implement an index backup strategy. Also, in general, 
processing shard requests is such common functionality that embedding it in 
the OverseerCollectionMessageHandler doesn't quite seem right (from a 
modularity perspective).

bq. Also i found ZkStateReader#BACKUP_LOCATION constant. We should merge those 
two constants I guess

Makes sense. Let me do that.




was (Author: hgadre):
[~varunthacker] Thanks for the comments.

bq. In ReplicationHandler#restore why are we always using 
LocalFileSystemRepository and not reading any repository param? 

The reason is that ReplicationHandler is not configured with the CoreContainer 
(which is required to fetch the repository configuration). Also building two 
parallel implementations for the same functionality doesn't quite make sense. 
Can we instead make the Core level Backup/Restore APIs public (i.e. not limited 
to Solrcloud)? This allows users to keep using ReplicationHandler in case of 
local file-system but if they need to integrate with other file-systems they 
can move to these core level APIs. If feasible, we can even deprecate (and 
remove) backup/restore APIs from ReplicationHandler in future.

bq. However TestHdfsBackupRestore added in this patch is a solr cloud test . 
What other work is left for supporting collection level changes? 

I had to implement this test as a "cloud" test so as to enable testing these 
core level operations (since these operations are enabled only in the cloud 
mode). The collection-level changes include,
- Backup/restore collection metadata
- Check the version compatibility during restore
- Strategy interface to define "how" backup operation is performed (e.g. 
copying the index files vs. a file-system snapshot etc.)

bq. I only briefly looked at SOLR-9055 and couldn't tell why we need 
ShardRequestProcessor etc. 

The main reason is to implement a index backup strategy. Also in general 
processing shard requests is such a common functionality that embedding it in 
the OverseerCollectionMessageHandler doesn't quite seem write (from modularity 
perspective).

bq. Also i found ZkStateReader#BACKUP_LOCATION constant. We should merge those 
two constants I guess

Make sense. Let me do that.



> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index 

[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-06-16 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334207#comment-15334207
 ] 

Hrishikesh Gadre commented on SOLR-7374:


[~varunthacker] Thanks for the comments.

bq. In ReplicationHandler#restore why are we always using 
LocalFileSystemRepository and not reading any repository param? 

The reason is that ReplicationHandler is not configured with the CoreContainer 
(which is required to fetch the repository configuration). Also building two 
parallel implementations for the same functionality doesn't quite make sense. 
Can we instead make the Core level Backup/Restore APIs public (i.e. not limited 
to Solrcloud)? This allows users to keep using ReplicationHandler in case of 
local file-system but if they need to integrate with other file-systems they 
can move to these core level APIs. If feasible, we can even deprecate (and 
remove) backup/restore APIs from ReplicationHandler in future.

bq. However TestHdfsBackupRestore added in this patch is a solr cloud test . 
What other work is left for supporting collection level changes? 

I had to implement this test as a "cloud" test so as to enable testing these 
core level operations (since these operations are enabled only in the cloud 
mode). The collection-level changes include,
- Backup/restore collection metadata
- Check the version compatibility during restore
- Strategy interface to define "how" backup operation is performed (e.g. 
copying the index files vs. a file-system snapshot etc.)

bq. I only briefly looked at SOLR-9055 and couldn't tell why we need 
ShardRequestProcessor etc. 

The main reason is to implement an index backup strategy. Also, in general, 
processing shard requests is such common functionality that embedding it in 
the OverseerCollectionMessageHandler doesn't quite seem right (from a 
modularity perspective).

bq. Also i found ZkStateReader#BACKUP_LOCATION constant. We should merge those 
two constants I guess

Makes sense. Let me do that.



> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Mark Miller
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch, 
> SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open . 
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to backup the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334200#comment-15334200
 ] 

Noble Paul commented on SOLR-9194:
--

put/get is more intuitive

> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9194) Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper

2016-06-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334194#comment-15334194
 ] 

Erick Erickson commented on SOLR-9194:
--

I may have some time to work on this while flying back to MI; I generally 
hate the in-flight movies anyway.

So, what do people think? Once the mechanics are in place, changing the form of 
the command is pretty easy.

The questions are:

1> Follow the ZK put/get(file) stuff or adopt the more unix-like commands? 
Straw-man: use the unix style. More people are familiar with that than with ZK.

2> Require the hyphen for -cp (-rm), or take it _away_ from the 
upconfig/downconfig stuff? It looks like I added the hyphen to 
upconfig/downconfig gratuitously anyway, so taking it out is no big deal 
(keeping it around for back-compat _only_ for upconfig/downconfig). Straw-man: 
take it away.

> Enhance the bin/solr script to put and get arbitrary files to/from Zookeeper
> 
>
> Key: SOLR-9194
> URL: https://issues.apache.org/jira/browse/SOLR-9194
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> There are a few other files that can reasonably be pushed to Zookeeper, e.g. 
> solr.xml, security.json, clusterprops.json. Who knows? Even 
> /state.json for the brave.
> This could reduce further the need for bouncing out to zkcli.
> Assigning to myself just so I don't lose track, but I would _love_ it if 
> someone else wanted to take it...
> I'm thinking the commands would be 
> bin/solr zk -putfile -z  -p  -f 
> bin/solr zk -getfile -z  -p  -f 
> but I'm not wedded to those, all suggestions welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



NoHttpResponseException error between leader and replica

2016-06-16 Thread Varun Thacker
When running a bulk index process we occasionally see a
NoHttpResponseException error when the leader is forwarding docs to the
replica. I think this is a known issue and can be reproduced pretty easily.

What makes me want to dig more is that, because of one such
NoHttpResponseException, the leader will put the replica into recovery. The
replica can never catch up because the indexing throughput is quite high.
This can add hours of recovery time for the replica depending on how many
documents one is indexing.

So from what I can think of, we have two options here:
1. Implement a thread which removes stale connections (a minimal sketch of
this follows below). This has been discussed on
https://issues.apache.org/jira/browse/SOLR-4509 in the past.
2. The above solution is not the right way forward: the main problem here
is that replicas can't catch up because Solr doesn't implement backpressure
yet, and implementing that would be the correct solution here.

Does anyone have an opinion on how we should go forward with this issue?
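
(Not from the thread above, just to make option 1 concrete: a minimal sketch of
a stale-connection evictor on top of Apache HttpClient's pooling connection
manager; the 5s sweep interval and 30s idle threshold are made-up values.)

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class StaleConnectionEvictor extends Thread {
  private final PoolingHttpClientConnectionManager pool;

  public StaleConnectionEvictor(PoolingHttpClientConnectionManager pool) {
    this.pool = pool;
    setDaemon(true);
  }

  @Override
  public void run() {
    try {
      while (!isInterrupted()) {
        Thread.sleep(5000);                               // sweep every 5 seconds
        pool.closeExpiredConnections();                   // drop connections past keep-alive
        pool.closeIdleConnections(30, TimeUnit.SECONDS);  // drop connections idle for 30s
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}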



-- 


Regards,
Varun Thacker


[jira] [Commented] (LUCENE-6439) Create test-framework/src/test

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334172#comment-15334172
 ] 

ASF subversion and git services commented on LUCENE-6439:
-

Commit 4c107a9a5287aabcafc8e5ce4e73d0faae653a3a in lucene-solr's branch 
refs/heads/branch_6_1 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c107a9 ]

LUCENE-6439: IntelliJ config


> Create test-framework/src/test
> --
>
> Key: LUCENE-6439
> URL: https://issues.apache.org/jira/browse/LUCENE-6439
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
>
> We have quite a few tests (~30 suites) for test-framework stuff 
> ("test-the-tester") but currently they all sit in lucene/core housed with 
> real tests.
> I think we should just give test-framework a src/test and move these tests 
> there. This makes the build simpler in the future too, because its less 
> "special". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6439) Create test-framework/src/test

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334174#comment-15334174
 ] 

ASF subversion and git services commented on LUCENE-6439:
-

Commit a4455a4b14f2bf947db1136f9d5fc7d0d88d32ef in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a4455a4 ]

LUCENE-6439: IntelliJ config


> Create test-framework/src/test
> --
>
> Key: LUCENE-6439
> URL: https://issues.apache.org/jira/browse/LUCENE-6439
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
>
> We have quite a few tests (~30 suites) for test-framework stuff 
> ("test-the-tester") but currently they all sit in lucene/core housed with 
> real tests.
> I think we should just give test-framework a src/test and move these tests 
> there. This makes the build simpler in the future too, because its less 
> "special". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6439) Create test-framework/src/test

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334173#comment-15334173
 ] 

ASF subversion and git services commented on LUCENE-6439:
-

Commit b85c5be6c4f682136512851c5cce2d456b3ea85c in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b85c5be ]

LUCENE-6439: IntelliJ config


> Create test-framework/src/test
> --
>
> Key: LUCENE-6439
> URL: https://issues.apache.org/jira/browse/LUCENE-6439
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
>
> We have quite a few tests (~30 suites) for test-framework stuff 
> ("test-the-tester") but currently they all sit in lucene/core housed with 
> real tests.
> I think we should just give test-framework a src/test and move these tests 
> there. This makes the build simpler in the future too, because its less 
> "special". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6439) Create test-framework/src/test

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334170#comment-15334170
 ] 

ASF subversion and git services commented on LUCENE-6439:
-

Commit 950077915f0da98fd003252b14eae094dcae2922 in lucene-solr's branch 
refs/heads/branch_6_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9500779 ]

LUCENE-6439: IntelliJ config


> Create test-framework/src/test
> --
>
> Key: LUCENE-6439
> URL: https://issues.apache.org/jira/browse/LUCENE-6439
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
>
> We have quite a few tests (~30 suites) for test-framework stuff 
> ("test-the-tester") but currently they all sit in lucene/core housed with 
> real tests.
> I think we should just give test-framework a src/test and move these tests 
> there. This makes the build simpler in the future too, because its less 
> "special". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6439) Create test-framework/src/test

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334169#comment-15334169
 ] 

ASF subversion and git services commented on LUCENE-6439:
-

Commit ff1ffc22c1648c67554afa0f9224db45b79b69de in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ff1ffc2 ]

LUCENE-6439: IntelliJ config


> Create test-framework/src/test
> --
>
> Key: LUCENE-6439
> URL: https://issues.apache.org/jira/browse/LUCENE-6439
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
>
> We have quite a few tests (~30 suites) for test-framework stuff 
> ("test-the-tester") but currently they all sit in lucene/core housed with 
> real tests.
> I think we should just give test-framework a src/test and move these tests 
> there. This makes the build simpler in the future too, because its less 
> "special". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6439) Create test-framework/src/test

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334160#comment-15334160
 ] 

ASF subversion and git services commented on LUCENE-6439:
-

Commit dca4f85f69b1cbe632550b9babdf3f675adab4f2 in lucene-solr's branch 
refs/heads/branch_5_5 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dca4f85 ]

LUCENE-6439: IntelliJ config


> Create test-framework/src/test
> --
>
> Key: LUCENE-6439
> URL: https://issues.apache.org/jira/browse/LUCENE-6439
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.2, 6.0
>
> Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
>
> We have quite a few tests (~30 suites) for test-framework stuff 
> ("test-the-tester") but currently they all sit in lucene/core housed with 
> real tests.
> I think we should just give test-framework a src/test and move these tests 
> there. This makes the build simpler in the future too, because its less 
> "special". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334159#comment-15334159
 ] 

Tim Allison commented on SOLR-8981:
---

I got a build failure here:
{noformat}
 Tests with failures [seed: C22A0B280C50BF8F]:
   [junit4]   - org.apache.solr.handler.component.SpellCheckComponentTest.test
{noformat}

However, when I tested this alone, all was fine...different seed?
Not sure if this is a regular build failure or something caused by the changes.

[~lewismc], if you have a chance to review, I'd appreciate a second set of eyes 
before we bother [~thetaphi] for a review.



> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334150#comment-15334150
 ] 

ASF GitHub Bot commented on SOLR-8981:
--

GitHub user tballison opened a pull request:

https://github.com/apache/lucene-solr/pull/44

SOLR-8981

SOLR-8981 upgrade to Tika 1.13

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tballison/lucene-solr SOLR-8981

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/44.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #44


commit ba0e71703464849198b384aa6e92962db8a04b51
Author: tballison 
Date:   2016-06-16T16:56:45Z

SOLR-8981 upgrade to Tika 1.13




> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #44: SOLR-8981

2016-06-16 Thread tballison
GitHub user tballison opened a pull request:

https://github.com/apache/lucene-solr/pull/44

SOLR-8981

SOLR-8981 upgrade to Tika 1.13

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tballison/lucene-solr SOLR-8981

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/44.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #44


commit ba0e71703464849198b384aa6e92962db8a04b51
Author: tballison 
Date:   2016-06-16T16:56:45Z

SOLR-8981 upgrade to Tika 1.13




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9211) Nested negative clauses don't work as expected in filter queries for the edismax parser

2016-06-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334132#comment-15334132
 ] 

Jan Høydahl commented on SOLR-9211:
---

Filters do not use edismax, but the {{lucene}} parser. Can you try the query 
without edismax? I.e. {{/solr/collection1/select?q=CONTENT:(foo OR 
(-foo))&defType=lucene}}?
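
For illustration only: a minimal SolrJ sketch of the comparison suggested above, running the clause once as the main query with the {{lucene}} parser and once as a filter query, then comparing hit counts (the collection name and field come from the report; everything else is made up).

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class NestedNegativeClauseCheck {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      String clause = "CONTENT:(foo OR (-foo))";

      SolrQuery asMainQuery = new SolrQuery(clause);
      asMainQuery.set("defType", "lucene");   // main query parsed by the lucene parser
      QueryResponse a = client.query(asMainQuery);

      SolrQuery asFilterQuery = new SolrQuery("*:*");
      asFilterQuery.addFilterQuery(clause);   // filter queries use the lucene parser by default
      QueryResponse b = client.query(asFilterQuery);

      System.out.println("as q: " + a.getResults().getNumFound()
          + ", as fq: " + b.getResults().getNumFound());
    }
  }
}
{code}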

> Nested negative clauses don't work as expected in filter queries for the 
> edismax parser
> ---
>
> Key: SOLR-9211
> URL: https://issues.apache.org/jira/browse/SOLR-9211
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4
>Reporter: Plamen M Todorov
>
> Using the edismax parser, the following query works as expected and returns 
> all documents:
> CONTENT:(foo OR (-foo))
> The same clause doesn't work in a filter query and returns no documents



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334105#comment-15334105
 ] 

ASF subversion and git services commented on LUCENE-7301:
-

Commit 078b607ff768ff47a81f4b8d1803b406b5dc39e6 in lucene-solr's branch 
refs/heads/branch_6_0 from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=078b607 ]

LUCENE-7301: Remove misplaced 6.0.1 CHANGES entry


> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7301.patch, LUCENE-7301.patch, LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and i *think* i've 
> tracked it down to a bug in IndexWriter when the client's calls to 
> updateDocument are intermixed with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (i suspect "how low" depends 
> on the quantity/types of updates -- but i *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).
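
For illustration only (this is not the failing test): a minimal sketch of the update pattern the description refers to, interleaving updateDocument and updateNumericDocValue on the same ids while maxBufferedDocs forces very frequent flushes; field names and values are made up.

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class InterleavedUpdatesSketch {
  public static void main(String[] args) throws Exception {
    IndexWriterConfig config =
        new IndexWriterConfig(new StandardAnalyzer()).setMaxBufferedDocs(2); // flush very often
    try (Directory dir = new RAMDirectory();
         IndexWriter writer = new IndexWriter(dir, config)) {
      for (int i = 0; i < 100; i++) {
        String id = "doc-" + (i % 5);
        Document doc = new Document();
        doc.add(new StringField("id", id, Field.Store.NO));
        doc.add(new NumericDocValuesField("val", i));
        // Full document update followed by a doc-values-only update of the same doc.
        writer.updateDocument(new Term("id", id), doc);
        writer.updateNumericDocValue(new Term("id", id), "val", i * 10L);
      }
      writer.commit();
    }
  }
}
{code}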



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334101#comment-15334101
 ] 

ASF subversion and git services commented on LUCENE-7301:
-

Commit 05ac400f7a85c80e5f77708ac72ec4dce5e42cbb in lucene-solr's branch 
refs/heads/branch_5_5 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=05ac400 ]

LUCENE-7301: ensure multiple doc values updates to one document within one 
update batch are applied in the correct order


> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7301.patch, LUCENE-7301.patch, LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and i *think* i've 
> tracked it down to a bug in IndexWriter when the client's calls to 
> updateDocument are intermixed with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (i suspect "how low" depends 
> on the quantity/types of updates -- but i *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334104#comment-15334104
 ] 

ASF subversion and git services commented on LUCENE-7301:
-

Commit e9ccc822bb8d606dba5385c409a5ea2804d6282c in lucene-solr's branch 
refs/heads/branch_6_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e9ccc82 ]

LUCENE-7301: ensure multiple doc values updates to one document within one 
update batch are applied in the correct order


> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7301.patch, LUCENE-7301.patch, LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and i *think* i've 
> tracked it down to a bug in IndexWriter when the client's calls to 
> updateDocument are intermixed with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (i suspect "how low" depends 
> on the quantity/types of updates -- but i *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334103#comment-15334103
 ] 

ASF subversion and git services commented on LUCENE-7301:
-

Commit f121be688fab4254172c315ec21a891e8199e6e5 in lucene-solr's branch 
refs/heads/branch_5x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f121be6 ]

LUCENE-7301: Remove misplaced 5.6 CHANGES entry


> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7301.patch, LUCENE-7301.patch, LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and i *think* i've 
> tracked it down to a bug in IndexWriter when the client's calls to 
> updateDocument are intermixed with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (i suspect "how low" depends 
> on the quantity/types of updates -- but i *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-06-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334102#comment-15334102
 ] 

ASF subversion and git services commented on LUCENE-7301:
-

Commit ba170fa830fdf0342e7e55aab2d8754d4d8a2135 in lucene-solr's branch 
refs/heads/branch_5x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba170fa ]

LUCENE-7301: ensure multiple doc values updates to one document within one 
update batch are applied in the correct order


> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7301.patch, LUCENE-7301.patch, LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and i *think* i've 
> tracked it down to a bug in IndexWriter when the client's calls to 
> updateDocument are intermixed with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (i suspect "how low" depends 
> on the quantity/types of updates -- but i *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-06-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened LUCENE-7301:


Reopening to backport to 6.0.2, 5.6, and 5.5.2.

> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7301.patch, LUCENE-7301.patch, LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and i *think* i've 
> tracked it down to a bug in IndexWriter when the client's calls to 
> updateDocument are intermixed with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (i suspect "how low" depends 
> on the quantity/types of updates -- but i *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 252 - Still Failing!

2016-06-16 Thread Uwe Schindler
Hi,

As 6.1 is out, I disabled this job and nuked the workspace.
Unfortunately the Windows VMs are a bit limited in space (although they have 
the largest disk!). If one of the jobs somehow uses a lot of space (randomly), 
it fcks up :-(

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Steve Rowe [mailto:sar...@gmail.com]
> Sent: Thursday, June 16, 2016 4:23 PM
> To: Uwe Schindler 
> Cc: Lucene Dev 
> Subject: Re: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build #
> 252 - Still Failing!
> 
> Uwe, looks like you have disk space problems on Policeman Jenkins:
> 
> > Caused by: java.io.IOException: There is not enough space on the disk
> 
> --
> Steve
> www.lucidworks.com
> 
> > On Jun 16, 2016, at 10:18 AM, Policeman Jenkins Server
>  wrote:
> >
> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/252/
> > Java: 32bit/jdk1.8.0_92 -client -XX:+UseParallelGC
> >
> > No tests ran.
> >
> > Build Log:
> > [...truncated 14 lines...]
> > FATAL: Exception caught during execution of reset command. {0}
> > org.eclipse.jgit.api.errors.JGitInternalException: Exception caught during
> execution of reset command. {0}
> > at org.eclipse.jgit.api.ResetCommand.call(ResetCommand.java:230)
> > at
> org.jenkinsci.plugins.gitclient.JGitAPIImpl.clean(JGitAPIImpl.java:1299)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown
> Source)
> > at java.lang.reflect.Method.invoke(Unknown Source)
> > at
> hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteI
> nvocationHandler.java:884)
> > at
> hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvoca
> tionHandler.java:859)
> > at
> hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvoca
> tionHandler.java:818)
> > at hudson.remoting.UserRequest.perform(UserRequest.java:152)
> > at hudson.remoting.UserRequest.perform(UserRequest.java:50)
> > at hudson.remoting.Request$2.run(Request.java:332)
> > at
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorSe
> rvice.java:68)
> > at java.util.concurrent.FutureTask.run(Unknown Source)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
> Source)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source)
> > at java.lang.Thread.run(Unknown Source)
> > at ..remote call to Windows VBOX(Native Method)
> > at
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
> > at hudson.remoting.UserResponse.retrieve(UserRequest.java:252)
> > at hudson.remoting.Channel.call(Channel.java:781)
> > at
> hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandl
> er.java:249)
> > at com.sun.proxy.$Proxy56.clean(Unknown Source)
> > at
> org.jenkinsci.plugins.gitclient.RemoteGitImpl.clean(RemoteGitImpl.java:453)
> > at
> hudson.plugins.git.extensions.impl.CleanBeforeCheckout.decorateFetchCo
> mmand(CleanBeforeCheckout.java:32)
> > at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:806)
> > at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
> > at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
> > at hudson.scm.SCM.checkout(SCM.java:485)
> > at
> hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
> > at
> hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(Abstr
> actBuild.java:604)
> > at
> jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
> > at
> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:
> 529)
> > at hudson.model.Run.execute(Run.java:1741)
> > at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> > at
> hudson.model.ResourceController.execute(ResourceController.java:98)
> > at hudson.model.Executor.run(Executor.java:410)
> > Caused by: java.io.IOException: There is not enough space on the disk
> > at java.io.FileOutputStream.writeBytes(Native Method)
> > at java.io.FileOutputStream.write(Unknown Source)
> > at
> org.eclipse.jgit.internal.storage.file.LockFile$2.write(LockFile.java:327)
> > at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
> > at java.io.BufferedOutputStream.write(Unknown Source)
> > at java.security.DigestOutputStream.write(Unknown Source)
> > at
> org.eclipse.jgit.dircache.DirCacheEntry.write(DirCacheEntry.java:299)
> > at org.eclipse.jgit.dircache.DirCache.writeTo(DirCache.java:670)
> > at org.eclipse.jgit.dircache.DirCache.write(DirCache.java:610)
> > at
> org.eclipse.jgit.dircache.BaseDirCacheEditor.commit(BaseDirCacheEditor.java
> :198)
> > at
> 

[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Lewis John McGibbney (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334085#comment-15334085
 ] 

Lewis John McGibbney commented on SOLR-8981:


Brilliant. The most recent patch I submitted matches the Tika 1.13 dependencies,
minus the scientific data formats and all of the other non-'document' formats.
Thanks for rebuilding, Tim, it's appreciated.




-- 
*Lewis*


> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 93 - Still Failing

2016-06-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/93/

2 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:42413/tot

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:42413/tot
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:601)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:399)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:515)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15334015#comment-15334015
 ] 

Tim Allison commented on SOLR-8981:
---

Just tested now, and the upgrade patch is no longer failing on that test (?!).  
If I get a fully clean build, I'll submit it.

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7123) /update/json/docs should have nested document support

2016-06-16 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-7123.
--
Resolution: Fixed

> /update/json/docs should have nested document support
> -
>
> Key: SOLR-7123
> URL: https://issues.apache.org/jira/browse/SOLR-7123
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: EaseOfUse
> Fix For: 6.1, master (7.0)
>
> Attachments: NestedDocumentMapper.java, SOLR-7123-test.patch, 
> SOLR-7123.patch, SOLR-7123.patch
>
>
> It is the next logical step after SOLR-6304
> For the example document given below where the /orgs belong to a nested 
> document, 
> {code}
> {
> name: 'Joe Smith',
> phone: 876876687 ,
> orgs :[ {name : Microsoft,
>   city: "Seattle,
>   zip: 98052},
> {name: Apple,
>  city : Cupertino,
>  zip :95014 }
>   ]
> } 
> {code}
> The extra mapping parameters would be
> {noformat}
> split=/|/orgs&
> f=name:/orgs/name&
> f=city:/orgs/city&
> f=zip:/orgs/zip
> {noformat}
> * The objects at {{/orgs}} automatically become child documents because 
> {{/orgs}} is a child path of {{/}} 
> * All fields falling under {{/orgs/}} will be mapped to the child document
> Alternatively, you can just do
> {noformat}
> split=/|/orgs&f=$FQN:/**
> {noformat}
> The fully qualified name (FQN) for child docs begins from {{/orgs}}. So the 
> output would be
> {noformat}
> {
>   "name":"Joe Smith",
>   "phone":876876687,
>   "_childDocuments_":[
> {
>   "name":"Microsoft",
>   "city":"Seattle",
>   "zip":98052},
> {
>   "name":"Apple",
>   "city":"Cupertino",
>   "zip":95014}]}
> {noformat}
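
To make the mapping above concrete, here is a minimal sketch (not from the issue) that 
posts the example document to /update/json/docs with the split and f parameters; the 
localhost URL, the core name "demo", and the commit=true parameter are assumptions for 
illustration only:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class NestedJsonUpdateSketch {
  public static void main(String[] args) throws Exception {
    String json = "{\"name\":\"Joe Smith\",\"phone\":876876687,"
        + "\"orgs\":[{\"name\":\"Microsoft\",\"city\":\"Seattle\",\"zip\":98052},"
        + "{\"name\":\"Apple\",\"city\":\"Cupertino\",\"zip\":95014}]}";

    // split declares /orgs as a child document path; f=$FQN:/** maps every field
    // by its fully qualified name, as described in the issue above.
    String params = "split=" + URLEncoder.encode("/|/orgs", "UTF-8")
        + "&f=" + URLEncoder.encode("$FQN:/**", "UTF-8")
        + "&commit=true";
    URL url = new URL("http://localhost:8983/solr/demo/update/json/docs?" + params);

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(json.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}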



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9215) QT parameter doesn't appear to function anymore

2016-06-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333965#comment-15333965
 ] 

David Smiley commented on SOLR-9215:


For requestHandlers with a leading '/', you can only request them directly 
(without 'qt').  Otherwise, "qt" should work for request handlers that don't 
have a leading "/".  If I recall, handleSelect=true in solrconfig.xml might 
allow you to use "qt" even if it has a leading "/".  We ought to make this 
default to false for the next major release.

Confusing things a little is SolrJ... you can specify the request handler via 
the "qt" param (SolrQuery.setRequestHandler works this way).  When SolrJ issues 
the HTTP request, it looks at "qt" to see if it starts with a "/" and, if so, 
sends the request to that path.  I wish it would _also_ then remove 'qt', since 
its purpose has effectively been consumed, but it does not; that is misleading 
because on the Solr side you see a logged qt param that Solr ignores server-side.
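
To illustrate the SolrJ side of this, here is a minimal sketch (not from the issue) of 
how SolrQuery.setRequestHandler and "qt" interact; the localhost URL, core name "logs", 
and the terms.fl value are assumptions taken from the report below for illustration only:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QtSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/logs");
    SolrQuery q = new SolrQuery();
    // Stores "/terms" under the "qt" parameter; because the value starts with "/",
    // SolrJ sends the request to .../logs/terms instead of .../logs/select, but the
    // qt param itself is still sent and then ignored server-side.
    q.setRequestHandler("/terms");
    q.set("terms", true);
    q.set("terms.fl", "compound_digest");  // field name taken from the report below
    QueryResponse rsp = client.query(q);
    System.out.println(rsp);
    client.close();
  }
}
{code}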

> QT parameter doesn't appear to function anymore
> ---
>
> Key: SOLR-9215
> URL: https://issues.apache.org/jira/browse/SOLR-9215
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, master (7.0)
>Reporter: Markus Jelsma
> Fix For: 6.1, master (7.0)
>
>
> The qt parameter doesn't seem to work anymore. A call directly to the /terms 
> handler returns actual terms, as expected. Using the select handler but with 
> qt=terms returns nothing.
> http://localhost:8983/solr/logs/select?qt=terms=true=compound_digest=100=index
> {code}
> 
> 
> 
>   0
>   0
>   
> terms
> true
> compound_digest
> 100
> index
>   
> 
> 
> 
> 
> {code}
> A peculiar detail, my unit tests that rely on the qt parameter are not 
> affected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7340) MemoryIndex.toString is broken if you enable payloads

2016-06-16 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333959#comment-15333959
 ] 

Alan Woodward commented on LUCENE-7340:
---

The payload query tests in the queries module should give you an idea of how to 
add payloads to a TokenStream for tests.
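
For reference, a minimal sketch of one way to do that (the constant-payload filter is an 
assumption for illustration, not code from the issue), feeding the stream into a 
payload-enabled MemoryIndex like the one described below:

{code}
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.util.BytesRef;

public class PayloadMemoryIndexSketch {

  /** Attaches the same payload to every token of the wrapped stream. */
  static final class ConstantPayloadFilter extends TokenFilter {
    private final PayloadAttribute payloadAtt = addAttribute(PayloadAttribute.class);
    private final BytesRef payload;

    ConstantPayloadFilter(TokenStream in, BytesRef payload) {
      super(in);
      this.payload = payload;
    }

    @Override
    public boolean incrementToken() throws IOException {
      if (!input.incrementToken()) {
        return false;
      }
      payloadAtt.setPayload(payload);
      return true;
    }
  }

  public static void main(String[] args) throws IOException {
    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("some test text"));
    TokenStream stream = new ConstantPayloadFilter(tokenizer, new BytesRef(new byte[] {1}));

    // storeOffsets=true, storePayloads=true, i.e. MemoryIndex(true, true) as in the report
    MemoryIndex mi = new MemoryIndex(true, true);
    mi.addField("field", stream);
    System.out.println(mi.toString());  // the call reported as broken in this issue
  }
}
{code}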

> MemoryIndex.toString is broken if you enable payloads
> -
>
> Key: LUCENE-7340
> URL: https://issues.apache.org/jira/browse/LUCENE-7340
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.4.1, 6.0.1, master (7.0)
>Reporter: Daniel Collins
>Priority: Minor
> Attachments: LUCENE-7340.diff
>
>
> Noticed this as we use Luwak which creates a MemoryIndex(true, true) storing 
> both offsets and payloads (though in reality we never put any payloads in it).
> We used to use MemoryIndex.toString() for debugging and noticed it broke in 
> Lucene 5.x  and beyond.  I think LUCENE-6155 broke it when it added support 
> for payloads?
> Creating a default MemoryIndex (as all the tests currently do) works fine, as 
> does one with just offsets; it is just the payload version that is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9215) QT parameter doesn't appear to function anymore

2016-06-16 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-9215:

Description: 
The qt parameter doesn't seem to work anymore. A call directly to the /terms 
handler returns actual terms, as expected. Using the select handler but with 
qt=terms returns nothing.

http://localhost:8983/solr/logs/select?qt=terms=true=compound_digest=100=index

{code}




  0
  0
  
terms
true
compound_digest
100
index
  




{code}

A peculiar detail, my unit tests that rely on the qt parameter are not affected.

  was:
The qt parameter doesn't seem to work anymore. A call directly to the /terms 
handler returns actual terms, as expected. Using the select handler but with 
qt=terms returns nothing.

http://localhost:8983/solr/logs/select?qt=terms=true=compound_digest=100=index

{code}




  0
  0
  
terms
true
compound_digest
100
index
  




{code}


> QT parameter doesn't appear to function anymore
> ---
>
> Key: SOLR-9215
> URL: https://issues.apache.org/jira/browse/SOLR-9215
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, master (7.0)
>Reporter: Markus Jelsma
> Fix For: 6.1, master (7.0)
>
>
> The qt parameter doesn't seem to work anymore. A call directly to the /terms 
> handler returns actual terms, as expected. Using the select handler but with 
> qt=terms returns nothing.
> http://localhost:8983/solr/logs/select?qt=terms=true=compound_digest=100=index
> {code}
> 
> 
> 
>   0
>   0
>   
> terms
> true
> compound_digest
> 100
> index
>   
> 
> 
> 
> 
> {code}
> A peculiar detail, my unit tests that rely on the qt parameter are not 
> affected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333929#comment-15333929
 ] 

Tim Allison commented on SOLR-8981:
---

Yes, I think the only thing stopping us now is the unit test failure noted 
above.  I'll take a look.  I don't know if that'll be a blocker.

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9215) QT parameter doesn't appear to function anymore

2016-06-16 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-9215:
---

 Summary: QT parameter doesn't appear to function anymore
 Key: SOLR-9215
 URL: https://issues.apache.org/jira/browse/SOLR-9215
 Project: Solr
  Issue Type: Bug
Affects Versions: 6.0, master (7.0)
Reporter: Markus Jelsma
 Fix For: 6.1, master (7.0)


The qt parameter doesn't seem to work anymore. A call directly to the /terms 
handler returns actual terms, as expected. Using the select handler but with 
qt=terms returns nothing.

http://localhost:8983/solr/logs/select?qt=terms=true=compound_digest=100=index

{code}




  0
  0
  
terms
true
compound_digest
100
index
  




{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9214) Distrib=true causes NPE in SearchHandler

2016-06-16 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-9214:

Description: 
This doesn't seem to be a problem on another Solr 6.0 instance. That one is an 
actual SolrCloud.
{code}
http://localhost:8983/solr/logs/terms?qt=terms=true=compound_digest=100=index=true
{code}

causes

{code}
477967 INFO  (qtp97730845-13) [   x:logs] o.a.s.c.S.Request [logs]  
webapp=/solr path=/terms 
params={qt=terms=true=compound_digest=100=index}
 status=500 QTime=0
477967 ERROR (qtp97730845-13) [   x:logs] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:351)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
{code}
http://localhost:8983/solr/logs/terms?qt=terms=true=compound_digest=100=index=true
{code}

causes

{code}
477967 INFO  (qtp97730845-13) [   x:logs] o.a.s.c.S.Request [logs]  
webapp=/solr path=/terms 
params={qt=terms=true=compound_digest=100=index}
 status=500 QTime=0
477967 ERROR (qtp97730845-13) [   x:logs] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:351)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 

[jira] [Updated] (SOLR-9214) Distrib=true causes NPE in SearchHandler

2016-06-16 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-9214:

Summary: Distrib=true causes NPE in SearchHandler  (was: NPE in 
SearchHandler)

> Distrib=true causes NPE in SearchHandler
> 
>
> Key: SOLR-9214
> URL: https://issues.apache.org/jira/browse/SOLR-9214
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0, master (7.0)
>Reporter: Markus Jelsma
> Fix For: 6.1, master (7.0)
>
>
> {code}
> http://localhost:8983/solr/logs/terms?qt=terms=true=compound_digest=100=index=true
> {code}
> causes
> {code}
> 477967 INFO  (qtp97730845-13) [   x:logs] o.a.s.c.S.Request [logs]  
> webapp=/solr path=/terms 
> params={qt=terms=true=compound_digest=100=index}
>  status=500 QTime=0
> 477967 ERROR (qtp97730845-13) [   x:logs] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:351)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}
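
For what it's worth, a minimal SolrJ sketch (not from the issue) of the request that 
triggers the NPE above, assuming the same local non-SolrCloud core "logs" and field 
name as in the report:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class DistribTermsSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/logs");
    SolrQuery q = new SolrQuery();
    q.setRequestHandler("/terms");         // send the request directly to the /terms handler
    q.set("terms", true);
    q.set("terms.fl", "compound_digest");  // field name taken from the report
    q.set("distrib", true);                // adding distrib=true is what produces the 500/NPE
    System.out.println(client.query(q));   // expected to fail against this instance
    client.close();
  }
}
{code}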



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9214) NPE in SearchHandler

2016-06-16 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-9214:

Description: 
{code}
http://localhost:8983/solr/logs/terms?qt=terms=true=compound_digest=100=index=true
{code}

causes

{code}
477967 INFO  (qtp97730845-13) [   x:logs] o.a.s.c.S.Request [logs]  
webapp=/solr path=/terms 
params={qt=terms=true=compound_digest=100=index}
 status=500 QTime=0
477967 ERROR (qtp97730845-13) [   x:logs] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:351)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
{code}
http://localhost:8983/solr/logs/terms?qt=terms=true=compound_digest=100=index
{code}

causes

{code}
477967 INFO  (qtp97730845-13) [   x:logs] o.a.s.c.S.Request [logs]  
webapp=/solr path=/terms 
params={qt=terms=true=compound_digest=100=index}
 status=500 QTime=0
477967 ERROR (qtp97730845-13) [   x:logs] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:351)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 

[jira] [Comment Edited] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-06-16 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15333910#comment-15333910
 ] 

Tommaso Teofili edited comment on SOLR-8981 at 6/16/16 2:39 PM:


IIRC there's a related Solr issue about upgrading to Tika 1.12 that [~lewismc] was 
working on (progress slowed down by having to find out which dependencies, 
transitive or not, needed to be updated back then).


was (Author: teofili):
IIRC there's a related Solr issue about upgrading to Tika 1.12 [~lewismc] was 
working on (progress slowed down by having to hand scraping which, transitive 
or not, dependencies needed to be updated or not back then).

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


