[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_25) - Build # 4309 - Still Failing!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4309/
Java: 64bit/jdk1.8.0_25 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling

Error Message:
68

Stack Trace:
java.lang.ArrayIndexOutOfBoundsException: 68
  at __randomizedtesting.SeedInfo.seed([A24E6A77FBB20DBC:4856036A4AE5FA0B]:0)
  at org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling(TestRandomSamplingFacetsCollector.java:133)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:483)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 6248 lines...]
   [junit4] Suite: org.apache.lucene.facet.TestRandomSamplingFacetsCollector
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestRandomSamplingFacetsCollector -Dtests.method=testRandomSampling -Dtests.seed=A24E6A77FBB20DBC -Dtests.slow=true -Dtests.locale=ar_EG -Dtests.timezone=America/Anchorage -Dtests.asserts=true -Dtests.file.encoding=Cp1252
   [junit4] ERROR   0.92s | TestRandomSamplingFacetsCollector.testRandomSampling <<<
   [junit4]    > Throwable #1: java.lang.ArrayIndexOutOfBoundsException: 68
   [junit4]    >

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_25) - Build # 4412 - Still Failing!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4412/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  null for path [response, params, y, c] full output {   "responseHeader":{ "status":0, "QTime":0},   "response":{ "znodeVersion":1, "params":{   "x":{ "a":"A val", "b":"B val", "":{"v":0}},   "y":{ "c":"CY val", "b":"BY val", "":{"v":0}

Stack Trace:
java.lang.AssertionError: Could not get expected value  null for path [response, params, y, c] full output {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "response":{
    "znodeVersion":1,
    "params":{
      "x":{
        "a":"A val",
        "b":"B val",
        "":{"v":0}},
      "y":{
        "c":"CY val",
        "b":"BY val",
        "":{"v":0}
  at __randomizedtesting.SeedInfo.seed([597A5661C5E44563:D89CD879B2BB255F]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.assertTrue(Assert.java:43)
  at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
  at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:270)
  at org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:70)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
  at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:483)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtes

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b44) - Build # 11587 - Still Failing!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11587/
Java: 32bit/jdk1.9.0-ea-b44 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=1485, name=Thread-569, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=1485, name=Thread-569, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:43020: Could not find collection : awholynewstresscollection_collection1_0
  at __randomizedtesting.SeedInfo.seed([C0C3B2CC7F69C84]:0)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:558)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:214)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:210)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:365)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:320)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
  at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:928)




Build Log:
[...truncated 8944 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest C0C3B2CC7F69C84-001/init-core-data-001
   [junit4]   2> 257364 T1234 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 257365 T1234 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /
   [junit4]   2> 257370 T1234 oas.SolrTestCaseJ4.setUp ###Starting testDistribSearch
   [junit4]   2> 257371 T1234 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 257371 T1235 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 257471 T1234 oasc.ZkTestServer.run start zk server on port:48478
   [junit4]   2> 257472 T1234 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 257473 T1234 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 257476 T1242 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@18e7ffe name:ZooKeeperConnection Watcher:127.0.0.1:48478 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 257477 T1234 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 257477 T1234 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 257478 T1234 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 257482 T1234 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 257483 T1234 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 257485 T1245 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@16c37b1 name:ZooKeeperConnection Watcher:127.0.0.1:48478/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 257487 T1234 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 257488 T1234 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 257488 T1234 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 257491 T1234 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 257493 T1234 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 257495 T1234 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 257497 T1234 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 257497 T1234 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 257500 T1234 oasc.AbstractZkTestCase.putConfig put /mn

[jira] [Commented] (SOLR-6902) Use JUnit rules instead of inheritance with distributed Solr tests to allow for multiple tests within the same class

2015-01-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274770#comment-14274770
 ] 

Noble Paul commented on SOLR-6902:
--

[~erickerickson] This is long overdue, and it is very annoying to have all of 
these tests running from a single method. But as Mark mentioned, we are very 
close to the 5.0 RC, so +1 for committing this after the 5.0 branch.

> Use JUnit rules instead of inheritance with distributed Solr tests to allow 
> for multiple tests within the same class
> -
>
> Key: SOLR-6902
> URL: https://issues.apache.org/jira/browse/SOLR-6902
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-6902.patch, SOLR-6902.patch
>
>
> Finally got annoyed enough with too many things being clubbed into one test 
> method in all distributed Solr tests (anything inheriting from 
> {{BaseDistributedSearchTestCase}} and currently implementing {{doTest}}).
> This just lays the groundwork for allowing multiple test methods within the 
> same class; it doesn't split tests yet or flatten the inheritance hierarchy 
> (where it is abused for doing multiple tests), since it already touches a 
> lot of files by itself. For that reason, the sooner this is picked up the 
> better.
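To make the proposed direction concrete, here is a minimal sketch of the 
rule-based pattern; the class and rule bodies are illustrative, not taken 
from the attached patch:

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.ExternalResource;

    // Instead of every test overriding a single doTest() inherited from
    // BaseDistributedSearchTestCase, the shared setup/teardown moves into a
    // JUnit rule, and each scenario becomes its own @Test method.
    public class MyDistributedTest {

      // Hypothetical rule that starts and stops the test cluster per test.
      @Rule
      public final ExternalResource cluster = new ExternalResource() {
        @Override protected void before() throws Throwable { /* start shards */ }
        @Override protected void after() { /* stop shards */ }
      };

      @Test
      public void testQueryA() { /* first scenario, formerly part of doTest() */ }

      @Test
      public void testQueryB() { /* second scenario, now isolated */ }
    }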



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2473 - Still Failing

2015-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2473/

5 tests failed.
FAILED:  org.apache.solr.cloud.SaslZkACLProviderTest.testSaslZkACLProvider

Error Message:
KeeperErrorCode = ConnectionLoss for /solr

Stack Trace:
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /solr
  at __randomizedtesting.SeedInfo.seed([67000182AA35AF0E:63F4575A33710C00]:0)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:293)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:290)
  at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
  at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:290)
  at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:485)
  at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:402)
  at org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:80)
  at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:861)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleIgnoreTestS

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1881 - Failure!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1881/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  A val for path [params, a] full output null

Stack Trace:
java.lang.AssertionError: Could not get expected value  A val for path [params, a] full output null
  at __randomizedtesting.SeedInfo.seed([3B7851A6C8283EE0:BA9EDFBEBF775EDC]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.assertTrue(Assert.java:43)
  at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
  at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:137)
  at org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:70)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
  at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
  at com.carrotsearch.randomizedtesting.rules.Statemen

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_25) - Build # 11586 - Failure!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11586/
Java: 64bit/jdk1.8.0_25 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.component.TestExpandComponent.testNumericExpand

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
  at __randomizedtesting.SeedInfo.seed([46608B9E9B3B8D00:3E04C1AF38080F2E]:0)
  at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:711)
  at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:678)
  at org.apache.solr.handler.component.TestExpandComponent._testExpand(TestExpandComponent.java:118)
  at org.apache.solr.handler.component.TestExpandComponent.testNumericExpand(TestExpandComponent.java:78)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:483)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at java.lang.T

[jira] [Comment Edited] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-12 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274656#comment-14274656
 ] 

Littlestar edited comment on LUCENE-6170 at 1/13/15 3:46 AM:
-

I shared the result of MultiDocValues.getBinaryValues / getSortedValues across 
several threads, and that triggered the problem.
When I changed the code so that each thread calls 
MultiDocValues.getBinaryValues itself, it works, but it is very slow.
Multithreading may be the cause of the problem.

How should BinaryDocValues be used from multiple threads? Thanks.

  /** Returns a BinaryDocValues for a reader's docvalues (potentially merging 
on-the-fly)
   * 
   * This is a slow way to access binary values. Instead, access them 
per-segment
   * with {@link AtomicReader#getBinaryDocValues(String)}
   *   
   */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were 
indexed for this field. The returned instance should only be used by a single 
thread. 


was (Author: cnstar9988):
I shared the result of MultiDocValues.getBinaryValues across several threads, 
and that triggered the problem.
When I changed the code so that each thread calls 
MultiDocValues.getBinaryValues itself, it works, but it is very slow.
Multithreading may be the cause of the problem.

How should BinaryDocValues be used from multiple threads? Thanks.

  /** Returns a BinaryDocValues for a reader's docvalues (potentially merging 
on-the-fly)
   * 
   * This is a slow way to access binary values. Instead, access them 
per-segment
   * with {@link AtomicReader#getBinaryDocValues(String)}
   *   
   */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were 
indexed for this field. The returned instance should only be used by a single 
thread. 

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6170) MultiDocValues.getSortedValues cause IndexOutOfBoundsException

2015-01-12 Thread Littlestar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274656#comment-14274656
 ] 

Littlestar commented on LUCENE-6170:


I shared the MultiDocValues.getBinaryValues result across several threads, and 
that triggered the problem.
When I changed the code so that each thread calls 
MultiDocValues.getBinaryValues itself, it works.
Multithreading may be the cause of the problem.

How should BinaryDocValues be used from multiple threads? Thanks.

  /** Returns a BinaryDocValues for a reader's docvalues (potentially merging 
on-the-fly)
   * 
   * This is a slow way to access binary values. Instead, access them 
per-segment
   * with {@link AtomicReader#getBinaryDocValues(String)}
   *   
   */

AtomicReader#getBinaryDocValues(String)
Returns BinaryDocValues for this field, or null if no BinaryDocValues were 
indexed for this field. The returned instance should only be used by a single 
thread. 

> MultiDocValues.getSortedValues cause IndexOutOfBoundsException
> --
>
> Key: LUCENE-6170
> URL: https://issues.apache.org/jira/browse/LUCENE-6170
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 4.10.1
>Reporter: Littlestar
>
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.nio.Buffer.checkBounds(Buffer.java:567)
>   at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:265)
>   at org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:95)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.readTerm(Lucene410DocValuesProducer.java:909)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues$CompressedBinaryTermsEnum.seekExact(Lucene410DocValuesProducer.java:1017)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$CompressedBinaryDocValues.get(Lucene410DocValuesProducer.java:815)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$LongBinaryDocValues.get(Lucene410DocValuesProducer.java:775)
>   at org.apache.lucene.codecs.lucene410.Lucene410DocValuesProducer$6.lookupOrd(Lucene410DocValuesProducer.java:513)
>   at org.apache.lucene.index.MultiDocValues$MultiSortedDocValues.lookupOrd(MultiDocValues.java:670)
>   at org.apache.lucene.index.SortedDocValues.get(SortedDocValues.java:69)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: how to highlight the whole search phrase only?

2015-01-12 Thread david.w.smi...@gmail.com
Hi Meena,
Please use the “solr-user” list for user questions. This is the list for
development of Lucene & Solr.

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley

On Mon, Jan 12, 2015 at 6:26 PM, meena.sri...@mathworks.com <
meena.sri...@mathworks.com> wrote:

> Highlighting does not highlight the whole phrase; instead, each word gets
> highlighted individually.
> I tried all the suggestions that were given, with no luck.
> These are my settings specific to phrase highlighting:
> These are my special setting for phrase highlighting
> hl.usePhraseHighlighter=true
> hl.q="query"
>
>
>
> http://localhost.mathworks.com:8983/solr/db/select?q=syndrome%3A%22Override+ignored+for+property%22&rows=1&fl=syndrome_id&wt=json&indent=true&hl=true&hl.simple.pre=%3Cem%3E&hl.simple.post=%3C%2Fem%3E&hl.usePhraseHighlighter=true&hl.q=%22Override+ignored+for+property%22&hl.fragsize=1000
>
>
> This is from my schema.xml
> 
>
> Should I do something special at the indexing stage itself to make this work?
>
> Thanks for your time.
>
> Meena
>
>
>
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/how-to-highlight-the-whole-search-phrase-only-tp4179078.html
> Sent from the Lucene - Java Developer mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

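For readability, the request URL quoted above decodes to these parameters 
(host, core, and query values exactly as in the original message):

    q=syndrome:"Override ignored for property"
    rows=1
    fl=syndrome_id
    wt=json
    indent=true
    hl=true
    hl.simple.pre=<em>
    hl.simple.post=</em>
    hl.usePhraseHighlighter=true
    hl.q="Override ignored for property"
    hl.fragsize=1000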

[jira] [Resolved] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-6915.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

Thanks for the review Mark, committed to 5.0 and trunk.

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6915.patch, SOLR-6915.patch
>
>
> We should provide a ZkACLProvider that requires SASL authentication. This 
> provider will be useful for administration in a kerberos environment. In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with 
> zookeeper via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).
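As an illustration of the "solr owns all the znodes" model described above, a 
minimal sketch of a SASL-only ACL provider built on the ZooKeeper ACL API; the 
provider method shape follows my reading of the 5.x ZkACLProvider from memory, 
and the "solr" principal name is an assumption:

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.data.ACL;
    import org.apache.zookeeper.data.Id;

    // Sketch: every znode Solr creates is fully owned by the SASL-authenticated
    // "solr" principal; unauthenticated clients get no access at all.
    public class SaslOnlyZkACLProvider {
      public List<ACL> getACLsToAdd(String zNodePath) {
        // "sasl" scheme: the id is the authenticated kerberos principal.
        return Collections.singletonList(
            new ACL(ZooDefs.Perms.ALL, new Id("sasl", "solr")));
      }
    }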



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274605#comment-14274605
 ] 

ASF subversion and git services commented on SOLR-6915:
---

Commit 1651266 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651266 ]

SOLR-6915: SaslZkACLProvider and Kerberos Test Using MiniKdc

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6915.patch, SOLR-6915.patch
>
>
> We should provide a ZkACLProvider that requires SASL authentication. This 
> provider will be useful for administration in a kerberos environment. In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with 
> zookeeper via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).
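For reference, a minimal sketch of standing up Hadoop's MiniKdc (HADOOP-9848) 
in a test; the work directory and principal names are illustrative:

    import java.io.File;
    import java.util.Properties;
    import org.apache.hadoop.minikdc.MiniKdc;

    // Spin up an in-process KDC, mint a keytab for a service principal, and
    // point the JVM's krb5 configuration at it for the duration of a test.
    public class MiniKdcSketch {
      public static void main(String[] args) throws Exception {
        File workDir = new File("target/kdc");           // illustrative path
        Properties conf = MiniKdc.createConf();
        MiniKdc kdc = new MiniKdc(conf, workDir);
        kdc.start();

        File keytab = new File(workDir, "solr.keytab");
        kdc.createPrincipal(keytab, "solr/localhost");   // illustrative principal

        System.setProperty("java.security.krb5.conf",
            kdc.getKrb5conf().getAbsolutePath());
        try {
          // ... run the kerberized test against the mini KDC ...
        } finally {
          kdc.stop();
        }
      }
    }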



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274590#comment-14274590
 ] 

ASF subversion and git services commented on SOLR-6915:
---

Commit 1651264 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1651264 ]

SOLR-6915: SaslZkACLProvider and Kerberos Test Using MiniKdc

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6915.patch, SOLR-6915.patch
>
>
> We should provide a ZkACLProvider that requires SASL authentication. This 
> provider will be useful for administration in a kerberos environment. In 
> such an environment, the administrator wants solr to authenticate to 
> zookeeper using SASL, since this is the only way to authenticate with 
> zookeeper via kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is to have a model where solr itself owns all the znodes and all actions that 
> require changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing kerberos integration in unit tests.  
> Internally, I've been testing using kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2472 - Still Failing

2015-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2472/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:14095/_/rn/c8n_1x2_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:14095/_/rn/c8n_1x2_shard1_replica1
  at __randomizedtesting.SeedInfo.seed([A11EF02AC2810BCE:20F87E32B5DE6BF2]:0)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
  at org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
  at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
  at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRule

[jira] [Resolved] (SOLR-6248) MoreLikeThis Query Parser

2015-01-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6248.

Resolution: Fixed

> MoreLikeThis Query Parser
> -
>
> Key: SOLR-6248
> URL: https://issues.apache.org/jira/browse/SOLR-6248
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6248.patch, SOLR-6248.patch, SOLR-6248.patch, 
> SOLR-6248.patch, SOLR-6248.patch, SOLR-6248.patch
>
>
> The MLT component doesn't let people highlight/paginate, and the handler comes 
> with the cost of maintaining another piece in the config. Also, any changes to 
> the default /select handler (number of results to be fetched, etc.) need to be 
> copied/synced with this handler too.
> Having an MLT QParser would let users get back docs based on a query for them 
> to paginate, highlight, etc. It would also give them the flexibility to use 
> this anywhere, i.e. in q, fq, bq, etc.
> A bit of history about MLT (thanks to Hoss):
> The MLT handler pre-dates the existence of QParsers and was meant to take an 
> arbitrary query as input, find docs that match that query, club them together 
> to find interesting terms, and then use those terms as if they were the main 
> query to generate a main result set.
> This result would then be used as the set to facet, highlight, etc.
> The flow: Query -> DocList(m) -> Bag (terms) -> Query -> DocList(y)
> The MLT component, on the other hand, served a very different purpose: 
> augmenting the main result set. It is used to get similar docs for each of 
> the docs in the main result set.
> DocSet(n) -> n * Bag (terms) -> n * (Query) -> n * DocList(m)
> The new approach:
> All of this can be done better and cleaner (and makes more sense too) using 
> an MLT QParser.
> An important case to handle here is where the user doesn't have 
> TermVectors; in that case it falls back to what happens right now, i.e. 
> parsing stored fields.
> Also, in case the user doesn't have a field (to be used for MLT) indexed, the 
> field would need to be a TextField with an index analyzer defined. This 
> analyzer will then be used to extract terms for MLT.
> In SolrCloud mode, '/get-termvectors' can be used after looking at the 
> schema (if TermVectors are enabled for the field). If not, a /get call 
> can be used to fetch the field and parse it.
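
For reference, a minimal sketch of invoking the resulting parser, assuming 
illustrative field names and a seed document whose uniqueKey is 42 (exact 
supported local params may differ):

{code}
q={!mlt qf=title,description}42
{code}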



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6496) LBHttpSolrClient should stop server retries after the timeAllowed threshold is met

2015-01-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6496.

   Resolution: Fixed
Fix Version/s: Trunk

> LBHttpSolrClient should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, retries should stop and the exception should be allowed to 
> bubble up, letting the request either error out or return partial results 
> per the shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)
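
The fix boils down to checking a time budget before each retry. A hypothetical 
Java sketch of that pattern (names are illustrative, not the actual 
LBHttpSolrClient code):

{code}
import java.util.List;
import java.util.concurrent.TimeUnit;

public class TimeBoundedRetry {
  interface Server { String query(String q) throws Exception; }

  static String queryWithBudget(List<Server> servers, String q, long timeAllowedMs)
      throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeAllowedMs);
    Exception last = null;
    for (Server s : servers) {
      if (System.nanoTime() >= deadline) break; // budget spent: stop retrying
      try {
        return s.query(q);                      // first success wins
      } catch (Exception e) {
        last = e;                               // remember it, try the next server
      }
    }
    throw last != null ? last : new Exception("timeAllowed exhausted before any attempt");
  }
}
{code}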



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6496) LBHttpSolrClient should stop server retries after the timeAllowed threshold is met

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274429#comment-14274429
 ] 

ASF subversion and git services commented on SOLR-6496:
---

Commit 1651237 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651237 ]

SOLR-6496: LBHttpSolrClient stops retrying after the timeAllowed threshold is 
met (merge from trunk)

> LBHttpSolrClient should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, retries should stop and the exception should be allowed to 
> bubble up, letting the request either error out or return partial results 
> per the shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6496) LBHttpSolrClient should stop server retries after the timeAllowed threshold is met

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274427#comment-14274427
 ] 

ASF subversion and git services commented on SOLR-6496:
---

Commit 1651236 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1651236 ]

SOLR-6496: LBHttpSolrClient stops retrying after the timeAllowed threshold is 
met

> LBHttpSolrClient should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, retries should stop and the exception should be allowed to 
> bubble up, letting the request either error out or return partial results 
> per the shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6496) LBHttpSolrClient should stop server retries after the timeAllowed threshold is met

2015-01-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6496:
---
Summary: LBHttpSolrClient should stop server retries after the timeAllowed 
threshold is met  (was: LBHttpSolrServer should stop server retries after the 
timeAllowed threshold is met)

> LBHttpSolrClient should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, retries should stop and the exception should be allowed to 
> bubble up, letting the request either error out or return partial results 
> per the shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6933) bin/solr script should just have a single create action that creates a core or collection depending on the mode solr is running in

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274415#comment-14274415
 ] 

ASF subversion and git services commented on SOLR-6933:
---

Commit 1651233 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651233 ]

SOLR-6952: bin/solr create action should copy configset directory instead of 
reusing an existing configset in ZooKeeper by default; commit also includes fix 
for SOLR-6933 - create alias

> bin/solr script should just have a single create action that creates a core 
> or collection depending on the mode solr is running in
> --
>
> Key: SOLR-6933
> URL: https://issues.apache.org/jira/browse/SOLR-6933
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>
> Instead of create_core and create_collection, just have a single create action 
> that creates a core or a collection based on which mode Solr is running in.
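
A minimal usage sketch (collection/core name illustrative):

{code}
# creates a collection when Solr is running in SolrCloud mode,
# a core when it is running standalone
bin/solr create -c foo
{code}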



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274414#comment-14274414
 ] 

ASF subversion and git services commented on SOLR-6952:
---

Commit 1651233 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651233 ]

SOLR-6952: bin/solr create action should copy configset directory instead of 
reusing an existing configset in ZooKeeper by default; commit also includes fix 
for SOLR-6933 - create alias

> Re-using data-driven configsets by default is not helpful
> -
>
> Key: SOLR-6952
> URL: https://issues.apache.org/jira/browse/SOLR-6952
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0
>Reporter: Grant Ingersoll
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6952.patch, SOLR-6952.patch
>
>
> When creating collections (I'm using the bin/solr scripts), I think we should 
> automatically copy configsets, especially when running in "getting started 
> mode" or data-driven mode.
> I did the following:
> {code}
> bin/solr create_collection -n foo
> bin/post foo some_data.csv
> {code}
> I then created a second collection with the intention of sending in the same 
> data, but this time run through a Python script that changed a value from an 
> int to a string (since it was an enumerated type), and was surprised to see 
> that I got:
> {quote}
> Caused by: java.lang.NumberFormatException: For input string: "NA"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:441)
> {quote}
> for my new version of the data that passes in a string instead of an int, even 
> though this new collection had only ever seen strings for that field.
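
For reference, a sketch of the behavior this issue led to: each create copies 
the configset instead of sharing it, so one collection's field-type guesses 
can't leak into another (names illustrative):

{code}
# each collection gets its own copy of the named configset
bin/solr create -c ints -d data_driven_schema_configs
bin/solr create -c strings -d data_driven_schema_configs
{code}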



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274397#comment-14274397
 ] 

ASF subversion and git services commented on SOLR-6952:
---

Commit 1651231 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1651231 ]

SOLR-6952: bin/solr create action should copy configset directory instead of 
reusing an existing configset in ZooKeeper by default; commit also includes fix 
for SOLR-6933 - create alias

> Re-using data-driven configsets by default is not helpful
> -
>
> Key: SOLR-6952
> URL: https://issues.apache.org/jira/browse/SOLR-6952
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0
>Reporter: Grant Ingersoll
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6952.patch, SOLR-6952.patch
>
>
> When creating collections (I'm using the bin/solr scripts), I think we should 
> automatically copy configsets, especially when running in "getting started 
> mode" or data-driven mode.
> I did the following:
> {code}
> bin/solr create_collection -n foo
> bin/post foo some_data.csv
> {code}
> I then created a second collection with the intention of sending in the same 
> data, but this time run through a Python script that changed a value from an 
> int to a string (since it was an enumerated type), and was surprised to see 
> that I got:
> {quote}
> Caused by: java.lang.NumberFormatException: For input string: "NA"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:441)
> {quote}
> for my new version of the data that passes in a string instead of an int, even 
> though this new collection had only ever seen strings for that field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6933) bin/solr script should just have a single create action that creates a core or collection depending on the mode solr is running in

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274398#comment-14274398
 ] 

ASF subversion and git services commented on SOLR-6933:
---

Commit 1651231 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1651231 ]

SOLR-6952: bin/solr create action should copy configset directory instead of 
reusing an existing configset in ZooKeeper by default; commit also includes fix 
for SOLR-6933 - create alias

> bin/solr script should just have a single create action that creates a core 
> or collection depending on the mode solr is running in
> --
>
> Key: SOLR-6933
> URL: https://issues.apache.org/jira/browse/SOLR-6933
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>
> Instead of create_core and create_collection, just have a single create action 
> that creates a core or a collection based on which mode Solr is running in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



how to highlight the whole search phrase only?

2015-01-12 Thread meena.sri...@mathworks.com
Highlighting does not highlight the whole phrase; instead each word gets
highlighted.
I tried all the suggestions that were given, with no luck.
These are my special settings for phrase highlighting:
hl.usePhraseHighlighter=true
hl.q="query"


http://localhost.mathworks.com:8983/solr/db/select?q=syndrome%3A%22Override+ignored+for+property%22&rows=1&fl=syndrome_id&wt=json&indent=true&hl=true&hl.simple.pre=%3Cem%3E&hl.simple.post=%3C%2Fem%3E&hl.usePhraseHighlighter=true&hl.q=%22Override+ignored+for+property%22&hl.fragsize=1000


This is from my schema.xml


Should I add something special at the indexing stage itself to make this work?

Thanks for your time.

Meena
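
One approach commonly suggested for this is the FastVectorHighlighter, which 
highlights a matched phrase as a single unit; it requires term vectors with 
positions and offsets on the field. A sketch, with an assumed field name and 
type:

{code}
<!-- schema.xml -->
<field name="syndrome" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
{code}

With that in place, adding hl.useFastVectorHighlighter=true to the request 
(keeping hl.usePhraseHighlighter=true and the quoted hl.q) should tag the 
whole phrase. Existing documents need to be reindexed for the term vectors 
to appear.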

--
View this message in context: 
http://lucene.472066.n3.nabble.com/how-to-highlight-the-whole-search-phrase-only-tp4179078.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b20) - Build # 11584 - Still Failing!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11584/
Java: 32bit/jdk1.8.0_40-ea-b20 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
some core start times did not change on reload

Stack Trace:
java.lang.AssertionError: some core start times did not change on reload
at 
__randomizedtesting.SeedInfo.seed([E8D7A4B972D8047D:69312AA105876441]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:766)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:201)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLe

Re: Solr EventListener where to add the implementing classes

2015-01-12 Thread meena.sri...@mathworks.com
Thanks for your reply. I tried adding the plugin and referencing it in the
solrconfig.xml file, with no luck.
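
For reference, update event listeners are normally registered inside the 
<updateHandler> section of solrconfig.xml; a sketch with a hypothetical 
listener class:

{code}
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- fired after every commit on this node -->
  <listener event="postCommit" class="com.example.MyEventListener"/>
</updateHandler>
{code}

The jar containing the class also has to be on Solr's classpath, e.g. via a 
<lib> directive in solrconfig.xml.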



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-EventListerner-where-to-add-the-implementing-classes-tp4178172p4179076.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException

2015-01-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6640:
---
Priority: Blocker  (was: Major)

Changing this to a Blocker as I think it needs to go in for 5.0.

> ChaosMonkeySafeLeaderTest failure with CorruptIndexException
> 
>
> Key: SOLR-6640
> URL: https://issues.apache.org/jira/browse/SOLR-6640
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.0
>Reporter: Shalin Shekhar Mangar
>Priority: Blocker
> Fix For: 5.0
>
> Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
> SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
> SOLR-6640_new_index_dir.patch
>
>
> Test failure found on jenkins:
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
> {code}
> 1 tests failed.
> REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
> Error Message:
> shard2 is not consistent.  Got 62 from 
> http://127.0.0.1:57436/collection1lastClient and got 24 from 
> http://127.0.0.1:53065/collection1
> Stack Trace:
> java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
> http://127.0.0.1:57436/collection1lastClient and got 24 from 
> http://127.0.0.1:53065/collection1
> at 
> __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
> at 
> org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
> at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
> {code}
> Cause of inconsistency is:
> {code}
> Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
> expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
> (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv")))
>[junit4]   2>  at 
> org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
>[junit4]   2>  at 
> org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
>[junit4]   2>  at 
> org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
>[junit4]   2>  at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:102)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-6178.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

> don't score MUST_NOT clauses with BooleanScorer
> ---
>
> Key: LUCENE-6178
> URL: https://issues.apache.org/jira/browse/LUCENE-6178
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6178.patch
>
>
> It's similar to the conjunction case: we should just use BS2 since it has 
> advance(). Even in the dense case I think it's currently better since it 
> avoids calling score() in cases where BS1 calls it redundantly.
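
For context, a MUST_NOT clause only excludes documents and never contributes 
to the score, which is why scoring it is wasted work. A minimal sketch with 
illustrative field and terms, using the pre-builder BooleanQuery API of the 
time:

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class MustNotExample {
  public static void main(String[] args) {
    BooleanQuery q = new BooleanQuery();
    q.add(new TermQuery(new Term("body", "lucene")), Occur.SHOULD);       // scored
    q.add(new TermQuery(new Term("body", "deprecated")), Occur.MUST_NOT); // filters only
    System.out.println(q);
  }
}
{code}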



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274375#comment-14274375
 ] 

ASF subversion and git services commented on LUCENE-6178:
-

Commit 1651227 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651227 ]

LUCENE-6178: don't score MUST_NOT clauses with BooleanScorer

> don't score MUST_NOT clauses with BooleanScorer
> ---
>
> Key: LUCENE-6178
> URL: https://issues.apache.org/jira/browse/LUCENE-6178
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6178.patch
>
>
> It's similar to the conjunction case: we should just use BS2 since it has 
> advance(). Even in the dense case I think it's currently better since it 
> avoids calling score() in cases where BS1 calls it redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_72) - Build # 4308 - Still Failing!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4308/
Java: 64bit/jdk1.7.0_72 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([A371F8E1B5CB9435]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([A371F8E1B5CB9435]:0)




Build Log:
[...truncated 9732 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.ChaosMonkeySafeLeaderTest
 A371F8E1B5CB9435-001\init-core-data-001
   [junit4]   2> 3820505 T11833 oas.SolrTestCaseJ4.buildSSLConfig Randomized 
ssl (false) and clientAuth (false)
   [junit4]   2> 3820505 T11833 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /s_t/yd
   [junit4]   2> 3820515 T11833 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 3820517 T11833 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3820518 T11834 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 3820614 T11833 oasc.ZkTestServer.run start zk server on 
port:52428
   [junit4]   2> 3820614 T11833 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 3820617 T11833 oascc.ConnectionManager.waitForConnected 
Waiting for client to connect to ZooKeeper
   [junit4]   2> 3820623 T11841 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@15f02961 
name:ZooKeeperConnection Watcher:127.0.0.1:52428 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 3820624 T11833 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 3820624 T11833 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 3820624 T11833 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 3820630 T11833 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 3820633 T11833 oascc.ConnectionManager.waitForConnected 
Waiting for client to connect to ZooKeeper
   [junit4]   2> 3820636 T11844 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@3fcb32ab 
name:ZooKeeperConnection Watcher:127.0.0.1:52428/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 3820636 T11833 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 3820636 T11833 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 3820636 T11833 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 3820641 T11833 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 3820645 T11833 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 3820649 T11833 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 3820653 T11833 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 3820653 T11833 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 3820660 T11833 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 3820660 T11833 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 3820666 T11833 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 3820666 T11833 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 3820670 T11833 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\test-files\solr\collection1\conf\stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 3820670 T11833 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 3820675 T11833 oasc.

[jira] [Commented] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274352#comment-14274352
 ] 

ASF subversion and git services commented on LUCENE-6178:
-

Commit 1651224 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1651224 ]

LUCENE-6178: don't score MUST_NOT clauses with BooleanScorer

> don't score MUST_NOT clauses with BooleanScorer
> ---
>
> Key: LUCENE-6178
> URL: https://issues.apache.org/jira/browse/LUCENE-6178
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6178.patch
>
>
> It's similar to the conjunction case: we should just use BS2 since it has 
> advance(). Even in the dense case I think it's currently better since it 
> avoids calling score() in cases where BS1 calls it redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-6923.

Resolution: Fixed

> AutoAddReplicas should consult live nodes also to see if a state has changed
> 
>
> Key: SOLR-6923
> URL: https://issues.apache.org/jira/browse/SOLR-6923
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6923.patch
>
>
> - I did the following 
> {code}
> ./solr start -e cloud -noprompt
> kill -9  //Not the node which is running ZK
> {code}
> - /live_nodes reflects that the node is gone.
> - This is the only message which gets logged on the node1 server after 
> killing node2
> {code}
> 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
> org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x14ac40f26660001, likely client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> - The graph shows node2 in the 'Gone' state
> - clusterstate.json keeps showing the replica as 'active'
> {code}
> {"collection1":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "core_node1":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8983_solr",
> "base_url":"http://169.254.113.194:8983/solr";,
> "leader":"true"},
>   "core_node2":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8984_solr",
> "base_url":"http://169.254.113.194:8984/solr",
> "maxShardsPerNode":"1",
> "router":{"name":"compositeId"},
> "replicationFactor":"1",
> "autoAddReplicas":"false",
> "autoCreated":"true"}}
> {code}
> One immediate problem I can see is that AutoAddReplicas doesn't work since 
> the clusterstate.json never changes. There might be more features which are 
> affected by this.
> On first thought I think we can handle this: the shard leader could listen 
> to changes on /live_nodes and, if it had replicas on the node that went 
> away, mark them as 'down' in the clusterstate.json.
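
For illustration, the proposed listening amounts to a self-re-registering 
child watch on /live_nodes; a hypothetical sketch against the raw ZooKeeper 
API, not actual Solr code:

{code}
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class LiveNodesWatch {
  // ZooKeeper watches are one-shot, so the watcher re-registers itself.
  public static void watchLiveNodes(final ZooKeeper zk) throws Exception {
    List<String> live = zk.getChildren("/live_nodes", new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        try {
          watchLiveNodes(zk); // something changed: re-read and re-register
        } catch (Exception ignored) { /* best-effort sketch */ }
      }
    });
    // A leader would diff 'live' against its replicas' node_name values here
    // and mark replicas on vanished nodes as 'down'.
    System.out.println("live nodes: " + live);
  }
}
{code}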



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-12 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-6923:
---
Fix Version/s: Trunk
   5.0

> AutoAddReplicas should consult live nodes also to see if a state has changed
> 
>
> Key: SOLR-6923
> URL: https://issues.apache.org/jira/browse/SOLR-6923
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6923.patch
>
>
> - I did the following 
> {code}
> ./solr start -e cloud -noprompt
> kill -9  //Not the node which is running ZK
> {code}
> - /live_nodes reflects that the node is gone.
> - This is the only message which gets logged on the node1 server after 
> killing node2
> {code}
> 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
> org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x14ac40f26660001, likely client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> - The graph shows node2 in the 'Gone' state
> - clusterstate.json keeps showing the replica as 'active'
> {code}
> {"collection1":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "core_node1":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8983_solr",
> "base_url":"http://169.254.113.194:8983/solr";,
> "leader":"true"},
>   "core_node2":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8984_solr",
> "base_url":"http://169.254.113.194:8984/solr",
> "maxShardsPerNode":"1",
> "router":{"name":"compositeId"},
> "replicationFactor":"1",
> "autoAddReplicas":"false",
> "autoCreated":"true"}}
> {code}
> One immediate problem I can see is that AutoAddReplicas doesn't work since 
> the clusterstate.json never changes. There might be more features which are 
> affected by this.
> On first thought I think we can handle this: the shard leader could listen 
> to changes on /live_nodes and, if it had replicas on the node that went 
> away, mark them as 'down' in the clusterstate.json.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274345#comment-14274345
 ] 

ASF subversion and git services commented on SOLR-6923:
---

Commit 1651223 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651223 ]

SOLR-6923: AutoAddReplicas also consults live_nodes to see if a state change 
has happened (merge from trunk)

> AutoAddReplicas should consult live nodes also to see if a state has changed
> 
>
> Key: SOLR-6923
> URL: https://issues.apache.org/jira/browse/SOLR-6923
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-6923.patch
>
>
> - I did the following 
> {code}
> ./solr start -e cloud -noprompt
> kill -9  //Not the node which is running ZK
> {code}
> - /live_nodes reflects that the node is gone.
> - This is the only message which gets logged on the node1 server after 
> killing node2
> {code}
> 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
> org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x14ac40f26660001, likely client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> - The graph shows node2 in the 'Gone' state
> - clusterstate.json keeps showing the replica as 'active'
> {code}
> {"collection1":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "core_node1":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8983_solr",
> "base_url":"http://169.254.113.194:8983/solr";,
> "leader":"true"},
>   "core_node2":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8984_solr",
> "base_url":"http://169.254.113.194:8984/solr",
> "maxShardsPerNode":"1",
> "router":{"name":"compositeId"},
> "replicationFactor":"1",
> "autoAddReplicas":"false",
> "autoCreated":"true"}}
> {code}
> One immediate problem I can see is that AutoAddReplicas doesn't work since 
> the clusterstate.json never changes. There might be more features which are 
> affected by this.
> On first thought I think we can handle this: the shard leader could listen 
> to changes on /live_nodes and, if it had replicas on the node that went 
> away, mark them as 'down' in the clusterstate.json.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274340#comment-14274340
 ] 

ASF subversion and git services commented on SOLR-6923:
---

Commit 1651221 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1651221 ]

SOLR-6923: AutoAddReplicas also consults live_nodes to see if a state change 
has happened

> AutoAddReplicas should consult live nodes also to see if a state has changed
> 
>
> Key: SOLR-6923
> URL: https://issues.apache.org/jira/browse/SOLR-6923
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-6923.patch
>
>
> - I did the following 
> {code}
> ./solr start -e cloud -noprompt
> kill -9  //Not the node which is running ZK
> {code}
> - /live_nodes reflects that the node is gone.
> - This is the only message which gets logged on the node1 server after 
> killing node2
> {code}
> 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
> org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x14ac40f26660001, likely client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> - The graph shows node2 in the 'Gone' state
> - clusterstate.json keeps showing the replica as 'active'
> {code}
> {"collection1":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "core_node1":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8983_solr",
> "base_url":"http://169.254.113.194:8983/solr";,
> "leader":"true"},
>   "core_node2":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8984_solr",
> "base_url":"http://169.254.113.194:8984/solr",
> "maxShardsPerNode":"1",
> "router":{"name":"compositeId"},
> "replicationFactor":"1",
> "autoAddReplicas":"false",
> "autoCreated":"true"}}
> {code}
> One immediate problem I can see is that AutoAddReplicas doesn't work since 
> the clusterstate.json never changes. There might be more features which are 
> affected by this.
> On first thought I think we can handle this: the shard leader could listen 
> to changes on /live_nodes and, if it had replicas on the node that went 
> away, mark them as 'down' in the clusterstate.json.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6915) SaslZkACLProvider and Kerberos Test Using MiniKdc

2015-01-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-6915:
-
Attachment: SOLR-6915.patch

Here's a version without the Hadoop upgrade diffs, which have been committed in 
SOLR-6963.

> SaslZkACLProvider and Kerberos Test Using MiniKdc
> -
>
> Key: SOLR-6915
> URL: https://issues.apache.org/jira/browse/SOLR-6915
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6915.patch, SOLR-6915.patch
>
>
> We should provide a ZkACLProvider that requires SASL authentication.  This 
> provider will be useful for administration in a Kerberos environment.  In 
> such an environment, the administrator wants Solr to authenticate to 
> ZooKeeper using SASL, since this is the only way to authenticate with 
> ZooKeeper via Kerberos.
> The authorization model in such a setup can vary, e.g. you can imagine a 
> scenario where Solr owns (is the only writer of) the non-config znodes, but 
> some set of trusted users are allowed to modify the configs.  It's hard to 
> predict all the possibilities here, but one model that seems generally useful 
> is one where Solr itself owns all the znodes and all actions that require 
> changing the znodes are routed to Solr APIs.  That seems simple and 
> reasonable as a first version.
> As for testing, I noticed while working on SOLR-6625 that we don't really 
> have any infrastructure for testing Kerberos integration in unit tests.  
> Internally, I've been testing using Kerberos-enabled VM clusters, but this 
> isn't great since we won't notice any breakages until someone actually spins 
> up a VM.  So part of this JIRA is to provide some infrastructure for testing 
> Kerberos at the unit test level (using Hadoop's MiniKdc, HADOOP-9848).
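
For illustration, the "Solr owns all the znodes" model corresponds to ACLs 
along these lines; a hypothetical sketch using plain ZooKeeper types (the 
'solr' principal name is an assumption), not the committed provider:

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class SaslAclSketch {
  static List<ACL> aclsToAdd() {
    // full control for the SASL-authenticated 'solr' principal...
    ACL solrAll = new ACL(ZooDefs.Perms.ALL, new Id("sasl", "solr"));
    // ...and read-only access for everyone else
    ACL worldRead = new ACL(ZooDefs.Perms.READ, ZooDefs.Ids.ANYONE_ID_UNSAFE);
    return Arrays.asList(solrAll, worldRead);
  }
}
{code}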



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274306#comment-14274306
 ] 

Adrien Grand commented on LUCENE-6178:
--

+1

> don't score MUST_NOT clauses with BooleanScorer
> ---
>
> Key: LUCENE-6178
> URL: https://issues.apache.org/jira/browse/LUCENE-6178
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6178.patch
>
>
> It's similar to the conjunction case: we should just use BS2 since it has 
> advance(). Even in the dense case I think it's currently better since it 
> avoids calling score() in cases where BS1 calls it redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6943) HdfsDirectoryFactory should fall back to system props for most of its config if it is not found in solrconfig.xml.

2015-01-12 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274290#comment-14274290
 ] 

Mike Drob commented on SOLR-6943:
-

My thoughts:

{code:title=HdfsDirectoryFactory.java}
+Integer value = params.getInt(name, defaultValue);
{code}
When calling {{getConfig}}, for a boolean the precedence is param value, system 
value, passed default. For strings it is the same order. For ints, it looks 
like it is param value, passed default, and then system value. It should be 
consistent with the other two; the problem is on this line.

{code:title=HDFSTestUtil.java}
+  Timer timer = new Timer();
{code}
Probably outside the scope of this issue, but using a Timer is sometimes 
unsafe. Since all of a Timer's tasks share a single thread, delays or issues in 
one task's execution can propagate to other executions (Java Concurrency In 
Practice, p. 123). We should consider replacing it with a 
{{ScheduledThreadPoolExecutor}}. A follow-on issue is fine for this; I expect 
the actual impact to be minimal.

{code:title=HdfsDirectoryFactoryTest.java}
+  public void testInitArgsOrSysPropConfig() throws Exception {
{code}
This test covers a lot of ground; it would be nice to see it broken down into 
several smaller tests - one for each scenario you're trying to cover, I think. 
Not sure if the testing framework is easily amenable to that, however.

{code}
+  public static class MockCoreDescriptor extends CoreDescriptor {
{code}
Does this enable something that EasyMock does not?

{code}
+++ solr/core/src/test/org/apache/solr/util/MockSolrResourceLoader.java 
(revision 0)
{code}
This class looks unused.
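
For reference, the precedence being asked for, as a hypothetical helper (names 
illustrative, not the patch's API):

{code}
public class ConfigPrecedence {
  // 1. explicit solrconfig.xml param, 2. system property, 3. passed default
  static int getInt(Integer paramValue, String sysPropName, int defaultValue) {
    if (paramValue != null) return paramValue;
    String sys = System.getProperty(sysPropName);
    if (sys != null) return Integer.parseInt(sys);
    return defaultValue;
  }
}
{code}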

> HdfsDirectoryFactory should fall back to system props for most of its config 
> if it is not found in solrconfig.xml.
> ---
>
> Key: SOLR-6943
> URL: https://issues.apache.org/jira/browse/SOLR-6943
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6943.patch, SOLR-6943.patch
>
>
> The new server and config sets have undone the work I did to make HDFS easy 
> out of the box. Rather than count on config for that, we should just allow 
> most of this config to be specified at the sys property level. This improves 
> the global cache config situation as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-12 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-6963.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

Thanks for the review, Mark.  Committed to 5.0 and trunk.

> Upgrade hadoop version to 2.3
> -
>
> Key: SOLR-6963
> URL: https://issues.apache.org/jira/browse/SOLR-6963
> Project: Solr
>  Issue Type: Task
>  Components: Hadoop Integration
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6963.patch
>
>
> See SOLR-6915; we need at least Hadoop version 2.3 to be able to use the 
> MiniKdc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274285#comment-14274285
 ] 

ASF subversion and git services commented on SOLR-6963:
---

Commit 1651217 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651217 ]

SOLR-6963: Upgrade hadoop version to 2.3

> Upgrade hadoop version to 2.3
> -
>
> Key: SOLR-6963
> URL: https://issues.apache.org/jira/browse/SOLR-6963
> Project: Solr
>  Issue Type: Task
>  Components: Hadoop Integration
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6963.patch
>
>
> See SOLR-6915; we need at least Hadoop version 2.3 to be able to use the 
> MiniKdc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6581) Efficient DocValues support and numeric collapse field implementations for Collapse and Expand

2015-01-12 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-6581:
-
Description: 
The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
FieldCache is no longer in regular use. Instead, all top level caches are 
accessed through MultiDocValues. 

This ticket does the following:

1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
default approach when collapsing on String fields

2) Provides an option to use a top level FieldCache if the performance of 
MultiDocValues is a blocker. The mechanism for switching to the FieldCache is a 
new "hint" parameter. If the hint parameter is set to "top_fc" then the 
top-level FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=top_fc}
{code}

3)  Adds numeric collapse field implementations.

4) Resolves issue SOLR-6066

  was:
The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
are optimized to work with a top level FieldCache. Top level FieldCaches have a 
very fast docID to top-level ordinal lookup. Fast access to the top-level 
ordinals allows for very high performance field collapsing on high cardinality 
fields. 

LUCENE-5666 unified the DocValues and FieldCache APIs so that the top level 
FieldCache is no longer in regular use. Instead, all top level caches are 
accessed through MultiDocValues. 

This ticket does the following:

1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
default approach when collapsing on String fields

2) Provides an option to use a top level FieldCache if the performance of 
MultiDocValues is a blocker. The mechanism for switching to the FieldCache is a 
new "hint" parameter. If the hint parameter is set to "top_fc" then the 
top-level FieldCache would be used for both Collapse and Expand.

Example syntax:
{code}
fq={!collapse field=x hint=top_fc}
{code}

3)  Adds numeric collapse field implementations.

> Efficient DocValues support and numeric collapse field implementations for 
> Collapse and Expand
> --
>
> Key: SOLR-6581
> URL: https://issues.apache.org/jira/browse/SOLR-6581
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, SOLR-6581.patch, 
> renames.diff
>
>
> The 4x implementation of the CollapsingQParserPlugin and the ExpandComponent 
> are optimized to work with a top level FieldCache. Top level FieldCaches have 
> a very fast docID to top-level ordinal lookup. Fast access to the top-level 
> ordinals allows for very high performance field collapsing on high 
> cardinality fields. 
> LUCENE-5666 unified the DocValues and FieldCache APIs, so the top-level 
> FieldCache is no longer in regular use. Instead, all top-level caches are 
> accessed through MultiDocValues. 
> This ticket does the following:
> 1) Optimizes Collapse and Expand to use MultiDocValues and makes this the 
> default approach when collapsing on String fields
> 2) Provides an option to use a top level FieldCache if the performance of 
> MultiDocValues is a blocker. The mechanism for switching to the FieldCache is 
> a new "hint" parameter. If the hint parameter is set to "top_fc" then the 
> top-level FieldCache would be used for both Collapse and Expand.
> Example syntax:
> {code}
> fq={!collapse field=x hint=top_fc}
> {code}
> 3)  Adds numeric collapse field implementations.
> 4) Resolves issue SOLR-6066
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respect "fq" (filter query)

2015-01-12 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274277#comment-14274277
 ] 

Joel Bernstein commented on SOLR-6066:
--

This issue is now resolved in Trunk and 5.0 as part of SOLR-6581. 

> CollapsingQParserPlugin + Elevation does not respect "fq" (filter query) 
> --
>
> Key: SOLR-6066
> URL: https://issues.apache.org/jira/browse/SOLR-6066
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Herb Jiang
>Assignee: Joel Bernstein
> Fix For: 4.9
>
> Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
> TestCollapseQParserPlugin.java
>
>
> QueryElevationComponent respects the "fq" parameter. But when using the 
> CollapsingQParserPlugin with the QueryElevationComponent, an additional "fq" 
> has no effect.
> I use the following test case to show this issue. (It will fail.)
> {code:java}
> String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc));
> assertU(commit());
> String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc1));
> String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
> "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc2));
> assertU(commit());
> String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
> "1000", "test_tf", "2000"};
> assertU(adoc(doc3));
> String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc4));
> assertU(commit());
> String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc5));
> assertU(commit());
> //Test additional filter query when using collapse
> params = new ModifiableSolrParams();
> params.add("q", "");
> params.add("fq", "{!collapse field=group_s}");
> params.add("fq", "category_s:cat1");
> params.add("defType", "edismax");
> params.add("bf", "field(test_ti)");
> params.add("qf", "term_s");
> params.add("qt", "/elevate");
> params.add("elevateIds", "2");
> assertQ(req(params), "*[count(//doc)=1]",
> "//result/doc[1]/float[@name='id'][.='6.0']");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274255#comment-14274255
 ] 

ASF subversion and git services commented on SOLR-6963:
---

Commit 1651212 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1651212 ]

SOLR-6963: Upgrade hadoop version to 2.3
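
For reference, in Maven terms the bump has roughly this shape (a sketch only; 
the actual commit may instead touch Ivy version properties or other build 
files):
{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.3.0</version>
</dependency>
{code}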

> Upgrade hadoop version to 2.3
> -
>
> Key: SOLR-6963
> URL: https://issues.apache.org/jira/browse/SOLR-6963
> Project: Solr
>  Issue Type: Task
>  Components: Hadoop Integration
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6963.patch
>
>
> See SOLR-6915; we need at least hadoop version 2.3 to be able to use the 
> MiniKdc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6969) Just like we have to retry when the NameNode is in safemode on Solr startup, we also need to retry when opening a transaction log file for append when we get a RecoveryI

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274244#comment-14274244
 ] 

Mark Miller commented on SOLR-6969:
---

Praneeth also mentions seeing AlreadyBeingCreatedException in SOLR-6367 - we 
should deal with that as well.
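
For reference, a retry loop of the shape being discussed might look like the 
sketch below. This is illustrative only, not Solr's actual code; the method, 
timeout, and backoff are assumptions, while {{RecoveryInProgressException}} and 
{{RemoteException}} are the real Hadoop/HDFS classes:
{code}
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.RecoveryInProgressException;
import org.apache.hadoop.ipc.RemoteException;

public class AppendRetrySketch {
  /** Retry append while HDFS lease recovery is in progress, up to a deadline. */
  static FSDataOutputStream appendWithRetry(FileSystem fs, Path tlog, long timeoutMs)
      throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      try {
        return fs.append(tlog);
      } catch (RemoteException e) {
        boolean recovering =
            e.unwrapRemoteException() instanceof RecoveryInProgressException;
        if (!recovering || System.currentTimeMillis() >= deadline) {
          throw e;  // some other failure, or we have waited long enough
        }
        Thread.sleep(500);  // lease recovery in progress: wait and try again
      }
    }
  }
}
{code}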

> Just like we have to retry when the NameNode is in safemode on Solr startup, 
> we also need to retry when opening a transaction log file for append when we 
> get a RecoveryInProgressException.
> 
>
> Key: SOLR-6969
> URL: https://issues.apache.org/jira/browse/SOLR-6969
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, Trunk
>
>
> This can happen after a hard crash and restart. The current workaround is to 
> stop, wait it out, and start again. We should instead retry and wait for a 
> given amount of time, as we do when we detect safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6640) ChaosMonkeySafeLeaderTest failure with CorruptIndexException

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274233#comment-14274233
 ] 

Mark Miller commented on SOLR-6640:
---

Can we get this in? I'd love to see its effect on some common test failures 
for some time before 5.0 hits, and see that nothing else pops out.

> ChaosMonkeySafeLeaderTest failure with CorruptIndexException
> 
>
> Key: SOLR-6640
> URL: https://issues.apache.org/jira/browse/SOLR-6640
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 5.0
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.0
>
> Attachments: Lucene-Solr-5.x-Linux-64bit-jdk1.8.0_20-Build-11333.txt, 
> SOLR-6640.patch, SOLR-6640.patch, SOLR-6640.patch, 
> SOLR-6640_new_index_dir.patch
>
>
> Test failure found on jenkins:
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11333/
> {code}
> 1 tests failed.
> REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch
> Error Message:
> shard2 is not consistent.  Got 62 from 
> http://127.0.0.1:57436/collection1lastClient and got 24 from 
> http://127.0.0.1:53065/collection1
> Stack Trace:
> java.lang.AssertionError: shard2 is not consistent.  Got 62 from 
> http://127.0.0.1:57436/collection1lastClient and got 24 from 
> http://127.0.0.1:53065/collection1
> at 
> __randomizedtesting.SeedInfo.seed([F4B371D421E391CD:7555FFCC56BCF1F1]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1255)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1234)
> at 
> org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.doTest(ChaosMonkeySafeLeaderTest.java:162)
> at 
> org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
> {code}
> Cause of inconsistency is:
> {code}
> Caused by: org.apache.lucene.index.CorruptIndexException: file mismatch, 
> expected segment id=yhq3vokoe1den2av9jbd3yp8, got=yhq3vokoe1den2av9jbd3yp7 
> (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.ChaosMonkeySafeLeaderTest-F4B371D421E391CD-001/tempDir-001/jetty3/index/_1_2.liv")))
>[junit4]   2>  at 
> org.apache.lucene.codecs.CodecUtil.checkSegmentHeader(CodecUtil.java:259)
>[junit4]   2>  at 
> org.apache.lucene.codecs.lucene50.Lucene50LiveDocsFormat.readLiveDocs(Lucene50LiveDocsFormat.java:88)
>[junit4]   2>  at 
> org.apache.lucene.codecs.asserting.AssertingLiveDocsFormat.readLiveDocs(AssertingLiveDocsFormat.java:64)
>[junit4]   2>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:102)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-12 Thread Praneeth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274229#comment-14274229
 ] 

Praneeth commented on SOLR-6367:


I won't be able to get back fast enough during work hours. Sorry about that. 
I've tried it with Solr 4.10.3 and I am able to reproduce this consistently. 
Maybe you have autocommit on, and the document is getting committed before you 
kill it? I am running against the Cloudera distribution of Hadoop, 
{{2.0.0-cdh4.6.0}}.

I did notice SOLR-6969, though it kept switching between 
{{AlreadyBeingCreatedException}} and {{RecoveryInProgressException}}. I guess 
the latter happens depending on how fast you restart, and maybe on whether the 
soft limit has expired or not.

I don't think these issues are related. I think SOLR-6969 happens on a quick 
restart after every hard crash, but the current issue here is due to documents 
not making it to the tlog: the file has nothing written to it before the crash. 
I haven't looked deeply into it, but could this be because the underlying 
stream implementation uses an intermediate buffer? That would also explain why 
the local transaction log does not show this behaviour. This part is complete 
speculation at this point and I will dig into it later tonight.
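
For what it's worth, the buffering theory matches how HDFS output streams 
behave: writes sit in a client-side buffer and are not guaranteed visible or 
durable until hflush()/hsync(), so a kill -9 can lose everything still 
buffered. A minimal sketch of the calls involved (the path and record here are 
made up):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsFlushSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/tlog-sketch"))) {
      out.write("one log record".getBytes("UTF-8"));
      // Until one of these calls, the bytes may exist only in the client
      // buffer; a hard crash here can leave a 0-length file behind.
      out.hflush();  // make visible to new readers
      out.hsync();   // additionally push to datanode disks
    }
  }
}
{code}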

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Setup Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> <!--
> <autoCommit>
>   ...
> </autoCommit>
> -->
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generate empty tlog file (0 file size, the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop  667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop   67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop  667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop    0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop    0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate Solr crash by killing the process with -9 option.
> 5) Restart the Solr process. The observation is that uncommitted documents
> are not replayed and the files in the tlog directory are cleaned up. Hence
> uncommitted document(s) are lost.
> Am I missing anything, or is this a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
> non-empty tlog file is generated and, after restarting Solr, the uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on local file system), this issue is
> not observed either.
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274211#comment-14274211
 ] 

Anshum Gupta commented on SOLR-6923:


LGTM.

> AutoAddReplicas should consult live nodes also to see if a state has changed
> 
>
> Key: SOLR-6923
> URL: https://issues.apache.org/jira/browse/SOLR-6923
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-6923.patch
>
>
> - I did the following 
> {code}
> ./solr start -e cloud -noprompt
> kill -9  //Not the node which is running ZK
> {code}
> - /live_nodes reflects that the node is gone.
> - This is the only message which gets logged on the node1 server after 
> killing node2
> {code}
> 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
> org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x14ac40f26660001, likely client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> - The graph shows the node2 as 'Gone' state
> - clusterstate.json keeps showing the replica as 'active'
> {code}
> {"collection1":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "core_node1":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8983_solr",
> "base_url":"http://169.254.113.194:8983/solr";,
> "leader":"true"},
>   "core_node2":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8984_solr",
> "base_url":"http://169.254.113.194:8984/solr",
> "maxShardsPerNode":"1",
> "router":{"name":"compositeId"},
> "replicationFactor":"1",
> "autoAddReplicas":"false",
> "autoCreated":"true"}}
> {code}
> One immediate problem I can see is that AutoAddReplicas doesn't work since 
> the clusterstate.json never changes. There might be more features which are 
> affected by this.
> On first thought I think we can handle this - The shard leader could listen 
> to changes on /live_nodes and if it has replicas that were on that node, mark 
> it as 'down' in the clusterstate.json?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2471 - Still Failing

2015-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2471/

5 tests failed.
REGRESSION:  
org.apache.solr.handler.TestSolrConfigHandlerCloud.testDistribSearch

Error Message:
Could not get expected value  BY val for path [params, b] full output null

Stack Trace:
java.lang.AssertionError: Could not get expected value  BY val for path 
[params, b] full output null
at 
__randomizedtesting.SeedInfo.seed([A70A517A9E6576D8:26ECDF62E93A16E4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:259)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:205)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.doTest(TestSolrConfigHandlerCloud.java:70)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.ca

[jira] [Updated] (LUCENE-6061) Add Support for something different than Strings in Highlighting (FastVectorHighlighter)

2015-01-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6061:
---
Priority: Minor  (was: Critical)

I agree it's not critical...
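
The proposal in the description below amounts to parameterizing the fragment 
output type. A minimal sketch of such an interface (illustrative only, not an 
existing Lucene API):
{code}
public class RendererSketch {
  /** Hypothetical renderer turning one highlighted fragment into any type T. */
  interface FragmentRenderer<T> {
    T render(String fieldName, String fragmentText);
  }

  public static void main(String[] args) {
    // Keeping today's behaviour is just the identity renderer over Strings:
    FragmentRenderer<String> plain = (field, fragment) -> fragment;
    System.out.println(plain.render("body", "a <b>hit</b> here"));
  }
}
{code}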

> Add Support for something different than Strings in Highlighting 
> (FastVectorHighlighter)
> 
>
> Key: LUCENE-6061
> URL: https://issues.apache.org/jira/browse/LUCENE-6061
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: core/search, modules/highlighter
>Affects Versions: Trunk
>Reporter: Martin Braun
>Priority: Minor
>  Labels: FastVectorHighlighter, Highlighter, Highlighting
> Fix For: 4.10.2, 5.0, Trunk
>
>
> In my application I need highlighting, and I stumbled upon the really neat 
> FastVectorHighlighter. One problem appeared, though: it lacks a way to render 
> the highlights into something other than Strings, so I rearranged some of 
> the code to support that:
> https://github.com/Hotware/Lucene-Extension/blob/master/src/main/java/com/github/hotware/lucene/extension/highlight/FVHighlighterUtil.java
> Is there a specific reason to only support String[] as a return type? If not, 
> I would be happy to write a new class that supports rendering into a generic 
> Type and rewire that into the existing class (or just do it as an addition 
> and leave the current class be).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6061) Add Support for something different than Strings in Highlighting (FastVectorHighlighter)

2015-01-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274158#comment-14274158
 ] 

Anshum Gupta commented on LUCENE-6061:
--

As this issue doesn't have a patch, or even consensus that it is an actual 
issue, I don't think it qualifies as 'Critical'. I am not the best person to 
judge that, though, so perhaps [~mikemccand] can comment. If it's not critical, 
I'll change the priority and also not consider this for 5.0.

> Add Support for something different than Strings in Highlighting 
> (FastVectorHighlighter)
> 
>
> Key: LUCENE-6061
> URL: https://issues.apache.org/jira/browse/LUCENE-6061
> Project: Lucene - Core
>  Issue Type: Wish
>  Components: core/search, modules/highlighter
>Affects Versions: Trunk
>Reporter: Martin Braun
>Priority: Critical
>  Labels: FastVectorHighlighter, Highlighter, Highlighting
> Fix For: 4.10.2, 5.0, Trunk
>
>
> In my application I need highlighting, and I stumbled upon the really neat 
> FastVectorHighlighter. One problem appeared, though: it lacks a way to render 
> the highlights into something other than Strings, so I rearranged some of 
> the code to support that:
> https://github.com/Hotware/Lucene-Extension/blob/master/src/main/java/com/github/hotware/lucene/extension/highlight/FVHighlighterUtil.java
> Is there a specific reason to only support String[] as a return type? If not, 
> I would be happy to write a new class that supports rendering into a generic 
> Type and rewire that into the existing class (or just do it as an addition 
> and leave the current class be).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_25) - Build # 11583 - Failure!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11583/
Java: 32bit/jdk1.8.0_25 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=8159, 
name=OverseerStateUpdate-93132882359812108-127.0.0.1:40985_u%2Fsu-n_04, 
state=RUNNABLE, group=Overseer state updater.]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8159, 
name=OverseerStateUpdate-93132882359812108-127.0.0.1:40985_u%2Fsu-n_04, 
state=RUNNABLE, group=Overseer state updater.]
at 
__randomizedtesting.SeedInfo.seed([B162A86EE887F501:308426769FD8953D]:0)
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([B162A86EE887F501]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.constructState(ZkStateReader.java:458)
at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:524)
at 
org.apache.solr.common.cloud.ZkStateReader.updateClusterState(ZkStateReader.java:258)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:156)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9937 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest
 B162A86EE887F501-001/init-core-data-001
   [junit4]   2> 1739647 T7749 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /u/su
   [junit4]   2> 1739652 T7749 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 1739653 T7749 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1739653 T7750 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 1739753 T7749 oasc.ZkTestServer.run start zk server on 
port:57369
   [junit4]   2> 1739754 T7749 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 1739755 T7749 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 1739756 T7757 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@d65149 name:ZooKeeperConnection 
Watcher:127.0.0.1:57369 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 1739757 T7749 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 1739757 T7749 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 1739758 T7749 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 1739762 T7749 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 1739763 T7749 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 1739764 T7760 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@3e3c5e name:ZooKeeperConnection 
Watcher:127.0.0.1:57369/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2> 1739765 T7749 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 1739765 T7749 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 1739766 T7749 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 1739768 T7749 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 1739769 T7749 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 1739771 T7749 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 1739772 T7749 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 1739773 T7749 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 1739780 T7749 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 1739781 T7749 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 1739783 T7749 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1739784 T7749 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.ran

[jira] [Updated] (SOLR-6923) AutoAddReplicas should consult live nodes also to see if a state has changed

2015-01-12 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-6923:

Attachment: SOLR-6923.patch

Simple patch that checks against live nodes before short-circuiting.

SharedFSAutoReplicaFailoverTest passes. 
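
The shape of the check is roughly the following (a sketch, not the actual 
patch; names are illustrative):
{code}
import java.util.Collections;
import java.util.Set;

public class LiveNodeCheckSketch {
  /**
   * A replica counts as healthy only if its recorded state is "active"
   * AND its node is still present under /live_nodes.
   */
  static boolean replicaAlive(String state, String nodeName, Set<String> liveNodes) {
    return "active".equals(state) && liveNodes.contains(nodeName);
  }

  public static void main(String[] args) {
    Set<String> live = Collections.singleton("169.254.113.194:8983_solr");
    // Stale clusterstate.json still says "active", but the node is gone:
    System.out.println(replicaAlive("active", "169.254.113.194:8984_solr", live)); // false
  }
}
{code}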

> AutoAddReplicas should consult live nodes also to see if a state has changed
> 
>
> Key: SOLR-6923
> URL: https://issues.apache.org/jira/browse/SOLR-6923
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-6923.patch
>
>
> - I did the following 
> {code}
> ./solr start -e cloud -noprompt
> kill -9  //Not the node which is running ZK
> {code}
> - /live_nodes reflects that the node is gone.
> - This is the only message which gets logged on the node1 server after 
> killing node2
> {code}
> 45812 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983] WARN  
> org.apache.zookeeper.server.NIOServerCnxn  – caught end of stream exception
> EndOfStreamException: Unable to read additional data from client sessionid 
> 0x14ac40f26660001, likely client has closed socket
> at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
> at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> - The graph shows the node2 as 'Gone' state
> - clusterstate.json keeps showing the replica as 'active'
> {code}
> {"collection1":{
> "shards":{"shard1":{
> "range":"8000-7fff",
> "state":"active",
> "replicas":{
>   "core_node1":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8983_solr",
> "base_url":"http://169.254.113.194:8983/solr";,
> "leader":"true"},
>   "core_node2":{
> "state":"active",
> "core":"collection1",
> "node_name":"169.254.113.194:8984_solr",
> "base_url":"http://169.254.113.194:8984/solr",
> "maxShardsPerNode":"1",
> "router":{"name":"compositeId"},
> "replicationFactor":"1",
> "autoAddReplicas":"false",
> "autoCreated":"true"}}
> {code}
> One immediate problem I can see is that AutoAddReplicas doesn't work since 
> the clusterstate.json never changes. There might be more features which are 
> affected by this.
> On first thought I think we can handle this - The shard leader could listen 
> to changes on /live_nodes and if it has replicas that were on that node, mark 
> it as 'down' in the clusterstate.json?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274139#comment-14274139
 ] 

Michael McCandless commented on LUCENE-6178:


+1

> don't score MUST_NOT clauses with BooleanScorer
> ---
>
> Key: LUCENE-6178
> URL: https://issues.apache.org/jira/browse/LUCENE-6178
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6178.patch
>
>
> It's similar to the conjunction case: we should just use BS2 since it has 
> advance(). Even in the dense case I think it's currently better, since it 
> avoids calling score() in cases where BS1 calls it redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-6178:

Attachment: LUCENE-6178.patch

{noformat}
                Task   QPS trunk   StdDev   QPS patch   StdDev           Pct diff
        OrHighNotLow       99.21   (9.3%)      110.09   (6.2%)    11.0% (  -4% -   29%)
        OrHighNotMed       78.46   (9.3%)       90.47   (6.0%)    15.3% (   0% -   33%)
       OrHighNotHigh       24.80   (9.1%)       29.90   (5.8%)    20.5% (   5% -   39%)
       OrNotHighHigh       33.71   (9.0%)       50.06   (7.0%)    48.5% (  29% -   70%)
        OrNotHighMed       57.14   (8.6%)      183.47   (8.1%)   221.1% ( 188% -  260%)
        OrNotHighLow       62.74   (8.4%)      922.24  (40.7%)  1369.9% (1218% - 1549%)
{noformat}

> don't score MUST_NOT clauses with BooleanScorer
> ---
>
> Key: LUCENE-6178
> URL: https://issues.apache.org/jira/browse/LUCENE-6178
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6178.patch
>
>
> It's similar to the conjunction case: we should just use BS2 since it has 
> advance(). Even in the dense case I think it's currently better, since it 
> avoids calling score() in cases where BS1 calls it redundantly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6178) don't score MUST_NOT clauses with BooleanScorer

2015-01-12 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-6178:
---

 Summary: don't score MUST_NOT clauses with BooleanScorer
 Key: LUCENE-6178
 URL: https://issues.apache.org/jira/browse/LUCENE-6178
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


It's similar to the conjunction case: we should just use BS2 since it has 
advance(). Even in the dense case I think it's currently better, since it avoids 
calling score() in cases where BS1 calls it redundantly.
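
To make the advance() point concrete, here is a tiny self-contained sketch (not 
Lucene's actual scorers) of the required/excluded pattern: the MUST_NOT 
iterator leapfrogs forward via advance() and its documents are never scored:
{code}
import java.util.ArrayList;
import java.util.List;

public class ReqExclSketch {
  static final int NO_MORE_DOCS = Integer.MAX_VALUE;

  /** Tiny stand-in for a doc-id iterator over a sorted id list. */
  static class Docs {
    private final int[] ids;
    private int pos = -1;
    Docs(int... ids) { this.ids = ids; }
    int docID() { return pos < 0 ? -1 : pos >= ids.length ? NO_MORE_DOCS : ids[pos]; }
    int nextDoc() { pos++; return docID(); }
    int advance(int target) {  // skip forward; nothing per-doc is scored here
      do { pos++; } while (pos < ids.length && ids[pos] < target);
      return docID();
    }
  }

  /** Return required docs whose ids are absent from excluded. */
  static List<Integer> reqExcl(Docs required, Docs excluded) {
    List<Integer> hits = new ArrayList<>();
    for (int doc = required.nextDoc(); doc != NO_MORE_DOCS; doc = required.nextDoc()) {
      if (excluded.docID() < doc) excluded.advance(doc);
      if (excluded.docID() != doc) hits.add(doc);  // not excluded: keep it
    }
    return hits;
  }

  public static void main(String[] args) {
    // prints [1, 5, 9]: docs 3 and 7 are excluded without ever being scored
    System.out.println(reqExcl(new Docs(1, 3, 5, 7, 9), new Docs(3, 4, 7)));
  }
}
{code}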




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-12 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6952:
-
Attachment: SOLR-6952.patch

Here's an updated patch that changes around some of the parameter names to be 
consistent with the zkcli.sh script. I also tackled the "create" alias 
(SOLR-6933) in this patch since it was easier to address both issues with one 
patch.

*Example 1*
{code}
bin/solr create -c foo
{code}

This is equivalent to doing:

{code}
bin/solr create -c foo -d data_driven_schema_configs
{code}

or

{code}
bin/solr create -c foo -d data_driven_schema_configs -n foo
{code}

The create action will upload the data_driven_schema_configs directory (the 
default) into ZooKeeper as /configs/foo, i.e. the data_driven_schema_configs 
"template" is copied to a unique config directory in ZooKeeper using the name 
of the collection you are creating.
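
One way to double-check what was uploaded is to pull the config back down with 
zkcli (path per the 5.0 layout; the zkhost and target directory here are 
examples):
{code}
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 \
  -cmd downconfig -confdir /tmp/foo-config -confname foo
{code}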

*Example 2*
{code}
bin/solr create -c foo2 -d basic_configs -n SharedBasicSchema
{code}

This will upload the basic_configs directory into ZooKeeper as 
/configs/SharedBasicSchema. If one wants to reuse the SharedBasicSchema 
configuration directory when creating another collection, they can just do: 
{code}
bin/solr create -c foo3 -n SharedBasicSchema
{code}

Going to start porting these changes to the Windows solr.cmd, so please speak 
up now or this is what we'll have for 5.0 ;-)

> Re-using data-driven configsets by default is not helpful
> -
>
> Key: SOLR-6952
> URL: https://issues.apache.org/jira/browse/SOLR-6952
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0
>Reporter: Grant Ingersoll
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6952.patch, SOLR-6952.patch
>
>
> When creating collections (I'm using the bin/solr scripts), I think we should 
> automatically copy configsets, especially when running in "getting started 
> mode" or data driven mode.
> I did the following:
> {code}
> bin/solr create_collection -n foo
> bin/post foo some_data.csv
> {code}
> I then created a second collection with the intention of sending in the same 
> data, but this time run through a python script that changed a value from an 
> int to a string (since it was an enumerated type) and was surprised to see 
> that I got:
> {quote}
> Caused by: java.lang.NumberFormatException: For input string: "NA"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:441)
> {quote}
> for my new version of the data that passes in a string instead of an int, as 
> this new collection had only seen strings for that field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6969) Just like we have to retry when the NameNode is in safemode on Solr startup, we also need to retry when opening a transaction log file for append when we get a RecoveryInP

2015-01-12 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6969:
--
 Priority: Critical  (was: Major)
Fix Version/s: Trunk
   5.0

> Just like we have to retry when the NameNode is in safemode on Solr startup, 
> we also need to retry when opening a transaction log file for append when we 
> get a RecoveryInProgressException.
> 
>
> Key: SOLR-6969
> URL: https://issues.apache.org/jira/browse/SOLR-6969
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 5.0, Trunk
>
>
> This can happen after a hard crash and restart. The current workaround is to 
> stop, wait it out, and start again. We should instead retry and wait for a 
> given amount of time, as we do when we detect safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274047#comment-14274047
 ] 

Mark Miller commented on SOLR-6367:
---

I did run into, and filed, SOLR-6969 while looking into this. Could that 
perhaps be involved in what you are seeing?

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Setup Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> <!--
> <autoCommit>
>   ...
> </autoCommit>
> -->
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generate empty tlog file (0 file size, the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop  667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop   67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop  667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop    0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop    0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate Solr crash by killing the process with -9 option.
> 5) Restart the Solr process. The observation is that uncommitted documents
> are not replayed and the files in the tlog directory are cleaned up. Hence
> uncommitted document(s) are lost.
> Am I missing anything, or is this a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
> non-empty tlog file is generated and, after restarting Solr, the uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on local file system), this issue is
> not observed either.
> {panel}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6969) Just like we have to retry when the NameNode is in safemode on Solr startup, we also need to retry when opening a transaction log file for append when we get a RecoveryI

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274046#comment-14274046
 ] 

Mark Miller commented on SOLR-6969:
---

{noformat}
ERROR - 2015-01-12 17:49:43.992; org.apache.solr.common.SolrException; Failure 
to open existing log file (non fatal) 
hdfs://localhost:8020/solr_test/collection1/core_node1/data/tlog/tlog.000:org.apache.solr.common.SolrException:
 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException):
 Failed to close file 
/solr_test/collection1/core_node1/data/tlog/tlog.000. Lease 
recovery is in progress. Try again later.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2626)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2462)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2700)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2663)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:559)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:121)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:190)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:134)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
{noformat}

> Just like we have to retry when the NameNode is in safemode on Solr startup, 
> we also need to retry when opening a transaction log file for append when we 
> get a RecoveryInProgressException.
> 
>
> Key: SOLR-6969
> URL: https://issues.apache.org/jira/browse/SOLR-6969
> Project: Solr
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> This can happen after a hard crash and restart. The current workaround is to 
> stop and wait it out and start again. We should retry and wait a given amount 
> of time as we do when we detect safe mode though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1319: POMs out of sync

2015-01-12 Thread Chris Hostetter


the maven build has had failing compilation on trunk since Jan-8, due to the
"Filter" class not being found when compiling JettySolrRunner...

  [mvn] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/java/org/apache/solr/client$
  [mvn]   class file for javax.servlet.Servlet not found
  [mvn] 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/solr/core/src/java/org/apache/solr/client$
  [mvn]   symbol:   class Filter
  [mvn]   location: class 
org.apache.solr.client.solrj.embedded.JettySolrRunner

Since there is no similar compilation failure with ant, this smells like a
maven dependency problem.


first maven build with this broken compilation was #1315 @ r1650301 ...
 
https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1315/

Previous (successful) maven build was #1314  @ r1650132

Commits in between...

LUCENE-6165: Change merging APIs from LeafReader to CodecReader 
(detail/ViewSVN)
SOLR-4839: fix licensing metadata. PLEASE RUN PRECOMMIT BEFORE COMMITTING. 
(detail/ViewSVN)
SOLR-6787 more logging (detail/ViewSVN)
SOLR-6787 commit right away instead of waiting (detail/ViewSVN)
SOLR-6925: Back out changes having to do with SOLR-5287 (editing configs 
from admin UI) (detail/ViewSVN)
SOLR-4839: Remove dependency to jetty.orbit (detail/ViewSVN)
 

...perhaps the reason jetty.orbit was included before was to satisfy the 
javax.servlet.Filter import? ... why isn't the Servlet dependency handling 
that?
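
If that's the case, a dependency along these lines would satisfy the import
(the coordinates are the standard servlet-api artifact, shown as a sketch; the
right fix for the POM templates may well be different):

  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
  </dependency>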



On Mon, 12 Jan 2015, Apache Jenkins Server wrote:

: Date: Mon, 12 Jan 2015 18:44:02 + (UTC)
: From: Apache Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1319: POMs out of sync
: 
: Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1319/
: 
: No tests ran.
: 
: Build Log:
: [...truncated 39089 lines...]
:   [mvn] [INFO] 
-
:   [mvn] [INFO] 
-
:   [mvn] [ERROR] COMPILATION ERROR : 
:   [mvn] [INFO] 
-
: 
: [...truncated 696 lines...]
: BUILD FAILED
: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542:
 The following error occurred while executing this line:
: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:204:
 The following error occurred while executing this line:
: : Java returned: 1
: 
: Total time: 20 minutes 1 second
: Build step 'Invoke Ant' marked build as failure
: Email was triggered for: Failure
: Sending email for trigger: Failure
: 
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2470 - Still Failing

2015-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2470/

4 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:35677/jg/ov/c8n_1x2_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:35677/jg/ov/c8n_1x2_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([3D05A4C3CAF9C354:BCE32ADBBDA6A368]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:581)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:890)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:736)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:480)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:201)
at 
org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:114)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:868)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRu

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_25) - Build # 4411 - Still Failing!

2015-01-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4411/
Java: 32bit/jdk1.8.0_25 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process. 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf\params.json: 
java.nio.file.FileSystemException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf\params.json: The process 
cannot access the file because it is being used by another process.

   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1\conf
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1: 
java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007\collection1
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001\tempDir-007
   
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestSolrConfigHandler
 508294B06DCF883D-001: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1319: POMs out of sync

2015-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1319/

No tests ran.

Build Log:
[...truncated 39089 lines...]
  [mvn] [INFO] ------------------------------------------------------------------------
  [mvn] [INFO] ------------------------------------------------------------------------
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] ------------------------------------------------------------------------

[...truncated 696 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:542:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:204:
 The following error occurred while executing this line:
: Java returned: 1

Total time: 20 minutes 1 second
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-6902) Use JUnit rules instead of inheritance with distributed Solr tests to allow for multiple tests within the same class

2015-01-12 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273913#comment-14273913
 ] 

Ramkumar Aiyengar commented on SOLR-6902:
-

No worries, putting this in after 5.0 is cut makes sense. Thanks for picking 
this up, Erick!

> Use JUnit rules instead of inheritance with distributed Solr tests to allow 
> for multiple tests within the same class
> -
>
> Key: SOLR-6902
> URL: https://issues.apache.org/jira/browse/SOLR-6902
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-6902.patch, SOLR-6902.patch
>
>
> Finally got annoyed enough with too many things being clubbed into one test 
> method in all distributed Solr tests (anything inheriting from 
> {{BaseDistributedSearchTestCase}} and currently implementing {{doTest}}).
> This just lays the groundwork really for allowing multiple test methods 
> within the same class, and doesn't split tests as yet or flatten the 
> inheritance hierarchy (when abused for doing multiple tests), as this touches 
> a lot of files by itself. For that reason, the sooner this is picked up the 
> better.






[jira] [Commented] (LUCENE-6161) Applying deletes is sometimes dog slow

2015-01-12 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273897#comment-14273897
 ] 

Otis Gospodnetic commented on LUCENE-6161:
--

I'd assume that while merges are now faster, they are using more of the 
computing resources (than before) needed for the rest of what Lucene is doing, 
hence no improvement in overall indexing time.

> Applying deletes is sometimes dog slow
> --
>
> Key: LUCENE-6161
> URL: https://issues.apache.org/jira/browse/LUCENE-6161
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6161.patch, LUCENE-6161.patch, LUCENE-6161.patch
>
>
> I hit this while testing various use cases for LUCENE-6119 (adding 
> auto-throttle to ConcurrentMergeScheduler).
> When I tested "always call updateDocument" (each add buffers a delete term), 
> with many indexing threads, opening an NRT reader once per second (forcing 
> all deleted terms to be applied), I see that 
> BufferedUpdatesStream.applyDeletes sometimes seems to take a long time, 
> e.g.:
> {noformat}
> BD 0 [2015-01-04 09:31:12.597; Lucene Merge Thread #69]: applyDeletes took 
> 339 msec for 10 segments, 117 deleted docs, 607333 visited terms
> BD 0 [2015-01-04 09:31:18.148; Thread-4]: applyDeletes took 5533 msec for 62 
> segments, 10989 deleted docs, 8517225 visited terms
> BD 0 [2015-01-04 09:31:21.463; Lucene Merge Thread #71]: applyDeletes took 
> 1065 msec for 10 segments, 470 deleted docs, 1825649 visited terms
> BD 0 [2015-01-04 09:31:26.301; Thread-5]: applyDeletes took 4835 msec for 61 
> segments, 14676 deleted docs, 9649860 visited terms
> BD 0 [2015-01-04 09:31:35.572; Thread-11]: applyDeletes took 6073 msec for 72 
> segments, 13835 deleted docs, 11865319 visited terms
> BD 0 [2015-01-04 09:31:37.604; Lucene Merge Thread #75]: applyDeletes took 
> 251 msec for 10 segments, 58 deleted docs, 240721 visited terms
> BD 0 [2015-01-04 09:31:44.641; Thread-11]: applyDeletes took 5956 msec for 64 
> segments, 15109 deleted docs, 10599034 visited terms
> BD 0 [2015-01-04 09:31:47.814; Lucene Merge Thread #77]: applyDeletes took 
> 396 msec for 10 segments, 137 deleted docs, 719914 visited terms
> {noformat}
> What this means is even though I want an NRT reader every second, often I 
> don't get one for up to ~7 or more seconds.
> This is on an SSD, machine has 48 GB RAM, heap size is only 2 GB.  12 
> indexing threads.
> As hideously complex as this code is, I think there are some inefficiencies, 
> but fixing them could be hard / make code even hairier ...
> Also, this code is mega-locked: holds IW's lock, holds BD's lock.  It blocks 
> things like merges kicking off or finishing...
> E.g., we pull the MergedIterator many times on the same set of sub-iterators. 
>  Maybe we can create the sorted terms up front and reuse that?
> Maybe we should go "term stride" (one term visits all N segments) not 
> "segment stride" (visit each segment, iterating all deleted terms for it).  
> Just iterating the terms to be deleted takes a sizable part of the time, and 
> we now do that once for every segment in the index.
> Also, the "isUnique" bit in LUCENE-6005 should help here, since if we know 
> the field is unique, we can stop seekExact once we found a segment that has 
> the deleted term, we can maybe pass false for removeDuplicates to 
> MergedIterator...
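
To make the "term stride" vs. "segment stride" distinction above concrete, here 
is a minimal sketch (illustrative Java only, not Lucene's actual 
BufferedUpdatesStream code); the {{unique}} early exit is the LUCENE-6005 idea:

{code:java}
import java.util.List;
import java.util.SortedSet;

public class DeleteStrides {
  interface Segment {
    boolean seekExact(String term);  // does this segment contain the term?
  }

  // Segment stride: every segment re-walks the full sorted list of delete
  // terms, so the cost of iterating the terms is paid once per segment.
  static void segmentStride(List<Segment> segments, SortedSet<String> deletes) {
    for (Segment seg : segments) {
      for (String term : deletes) {
        seg.seekExact(term);
      }
    }
  }

  // Term stride: each delete term is iterated exactly once and visits all
  // segments; for a unique field we can stop at the first segment that has it.
  static void termStride(List<Segment> segments, SortedSet<String> deletes,
                         boolean unique) {
    for (String term : deletes) {
      for (Segment seg : segments) {
        if (seg.seekExact(term) && unique) {
          break;  // a unique term can live in at most one segment
        }
      }
    }
  }
}
{code}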






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 732 - Still Failing

2015-01-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/732/

8 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([C6144F9E86941D69]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([C6144F9E86941D69]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
ERROR: SolrIndexSearcher opens=628 closes=625

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=628 closes=625
at __randomizedtesting.SeedInfo.seed([C6144F9E86941D69]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:442)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:188)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=3833, 
name=searcherExecutor-1911-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=3830, 
name=searcherExecutor-1909-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor

[jira] [Commented] (LUCENE-6161) Applying deletes is sometimes dog slow

2015-01-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273886#comment-14273886
 ] 

Michael McCandless commented on LUCENE-6161:


bq. were the same number of merges completed?

In my last test it was very close: trunk did 83 merges and patch did 84.

It is strange, because in my test I hijack one of the indexing threads to open 
an NRT reader periodically, and it's that thread that pays the cost of applying 
deletes.  So I would expect a big reduction in applyDeletes to show some gains 
in overall indexing ...

I could run everything with one thread, SMS (SerialMergeScheduler), etc.  Would 
just take sooo long.

> Applying deletes is sometimes dog slow
> --
>
> Key: LUCENE-6161
> URL: https://issues.apache.org/jira/browse/LUCENE-6161
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6161.patch, LUCENE-6161.patch, LUCENE-6161.patch
>
>
> I hit this while testing various use cases for LUCENE-6119 (adding 
> auto-throttle to ConcurrentMergeScheduler).
> When I tested "always call updateDocument" (each add buffers a delete term), 
> with many indexing threads, opening an NRT reader once per second (forcing 
> all deleted terms to be applied), I see that 
> BufferedUpdatesStream.applyDeletes sometimes seems to take a long time, 
> e.g.:
> {noformat}
> BD 0 [2015-01-04 09:31:12.597; Lucene Merge Thread #69]: applyDeletes took 
> 339 msec for 10 segments, 117 deleted docs, 607333 visited terms
> BD 0 [2015-01-04 09:31:18.148; Thread-4]: applyDeletes took 5533 msec for 62 
> segments, 10989 deleted docs, 8517225 visited terms
> BD 0 [2015-01-04 09:31:21.463; Lucene Merge Thread #71]: applyDeletes took 
> 1065 msec for 10 segments, 470 deleted docs, 1825649 visited terms
> BD 0 [2015-01-04 09:31:26.301; Thread-5]: applyDeletes took 4835 msec for 61 
> segments, 14676 deleted docs, 9649860 visited terms
> BD 0 [2015-01-04 09:31:35.572; Thread-11]: applyDeletes took 6073 msec for 72 
> segments, 13835 deleted docs, 11865319 visited terms
> BD 0 [2015-01-04 09:31:37.604; Lucene Merge Thread #75]: applyDeletes took 
> 251 msec for 10 segments, 58 deleted docs, 240721 visited terms
> BD 0 [2015-01-04 09:31:44.641; Thread-11]: applyDeletes took 5956 msec for 64 
> segments, 15109 deleted docs, 10599034 visited terms
> BD 0 [2015-01-04 09:31:47.814; Lucene Merge Thread #77]: applyDeletes took 
> 396 msec for 10 segments, 137 deleted docs, 719914 visited terms
> {noformat}
> What this means is even though I want an NRT reader every second, often I 
> don't get one for up to ~7 or more seconds.
> This is on an SSD, machine has 48 GB RAM, heap size is only 2 GB.  12 
> indexing threads.
> As hideously complex as this code is, I think there are some inefficiencies, 
> but fixing them could be hard / make code even hairier ...
> Also, this code is mega-locked: holds IW's lock, holds BD's lock.  It blocks 
> things like merges kicking off or finishing...
> E.g., we pull the MergedIterator many times on the same set of sub-iterators. 
>  Maybe we can create the sorted terms up front and reuse that?
> Maybe we should go "term stride" (one term visits all N segments) not 
> "segment stride" (visit each segment, iterating all deleted terms for it).  
> Just iterating the terms to be deleted takes a sizable part of the time, and 
> we now do that once for every segment in the index.
> Also, the "isUnique" bit in LUCENE-6005 should help here, since if we know 
> the field is unique, we can stop seekExact once we found a segment that has 
> the deleted term, we can maybe pass false for removeDuplicates to 
> MergedIterator...






[jira] [Updated] (LUCENE-6161) Applying deletes is sometimes dog slow

2015-01-12 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6161:
---
Attachment: LUCENE-6161.patch

Here's a less risky change that shows a sizable reduction in total time applying 
deletes (and opening NRT readers)... I think with some polishing the approach 
is committable.

It just makes the merged iterator more efficient (don't check for a field 
change on every term; don't merge if there's only 1 sub), and side-steps O(N^2) 
seekExact cost for smaller segments.
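
To illustrate the first two of those tweaks, here is a hedged sketch (class and 
method names are made up; this is not the actual patch): with a single sub 
there is nothing to merge, so all per-term priority-queue work can be skipped 
entirely.

{code:java}
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class MergeSketch {
  static Iterator<String> merge(List<Iterator<String>> subs) {
    if (subs.size() == 1) {
      return subs.get(0);  // only 1 sub: merging would be pure overhead
    }
    final PriorityQueue<Sub> pq =
        new PriorityQueue<>(Comparator.comparing((Sub s) -> s.head));
    for (Iterator<String> it : subs) {
      if (it.hasNext()) {
        pq.add(new Sub(it));
      }
    }
    return new Iterator<String>() {
      @Override public boolean hasNext() { return !pq.isEmpty(); }
      @Override public String next() {
        Sub top = pq.poll();
        String value = top.head;
        if (top.advance()) {
          pq.add(top);  // re-insert the sub with its next term
        }
        // a field-change check belongs here, and it only needs to run when
        // the queue's top switches sub-iterators, not on every single term
        return value;
      }
    };
  }

  private static final class Sub {
    final Iterator<String> it;
    String head;
    Sub(Iterator<String> it) { this.it = it; this.head = it.next(); }
    boolean advance() {
      if (!it.hasNext()) return false;
      head = it.next();
      return true;
    }
  }
}
{code}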

On an "index all wikipedia docs, 4 indexing threads, 350 MB IW buffer, opening 
NRT reader every 5 secs", total time to get 199 NRT readers went from 501 
seconds in trunk to 313 seconds with the patch.  Overall indexing rate is 
essentially the same (still strange!)...

> Applying deletes is sometimes dog slow
> --
>
> Key: LUCENE-6161
> URL: https://issues.apache.org/jira/browse/LUCENE-6161
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6161.patch, LUCENE-6161.patch, LUCENE-6161.patch
>
>
> I hit this while testing various use cases for LUCENE-6119 (adding 
> auto-throttle to ConcurrentMergeScheduler).
> When I tested "always call updateDocument" (each add buffers a delete term), 
> with many indexing threads, opening an NRT reader once per second (forcing 
> all deleted terms to be applied), I see that 
> BufferedUpdatesStream.applyDeletes sometimes seems to take a long time, 
> e.g.:
> {noformat}
> BD 0 [2015-01-04 09:31:12.597; Lucene Merge Thread #69]: applyDeletes took 
> 339 msec for 10 segments, 117 deleted docs, 607333 visited terms
> BD 0 [2015-01-04 09:31:18.148; Thread-4]: applyDeletes took 5533 msec for 62 
> segments, 10989 deleted docs, 8517225 visited terms
> BD 0 [2015-01-04 09:31:21.463; Lucene Merge Thread #71]: applyDeletes took 
> 1065 msec for 10 segments, 470 deleted docs, 1825649 visited terms
> BD 0 [2015-01-04 09:31:26.301; Thread-5]: applyDeletes took 4835 msec for 61 
> segments, 14676 deleted docs, 9649860 visited terms
> BD 0 [2015-01-04 09:31:35.572; Thread-11]: applyDeletes took 6073 msec for 72 
> segments, 13835 deleted docs, 11865319 visited terms
> BD 0 [2015-01-04 09:31:37.604; Lucene Merge Thread #75]: applyDeletes took 
> 251 msec for 10 segments, 58 deleted docs, 240721 visited terms
> BD 0 [2015-01-04 09:31:44.641; Thread-11]: applyDeletes took 5956 msec for 64 
> segments, 15109 deleted docs, 10599034 visited terms
> BD 0 [2015-01-04 09:31:47.814; Lucene Merge Thread #77]: applyDeletes took 
> 396 msec for 10 segments, 137 deleted docs, 719914 visited terms
> {noformat}
> What this means is even though I want an NRT reader every second, often I 
> don't get one for up to ~7 or more seconds.
> This is on an SSD, machine has 48 GB RAM, heap size is only 2 GB.  12 
> indexing threads.
> As hideously complex as this code is, I think there are some inefficiencies, 
> but fixing them could be hard / make code even hairier ...
> Also, this code is mega-locked: holds IW's lock, holds BD's lock.  It blocks 
> things like merges kicking off or finishing...
> E.g., we pull the MergedIterator many times on the same set of sub-iterators. 
>  Maybe we can create the sorted terms up front and reuse that?
> Maybe we should go "term stride" (one term visits all N segments) not 
> "segment stride" (visit each segment, iterating all deleted terms for it).  
> Just iterating the terms to be deleted takes a sizable part of the time, and 
> we now do that once for every segment in the index.
> Also, the "isUnique" bit in LUCENE-6005 should help here, since if we know 
> the field is unique, we can stop seekExact once we found a segment that has 
> the deleted term, we can maybe pass false for removeDuplicates to 
> MergedIterator...






[jira] [Created] (SOLR-6969) Just like we have to retry when the NameNode is in safemode on Solr startup, we also need to retry when opening a transaction log file for append when we get a RecoveryInP

2015-01-12 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6969:
-

 Summary: Just like we have to retry when the NameNode is in 
safemode on Solr startup, we also need to retry when opening a transaction log 
file for append when we get a RecoveryInProgressException.
 Key: SOLR-6969
 URL: https://issues.apache.org/jira/browse/SOLR-6969
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller


This can happen after a hard crash and restart. The current workaround is to 
stop, wait it out, and start again. We should instead retry for a given amount 
of time, as we already do when we detect safemode.
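
A rough sketch of that retry loop (illustrative only: the timeout, the poll 
interval, and the assumption that {{RecoveryInProgressException}} surfaces 
directly from {{FileSystem.append()}} rather than wrapped in a 
{{RemoteException}} would all need checking):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.RecoveryInProgressException;

public class TlogAppendRetry {
  static FSDataOutputStream appendWithRetry(FileSystem fs, Path tlog, long timeoutMs)
      throws IOException, InterruptedException {
    final long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      try {
        return fs.append(tlog);  // fails while lease recovery is still running
      } catch (RecoveryInProgressException e) {
        if (System.currentTimeMillis() > deadline) {
          throw e;               // waited long enough; give up
        }
        Thread.sleep(1000);      // back off and retry, as we do for safemode
      }
    }
  }
}
{code}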






[jira] [Updated] (SOLR-5211) updating parent as childless makes old children orphans

2015-01-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-5211:
---
Fix Version/s: (was: 5.0)

Removing 5.0 as fix version - no patch, action items, or assigned developer.

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, Trunk
>Reporter: Mikhail Khludnev
> Fix For: Trunk
>
>
> If I have a parent with children in the index, I can send an update omitting 
> the children; as a result the old children become orphaned. 
> I suppose the separate \_root_ fields make much trouble. I propose to extend 
> the notion of uniqueKey and let it span across blocks, which makes updates 
> unambiguous.  
> WDYT? Would you like to see a test that proves this issue?






[jira] [Comment Edited] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-12 Thread Praneeth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273862#comment-14273862
 ] 

Praneeth edited comment on SOLR-6367 at 1/12/15 6:03 PM:
-

Sorry, I couldn't get to your previous comment earlier. I was too hasty making 
my first comment on the issue. I was wrong, and a call to {{flush()}} wouldn't 
make a difference. I have been able to consistently reproduce this. I suppose 
you tested it on the latest version. I will give the latest version a try. I 
was testing it on Solr 4.4.0. 

I noticed that the stream is not being flushed properly and I came to a quick 
conclusion earlier. I will look into it further and post my findings here.
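
For what it's worth: on HDFS a plain {{flush()}} only pushes bytes into 
client-side buffers; nothing is guaranteed to reach the datanodes until 
{{hflush()}} (or the stronger {{hsync()}}) is called, which would explain a 
0-length tlog after a kill -9. A minimal sketch of the distinction, assuming 
the tlog writes through an {{FSDataOutputStream}} (illustrative, not Solr's 
actual HdfsTransactionLog code):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;

public class DurableTlogWrite {
  static void writeRecord(FSDataOutputStream out, byte[] record) throws IOException {
    out.write(record);
    // flush() alone can leave everything in the client's buffers
    out.hflush();    // push to the datanodes; new readers can see the bytes
    // out.hsync();  // stronger still: also forces a datanode-side fsync
  }
}
{code}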


was (Author: praneeth.varma):
Sorry, I couldn't get to your previous comment earlier. I was too hasty making 
my first comment on the issue. I was wrong, and a call to {{flush()}} wouldn't 
make a difference. I have been able to consistently reproduce this. I suppose 
you tested it on the latest version. I will give the latest version a try. I 
was testing it on 4.4.0. 

I noticed that the stream is not being flushed properly and I came to a quick 
conclusion earlier. I will look into it further and post my findings here.

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Set up Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> <!-- <autoCommit> ... </autoCommit> -->
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generates an empty tlog file (0 file size; the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop 667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop 667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate a Solr crash by killing the process with the -9 option.
> 5) Restart the Solr process. The observation is that uncommitted documents are
> not replayed and the files in the tlog directory are cleaned up. Hence the
> uncommitted document(s) are lost.
> Am I missing anything, or is this a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
> non-empty tlog file is generated and after re-starting Solr the uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on a local file system), this issue is
> not observed either.
> {panel}






[jira] [Commented] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-12 Thread Praneeth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273862#comment-14273862
 ] 

Praneeth commented on SOLR-6367:


Sorry, I couldn't get to your previous comment earlier. I was too hasty making 
my first comment on the issue. I was wrong, and a call to {{flush()}} wouldn't 
make a difference. I have been able to consistently reproduce this. I suppose 
you tested it on the latest version. I will give the latest version a try. I 
was testing it on 4.4.0. 

I noticed that the stream is not being flushed properly and I came to a quick 
conclusion earlier. I will look into it further and post my findings here.

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Set up Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> <!-- <autoCommit> ... </autoCommit> -->
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generates an empty tlog file (0 file size; the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop 667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop 667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate a Solr crash by killing the process with the -9 option.
> 5) Restart the Solr process. The observation is that uncommitted documents are
> not replayed and the files in the tlog directory are cleaned up. Hence the
> uncommitted document(s) are lost.
> Am I missing anything, or is this a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
> non-empty tlog file is generated and after re-starting Solr the uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on a local file system), this issue is
> not observed either.
> {panel}






[jira] [Updated] (SOLR-6700) ChildDocTransformer doesn't return correct children after updating and optimising solr index

2015-01-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6700:
---
 Priority: Critical  (was: Blocker)
Fix Version/s: (was: 5.0)

I'm changing this from being a Blocker for 5.0 - there's no patch or assigned 
developer, but we'll keep this open for the future.

> ChildDocTransformer doesn't return correct children after updating and 
> optimising solr index
> 
>
> Key: SOLR-6700
> URL: https://issues.apache.org/jira/browse/SOLR-6700
> Project: Solr
>  Issue Type: Bug
>Reporter: Bogdan Marinescu
>Priority: Critical
> Fix For: 4.10.4
>
>
> I have an index with nested documents. 
> {code:title=schema.xml snippet|borderStyle=solid}
> <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
> <field name="entityType" type="int" indexed="true" stored="true" required="true"/>
> <field name="pName" type="string" indexed="true" stored="true"/>
> <field name="cAlbum" type="string" indexed="true" stored="true"/>
> <field name="cSong" type="string" indexed="true" stored="true"/>
> <field name="_root_" type="string" indexed="true" stored="true"/>
> <field name="_version_" type="long" indexed="true" stored="true"/>
> {code}
> Afterwards I add the following documents:
> {code}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="pName">Test Artist 1</field>
>     <field name="entityType">1</field>
>     <doc>
>       <field name="id">11</field>
>       <field name="cAlbum">Test Album 1</field>
>       <field name="cSong">Test Song 1</field>
>       <field name="entityType">2</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="pName">Test Artist 2</field>
>     <field name="entityType">1</field>
>     <doc>
>       <field name="id">22</field>
>       <field name="cAlbum">Test Album 2</field>
>       <field name="cSong">Test Song 2</field>
>       <field name="entityType">2</field>
>     </doc>
>   </doc>
> </add>
> {code}
> After performing the following query 
> {quote}
> http://localhost:8983/solr/collection1/select?q=%7B!parent+which%3DentityType%3A1%7D&fl=*%2Cscore%2C%5Bchild+parentFilter%3DentityType%3A1%5D&wt=json&indent=true
> {quote}
> I get a correct answer (child matches parent, check _root_ field)
> {code:title=add docs|borderStyle=solid}
> {
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "fl":"*,score,[child parentFilter=entityType:1]",
>   "indent":"true",
>   "q":"{!parent which=entityType:1}",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"1",
> "pName":"Test Artist 1",
> "entityType":1,
> "_version_":1483832661048819712,
> "_root_":"1",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"11",
>   "cAlbum":"Test Album 1",
>   "cSong":"Test Song 1",
>   "entityType":2,
>   "_root_":"1"}]},
>   {
> "id":"2",
> "pName":"Test Artist 2",
> "entityType":1,
> "_version_":1483832661050916864,
> "_root_":"2",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"22",
>   "cAlbum":"Test Album 2",
>   "cSong":"Test Song 2",
>   "entityType":2,
>   "_root_":"2"}]}]
>   }}
> {code}
> Afterwards I try to update one document:
> {code:title=update doc|borderStyle=solid}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="pName" update="set">INIT</field>
>   </doc>
> </add>
> {code}
> After performing the previous query I get the right result (like the previous 
> one but with the pName field updated).
> The problem only comes after performing an *optimize*. 
> Now, the same query yields the following result:
> {code}
> {
>   "responseHeader":{
> "status":0,
> "QTime":1,
> "params":{
>   "fl":"*,score,[child parentFilter=entityType:1]",
>   "indent":"true",
>   "q":"{!parent which=entityType:1}",
>   "wt":"json"}},
>   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
>   {
> "id":"2",
> "pName":"Test Artist 2",
> "entityType":1,
> "_version_":1483832661050916864,
> "_root_":"2",
> "score":1.0,
> "_childDocuments_":[
> {
>   "id":"11",
>   "cAlbum":"Test Album 1",
>   "cSong":"Test Song 1",
>   "entityType":2,
>   "_root_":"1"},
> {
>   "id":"22",
>   "cAlbum":"Test Album 2",
>   "cSong":"Test Song 2",
>   "entityType":2,
>   "_root_":"2"}]},
>   {
> "id":"1",
> "pName":"INIT",
> "entityType":1,
> "_root_":"1",
> "_version_":1483832916867809280,
> "score":1.0}]
>   }}
> {code}
> As can be seen, the document with id:2 now contains the child with id:11 that 
> belongs to the document with id:1. 
> I haven't found any references on the web about this except 
> http://blog.griddynamics.com/2013/09/solr-block-join-support.html
> Similar issue: SOLR-6096
> Is this problem known? Is there a workaround for this? 






[jira] [Commented] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-12 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273846#comment-14273846
 ] 

Timothy Potter commented on SOLR-6952:
--

same as zkcli.sh

> Re-using data-driven configsets by default is not helpful
> -
>
> Key: SOLR-6952
> URL: https://issues.apache.org/jira/browse/SOLR-6952
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0
>Reporter: Grant Ingersoll
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6952.patch
>
>
> When creating collections (I'm using the bin/solr scripts), I think we should 
> automatically copy configsets, especially when running in "getting started 
> mode" or data driven mode.
> I did the following:
> {code}
> bin/solr create_collection -n foo
> bin/post foo some_data.csv
> {code}
> I then created a second collection with the intention of sending in the same 
> data, but this time run through a python script that changed a value from an 
> int to a string (since it was an enumerated type) and was surprised to see 
> that I got:
> {quote}
> Caused by: java.lang.NumberFormatException: For input string: "NA"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:441)
> {quote}
> for my new version of the data that passes in a string instead of an int, as 
> this new collection had only seen strings for that field.






[jira] [Commented] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273844#comment-14273844
 ] 

Mark Miller commented on SOLR-6367:
---

I have not been able to reproduce this so far. With kill -9, I am not losing 
the doc.

> empty tlog on HDFS when hard crash - no docs to replay on recovery
> --
>
> Key: SOLR-6367
> URL: https://issues.apache.org/jira/browse/SOLR-6367
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Mark Miller
> Fix For: 5.0, Trunk
>
>
> Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
> Jul 2014)...
> {panel}
> Reproduce steps:
> 1) Set up Solr to run on HDFS like this:
> {noformat}
> java -Dsolr.directoryFactory=HdfsDirectoryFactory
>  -Dsolr.lock.type=hdfs
>  -Dsolr.hdfs.home=hdfs://host:port/path
> {noformat}
> For the purpose of this testing, turn off the default auto commit in 
> solrconfig.xml, i.e. comment out autoCommit like this:
> {code}
> <!-- <autoCommit> ... </autoCommit> -->
> {code}
> 2) Add a document without commit:
> {{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
> "Content-type:text/xml; charset=utf-8" --data-binary "@solr.xml"}}
> 3) Solr generates an empty tlog file (0 file size; the last one ends with 6):
> {noformat}
> [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
> /path/collection1/core_node1/data/tlog
> Found 5 items
> -rw-r--r--   1 hadoop hadoop 667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.001
> -rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.003
> -rw-r--r--   1 hadoop hadoop 667 2014-07-18 08:47
> /path/collection1/core_node1/data/tlog/tlog.004
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.005
> -rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
> /path/collection1/core_node1/data/tlog/tlog.006
> {noformat}
> 4) Simulate a Solr crash by killing the process with the -9 option.
> 5) Restart the Solr process. The observation is that uncommitted documents are
> not replayed and the files in the tlog directory are cleaned up. Hence the
> uncommitted document(s) are lost.
> Am I missing anything, or is this a bug?
> BTW, additional observations:
> a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
> non-empty tlog file is generated and after re-starting Solr the uncommitted
> document is replayed as expected.
> b) If Solr doesn't run on HDFS (i.e. on a local file system), this issue is
> not observed either.
> {panel}






[jira] [Commented] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-12 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273841#comment-14273841
 ] 

Gregory Chanan commented on SOLR-6963:
--

bq. Did you try going past version 2.3?

No, I didn't.

> Upgrade hadoop version to 2.3
> -
>
> Key: SOLR-6963
> URL: https://issues.apache.org/jira/browse/SOLR-6963
> Project: Solr
>  Issue Type: Task
>  Components: Hadoop Integration
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6963.patch
>
>
> See SOLR-6915; we need at least hadoop version 2.3 to be able to use the 
> MiniKdc.






[jira] [Commented] (SOLR-6349) LocalParams for enabling/disabling individual stats

2015-01-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273832#comment-14273832
 ] 

Hoss Man commented on SOLR-6349:


bq. Can this patch allow output of countDistinct but not distinctValues?

i don't think we should tackle that as part of this issue - it's already fairly 
complicated w/o introducing new permutations of options.

i think the best approach would be to leave "calcDistinct" alone as it is now 
but deprecate/discourage it and move towards adding an entirely new stats 
option for computing an approximated count using hyperloglog (i opened a new 
issue for this: SOLR-6968)

> LocalParams for enabling/disabling individual stats
> ---
>
> Key: SOLR-6349
> URL: https://issues.apache.org/jira/browse/SOLR-6349
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6349-tflobbe.patch, SOLR-6349-tflobbe.patch, 
> SOLR-6349-tflobbe.patch, SOLR-6349-xu.patch, SOLR-6349-xu.patch, 
> SOLR-6349-xu.patch, SOLR-6349-xu.patch, SOLR-6349___bad_idea_broken.patch
>
>
> Stats component currently computes all stats (except for one) every time 
> because they are relatively cheap, and in some cases dependent on eachother 
> for distrib computation -- but if we start layering stats on other things it 
> becomes unnecessarily expensive to compute all the stats when they just want 
> the "sum" (and it will definitely become excessively verbose in the 
> responses).  
> The plan here is to use local params to make this configurable.  All of the 
> existing stat options could be modeled as a simple boolean param, but future 
> params (like percentiles) might take in a more complex param value...
> Example:
> {noformat}
> stats.field={!min=true max=true percentiles='99,99.999'}price
> stats.field={!mean=true}weight
> {noformat}






[jira] [Created] (SOLR-6968) add hyperloglog in statscomponent as an approximate count

2015-01-12 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6968:
--

 Summary: add hyperloglog in statscomponent as an approximate count
 Key: SOLR-6968
 URL: https://issues.apache.org/jira/browse/SOLR-6968
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man


stats component currently supports "calcDistinct" but it's terribly inefficient 
-- especially in distrib mode.

we should add support for using hyperloglog to compute an approximate count of 
distinct values (using localparams via SOLR-6349 to control the precision of 
the approximation)
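
For reference, a toy sketch of the hyperloglog idea (the hash function, 
register sizing, and corrections below are the textbook versions, purely for 
illustration -- not whatever implementation ultimately lands here):

{code:java}
import java.nio.charset.StandardCharsets;

public class HllSketch {
  private final int p;       // precision; m = 2^p registers
  private final byte[] reg;

  HllSketch(int p) { this.p = p; this.reg = new byte[1 << p]; }

  void add(String value) {
    long x = hash64(value);
    int idx = (int) (x >>> (64 - p));             // top p bits pick a register
    long w = x << p;                              // remaining bits
    int rank = (w == 0) ? 64 - p + 1 : Long.numberOfLeadingZeros(w) + 1;
    if (rank > reg[idx]) reg[idx] = (byte) rank;  // keep the max rank seen
  }

  long cardinality() {
    int m = reg.length;
    double alpha = 0.7213 / (1 + 1.079 / m);      // bias correction, m >= 128
    double sum = 0;
    int zeros = 0;
    for (byte r : reg) {
      sum += Math.pow(2, -r);
      if (r == 0) zeros++;
    }
    double e = alpha * m * m / sum;
    if (e <= 2.5 * m && zeros > 0) {
      e = m * Math.log((double) m / zeros);       // small-range correction
    }
    return Math.round(e);
  }

  private static long hash64(String s) {
    long h = 0xcbf29ce484222325L;                 // FNV-1a 64-bit
    for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
      h ^= (b & 0xffL);
      h *= 0x100000001b3L;
    }
    h ^= h >>> 30; h *= 0xbf58476d1ce4e5b9L;      // splitmix64 finalizer
    h ^= h >>> 27; h *= 0x94d049bb133111ebL;
    h ^= h >>> 31;
    return h;
  }
}
{code}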






[jira] [Commented] (SOLR-6937) In schemaless mode, field names with spaces should be converted

2015-01-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273823#comment-14273823
 ] 

Hoss Man commented on SOLR-6937:


bq. Are there problems that would result when changing the name of a field in 
FieldMutatingUpdateProcessor?

i suspect i put that in as a sanity check to protect the surface area of 
the API -- i don't know if relaxing that will cause problems, or if it's just 
something that's there because the ramifications of allowing it aren't really 
well tested in the rest of the FieldMutating code paths.

in particular: what does it mean? should the old field name be removed? should 
the corresponding field:value pair be removed, but other instances of that 
field:value2 be left in (ie: what if the mutator renames one instance of the 
field but not another?)

easiest thing would probably be to implement field renaming as a complete 
one-off special UpdateProcessor w/o using the FieldMutating framework (ie: no 
config, just something barebones for use in schemaless that can maybe later be 
re-parented in the class hierarchy to support more config options)
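
A barebones sketch of what such a one-off processor could look like (a 
hypothetical, untested class -- not an actual Solr processor):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.SolrInputField;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

public class RenameSpacesUpdateProcessor extends UpdateRequestProcessor {
  public RenameSpacesUpdateProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument doc = cmd.getSolrInputDocument();
    // copy the names first, since we mutate the document while walking it
    for (String name : new ArrayList<>(doc.getFieldNames())) {
      if (name.indexOf(' ') >= 0) {
        SolrInputField field = doc.removeField(name);
        doc.setField(name.replace(' ', '_'), field.getValue());
      }
    }
    super.processAdd(cmd);
  }
}
{code}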

> In schemaless mode, field names with spaces should be converted
> ---
>
> Key: SOLR-6937
> URL: https://issues.apache.org/jira/browse/SOLR-6937
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Reporter: Grant Ingersoll
>Assignee: Noble Paul
> Fix For: 5.0
>
>
> Assuming spaces in field names are still bad, we should automatically convert 
> them to not have spaces.  For instance, I indexed Citibike public data set 
> which has: 
> {quote}
> "tripduration","starttime","stoptime","start station id","start station 
> name","start station latitude","start station longitude","end station 
> id","end station name","end station latitude","end station 
> longitude","bikeid","usertype","birth year","gender"{quote}
> My vote would be to replace spaces w/ underscores.






[jira] [Commented] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273812#comment-14273812
 ] 

Noble Paul commented on SOLR-6952:
--

What are the long names?



> Re-using data-driven configsets by default is not helpful
> -
>
> Key: SOLR-6952
> URL: https://issues.apache.org/jira/browse/SOLR-6952
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0
>Reporter: Grant Ingersoll
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6952.patch
>
>
> When creating collections (I'm using the bin/solr scripts), I think we should 
> automatically copy configsets, especially when running in "getting started 
> mode" or data driven mode.
> I did the following:
> {code}
> bin/solr create_collection -n foo
> bin/post foo some_data.csv
> {code}
> I then created a second collection with the intention of sending in the same 
> data, but this time run through a python script that changed a value from an 
> int to a string (since it was an enumerated type) and was surprised to see 
> that I got:
> {quote}
> Caused by: java.lang.NumberFormatException: For input string: "NA"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:441)
> {quote}
> for my new version of the data that passes in a string instead of an int, as 
> this new collection had only seen strings for that field.






[jira] [Commented] (SOLR-6902) Use JUnit rules instead of inheritance with distributed Solr tests to allow for multiple tests within the same class

2015-01-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273789#comment-14273789
 ] 

Erick Erickson commented on SOLR-6902:
--

bq: a bad idea to fuss with it right before 5.0 I think

Yeah, that was my feeling too, nice to have confirmation...

I'll try to keep my local copy up to date with trunk to make the eventual 
reconciliation smoother, we'll see how that works.

> Use JUnit rules instead of inheritance with distributed Solr tests to allow 
> for multiple tests within the same class
> -
>
> Key: SOLR-6902
> URL: https://issues.apache.org/jira/browse/SOLR-6902
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-6902.patch, SOLR-6902.patch
>
>
> Finally got annoyed enough with too many things being clubbed into one test 
> method in all distributed Solr tests (anything inheriting from 
> {{BaseDistributedSearchTestCase}} and currently implementing {{doTest}}).
> This just lays the groundwork really for allowing multiple test methods 
> within the same class, and doesn't split tests as yet or flatten the 
> inheritance hierarchy (when abused for doing multiple tests), as this touches 
> a lot of files by itself. For that reason, the sooner this is picked up the 
> better.






[jira] [Commented] (SOLR-6952) Re-using data-driven configsets by default is not helpful

2015-01-12 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273785#comment-14273785
 ] 

Timothy Potter commented on SOLR-6952:
--

Actually, since I'm tweaking the arg names of bin/solr create options, I think 
I'll just line them up with what was already being done in zkcli.sh. 
Specifically, I'm going to change the options to be:

{code}
-c = name of collection or core to create (was -n)
-d = configuration directory to copy (was -c)
-n = configuration name (didn't exist)
{code}
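
So a create call would then look something like this (values made up for 
illustration):

{code}
bin/solr create -c mycollection -d data_driven_schema_configs -n myconfig
{code}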

> Re-using data-driven configsets by default is not helpful
> -
>
> Key: SOLR-6952
> URL: https://issues.apache.org/jira/browse/SOLR-6952
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis
>Affects Versions: 5.0
>Reporter: Grant Ingersoll
>Assignee: Timothy Potter
> Fix For: 5.0
>
> Attachments: SOLR-6952.patch
>
>
> When creating collections (I'm using the bin/solr scripts), I think we should 
> automatically copy configsets, especially when running in "getting started 
> mode" or data driven mode.
> I did the following:
> {code}
> bin/solr create_collection -n foo
> bin/post foo some_data.csv
> {code}
> I then created a second collection with the intention of sending in the same 
> data, but this time run through a python script that changed a value from an 
> int to a string (since it was an enumerated type) and was surprised to see 
> that I got:
> {quote}
> Caused by: java.lang.NumberFormatException: For input string: "NA"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Long.parseLong(Long.java:441)
> {quote}
> for my new version of the data that passes in a string instead of an int, as 
> this new collection had only seen strings for that field.






[jira] [Resolved] (SOLR-6967) SimplePostToolTest.testTypeSupported test fail.

2015-01-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-6967.

Resolution: Fixed
  Assignee: Erik Hatcher

> SimplePostToolTest.testTypeSupported test fail.
> ---
>
> Key: SOLR-6967
> URL: https://issues.apache.org/jira/browse/SOLR-6967
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Erik Hatcher
>
> I've seen this locally as well.
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11419/






[jira] [Commented] (SOLR-6967) SimplePostToolTest.testTypeSupported test fail.

2015-01-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273782#comment-14273782
 ] 

Erik Hatcher commented on SOLR-6967:


Yeah, this is fixed.  Sorry for the temporary noise.

> SimplePostToolTest.testTypeSupported test fail.
> ---
>
> Key: SOLR-6967
> URL: https://issues.apache.org/jira/browse/SOLR-6967
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>
> I've seen this locally as well.
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11419/






[jira] [Commented] (LUCENE-6177) Add CustomAnalyzer - a builder that creates Analyzers from the factory classes

2015-01-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273769#comment-14273769
 ] 

Robert Muir commented on LUCENE-6177:
-

+1 Uwe, looks nice.

> Add CustomAnalyzer - a builder that creates Analyzers from the factory classes
> --
>
> Key: LUCENE-6177
> URL: https://issues.apache.org/jira/browse/LUCENE-6177
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6177.patch
>
>
> I prepared a generic Analyzer class {{CustomAnalyzer}} that makes it easy 
> to build analyzers like in Solr or Elasticsearch. Under the hood it uses 
> the factory classes. The class works like a builder:
> {code:java}
> Analyzer ana = CustomAnalyzer.builder(Paths.get("/path/to/config/dir"))
>   .withTokenizer("standard")
>   .addTokenFilter("standard")
>   .addTokenFilter("lowercase")
>   .addTokenFilter("stop", "ignoreCase", "false", "words", "stopwords.txt", 
> "format", "wordset")
>   .build();
> {code}
> It is possible to give the resource loader (used by stopwords and similar). 
> By default it tries to load stuff from the context classloader (without any 
> class as reference, so paths must be absolute - this is the behaviour 
> ClasspathResourceLoader defaults to).
> In addition you can give a Lucene MatchVersion; by default it would use 
> Version.LATEST (once LUCENE-5900 is completely fixed).






[jira] [Updated] (SOLR-4242) A better spatial query parser

2015-01-12 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-4242:
---
Attachment: SOLR-4242.patch

Here's a WIP patch that just implements the filtering qparsers.
David, Ryan, am I on the right track with this? (Review 
https://reviews.apache.org/r/29813/).

> A better spatial query parser
> -
>
> Key: SOLR-4242
> URL: https://issues.apache.org/jira/browse/SOLR-4242
> Project: Solr
>  Issue Type: New Feature
>  Components: spatial
>Reporter: David Smiley
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4242.patch
>
>
> I've been thinking about how spatial support is exposed to Solr users. 
> Presently there's the older Solr 3 stuff, most prominently seen via 
> \{!geofilt} and \{!bbox} done by [~gsingers] (I think), and then there's the 
> Solr 4 fields using a special syntax parsed by Lucene 4 spatial that looks 
> like mygeofield:"Intersects(Circle(1 2 d=3))" What's inside the outer 
> parenthesis is parsed by Spatial4j as a shape, and it has a special 
> (non-standard) syntax for points, rects, and circles, and then there's WKT.  
> I believe this scheme was devised by [~ryantxu].
> I'd like to devise something that is both comprehensive and is aligned with 
> standards to the extent that it's prudent.  The old Solr 3 stuff is not 
> comprehensive and not standardized.  The newer stuff is comprehensive but 
> only a little based on standards. And I think it'd be nicer to implement it 
> as a Solr query parser.  I'll say more in the comments.






[jira] [Commented] (SOLR-6967) SimplePostToolTest.testTypeSupported test fail.

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273763#comment-14273763
 ] 

Mark Miller commented on SOLR-6967:
---

After a little digging, looks like this might have been a continuous fail that 
ehatcher fixed.

I saw it on an intermittent run myself, so either I saw a slightly different 
fail, or just had the timing to see it pre / post fix.

I'll resolve this later today if nothing more pops up on my jenkins machine 
locally.

> SimplePostToolTest.testTypeSupported test fail.
> ---
>
> Key: SOLR-6967
> URL: https://issues.apache.org/jira/browse/SOLR-6967
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>
> I've seen this locally as well.
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11419/






[jira] [Updated] (LUCENE-6177) Add CustomAnalyzer - a builder that creates Analyzers from the factory classes

2015-01-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6177:
--
Attachment: LUCENE-6177.patch

First patch.

I still have to add tests for it. This patch should just show what it looks like; 
it may still contain bugs, as it was quickly hacked together.

> Add CustomAnalyzer - a builder that creates Analyzers from the factory classes
> --
>
> Key: LUCENE-6177
> URL: https://issues.apache.org/jira/browse/LUCENE-6177
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-6177.patch
>
>
> I prepared some "generic Analyzer class {{CustomAnalyzer}}, that makes it 
> easy to build analyzers like in Solr or Elasticsearch. Under the hood it uses 
> the factory classes. The class is made like a builder:
> {code:java}
> Analyzer ana = CustomAnalyzer.builder(Paths.get("/path/to/config/dir"))
>   .withTokenizer("standard")
>   .addTokenFilter("standard")
>   .addTokenFilter("lowercase")
>   .addTokenFilter("stop", "ignoreCase", "false", "words", "stopwords.txt", 
> "format", "wordset")
>   .build();
> {code}
> It is possible to supply the resource loader (used by stopwords and similar 
> resources). By default it tries to load them from the context classloader 
> (without any class as reference, so paths must be absolute - this is the 
> behaviour {{ClasspathResourceLoader}} defaults to).
> In addition you can give a Lucene MatchVersion; by default it would use 
> Version.LATEST (once LUCENE-5900 is completely fixed).
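A hedged sketch of using those two knobs, assuming the builder exposes a ResourceLoader-taking {{builder()}} overload and a default-match-version setter as described above (this is a WIP patch, so these names may change):

{code:java}
// Hedged sketch: supply an explicit resource loader and match version
// instead of relying on the defaults described above.
Analyzer ana = CustomAnalyzer.builder(new ClasspathResourceLoader())
  .withDefaultMatchVersion(Version.LATEST)
  .withTokenizer("standard")
  .addTokenFilter("lowercase")
  .build();
{code}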



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6963) Upgrade hadoop version to 2.3

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273756#comment-14273756
 ] 

Mark Miller commented on SOLR-6963:
---

+1.

Did you try going past version 2.3? I remember spending a few minutes on it a 
few months back; I hit some tough changes and dropped it at the time. Perhaps 
they involved the test code, but I have very little memory of it.

> Upgrade hadoop version to 2.3
> -
>
> Key: SOLR-6963
> URL: https://issues.apache.org/jira/browse/SOLR-6963
> Project: Solr
>  Issue Type: Task
>  Components: Hadoop Integration
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-6963.patch
>
>
> See SOLR-6915; we need at least hadoop version 2.3 to be able to use the 
> MiniKdc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6967) SimplePostToolTest.testTypeSupported test fail.

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273749#comment-14273749
 ] 

Mark Miller commented on SOLR-6967:
---

{noformat}
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([10BB7BA74A19479C:840F4D445D26E6F3]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.util.SimplePostToolTest.testTypeSupported(SimplePostToolTest.java:116)
{noformat}

> SimplePostToolTest.testTypeSupported test fail.
> ---
>
> Key: SOLR-6967
> URL: https://issues.apache.org/jira/browse/SOLR-6967
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>
> I've seen this locally as well.
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11419/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6967) SimplePostToolTest.testTypeSupported test fail.

2015-01-12 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6967:
-

 Summary: SimplePostToolTest.testTypeSupported test fail.
 Key: SOLR-6967
 URL: https://issues.apache.org/jira/browse/SOLR-6967
 Project: Solr
  Issue Type: Test
Reporter: Mark Miller


I've seen this locally as well.

http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11419/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6902) Use JUnit rules instead of inheritance with distributed Solr tests to allow for multiple tests within the same class

2015-01-12 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273736#comment-14273736
 ] 

Mark Miller commented on SOLR-6902:
---

It's a great issue, but I think it's a bad idea to fuss with it right before 5.0. 
We should be focused on test hardening rather than on something that might 
introduce instability. I know that is a bit inconvenient, but Anshum has said he 
will be cutting an RC very soon.

I think it's also too painful to simply do it on trunk for now and backport 
later.

We will have a release branch very soon though and then it can go in.

> Use JUnit rules instead of inheritance with distributed Solr tests to allow 
> for multiple tests within the same class
> -
>
> Key: SOLR-6902
> URL: https://issues.apache.org/jira/browse/SOLR-6902
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Ramkumar Aiyengar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-6902.patch, SOLR-6902.patch
>
>
> Finally got annoyed enough with too many things being clubbed into one test 
> method in all distributed Solr tests (anything inheriting from 
> {{BaseDistributedSearchTestCase}} and currently implementing {{doTest}}).
> This just lays the groundwork, really, for allowing multiple test methods 
> within the same class; it doesn't split tests yet or flatten the 
> inheritance hierarchy (when abused for doing multiple tests), as this touches 
> a lot of files by itself. For that reason, the sooner this is picked up the 
> better.
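As a hedged illustration of the general technique (not the attached patch; the class and rule names here are made up), a JUnit rule can carry the setup a base class used to provide via inheritance, so one class can hold several independent test methods:

{code:java}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class RuleBasedDistribTest {
  // The rule replaces the setUp/tearDown a base class would provide.
  @Rule
  public final ExternalResource cluster = new ExternalResource() {
    @Override protected void before() { /* start the test cluster (hypothetical) */ }
    @Override protected void after()  { /* tear it down */ }
  };

  @Test public void testQuery()  { /* one focused test method */ }
  @Test public void testUpdate() { /* another, in the same class */ }
}
{code}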



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6966) For Data Driven Schema, consider multi-word text fields to be text, not string field types

2015-01-12 Thread Grant Ingersoll (JIRA)
Grant Ingersoll created SOLR-6966:
-

 Summary: For Data Driven Schema, consider multi-word text fields 
to be text, not string field types
 Key: SOLR-6966
 URL: https://issues.apache.org/jira/browse/SOLR-6966
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll


A tricky situation, for sure, but I suspect that in data-driven mode, when 
guessing field types, we should treat multi-word strings as text by default, not 
String, so that the user's first experience is that they can search against that 
field.

Alternatively, create a second field that is either the String version or the 
Text version.

An even more advanced option: use pseudo-fields (like what we do for some 
spatial types) and intelligently use one or the other depending on the context, 
e.g. faceting uses one form, search uses the other.
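In schema terms, the trade-off being weighed looks roughly like this (stock field types; the field name is only an example):

{code:xml}
<!-- guessed as string today: exact-match only, good for faceting -->
<field name="subject" type="string" indexed="true" stored="true"/>

<!-- proposed default for multi-word values: tokenized, so it's searchable -->
<field name="subject" type="text_general" indexed="true" stored="true"/>
{code}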



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273732#comment-14273732
 ] 

Anshum Gupta edited comment on SOLR-6496 at 1/12/15 4:20 PM:
-

Thanks Steve. I'll commit this later today.
I'll just change the logic to compute _timeOutTime = System.nanoTime() + 
timeOutNano_ once and use it to compare and exit.



was (Author: anshumg):
Thanks Steve. I'll commit this after a small change later today.
We should just compute _timeOutTime = System.nanoTime() + timeOutNano_ once and 
use it to compare and exit.
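A hedged sketch of that change (names are hypothetical, not the attached patch):

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;

class RetrySketch {
  // Compute the deadline once, then compare against it before each retry
  // so the timeAllowed threshold is honored.
  void doRetries(List<String> servers, long timeAllowedMillis) {
    final long timeOutTime = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeAllowedMillis);
    for (String server : servers) {
      if (System.nanoTime() > timeOutTime) {
        break; // threshold met: stop retrying and let the failure bubble up
      }
      // ... attempt the request against this server ...
    }
  }
}
{code}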


> LBHttpSolrServer should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, it should no longer retry but let the exception bubble up, 
> allowing the request to either error out or return partial results per the 
> shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6496) LBHttpSolrServer should stop server retries after the timeAllowed threshold is met

2015-01-12 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273732#comment-14273732
 ] 

Anshum Gupta commented on SOLR-6496:


Thanks Steve. I'll commit this after a small change later today.
We should just compute _timeOutTime = System.nanoTime() + timeOutNano_ once and 
use it to compare and exit.


> LBHttpSolrServer should stop server retries after the timeAllowed threshold 
> is met
> --
>
> Key: SOLR-6496
> URL: https://issues.apache.org/jira/browse/SOLR-6496
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Davids
>Assignee: Anshum Gupta
>Priority: Critical
> Fix For: 5.0
>
> Attachments: SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch, 
> SOLR-6496.patch, SOLR-6496.patch, SOLR-6496.patch
>
>
> The LBHttpSolrServer will continue to perform retries for each server it was 
> given without honoring the timeAllowed request parameter. Once the threshold 
> has been met, it should no longer retry but let the exception bubble up, 
> allowing the request to either error out or return partial results per the 
> shards.tolerant request parameter.
> For a little more context on how this can be extremely problematic, please 
> see the comment here: 
> https://issues.apache.org/jira/browse/SOLR-5986?focusedCommentId=14100991&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14100991
>  (#2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5900) Version cleanup

2015-01-12 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-5900.
---
Resolution: Fixed

> Version cleanup
> ---
>
> Key: LUCENE-5900
> URL: https://issues.apache.org/jira/browse/LUCENE-5900
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5900-factories.patch, LUCENE-5900.patch
>
>
> There are still a couple things taking {{Version}} in their constructor 
> (AnalyzingInfixSuggester/BlendedInfixSuggester), {{TEST_VERSION_CURRENT}} 
> isn't needed anymore, and there are a number of places with 
> {{:Post-Release-Update-Version:}}, which should be possible to remove 
> completely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5900) Version cleanup

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273724#comment-14273724
 ] 

ASF subversion and git services commented on LUCENE-5900:
-

Commit 1651128 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1651128 ]

Merged revision(s) 1651127 from lucene/dev/trunk:
LUCENE-5900: Fix remaining issues with default matchVersion

> Version cleanup
> ---
>
> Key: LUCENE-5900
> URL: https://issues.apache.org/jira/browse/LUCENE-5900
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5900-factories.patch, LUCENE-5900.patch
>
>
> There are still a couple things taking {{Version}} in their constructor 
> (AnalyzingInfixSuggester/BlendedInfixSuggester), {{TEST_VERSION_CURRENT}} 
> isn't needed anymore, and there are a number of places with 
> {{:Post-Release-Update-Version:}}, which should be possible to remove 
> completely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5900) Version cleanup

2015-01-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14273723#comment-14273723
 ] 

ASF subversion and git services commented on LUCENE-5900:
-

Commit 1651127 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1651127 ]

LUCENE-5900: Fix remaining issues with default matchVersion

> Version cleanup
> ---
>
> Key: LUCENE-5900
> URL: https://issues.apache.org/jira/browse/LUCENE-5900
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
>Assignee: Ryan Ernst
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5900-factories.patch, LUCENE-5900.patch
>
>
> There are still a couple things taking {{Version}} in their constructor 
> (AnalyzingInfixSuggester/BlendedInfixSuggester), {{TEST_VERSION_CURRENT}} 
> isn't needed anymore, and there are a number of places with 
> {{:Post-Release-Update-Version:}}, which should be possible to remove 
> completely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


