[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 14503 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14503/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle 
the request.
at 
__randomizedtesting.SeedInfo.seed([E075FA4BF6D78D0E:EA6449B733BB7EA]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1084)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:953)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.handler.PingRequestHandlerTest.testPingInClusterWithNoHealthCheck(PingRequestHandlerTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1660)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:866)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:902)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
a

[jira] [Commented] (SOLR-6304) Transforming and Indexing custom JSON data

2015-11-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991275#comment-14991275
 ] 

Mikhail Khludnev commented on SOLR-6304:


This is what happened to me. I raised SOLR-8240; please let me know what you 
think. 

> Transforming and Indexing custom JSON data
> --
>
> Key: SOLR-6304
> URL: https://issues.apache.org/jira/browse/SOLR-6304
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 4.10, Trunk
>
> Attachments: SOLR-6304.patch, SOLR-6304.patch
>
>
> example
> {noformat}
> curl 
> localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type
>  -d '
> {
>   "id": "0001",
>   "type": "donut",
>   "name": "Cake",
>   "ppu": 0.55,
>   "batters": {
>   "batter":
>   [
>   { "id": "1001", "type": 
> "Regular" },
>   { "id": "1002", "type": 
> "Chocolate" },
>   { "id": "1003", "type": 
> "Blueberry" },
>   { "id": "1004", "type": 
> "Devil's Food" }
>   ]
>   }
> }'
> {noformat}
> should produce the following output docs
> {noformat}
> { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
> { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
> { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
> { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
> {noformat}
> the split param is the element in the tree where it should be split into 
> multiple docs. The 'f' are field name mappings
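
A minimal sketch of the transformation described above (illustrative Python, 
not Solr's actual JSON loader): one output doc per element under the split 
path, with each `f` mapping resolved either inside the split element or from 
the document root.

```python
def split_docs(src, split_path, field_map):
    """field_map: output field name -> absolute path into the source JSON."""
    parts = split_path.strip("/").split("/")
    # Walk down to the array that the split path points at.
    node = src
    for p in parts:
        node = node[p]
    docs = []
    for child in node:
        doc = {}
        for field, path in field_map.items():
            steps = path.strip("/").split("/")
            if steps[:len(parts)] == parts:
                # Path lies under the split point: resolve inside the child.
                value = child
                for s in steps[len(parts):]:
                    value = value[s]
            else:
                # Path lies outside the split point: resolve from the root.
                value = src
                for s in steps:
                    value = value[s]
            doc[field] = value
        docs.append(doc)
    return docs

recipe = {
    "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55,
    "batters": {"batter": [
        {"id": "1001", "type": "Regular"},
        {"id": "1002", "type": "Chocolate"},
    ]},
}
docs = split_docs(recipe, "/batters/batter",
                  {"recipeId": "/id", "recipeType": "/type",
                   "id": "/batters/batter/id", "type": "/batters/batter/type"})
```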



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8240) mapUniqueKeyOnly=true silently ignores field mapping params f=.. when indexing custom JSON

2015-11-04 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-8240:
--

 Summary: mapUniqueKeyOnly=true silently ignores field mapping 
params f=.. when indexing custom JSON
 Key: SOLR-8240
 URL: https://issues.apache.org/jira/browse/SOLR-8240
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 5.3.1
Reporter: Mikhail Khludnev


Since SOLR-6633, most of the published JSON examples don't work in the 
{{techproducts}} example, which makes for an unpleasant user experience.
see [comment 
1|https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers?focusedCommentId=61326627#comment-61326627]
 and [comment 
2|https://issues.apache.org/jira/browse/SOLR-6304?focusedCommentId=14988279&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14988279]
 

It's proposed to throw an explicit error when {{mapUniqueKeyOnly=true}} and 
{{f=..}} are supplied together, as is done now for {{srcField=..}} with {{split=..}}: 
{{"msg":"Raw data can be stored only if split=/","code":400,}}
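
The proposed guard could look like the following sketch (hypothetical Python, 
mirroring the shape of the existing split/srcField check; the names and the 
exception class are illustrative, not Solr's actual code):

```python
class BadRequest(Exception):
    """Stands in for Solr's SolrException with HTTP code 400."""
    def __init__(self, msg, code=400):
        super().__init__(msg)
        self.code = code

def validate_json_params(params):
    """Reject the conflicting mapUniqueKeyOnly=true + f=... combination."""
    map_unique = params.get("mapUniqueKeyOnly", "false").lower() == "true"
    if map_unique and "f" in params:
        raise BadRequest(
            "Field mappings (f=...) cannot be used with mapUniqueKeyOnly=true")

validate_json_params({"split": "/", "f": "id:/docId"})  # accepted
try:
    validate_json_params({"mapUniqueKeyOnly": "true", "f": "id:/docId"})
    raised = False
except BadRequest as e:
    raised = (e.code == 400)
```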






[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1592: POMs out of sync

2015-11-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1592/

No tests ran.

Build Log:
[...truncated 24666 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:791: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:290: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:409:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:574:
 Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': 
Error installing artifact's metadata: Error while deploying metadata: Failed to 
transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/maven-metadata.xml.
 Return code is: 502

Total time: 11 minutes 48 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1007 - Still Failing

2015-11-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1007/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=27929, name=collection4, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=27929, name=collection4, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:45597/v: collection already exists: 
awholynewstresscollection_collection4_1
at __randomizedtesting.SeedInfo.seed([879120FF3D28603D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1574)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1595)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:888)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
188 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=28412, 
name=searcherExecutor-3491-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=27631, 
name=searcherExecutor-2994-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=28976, 
name=zkCallback-722-thread-24-processing-n:127.0.0.1:44458_v-EventThread, 
state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
4) Thread[id=28419, name=searcherExecutor-3483-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)   
  at 
java.util.concurrent.Thread

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 14792 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14792/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([72E7104EE9F8F3BA:D5A3A8EA8443E003]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationAfterPeerSync(CdcrReplicationHandlerTest.java:158)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1660)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:866)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:902)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:916)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.Statement

[jira] [Commented] (SOLR-8173) CLONE - Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover as well as lose updates that should have been reco

2015-11-04 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991039#comment-14991039
 ] 

Varun Thacker commented on SOLR-8173:
-

I tried to reproduce this and I think there could be two bugs in play here:

1. The bug Matteo mentioned. These are the steps I used to reproduce it:

{code}
./bin/solr start -e cloud -noprompt -z localhost:2181

http://localhost:8983/solr/admin/collections?action=CREATE&name=test3&collection.configName=gettingstarted&numShards=1&replicationFactor=2

core_node1 = core_node2 = active

./bin/solr stop -p 7574

core_node2 = down

curl http://127.0.0.1:8983/solr/test3/update?commit=true -H 
'Content-type:application/json' -d '[{"id" : "1"}]'

./bin/solr stop -p 8983

./bin/solr start -c -z localhost:2181 -s example/cloud/node2/solr -p 7574; 
sleep 10; ./bin/solr start -c -z localhost:2181 -s example/cloud/node1/solr -p 
8983

At this point both replicas are 'ACTIVE', replica 2 becomes the leader, and the 
collection has 0 documents.
{code}

2. A slight variation of the test also leads to lost updates. These are the 
steps I used to reproduce it:

{code}
./bin/solr start -e cloud -noprompt -z localhost:2181

http://localhost:8983/solr/admin/collections?action=CREATE&name=test1&collection.configName=gettingstarted&numShards=1&replicationFactor=2

core_node1 = core_node2 = active

./bin/solr stop -p 7574

core_node2 = down

curl http://127.0.0.1:8983/solr/test1/update?commit=true -H 
'Content-type:application/json' -d '[{"id" : "1"}]'

./bin/solr stop -p 8983

./bin/solr start -c -z localhost:2181 -s example/cloud/node2/solr -p 7574
{code}

{code}
Replica 2 does not take leadership until the timeout; it stays in the down state.

INFO  - 2015-10-26 23:15:53.026; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.ShardLeaderElectionContext; 
Waiting until we see more replicas up for shard shard1: total=2 found=1 
timeoutin=139681ms

Replica 2 becomes leader after timeout

INFO  - 2015-10-26 23:18:13.127; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.ShardLeaderElectionContext; Was 
waiting for replicas to come up, but they are taking too long - assuming they 
won't come back till later
INFO  - 2015-10-26 23:18:13.128; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.ShardLeaderElectionContext; I 
may be the new leader - try and sync
INFO  - 2015-10-26 23:18:13.129; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.SyncStrategy; Sync replicas to 
http://192.168.1.9:7574/solr/test1_shard1_replica1/
INFO  - 2015-10-26 23:18:13.129; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.SyncStrategy; Sync Success - now 
sync replicas to me
INFO  - 2015-10-26 23:18:13.130; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.SyncStrategy; 
http://192.168.1.9:7574/solr/test1_shard1_replica1/ has no replicas
INFO  - 2015-10-26 23:18:13.131; [c:test1 s:shard1 r:core_node2 
x:test1_shard1_replica1] org.apache.solr.cloud.ShardLeaderElectionContext; I am 
the new leader: http://192.168.1.9:7574/solr/test1_shard1_replica1/ shard1
{code}

So for the first case, I am guessing that the znode at the head of the election 
queue gets picked as the leader when all replicas are active. If that's the 
case, can we pick the replica which has the latest data instead?
In the second case, a replica can become the leader after the timeout. Thinking 
aloud: should we instead mark the replica as recovery-failed by default, and 
have a parameter which, when specified, allows any replica to become the leader? 
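
The "pick the replica with the latest data" idea could be sketched like this 
(a toy illustration, not Solr's election code; the replica fields here are 
hypothetical):

```python
def pick_leader(replicas):
    """replicas: list of dicts with 'name', 'state', 'last_version'.
    Instead of electing whichever candidate registered first, prefer the
    active replica that has seen the most recent update version."""
    live = [r for r in replicas if r["state"] == "active"]
    if not live:
        return None
    return max(live, key=lambda r: r["last_version"])["name"]

leader = pick_leader([
    # core_node1 indexed doc x while core_node2 was down, so it is ahead.
    {"name": "core_node1", "state": "active", "last_version": 105},
    {"name": "core_node2", "state": "active", "last_version": 100},
])
```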

> CLONE - Leader recovery process can select the wrong leader if all replicas 
> for a shard are down and trying to recover as well as lose updates that 
> should have been recovered.
> ---
>
> Key: SOLR-8173
> URL: https://issues.apache.org/jira/browse/SOLR-8173
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Matteo Grolla
>Assignee: Mark Miller
>Priority: Critical
>  Labels: leader, recovery
> Fix For: 5.2.1
>
> Attachments: solr_8983.log, solr_8984.log
>
>
> I'm doing this test
> collection test is replicated on two solr nodes running on 8983, 8984
> using external zk
> initially both nodes are empty
> 1)turn on solr 8983
> 2)add,commit a doc x con solr 8983
> 3)turn off solr 8983
> 4)turn on solr 8984
> 5)shortly after (leader still not elected) turn on solr 8983
> 6)8984 is elected as leader
> 7)doc x is present on 8983 but not on 8984 (check issuing a query)
> In attachment are the solr.log files of both instances




[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 579 - Still Failing

2015-11-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/579/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:47286/b_ghk/l/awholynewcollection_0: 
Expected mime type application/octet-stream but got text/html.   
 
Error 500HTTP ERROR: 500 Problem 
accessing /b_ghk/l/awholynewcollection_0/select. Reason: 
{trace=java.lang.NullPointerException  at 
org.apache.solr.servlet.HttpSolrCall.getCoreByCollection(HttpSolrCall.java:786) 
 at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:270)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:415)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)  
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)  
at org.eclipse.jetty.server.Server.handle(Server.java:499)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)  at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
 at java.lang.Thread.run(Thread.java:745) ,code=500} Powered by Jetty://   

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:47286/b_ghk/l/awholynewcollection_0: Expected 
mime type application/octet-stream but got text/html. 


Error 500 


HTTP ERROR: 500
Problem accessing /b_ghk/l/awholynewcollection_0/select. Reason:
{trace=java.lang.NullPointerException
at 
org.apache.solr.servlet.HttpSolrCall.getCoreByCollection(HttpSolrCall.java:786)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:270)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:415)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractCon

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 14790 - Still Failing!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14790/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestAuthenticationFramework

Error Message:
16 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestAuthenticationFramework: 1) Thread[id=6058, 
name=qtp1109997419-6058, state=TIMED_WAITING, 
group=TGRP-TestAuthenticationFramework] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=6037, 
name=qtp1109997419-6037-selector-ServerConnectorManager@2c9235f2/3, 
state=RUNNABLE, group=TGRP-TestAuthenticationFramework]
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
        at org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
        at org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
        at org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=6070, name=org.eclipse.jetty.server.session.HashSessionManager@6dfc242aTimer, state=TIMED_WAITING, group=TGRP-TestAuthenticationFramework]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   4) Thread[id=6047, name=qtp1109997419-6047-acceptor-0@13ea9f0f-ServerConnector@32a412d0{HTTP/1.1}{127.0.0.1:60416}, state=RUNNABLE, group=TGRP-TestAuthenticationFramework]
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
        at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:377)
        at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:500)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)
   5) Thread[id=6061, name=Scheduler-113911771, state=TIMED_WAITING, group=TGRP-TestAuthenticationFramework]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.

[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3716 - Failure

2015-11-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3716/

1 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:55465/ygp/m/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:55465/ygp/m/collection1]
        at __randomizedtesting.SeedInfo.seed([66C35E26F6D493A5:EE9761FC5828FE5D]:0)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
        at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
        at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.queryServer(AbstractFullDistribZkTestBase.java:1378)
        at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:103)
        at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:75)
        at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:54)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1660)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:866)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:902)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:916)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:875)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:777)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:822)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRul

[jira] [Updated] (SOLR-8239) Deprecate/rename DefaultSimilarityFactory to ClassicSimilarityFactory and remove DefaultSimilarityFactory in trunk

2015-11-04 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-8239:
---
Attachment: SOLR-8239.patch

Before applying this patch, run the following svn commands...

{code}
svn cp 
solr/core/src/java/org/apache/solr/search/similarities/DefaultSimilarityFactory.java
 
solr/core/src/java/org/apache/solr/search/similarities/ClassicSimilarityFactory.java
svn mv 
solr/core/src/test/org/apache/solr/search/similarities/TestDefaultSimilarityFactory.java
 
solr/core/src/test/org/apache/solr/search/similarities/TestClassicSimilarityFactory.java
{code}

Changes included in patch...

* clone DefaultSimilarityFactory to ClassicSimilarityFactory
* prune DefaultSimilarityFactory down to a trivial subclass of 
ClassicSimilarityFactory
** class is marked deprecated with link to ClassicSimilarityFactory
** init method logs a warning about the deprecation / new name if used
** class can be removed from trunk after backporting
* Change IndexSchema to use ClassicSimilarityFactory by default instead of 
DefaultSimilarityFactory
* Change SweetSpotSimilarityFactory to subclass ClassicSimilarityFactory
* update javadocs for SchemaSimilarityFactory & TestNonDefinedSimilarityFactory to 
refer to ClassicSimilarity/Factory
* update TestSchemaSimilarityResource & (solrj's) SchemaTest to expect 
ClassicSimilarityFactory as default
* remove gratuitous references to DefaultSimilarity / DefaultSimilarityFactory 
from various test schema files
* rename TestDefaultSimilarityFactory to TestClassicSimilarityFactory
** update javadocs to make it clear this is actually testing _explicit_ uses of 
the factory on a per-fieldtype basis via SchemaSimilarityFactory
** update test to verify explicit configurations of both 
ClassicSimilarityFactory & DefaultSimilarityFactory
*** DefaultSimilarityFactory assertions can be removed from trunk after 
backporting
** refactor existing assertions to use a general two-arg getSimilarity method in 
superclass
** additions to schema-tfidf.xml as needed for new assertions
*** did some re-org of schema-tfidf.xml for readability
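
The deprecate-by-trivial-subclass step in the list above can be sketched in plain JDK terms. The class bodies here are simplified stand-ins (the real Solr factories read SolrParams and build Similarity instances), so treat this only as an illustration of the pattern, not the patch code:

```java
import java.util.logging.Logger;

// Simplified stand-in for the real factory; not the actual Solr class body.
class ClassicSimilarityFactory {
  public void init() {
    // real implementation would configure the similarity here
  }
}

/** Trivial deprecated shim: identical behavior, but warns on init. */
@Deprecated
class DefaultSimilarityFactory extends ClassicSimilarityFactory {
  private static final Logger log =
      Logger.getLogger(DefaultSimilarityFactory.class.getName());

  @Override
  public void init() {
    // warn users still referencing the old name in their schema
    log.warning("DefaultSimilarityFactory has been renamed to "
        + "ClassicSimilarityFactory; please update your schema.");
    super.init();
  }
}

class DeprecationShimDemo {
  public static void main(String[] args) {
    // old configs keep working: the shim is-a ClassicSimilarityFactory
    ClassicSimilarityFactory f = new DefaultSimilarityFactory();
    f.init(); // logs the deprecation warning, then delegates
  }
}
```

Because the shim adds no behavior beyond the warning, deleting it in trunk (after backporting) cannot change semantics for anyone who has migrated to the new name.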


> Deprecate/rename DefaultSimilarityFactory to ClassicSimilarityFactory and 
> remove DefaultSimilarityFactory in trunk
> --
>
> Key: SOLR-8239
> URL: https://issues.apache.org/jira/browse/SOLR-8239
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: Trunk
>
> Attachments: SOLR-8239.patch
>
>
> As outlined in parent issue...
> * clone DefaultSimilarityFactory -> ClassicSimilarityFactory
> * prune DefaultSimilarityFactory down to a trivial subclass of 
> ClassicSimilarityFactory
> ** make it log a warning on init
> * change default behavior of IndexSchema to use ClassicSimilarityFactory 
> directly
> * mark DefaultSimilarityFactory as deprecated in 5.x, remove from trunk/6.0
> This should put us in a better position moving forward, with the factory 
> names directly mapping to the underlying implementation, leaving less ambiguity 
> when an explicit factory is specified in the schema.xml (either as the main 
> similarity, or as a per-field similarity)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8239) Deprecate/rename DefaultSimilarityFactory to ClassicSimilarityFactory and remove DefaultSimilarityFactory in trunk

2015-11-04 Thread Hoss Man (JIRA)
Hoss Man created SOLR-8239:
--

 Summary: Deprecate/rename DefaultSimilarityFactory to 
ClassicSimilarityFactory and remove DefaultSimilarityFactory in trunk
 Key: SOLR-8239
 URL: https://issues.apache.org/jira/browse/SOLR-8239
 Project: Solr
  Issue Type: Sub-task
Reporter: Hoss Man
Assignee: Hoss Man



As outlined in parent issue...

* clone DefaultSimilarityFactory -> ClassicSimilarityFactory
* prune DefaultSimilarityFactory down to a trivial subclass of 
ClassicSimilarityFactory
** make it log a warning on init
* change default behavior of IndexSchema to use ClassicSimilarityFactory 
directly
* mark DefaultSimilarityFactory as deprecated in 5.x, remove from trunk/6.0

This should put us in a better position moving forward, with the factory 
names directly mapping to the underlying implementation, leaving less ambiguity 
when an explicit factory is specified in the schema.xml (either as the main 
similarity, or as a per-field similarity)





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 14499 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14499/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 1814 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk1.7.0_80/jre/bin/java -XX:-UseCompressedOops 
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=8B1EB52FDF3073D8 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.4.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=5.4.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=ISO-8859-1 -classpath 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/codecs/classes/java:/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/test-framework/classes/java:/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.2.0.jar:/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/classes/java:/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bcel.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit4.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntIns
tallation/ANT_1.8.2/lib/ant-antlr.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-regexp.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/home/jenkins/tools/java/64bit/jdk1.7.0_80/lib/tools.jar:/var/lib/jenkins/.ivy2/cache/com.carrotsearch.randomizedtesting/junit4-ant/jars/junit4-ant-2.2.0.jar
 com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe -eventsfile 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/temp/junit4-J1-20151104_233757_853.events
 
@/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/build/core/test/temp/junit4-J1-20151104_233757_853.suites
 -stdin
   [junit4] ERROR: JVM J1 ended with an exception: Quit

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b90) - Build # 14789 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14789/
Java: 64bit/jdk1.9.0-ea-b90 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Captured an uncaught exception in thread: Thread[id=10042, 
name=RecoveryThread-source_collection_shard1_replica2, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=10042, 
name=RecoveryThread-source_collection_shard1_replica2, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]
Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
at __randomizedtesting.SeedInfo.seed([C3F0AD24BF85C53D]:0)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:232)
Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_C3F0AD24BF85C53D-001/jetty-001/cores/source_collection_shard1_replica2/data/tlog/tlog.006.1516953613321109504 (No such file or directory)
        at org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244)
        at org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173)
        at org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079)
        at org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579)
        at org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
        at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877)
        at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:534)
        at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225)
Caused by: java.io.FileNotFoundException: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_C3F0AD24BF85C53D-001/jetty-001/cores/source_collection_shard1_replica2/data/tlog/tlog.006.1516953613321109504 (No such file or directory)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:327)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236)
        ... 7 more




Build Log:
[...truncated 10721 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CdcrReplicationHandlerTest_C3F0AD24BF85C53D-001/init-core-data-001
   [junit4]   2> 1176029 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[C3F0AD24BF85C53D]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 1176030 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[C3F0AD24BF85C53D]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1176031 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1176031 INFO  (Thread-4106) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1176031 INFO  (Thread-4106) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1176131 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.ZkTestServer start zk server on port:41031
   [junit4]   2> 1176131 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1176132 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1176134 INFO  (zkCallback-1186-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@55d91d7e 
name:ZooKeeperConnection Watcher:127.0.0.1:41031 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1176134 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1176134 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1176135 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 1176136 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[C3F0AD24BF85C53D]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4] 

[jira] [Commented] (LUCENE-6879) Allow to define custom CharTokenizer using Java 8 Lambdas/Method references

2015-11-04 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990718#comment-14990718
 ] 

Uwe Schindler commented on LUCENE-6879:
---

Just FYI: I ran a quick microbenchmark like this:

{code:java}
// init & warmup
String text = "Tokenizer(Test)FooBar";
String[] result = new String[] { "tokenizer", "test", "foobar" };
final Tokenizer tokenizer1 = 
CharTokenizer.fromTokenCharPredicate(Character::isLetter, 
Character::toLowerCase);
for (int i = 0; i < 1; i++) {
  tokenizer1.setReader(new StringReader(text));
  assertTokenStreamContents(tokenizer1, result);
}
final Tokenizer tokenizer2 = new LowerCaseTokenizer();
for (int i = 0; i < 1; i++) {
  tokenizer2.setReader(new StringReader(text));
  assertTokenStreamContents(tokenizer2, result);
}

// speed test
long [] lens1 = new long[100], lens2 = new long[100]; 
for (int j = 0; j < 100; j++) {
  System.out.println("Run: " + j);
  long start1 = System.currentTimeMillis();
  for (int i = 0; i < 100; i++) {
tokenizer1.setReader(new StringReader(text));
assertTokenStreamContents(tokenizer1, result);
  }
  lens1[j] = System.currentTimeMillis() - start1;
  
  long start2 = System.currentTimeMillis();
  for (int i = 0; i < 100; i++) {
tokenizer2.setReader(new StringReader(text));
assertTokenStreamContents(tokenizer2, result);
  }
  lens2[j] = System.currentTimeMillis() - start2;
}

System.out.println("Time Lambda: " + Arrays.stream(lens1).summaryStatistics());
System.out.println("Time Old: " + Arrays.stream(lens2).summaryStatistics());
{code}

I was not able to find any speed difference after warmup:
- Time Lambda: LongSummaryStatistics{count=100, sum=58267, min=562, 
average=582.67, max=871}
- Time Old: LongSummaryStatistics{count=100, sum=61489, min=600, 
average=614.89, max=721}



> Allow to define custom CharTokenizer using Java 8 Lambdas/Method references
> ---
>
> Key: LUCENE-6879
> URL: https://issues.apache.org/jira/browse/LUCENE-6879
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java8
> Fix For: Trunk
>
> Attachments: LUCENE-6879.patch, LUCENE-6879.patch
>
>
> As a followup to LUCENE-6874, I thought about how to generate custom 
> CharTokenizers without subclassing. I have needed this quite often and was a 
> bit annoyed that you had to create a subclass every time.
> This issue uses the same pattern as ThreadLocal and many collection methods 
> in Java 8: you have the (abstract) base class and define a factory method 
> named {{fromXxxPredicate}} (like {{ThreadLocal.withInitial(() -> value)}}).
> {code:java}
> public static CharTokenizer 
> fromTokenCharPredicate(java.util.function.IntPredicate predicate)
> {code}
> This would allow defining a new CharTokenizer in a single statement 
> using any predicate:
> {code:java}
> // long variant with lambda:
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(c -> 
> !UCharacter.isUWhiteSpace(c));
> // method reference for separator char predicate + normalization by uppercasing:
> Tokenizer tok = 
> CharTokenizer.fromSeparatorCharPredicate(UCharacter::isUWhiteSpace, 
> Character::toUpperCase);
> // method reference to custom function:
> private boolean myTestFunction(int c) {
>  return (crazy condition);
> }
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(this::myTestFunction);
> {code}
> I know this would not help Solr users who want to define the Tokenizer in a 
> config file, but for plain Lucene users this Java 8 style would be easy and 
> elegant to use. It is fast as hell, as it is just a reference to a method and 
> Java 8 is optimized for that.
> The inverted factories ({{fromSeparatorCharPredicate()}}) are provided to allow 
> quick definition without lambdas, using method references. In lots of cases, 
> like WhitespaceTokenizer, the predicate is on the separator chars 
> ({{isWhitespace(int)}}), so with the second set of factories you can define them 
> without the counter-intuitive negation. Internally it just uses 
> {{Predicate#negate()}}.
> The factories also allow passing a normalization function; e.g. to lowercase, 
> you can just pass {{Character::toLowerCase}} as the 
> {{IntUnaryOperator}}.
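
The {{Predicate#negate()}} trick described above can be illustrated with a self-contained toy. This is not the Lucene CharTokenizer; the factory names mirror the proposal, but the tokenizing loop and class are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;
import java.util.function.IntUnaryOperator;

// Toy illustration of the factory pattern: the "separator" variant is just
// the token-char variant with the predicate negated.
class ToyTokenizer {
  private final IntPredicate isTokenChar;
  private final IntUnaryOperator normalize;

  private ToyTokenizer(IntPredicate isTokenChar, IntUnaryOperator normalize) {
    this.isTokenChar = isTokenChar;
    this.normalize = normalize;
  }

  static ToyTokenizer fromTokenCharPredicate(IntPredicate p, IntUnaryOperator norm) {
    return new ToyTokenizer(p, norm);
  }

  // Inverted factory: internally just IntPredicate#negate(), as the issue notes.
  static ToyTokenizer fromSeparatorCharPredicate(IntPredicate p, IntUnaryOperator norm) {
    return fromTokenCharPredicate(p.negate(), norm);
  }

  /** Split text into runs of token chars, normalizing each code point. */
  List<String> tokenize(String text) {
    List<String> out = new ArrayList<>();
    StringBuilder cur = new StringBuilder();
    for (int i = 0; i < text.length(); ) {
      int c = text.codePointAt(i);
      if (isTokenChar.test(c)) {
        cur.appendCodePoint(normalize.applyAsInt(c));
      } else if (cur.length() > 0) {
        out.add(cur.toString());
        cur.setLength(0);
      }
      i += Character.charCount(c);
    }
    if (cur.length() > 0) out.add(cur.toString());
    return out;
  }
}
```

With this toy, `ToyTokenizer.fromSeparatorCharPredicate(Character::isWhitespace, Character::toLowerCase).tokenize("Hello World")` yields `[hello, world]`, with no double negation at the call site.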



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6879) Allow to define custom CharTokenizer using Java 8 Lambdas/Method references

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6879.
---
Resolution: Fixed
  Assignee: Uwe Schindler

Thanks for review!

> Allow to define custom CharTokenizer using Java 8 Lambdas/Method references
> ---
>
> Key: LUCENE-6879
> URL: https://issues.apache.org/jira/browse/LUCENE-6879
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java8
> Fix For: Trunk
>
> Attachments: LUCENE-6879.patch, LUCENE-6879.patch
>
>
> As a followup to LUCENE-6874, I thought about how to generate custom 
> CharTokenizers without subclassing. I have needed this quite often and was a 
> bit annoyed that you had to create a subclass every time.
> This issue uses the same pattern as ThreadLocal and many collection methods 
> in Java 8: you have the (abstract) base class and define a factory method 
> named {{fromXxxPredicate}} (like {{ThreadLocal.withInitial(() -> value)}}).
> {code:java}
> public static CharTokenizer 
> fromTokenCharPredicate(java.util.function.IntPredicate predicate)
> {code}
> This would allow defining a new CharTokenizer in a single statement 
> using any predicate:
> {code:java}
> // long variant with lambda:
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(c -> 
> !UCharacter.isUWhiteSpace(c));
> // method reference for separator char predicate + normalization by uppercasing:
> Tokenizer tok = 
> CharTokenizer.fromSeparatorCharPredicate(UCharacter::isUWhiteSpace, 
> Character::toUpperCase);
> // method reference to custom function:
> private boolean myTestFunction(int c) {
>  return (crazy condition);
> }
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(this::myTestFunction);
> {code}
> I know this would not help Solr users who want to define the Tokenizer in a 
> config file, but for plain Lucene users this Java 8 style would be easy and 
> elegant to use. It is fast as hell, as it is just a reference to a method and 
> Java 8 is optimized for that.
> The inverted factories ({{fromSeparatorCharPredicate()}}) are provided to allow 
> quick definition without lambdas, using method references. In lots of cases, 
> like WhitespaceTokenizer, the predicate is on the separator chars 
> ({{isWhitespace(int)}}), so with the second set of factories you can define them 
> without the counter-intuitive negation. Internally it just uses 
> {{Predicate#negate()}}.
> The factories also allow passing a normalization function; e.g. to lowercase, 
> you can just pass {{Character::toLowerCase}} as the 
> {{IntUnaryOperator}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6879) Allow to define custom CharTokenizer using Java 8 Lambdas/Method references

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990610#comment-14990610
 ] 

ASF subversion and git services commented on LUCENE-6879:
-

Commit 1712682 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1712682 ]

LUCENE-6879: Allow to define custom CharTokenizer instances without subclassing 
using Java 8 lambdas or method references

> Allow to define custom CharTokenizer using Java 8 Lambdas/Method references
> ---
>
> Key: LUCENE-6879
> URL: https://issues.apache.org/jira/browse/LUCENE-6879
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Uwe Schindler
>  Labels: Java8
> Fix For: Trunk
>
> Attachments: LUCENE-6879.patch, LUCENE-6879.patch
>
>
> As a followup to LUCENE-6874, I thought about how to generate custom 
> CharTokenizers without subclassing. I have needed this quite often and was a 
> bit annoyed that you had to create a subclass every time.
> This issue uses the same pattern as ThreadLocal and many collection methods 
> in Java 8: you have the (abstract) base class and define a factory method 
> named {{fromXxxPredicate}} (like {{ThreadLocal.withInitial(() -> value)}}).
> {code:java}
> public static CharTokenizer 
> fromTokenCharPredicate(java.util.function.IntPredicate predicate)
> {code}
> This would allow defining a new CharTokenizer in a single statement 
> using any predicate:
> {code:java}
> // long variant with lambda:
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(c -> 
> !UCharacter.isUWhiteSpace(c));
> // method reference for separator char predicate + normalization by uppercasing:
> Tokenizer tok = 
> CharTokenizer.fromSeparatorCharPredicate(UCharacter::isUWhiteSpace, 
> Character::toUpperCase);
> // method reference to custom function:
> private boolean myTestFunction(int c) {
>  return (crazy condition);
> }
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(this::myTestFunction);
> {code}
> I know this would not help Solr users who want to define the Tokenizer in a 
> config file, but for plain Lucene users this Java 8 style would be easy and 
> elegant to use. It is fast as hell, as it is just a reference to a method and 
> Java 8 is optimized for that.
> The inverted factories ({{fromSeparatorCharPredicate()}}) are provided to allow 
> quick definition without lambdas, using method references. In lots of cases, 
> like WhitespaceTokenizer, the predicate is on the separator chars 
> ({{isWhitespace(int)}}), so with the second set of factories you can define them 
> without the counter-intuitive negation. Internally it just uses 
> {{Predicate#negate()}}.
> The factories also allow passing a normalization function; e.g. to lowercase, 
> you can just pass {{Character::toLowerCase}} as the 
> {{IntUnaryOperator}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8234) Federated Search (new) - DJoin

2015-11-04 Thread Tom Winch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Winch updated SOLR-8234:

Attachment: SOLR-8234.patch

Patch against 4.10.3

> Federated Search (new) - DJoin
> --
>
> Key: SOLR-8234
> URL: https://issues.apache.org/jira/browse/SOLR-8234
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tom Winch
>Priority: Minor
>  Labels: federated_search
> Fix For: 4.10.3
>
> Attachments: SOLR-8234.patch
>
>
> This issue describes a MergeStrategy implementation (DJoin) to facilitate 
> federated search - that is, distributed search over documents stored in 
> separate instances of Solr (for example, one server per continent), where a 
> single document (identified by an agreed, common unique id) may be stored in 
> more than one server instance, with (possibly) differing fields and data.
> When the MergeStrategy is used in a request handler (via the included 
> QParser) in combination with distributed search (shards=), documents having 
> an id that has already been seen are not discarded (as per the default 
> behaviour) but are instead collected and returned as a group of documents, 
> all with the same id, taking a single position in the result set (this is 
> implemented using parent/child documents).
> Documents are sorted in the result set based on the highest-ranking document 
> with the same id. It is possible for a document ranking high in one shard to 
> rank very low on another shard. As a consequence, all shards must be 
> asked to return the fields for every document id in the result set (not just 
> those documents they returned), so that all the component parts of each 
> document in the search result set are returned.
> This issue combines with others to provide full federated search support. See 
> also SOLR-8235 and SOLR-8236.
> --
> Note that this is part of a new implementation of federated search, as opposed 
> to the older issues SOLR-3799 through SOLR-3805.
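The merge behaviour described above (collapse hits sharing an id into one group, rank each group by its best-scoring member) can be sketched as follows. The `Hit` record and `merge` method are illustrative only, not DJoin's actual MergeStrategy API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative shape of a per-shard search hit (not a Solr class).
record Hit(String id, String shard, double score) {}

public class DJoinSketch {
    // Merge per-shard rankings: documents sharing an id collapse into one
    // group, and groups sort by their highest-scoring member.
    static List<List<Hit>> merge(List<List<Hit>> shardResults) {
        Map<String, List<Hit>> byId = new LinkedHashMap<>();
        for (List<Hit> shard : shardResults)
            for (Hit h : shard)
                byId.computeIfAbsent(h.id(), k -> new ArrayList<>()).add(h);
        return byId.values().stream()
            .sorted(Comparator.comparingDouble(
                (List<Hit> g) -> g.stream().mapToDouble(Hit::score).max().orElse(0)).reversed())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Hit> eu = List.of(new Hit("d1", "eu", 0.9), new Hit("d2", "eu", 0.4));
        List<Hit> us = List.of(new Hit("d2", "us", 0.8), new Hit("d3", "us", 0.5));
        // d1 (0.9) first, then the d2 group (best member 0.8), then d3 (0.5):
        for (List<Hit> group : merge(List.of(eu, us)))
            System.out.println(group.get(0).id() + " x" + group.size());
    }
}
```

The second-pass requirement in the description (asking every shard for the fields of every id in the result set) is what makes each group complete even when a shard ranked its copy of the document too low to return it.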






[jira] [Updated] (LUCENE-6879) Allow to define custom CharTokenizer using Java 8 Lambdas/Method references

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6879:
--
Attachment: LUCENE-6879.patch

New patch with improved Javadocs. Will commit this soon.

> Allow to define custom CharTokenizer using Java 8 Lambdas/Method references
> ---
>
> Key: LUCENE-6879
> URL: https://issues.apache.org/jira/browse/LUCENE-6879
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Uwe Schindler
>  Labels: Java8
> Fix For: Trunk
>
> Attachments: LUCENE-6879.patch, LUCENE-6879.patch
>
>
> As a follow-up from LUCENE-6874, I thought about how to generate custom 
> CharTokenizers without subclassing. I had this quite often and I was a bit 
> annoyed that you had to create a subclass every time.
> This issue uses the same pattern as ThreadLocal or many collection methods 
> in Java 8: you have the (abstract) base class and you define a factory method 
> named {{fromXxxPredicate}} (like {{ThreadLocal.withInitial(() -> value)}}).
> {code:java}
> public static CharTokenizer fromTokenCharPredicate(java.util.function.IntPredicate predicate)
> {code}
> This would allow defining a new CharTokenizer with a single-line statement 
> using any predicate:
> {code:java}
> // long variant with lambda:
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(c -> !UCharacter.isUWhiteSpace(c));
>
> // method reference for separator char predicate + normalization by uppercasing:
> Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(UCharacter::isUWhiteSpace, Character::toUpperCase);
>
> // method reference to a custom function:
> private boolean myTestFunction(int c) {
>   return (crazy condition);
> }
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(this::myTestFunction);
> {code}
> I know this would not help Solr users who want to define the Tokenizer in a 
> config file, but for real Lucene users this Java 8-like way would be easy and 
> elegant to use. It is fast as hell, as it is just a reference to a method and 
> Java 8 is optimized for that.
> The inverted factories ({{fromSeparatorCharPredicate()}}) are provided to allow 
> quick definition without lambdas, using method references. In many cases, 
> like WhitespaceTokenizer, the predicate is on the separator chars 
> ({{isWhitespace(int)}}), so with the second set of factories you can define them 
> without the counter-intuitive negation. Internally it just uses 
> {{Predicate#negate()}}.
> The factories also allow passing the normalization function; e.g., to 
> lowercase, you may just pass {{Character::toLowerCase}} as an 
> {{IntUnaryOperator}} reference.






[jira] [Updated] (LUCENE-6879) Allow to define custom CharTokenizer using Java 8 Lambdas/Method references

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6879:
--
Labels: Java8  (was: )

> Allow to define custom CharTokenizer using Java 8 Lambdas/Method references
> ---
>
> Key: LUCENE-6879
> URL: https://issues.apache.org/jira/browse/LUCENE-6879
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: Trunk
>Reporter: Uwe Schindler
>  Labels: Java8
> Fix For: Trunk
>
> Attachments: LUCENE-6879.patch, LUCENE-6879.patch
>
>
> As a follow-up from LUCENE-6874, I thought about how to generate custom 
> CharTokenizers without subclassing. I had this quite often and I was a bit 
> annoyed that you had to create a subclass every time.
> This issue uses the same pattern as ThreadLocal or many collection methods 
> in Java 8: you have the (abstract) base class and you define a factory method 
> named {{fromXxxPredicate}} (like {{ThreadLocal.withInitial(() -> value)}}).
> {code:java}
> public static CharTokenizer fromTokenCharPredicate(java.util.function.IntPredicate predicate)
> {code}
> This would allow defining a new CharTokenizer with a single-line statement 
> using any predicate:
> {code:java}
> // long variant with lambda:
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(c -> !UCharacter.isUWhiteSpace(c));
>
> // method reference for separator char predicate + normalization by uppercasing:
> Tokenizer tok = CharTokenizer.fromSeparatorCharPredicate(UCharacter::isUWhiteSpace, Character::toUpperCase);
>
> // method reference to a custom function:
> private boolean myTestFunction(int c) {
>   return (crazy condition);
> }
> Tokenizer tok = CharTokenizer.fromTokenCharPredicate(this::myTestFunction);
> {code}
> I know this would not help Solr users who want to define the Tokenizer in a 
> config file, but for real Lucene users this Java 8-like way would be easy and 
> elegant to use. It is fast as hell, as it is just a reference to a method and 
> Java 8 is optimized for that.
> The inverted factories ({{fromSeparatorCharPredicate()}}) are provided to allow 
> quick definition without lambdas, using method references. In many cases, 
> like WhitespaceTokenizer, the predicate is on the separator chars 
> ({{isWhitespace(int)}}), so with the second set of factories you can define them 
> without the counter-intuitive negation. Internally it just uses 
> {{Predicate#negate()}}.
> The factories also allow passing the normalization function; e.g., to 
> lowercase, you may just pass {{Character::toLowerCase}} as an 
> {{IntUnaryOperator}} reference.






[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990587#comment-14990587
 ] 

ASF subversion and git services commented on SOLR-8166:
---

Commit 1712678 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1712678 ]

Merged revision(s) 1712677 from lucene/dev/trunk:
SOLR-8166: Add some null checks

> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8166.patch, SOLR-8166.patch
>
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR recognition of embedded images from PDFs.
> It would be nice to be able to configure the created ParseContext via an 
> XML config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 






[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990581#comment-14990581
 ] 

ASF subversion and git services commented on SOLR-8166:
---

Commit 1712677 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1712677 ]

SOLR-8166: Add some null checks

> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8166.patch, SOLR-8166.patch
>
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR recognition of embedded images from PDFs.
> It would be nice to be able to configure the created ParseContext via an 
> XML config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 






[jira] [Commented] (SOLR-6304) Transforming and Indexing custom JSON data

2015-11-04 Thread Kelly Kagen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990557#comment-14990557
 ] 

Kelly Kagen commented on SOLR-6304:
---

Thank you for the note; it worked this time with fields defined in 
schema.xml.

Should it have worked for dynamic fields too, since these are also defined in 
the schema? FYI, it didn't work in my case; it works only with fully defined 
(static) fields.

> Transforming and Indexing custom JSON data
> --
>
> Key: SOLR-6304
> URL: https://issues.apache.org/jira/browse/SOLR-6304
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 4.10, Trunk
>
> Attachments: SOLR-6304.patch, SOLR-6304.patch
>
>
> example
> {noformat}
> curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
> {
>   "id": "0001",
>   "type": "donut",
>   "name": "Cake",
>   "ppu": 0.55,
>   "batters": {
>     "batter": [
>       { "id": "1001", "type": "Regular" },
>       { "id": "1002", "type": "Chocolate" },
>       { "id": "1003", "type": "Blueberry" },
>       { "id": "1004", "type": "Devil's Food" }
>     ]
>   }
> }'
> {noformat}
> should produce the following output docs
> {noformat}
> { "recipeId":"001", "recipeType":"donut", "id":"1001", "type":"Regular" }
> { "recipeId":"001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
> { "recipeId":"001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
> { "recipeId":"001", "recipeType":"donut", "id":"1004", "type":"Devil's food" }
> {noformat}
> The split param is the path of the element in the tree at which the input 
> should be split into multiple docs. The 'f' params are field name mappings.
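The split/f semantics above can be sketched in plain Java. Nested Maps stand in for the parsed JSON, and the paths are hard-coded for this example (the real handler interprets the request parameters generically):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class JsonSplitSketch {
    // Split at /batters/batter: emit one doc per child, mapping fields from
    // both the root (above the split point) and each child (below it).
    @SuppressWarnings("unchecked")
    static List<Map<String, Object>> split(Map<String, Object> root) {
        Map<String, Object> batters = (Map<String, Object>) root.get("batters");
        List<Map<String, Object>> children = (List<Map<String, Object>>) batters.get("batter");
        List<Map<String, Object>> docs = new ArrayList<>();
        for (Map<String, Object> child : children) {
            Map<String, Object> doc = new LinkedHashMap<>();
            doc.put("recipeId", root.get("id"));      // f=recipeId:/id
            doc.put("recipeType", root.get("type"));  // f=recipeType:/type
            doc.put("id", child.get("id"));           // f=id:/batters/batter/id
            doc.put("type", child.get("type"));       // f=type:/batters/batter/type
            docs.add(doc);
        }
        return docs;
    }

    public static void main(String[] args) {
        Map<String, Object> root = Map.of(
            "id", "0001", "type", "donut",
            "batters", Map.of("batter", List.of(
                Map.of("id", "1001", "type", "Regular"),
                Map.of("id", "1002", "type", "Chocolate"))));
        split(root).forEach(System.out::println);
    }
}
```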






Re: [JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b90) - Build # 14497 - Failure!

2015-11-04 Thread Varun Thacker
Fixed: https://svn.apache.org/viewvc?view=revision&revision=1712658
Sorry about the trouble it might have caused.

On Wed, Nov 4, 2015 at 1:04 PM, Varun Thacker  wrote:

> Committing a fix shortly. I must have typed something on that window by
> mistake just before committing
>
> On Wed, Nov 4, 2015 at 11:39 AM, Policeman Jenkins Server <
> jenk...@thetaphi.de> wrote:
>
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14497/
>> Java: 64bit/jdk1.9.0-ea-b90 -XX:-UseCompressedOops -XX:+UseG1GC
>>
>> All tests passed
>>
>> Build Log:
>> [...truncated 9740 lines...]
>> [javac] Compiling 595 source files to
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/test
>> [javac] warning: [options] bootstrap class path not set in
>> conjunction with -source 1.7
>> [javac]
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java:48:
>> error: class gR is public, should be declared in a file named gR.java
>> [javac] public class gR extends AbstractFullDistribZkTestBase {
>> [javac]^
>> [javac] Note: Some input files use or override a deprecated API.
>> [javac] Note: Recompile with -Xlint:deprecation for details.
>> [javac] Note: Some input files use unchecked or unsafe operations.
>> [javac] Note: Recompile with -Xlint:unchecked for details.
>> [javac] 1 error
>>
>> [...truncated 1 lines...]
>> BUILD FAILED
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:729: The
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:59: The following
>> error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:233: The
>> following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:526:
>> The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:814:
>> The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:828:
>> The following error occurred while executing this line:
>> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1964:
>> Compile failed; see the compiler error output for details.
>>
>> Total time: 22 minutes 45 seconds
>> Build step 'Invoke Ant' marked build as failure
>> Archiving artifacts
>> [WARNINGS] Skipping publisher since build result is FAILURE
>> Recording test results
>> Email was triggered for: Failure - Any
>> Sending email for trigger: Failure - Any
>>
>>
>
>
> --
>
>
> Regards,
> Varun Thacker
>



-- 


Regards,
Varun Thacker


[jira] [Commented] (SOLR-8215) SolrCloud can select a core not in active state for querying

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990469#comment-14990469
 ] 

ASF subversion and git services commented on SOLR-8215:
---

Commit 1712658 from [~varunthacker] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1712658 ]

SOLR-8215: fixed typo. Class name is back to TestRandomRequestDistribution

> SolrCloud can select a core not in active state for querying
> 
>
> Key: SOLR-8215
> URL: https://issues.apache.org/jira/browse/SOLR-8215
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.4
>
> Attachments: SOLR-8215.patch, SOLR-8215.patch
>
>
> A query can be served by a core which is not in the active state if the request 
> hits the node which hosts these non-active cores.
> We explicitly check that only active cores are searched against in 
> {{CloudSolrClient#sendRequest}} (line 1043 on trunk).
> But we don't check this if someone uses the REST APIs: 
> {{HttpSolrCall#getCoreByCollection}} should only pick cores which are active 
> (line 794 on trunk).
> We do check it on lines 882/883 in HttpSolrCall, when we try to find 
> cores on other nodes if a core is not present locally.
> So let's fix {{HttpSolrCall#getCoreByCollection}} to make the active check as 
> well.
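The proposed fix amounts to filtering candidate local cores on replica state before choosing one. A sketch with illustrative types (`Replica` and `pickCore` are hypothetical, not the actual HttpSolrCall code):

```java
import java.util.List;
import java.util.Optional;

// Illustrative replica descriptor; not a Solr class.
record Replica(String core, String state) {}

public class ActiveCheckSketch {
    // Only replicas whose state is "active" are eligible to serve the query;
    // a recovering/down core on the local node is skipped.
    static Optional<Replica> pickCore(List<Replica> localReplicas) {
        return localReplicas.stream()
            .filter(r -> "active".equals(r.state()))
            .findFirst();
    }

    public static void main(String[] args) {
        List<Replica> local = List.of(
            new Replica("shard1_replica1", "recovering"),
            new Replica("shard2_replica1", "active"));
        System.out.println(pickCore(local).map(Replica::core).orElse("none"));
    }
}
```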






Re: [JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b90) - Build # 14497 - Failure!

2015-11-04 Thread Varun Thacker
Committing a fix shortly. I must have typed something on that window by
mistake just before committing

On Wed, Nov 4, 2015 at 11:39 AM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14497/
> Java: 64bit/jdk1.9.0-ea-b90 -XX:-UseCompressedOops -XX:+UseG1GC
>
> All tests passed
>
> Build Log:
> [...truncated 9740 lines...]
> [javac] Compiling 595 source files to
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/test
> [javac] warning: [options] bootstrap class path not set in conjunction
> with -source 1.7
> [javac]
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java:48:
> error: class gR is public, should be declared in a file named gR.java
> [javac] public class gR extends AbstractFullDistribZkTestBase {
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 1 error
>
> [...truncated 1 lines...]
> BUILD FAILED
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:729: The following
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:59: The following
> error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:233: The
> following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:526:
> The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:814:
> The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:828:
> The following error occurred while executing this line:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1964:
> Compile failed; see the compiler error output for details.
>
> Total time: 22 minutes 45 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
>
>


-- 


Regards,
Varun Thacker


[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2790 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2790/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 9669 lines...]
[javac] Compiling 595 source files to 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/classes/test
[javac] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
[javac] 
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java:48:
 error: class gR is public, should be declared in a file named gR.java
[javac] public class gR extends AbstractFullDistribZkTestBase {
[javac]^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

[...truncated 1 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:785: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:729: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:59: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build.xml:233: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/common-build.xml:526: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:814: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:828: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/lucene/common-build.xml:1964: 
Compile failed; see the compiler error output for details.

Total time: 24 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Resolved] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-8166.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.4

Thanks Andriy!

> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
> Fix For: 5.4, Trunk
>
> Attachments: SOLR-8166.patch, SOLR-8166.patch
>
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR recognition of embedded images from PDFs.
> It would be nice to be able to configure the created ParseContext via an 
> XML config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 






[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990325#comment-14990325
 ] 

ASF GitHub Bot commented on SOLR-8166:
--

Github user abinet commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153851266
  
Hi, thank you for support and cooperation. I'll close pull request.


> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
> Attachments: SOLR-8166.patch, SOLR-8166.patch
>
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR recognition of embedded images from PDFs.
> It would be nice to be able to configure the created ParseContext via an 
> XML config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 






[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990326#comment-14990326
 ] 

ASF GitHub Bot commented on SOLR-8166:
--

Github user abinet closed the pull request at:

https://github.com/apache/lucene-solr/pull/206


> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
> Attachments: SOLR-8166.patch, SOLR-8166.patch
>
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR recognition of embedded images from PDFs.
> It would be nice to be able to configure the created ParseContext via an 
> XML config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 






[GitHub] lucene-solr pull request: SOLR-8166 provide config for tika's Pars...

2015-11-04 Thread abinet
Github user abinet commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153851266
  
Hi, thank you for support and cooperation. I'll close pull request.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[GitHub] lucene-solr pull request: SOLR-8166 provide config for tika's Pars...

2015-11-04 Thread abinet
Github user abinet closed the pull request at:

https://github.com/apache/lucene-solr/pull/206






[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990318#comment-14990318
 ] 

ASF GitHub Bot commented on SOLR-8166:
--

Github user uschindler commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153850780
  
Hi, this was merged into SVN. Can you close the pull request, the automatic 
close did not work...


> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
> Attachments: SOLR-8166.patch, SOLR-8166.patch
>
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR recognition of embedded images from PDFs.
> It would be nice to be able to configure the created ParseContext via an 
> XML config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 
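The mail archive stripped the XML tags out of the suggested config above. Going only by the surviving attribute fragments, the proposal looked roughly like the following reconstruction; the tag names here are guesses, not the committed syntax (Uwe later renamed the "value=" attribute for the implementation class to "impl="):

```xml
<!-- solrconfig.xml (reconstructed; element names are a guess) -->
<requestHandler name="/update/extract"
                class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
  <str name="parseContext.config">parseContext.config</str>
</requestHandler>

<!-- parseContext.config (reconstructed) -->
<entries>
  <!-- "value=" here named the implementation class in the original proposal -->
  <entry class="org.apache.tika.parser.pdf.PDFParserConfig"
         value="org.apache.tika.parser.pdf.PDFParserConfig">
    <property name="extractInlineImages" value="true"/>
  </entry>
</entries>
```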



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[GitHub] lucene-solr pull request: SOLR-8166 provide config for tika's Pars...

2015-11-04 Thread uschindler
Github user uschindler commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153850780
  
Hi, this was merged into SVN. Can you close the pull request? The automatic 
close did not work...





[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990316#comment-14990316
 ] 

ASF subversion and git services commented on SOLR-8166:
---

Commit 1712632 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1712632 ]

Merged revision(s) 1712629 from lucene/dev/trunk:
SOLR-8166: Introduce possibility to configure ParseContext in 
ExtractingRequestHandler/ExtractingDocumentLoader
This closes Github PR #206







[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990309#comment-14990309
 ] 

ASF subversion and git services commented on SOLR-8166:
---

Commit 1712629 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1712629 ]

SOLR-8166: Introduce possibility to configure ParseContext in 
ExtractingRequestHandler/ExtractingDocumentLoader







[jira] [Updated] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8166:

Attachment: SOLR-8166.patch







[jira] [Updated] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8166:

Attachment: (was: SOLR-8166.patch)







[jira] [Updated] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8166:

Attachment: SOLR-8166.patch

Slightly simplified patch. I was able to remove the long chain of ifs for 
guessing the property type: the Java Beans framework does this automatically, 
so we can easily set/get properties as Strings using the same beans API.
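Uwe's point about the beans API can be illustrated with a small, self-contained sketch (the Config bean and the setAsString helper below are hypothetical stand-ins, not Solr's actual code): java.beans discovers a property's type via introspection and converts its String form with the registered PropertyEditor, so no hand-written if-chain over types is needed.

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.beans.PropertyEditor;
import java.beans.PropertyEditorManager;

public class BeanSetDemo {
    // Hypothetical bean standing in for e.g. Tika's PDFParserConfig.
    public static class Config {
        private boolean extractInlineImages;
        public boolean isExtractInlineImages() { return extractInlineImages; }
        public void setExtractInlineImages(boolean b) { extractInlineImages = b; }
    }

    // Set a bean property from its String form, letting java.beans pick the converter.
    static void setAsString(Object bean, String name, String value) throws Exception {
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors()) {
            if (pd.getName().equals(name)) {
                // The default editors cover String and all primitive types.
                PropertyEditor ed = PropertyEditorManager.findEditor(pd.getPropertyType());
                ed.setAsText(value);
                pd.getWriteMethod().invoke(bean, ed.getValue());
                return;
            }
        }
        throw new IllegalArgumentException("no such property: " + name);
    }

    public static void main(String[] args) throws Exception {
        Config c = new Config();
        setAsString(c, "extractInlineImages", "true");
        System.out.println(c.isExtractInlineImages()); // prints "true"
    }
}
```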







[jira] [Updated] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-8166:

Attachment: SOLR-8166.patch

Hi,
I cleaned up your PR (removed changes from unrelated files, refactored the 
config loading and type checking a bit) and attached it as a patch here. I will 
commit soon.

I changed the attribute used for the implementation class from "value=" to 
"impl=", since "value=" seemed wrong.

I also added the remaining primitive types (float, byte, short) to your type 
converter.







[jira] [Reopened] (SOLR-6406) ConcurrentUpdateSolrServer hang in blockUntilFinished.

2015-11-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reopened SOLR-6406:


Reopening... something is wrong.

Overview of what happened:
- I tested my update to DistributedUpdateProcessor in SOLR-8203 alone, and 
verified that there were no shard inconsistency failures
- Mark tested his change to use shutdownNow on the updateExecutor alone (w/o my 
change), and reported no shard inconsistency failures, but he did hit hangs, 
which led me to tackle this issue
- I tested this issue w/o my fix in SOLR-8203, to more easily reproduce the 
hang, and to verify it had been fixed - I was not looking for shard 
inconsistency failures
- Now that both patches are committed, I'm seeing shard inconsistency failures 
again!

Either:
 - I messed up this patch somehow, causing updates to be further reordered
 - The idea of this patch is somehow incompatible with SOLR-8203 (unlikely)
 - Something else in trunk has changed (unlikely)

First, I'm going to go back to trunk w/o both of these patches and start with 
just the check in the DistributedUpdateProcessor, and move on from there until 
I find out what reintroduced the problem.


> ConcurrentUpdateSolrServer hang in blockUntilFinished.
> --
>
> Key: SOLR-6406
> URL: https://issues.apache.org/jira/browse/SOLR-6406
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Yonik Seeley
> Fix For: 5.0, Trunk
>
> Attachments: CPU Sampling.png, SOLR-6406.patch, SOLR-6406.patch, 
> SOLR-6406.patch
>
>
> Not sure what is causing this, but SOLR-6136 may have taken us a step back 
> here. I see this problem occasionally pop up in ChaosMonkeyNothingIsSafeTest 
> now - test fails because of a thread leak, thread leak is due to a 
> ConcurrentUpdateSolrServer hang in blockUntilFinished. Only started popping 
> up recently.






[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b90) - Build # 14497 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14497/
Java: 64bit/jdk1.9.0-ea-b90 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 9740 lines...]
[javac] Compiling 595 source files to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/test
[javac] warning: [options] bootstrap class path not set in conjunction with 
-source 1.7
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestRandomRequestDistribution.java:48:
 error: class gR is public, should be declared in a file named gR.java
[javac] public class gR extends AbstractFullDistribZkTestBase {
[javac]^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 1 error

[...truncated 1 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:729: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:59: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:233: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:526: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:814: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:828: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1964: 
Compile failed; see the compiler error output for details.

Total time: 22 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-8238) Make Solr respect preferredLeader at startup

2015-11-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990224#comment-14990224
 ] 

Erick Erickson commented on SOLR-8238:
--

The intention here is that the preferredLeader role has two functions:

1> Cause the cluster to _tend_ toward the preferredLeader role coinciding with 
the actual leader.

2> allow pathological out-of-balance situations to be rectified.

It accomplishes <1> by the following: As a node comes up, if it has the 
preferredLeader role set and another node _is_ the leader, it inserts itself as 
the next-in-line for leadership election for its shard. So any time the current 
leader abdicates its leadership role, this node will become the leader. That's 
why we called it "preferredLeader" rather than something like "requiredLeader".

2> is the REBALANCELEADERS command. It was added for a very special 
circumstance in which literally hundreds of replicas on a single Solr instance 
were leaders, which noticeably impacted performance.

Unless we come up with a use case where the current functionality is 
demonstrably affecting performance, I think the added complexity (especially in 
the case where the entire cluster is being restarted) is not worth the risk. I 
can be talked out of that opinion, but it'd have to be for more than aesthetic 
reasons.

In the general case, the leader role imposes a small amount of additional work 
on a node, which can usually be ignored.

> Make Solr respect preferredLeader at startup
> 
>
> Key: SOLR-8238
> URL: https://issues.apache.org/jira/browse/SOLR-8238
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.2.1
>Reporter: Peter Morgan
>Priority: Minor
>
> After setting the preferredLeader property, I noticed that upon restart, 
> leaders revert to wherever they were previously running before REBALANCE was 
> called. I would expect preferredLeader to influence the startup election, but 
> it appears it is not observed.






[jira] [Resolved] (SOLR-8222) Optimize count-only faceting when there are many expected matches-per-ord

2015-11-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8222.

   Resolution: Fixed
Fix Version/s: 5.4

> Optimize count-only faceting when there are many expected matches-per-ord
> -
>
> Key: SOLR-8222
> URL: https://issues.apache.org/jira/browse/SOLR-8222
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 5.4
>
> Attachments: SOLR-8222.patch, SOLR-8222.patch, SOLR-8222.patch
>
>
> This optimization for the JSON Facet API came up a few months ago on the 
> mailing list (I think by Toke).
> Basically, if one expects many hits per bucket, use a temporary array to 
> accumulate segment ords and map them all at the end to global ords.  This 
> saves redundant segOrd->globalOrd mappings at the cost of having to scan the 
> temp array.
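The two-phase counting trick can be sketched as follows. This is a minimal, hypothetical illustration of the idea, not Solr's actual implementation: the per-hit work touches only a small per-segment counts array, and the segOrd->globalOrd mapping happens once per ordinal instead of once per hit.

```java
import java.util.Arrays;

public class OrdCountSketch {
    // Count matches per *global* ordinal by first accumulating per-segment
    // counts, then folding them through the seg->global mapping at the end.
    static int[] countByGlobalOrd(int[] matchedSegOrds, int numSegOrds,
                                  int[] segToGlobal, int numGlobalOrds) {
        int[] segCounts = new int[numSegOrds];
        for (int ord : matchedSegOrds) {
            segCounts[ord]++;                 // cheap per-hit work: no ord mapping here
        }
        int[] globalCounts = new int[numGlobalOrds];
        for (int s = 0; s < numSegOrds; s++) {
            globalCounts[segToGlobal[s]] += segCounts[s]; // one mapping per ord, not per hit
        }
        return globalCounts;
    }

    public static void main(String[] args) {
        // 3 segment ords mapping to global ords {5, 0, 2}; 6 global ords total.
        int[] counts = countByGlobalOrd(new int[]{0, 1, 1, 2, 0}, 3,
                                        new int[]{5, 0, 2}, 6);
        System.out.println(Arrays.toString(counts)); // [2, 0, 1, 0, 0, 2]
    }
}
```

The win is largest when there are many more hits than distinct ordinals, which is exactly the "many expected matches-per-ord" case in the issue title.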






[jira] [Commented] (LUCENE-6875) New Serbian Filter

2015-11-04 Thread Nikola Smolenski (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990211#comment-14990211
 ] 

Nikola Smolenski commented on LUCENE-6875:
--

I was considering making two separate factories, but in the end I decided 
against it, because all the other analyzers in the chain might need to be 
separate as well (for example, there could be a regular stemmer and a bald 
stemmer, etc.), and so all of them would need separate factories...

> New Serbian Filter
> --
>
> Key: LUCENE-6875
> URL: https://issues.apache.org/jira/browse/LUCENE-6875
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Nikola Smolenski
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: Trunk, 5.4
>
> Attachments: Lucene-Serbian-Regular-1.patch
>
>
> This is a new Serbian filter that works with regular Latin text (the current 
> filter works with "bald" Latin). I described in detail what it does and why 
> it is necessary on the wiki.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 14787 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14787/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudCluster

Error Message:
16 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestMiniSolrCloudCluster: 1) Thread[id=8550, 
name=jetty-launcher-2003-thread-1-EventThread, state=WAITING, 
group=TGRP-TestMiniSolrCloudCluster] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
2) Thread[id=8596, 
name=OverseerCollectionConfigSetProcessor-94808591739912202-127.0.0.1:52763_solr-n_03,
 state=TIMED_WAITING, group=Overseer collection creation process.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryDelay(ZkCmdExecutor.java:108)   
  at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:76)
 at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350)
 at 
org.apache.solr.cloud.OverseerTaskProcessor.amILeader(OverseerTaskProcessor.java:355)
 at 
org.apache.solr.cloud.OverseerTaskProcessor.run(OverseerTaskProcessor.java:172) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=8491, 
name=qtp29097261-8491-selector-ServerConnectorManager@110b37d/3, 
state=RUNNABLE, group=TGRP-TestMiniSolrCloudCluster] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=8477, 
name=qtp29097261-8477-selector-ServerConnectorManager@110b37d/1, 
state=RUNNABLE, group=TGRP-TestMiniSolrCloudCluster] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=8514, 
name=Scheduler-15849249, state=TIMED_WAITING, 
group=TGRP-TestMiniSolrCloudCluster] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=8518, 
name=org.eclipse.jetty.server.session.HashSessionManager@6ef19eTimer, 
state=TIMED_WAITING, group=TGRP-TestMiniSolrCloudCluster] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSup

[jira] [Updated] (SOLR-8147) FieldFacetAccumulator constructor throws too many exceptions

2015-11-04 Thread Scott Stults (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Stults updated SOLR-8147:
---
Attachment: SOLR-8147.patch

This patch is like the last one except it's using IOException rather than 
SolrException. And yeah, let's keep this Jira short and sweet and open a 
separate one for moving the checks. 

> FieldFacetAccumulator constructor throws too many exceptions
> 
>
> Key: SOLR-8147
> URL: https://issues.apache.org/jira/browse/SOLR-8147
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 5.0, Trunk
>Reporter: Scott Stults
>Assignee: Christine Poerschke
>Priority: Trivial
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-8147.patch, SOLR-8147.patch
>
>
> The constructor and static create method in FieldFacetAccumulator don't need 
> to throw IOException, just SolrException. 






[jira] [Resolved] (SOLR-8215) SolrCloud can select a core not in active state for querying

2015-11-04 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker resolved SOLR-8215.
-
   Resolution: Fixed
 Assignee: Varun Thacker
Fix Version/s: 5.4

Thanks Mark for the review!

> SolrCloud can select a core not in active state for querying
> 
>
> Key: SOLR-8215
> URL: https://issues.apache.org/jira/browse/SOLR-8215
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.4
>
> Attachments: SOLR-8215.patch, SOLR-8215.patch
>
>
> A query can be served by a core which is not in an active state if the 
> request hits a node which hosts such non-active cores.
> We explicitly check for only active cores to search against in 
> {{CloudSolrClient#sendRequest}} (line 1043 on trunk),
> but we don't check this if someone uses the REST APIs: 
> {{HttpSolrCall#getCoreByCollection}} should only pick cores which are active 
> (line 794 on trunk). 
> We do, however, check it on lines 882/883 in HttpSolrCall, when we try to 
> find cores on other nodes because the core is not present locally.
> So let's fix {{HttpSolrCall#getCoreByCollection}} to make the active check 
> as well.
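The fix described above amounts to filtering candidate replicas on state and node liveness before routing a request. A minimal sketch with hypothetical stand-in types (not SolrJ's real Replica class):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class ActiveReplicaPick {
    enum State { ACTIVE, RECOVERING, DOWN }

    // Simplified stand-in for a SolrCloud replica descriptor.
    static class Replica {
        final String coreName, nodeName;
        final State state;
        Replica(String coreName, String nodeName, State state) {
            this.coreName = coreName; this.nodeName = nodeName; this.state = state;
        }
    }

    // Keep only replicas that are ACTIVE *and* hosted on a live node.
    static List<Replica> activeReplicas(List<Replica> all, Set<String> liveNodes) {
        List<Replica> active = new ArrayList<>();
        for (Replica r : all) {
            if (r.state == State.ACTIVE && liveNodes.contains(r.nodeName)) {
                active.add(r);
            }
        }
        return active;
    }

    public static void main(String[] args) {
        List<Replica> all = Arrays.asList(
            new Replica("core1", "node1", State.ACTIVE),
            new Replica("core2", "node2", State.RECOVERING),
            new Replica("core3", "node3", State.ACTIVE));
        // node3 is not live, so only core1 qualifies.
        System.out.println(activeReplicas(all, Collections.singleton("node1")).size()); // prints 1
    }
}
```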






[jira] [Commented] (SOLR-8215) SolrCloud can select a core not in active state for querying

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990197#comment-14990197
 ] 

ASF subversion and git services commented on SOLR-8215:
---

Commit 1712614 from [~varunthacker] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1712614 ]

SOLR-8215: Only active replicas should handle incoming requests against a 
collection (merged trunk 1712601)

> SolrCloud can select a core not in active state for querying
> 
>
> Key: SOLR-8215
> URL: https://issues.apache.org/jira/browse/SOLR-8215
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-8215.patch, SOLR-8215.patch
>
>






[jira] [Created] (SOLR-8238) Make Solr respect preferredLeader at startup

2015-11-04 Thread Peter Morgan (JIRA)
Peter Morgan created SOLR-8238:
--

 Summary: Make Solr respect preferredLeader at startup
 Key: SOLR-8238
 URL: https://issues.apache.org/jira/browse/SOLR-8238
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Peter Morgan
Priority: Minor


After setting the preferredLeader property, I noticed that upon restart, 
leaders revert to wherever they were previously running before REBALANCE was 
called. I would expect preferredLeader to influence the startup election, but 
it appears it is not observed.








[jira] [Commented] (SOLR-8222) Optimize count-only faceting when there are many expected matches-per-ord

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990171#comment-14990171
 ] 

ASF subversion and git services commented on SOLR-8222:
---

Commit 1712611 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1712611 ]

SOLR-8222: optimize method=dv faceting for counts

> Optimize count-only faceting when there are many expected matches-per-ord
> -
>
> Key: SOLR-8222
> URL: https://issues.apache.org/jira/browse/SOLR-8222
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-8222.patch, SOLR-8222.patch, SOLR-8222.patch
>
>
> This optimization for the JSON Facet API came up a few months ago on the 
> mailing list (I think by Toke).
> Basically, if one expects many hits per bucket, use a temporary array to 
> accumulate segment ords and map them all at the end to global ords.  This 
> saves redundant segOrd->globalOrd mappings at the cost of having to scan the 
> temp array.
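The accumulate-then-map idea can be sketched in plain Java (all names below are illustrative, not Solr's actual facet classes; the real code works against Lucene's ordinal map): count hits per segment ordinal first, then do one segOrd->globalOrd mapping per unique ordinal instead of one per hit.

```java
// Sketch: count-only faceting with deferred segOrd -> globalOrd mapping.
final class SegmentCountFold {
    /**
     * hitsPerSegment[s] = segment ords of matching docs in segment s;
     * segToGlobal[s][segOrd] = the global ord for that segment ord.
     */
    static int[] countByGlobalOrd(int[][] hitsPerSegment, int[][] segToGlobal, int numGlobalOrds) {
        int[] globalCounts = new int[numGlobalOrds];
        for (int s = 0; s < hitsPerSegment.length; s++) {
            // 1) accumulate into a cheap per-segment array (no per-hit global lookup)
            int[] segCounts = new int[segToGlobal[s].length];
            for (int segOrd : hitsPerSegment[s]) segCounts[segOrd]++;
            // 2) one mapping per unique segment ord, not per hit
            for (int segOrd = 0; segOrd < segCounts.length; segOrd++) {
                if (segCounts[segOrd] > 0) {
                    globalCounts[segToGlobal[s][segOrd]] += segCounts[segOrd];
                }
            }
        }
        return globalCounts;
    }
}
```

The win grows with the expected hits-per-bucket: the mapping cost becomes proportional to unique ords per segment, at the cost of scanning the temporary array once.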






[jira] [Commented] (SOLR-8222) Optimize count-only faceting when there are many expected matches-per-ord

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990106#comment-14990106
 ] 

ASF subversion and git services commented on SOLR-8222:
---

Commit 1712608 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1712608 ]

SOLR-8222: optimize method=dv faceting for counts

> Optimize count-only faceting when there are many expected matches-per-ord
> -
>
> Key: SOLR-8222
> URL: https://issues.apache.org/jira/browse/SOLR-8222
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-8222.patch, SOLR-8222.patch, SOLR-8222.patch
>
>
> This optimization for the JSON Facet API came up a few months ago on the 
> mailing list (I think by Toke).
> Basically, if one expects many hits per bucket, use a temporary array to 
> accumulate segment ords and map them all at the end to global ords.  This 
> saves redundant segOrd->globalOrd mappings at the cost of having to scan the 
> temp array.






[jira] [Commented] (SOLR-8057) Change default Sim to BM25 (w/backcompat config handling)

2015-11-04 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990072#comment-14990072
 ] 

Hoss Man commented on SOLR-8057:



The more I work on this and think about it, the more I think my current 
approach of putting luceneMatchVersion conditional logic in DefaultSimFactory 
is the wrong way to go (independent of the bugs that I seem to have uncovered 
in making SimFactories SolrCoreAware, which I'll confirm & file separately) 
...

I'm starting to think that a better long term solution would be to split this 
up into 3 discrete tasks/ideas...

{panel:title=Task #1 - Deprecate/rename DefaultSimilarityFactory in 5.x}
* clone DefaultSimilarityFactory -> ClassicSimilarityFactory
* prune DefaultSimilarityFactory down to a trivial subclass of 
ClassicSimilarityFactory
** make it log a warning on init
* change default behavior of IndexSchema to use ClassicSimilarityFactory 
directly
* mark DefaultSimilarityFactory as deprecated in 5.x, remove from trunk/6.0
{panel}

Task #1 would put us in a better position moving forward of having the factory 
names directly map to the underlying implementation, leaving less ambiguity 
when an explicit factory is specified in the schema.xml (either as the main 
similarity, or as a per-field similarity)

{panel:title="Task #2 - Make the wrapped per-field default in 
SchemaSimilarityFactory conditional on luceneMatchVersion"}
* use ClassicSimilarity as per-field default when luceneMatchVersion < 6.0
* use BM25Similarity as per-field default when luceneMatchVersion >= 6.0
{panel}

Task #2 would give us better defaults (via BM25) for people using 
SchemaSimilarityFactory moving forward, while existing users would have no back 
compat change.

{panel:title=Task #3 - Change the implicit default Similarity on trunk}
* make the Similarity init logic in IndexSchema conditional on luceneMatchVersion
* use ClassicSimilarityFactory as default when luceneMatchVersion < 6.0
* *use SchemaSimilarityFactory as default when luceneMatchVersion >= 6.0*
** combined with Task #2, this would mean the wrapped per-field default would 
be BM25
{panel}

Task #3 is where things start to get noticeably different from the goals I 
outlined when I originally filed this JIRA...

As far as I can tell, the chief reason SchemaSimilarityFactory wasn't made the 
implicit default in IndexSchema when it was introduced is because of how it 
differed/differs from DefaultSimilarity/ClassicSimilarity with respect to 
multi-clause queries -- see SchemaSimilarityFactory's class javadoc notes 
relating to {{queryNorm}} and {{coord}}.  Users were expected to think about 
this trade-off when making a conscious choice to switch from 
DefaultSimilarity/ClassicSimilarity to SchemaSimilarityFactory.  But (again, 
AFAICT) these discrepancies don't exist between SchemaSimilarityFactory's 
PerFieldSimilarityWrapper and BM25Similarity.   So if we want to make 
BM25Similarity the default when luceneMatchVersion >= 6.0, there doesn't seem 
to be any downside to _actually_ making SchemaSimilarityFactory (wrapping 
BM25Similarity) the default instead.



Task #1 seems like a no-brainer to me, and likewise Task #2 seems like a 
sensible change balancing new-user experience vs. backcompat -- so I'm going to 
go ahead and move forward with individual sub-tasks to tackle those (in that 
order).

If there are no concerns/objections to Task #3 by the time I get to that point, 
and if I haven't changed my mind that it's a good idea, I'll move forward with 
that as well -- the alternative is to stick with the original plan and make 
BM25SimilarityFactory (directly) the default when luceneMatchVersion >= 6.0.
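Task #2's version-conditional per-field default boils down to a check like the following (a hypothetical standalone helper, not Solr's actual code; the real logic would live in SchemaSimilarityFactory):

```java
// Sketch: choose the wrapped per-field default Similarity from luceneMatchVersion.
final class DefaultSimChooser {
    /** Illustrative: ClassicSimilarity (TF-IDF) before 6.0, BM25 from 6.0 on. */
    static String perFieldDefault(double luceneMatchVersion) {
        return luceneMatchVersion < 6.0 ? "ClassicSimilarity" : "BM25Similarity";
    }
}
```

Existing configs (luceneMatchVersion < 6.0) keep TF-IDF scoring unchanged, while new 6.0 configs get BM25 without any explicit similarity declaration.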


> Change default Sim to BM25 (w/backcompat config handling)
> -
>
> Key: SOLR-8057
> URL: https://issues.apache.org/jira/browse/SOLR-8057
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Blocker
> Fix For: Trunk
>
> Attachments: SOLR-8057.patch, SOLR-8057.patch
>
>
> LUCENE-6789 changed the default similarity for IndexSearcher to BM25 and 
> renamed "DefaultSimilarity" to "ClassicSimilarity"
> Solr needs to be updated accordingly:
> * a "ClassicSimilarityFactory" should exist w/expected behavior/javadocs
> * default behavior (in 6.0) when no similarity is specified in configs should 
> (ultimately) use BM25 depending on luceneMatchVersion
> ** either by assuming BM25SimilarityFactory or by changing the internal 
> behavior of DefaultSimilarityFactory
> * comments in sample configs need updated to reflect new default behavior
> * ref guide needs updated anywhere it mentions/implies that a particular 
> similarity is used (or implies TF-IDF is used by default)




[jira] [Commented] (SOLR-8215) SolrCloud can select a core not in active state for querying

2015-11-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990045#comment-14990045
 ] 

ASF subversion and git services commented on SOLR-8215:
---

Commit 1712601 from [~varunthacker] in branch 'dev/trunk'
[ https://svn.apache.org/r1712601 ]

SOLR-8215: Only active replicas should handle incoming requests against a 
collection

> SolrCloud can select a core not in active state for querying
> 
>
> Key: SOLR-8215
> URL: https://issues.apache.org/jira/browse/SOLR-8215
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-8215.patch, SOLR-8215.patch
>
>
> A query can be served by a core which is not in active state if the request 
> hits the node which hosts these non active cores.
> We explicitly check for only active cores to search against  in 
> {{CloudSolrClient#sendRequest}} Line 1043 on trunk.
> But we don't check this if someone uses the REST APIs. 
> {{HttpSolrCall#getCoreByCollection}} should only pick cores which are active 
> on line 794 on trunk. 
> We however check it on line 882/883 in HttpSolrCall, when we try to find 
> cores on other nodes when it's not present locally.
> So let's fix {{HttpSolrCall#getCoreByCollection}} to make the active check as 
> well.






[jira] [Updated] (SOLR-8222) Optimize count-only faceting when there are many expected matches-per-ord

2015-11-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8222:
---
Attachment: SOLR-8222.patch

Performance increase faceting 5M docs:
Field with 10 unique values:  +31%
Field with 100 unique values: +29%
Field with 1000 unique values: +59%
Field with 1 unique values: +88%
Field with 1M unique values: +115% 

> Optimize count-only faceting when there are many expected matches-per-ord
> -
>
> Key: SOLR-8222
> URL: https://issues.apache.org/jira/browse/SOLR-8222
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-8222.patch, SOLR-8222.patch, SOLR-8222.patch
>
>
> This optimization for the JSON Facet API came up a few months ago on the 
> mailing list (I think by Toke).
> Basically, if one expects many hits per bucket, use a temporary array to 
> accumulate segment ords and map them all at the end to global ords.  This 
> saves redundant segOrd->globalOrd mappings at the cost of having to scan the 
> temp array.






[jira] [Assigned] (SOLR-8222) Optimize count-only faceting when there are many expected matches-per-ord

2015-11-04 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-8222:
--

Assignee: Yonik Seeley

> Optimize count-only faceting when there are many expected matches-per-ord
> -
>
> Key: SOLR-8222
> URL: https://issues.apache.org/jira/browse/SOLR-8222
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: SOLR-8222.patch, SOLR-8222.patch
>
>
> This optimization for the JSON Facet API came up a few months ago on the 
> mailing list (I think by Toke).
> Basically, if one expects many hits per bucket, use a temporary array to 
> accumulate segment ords and map them all at the end to global ords.  This 
> saves redundant segOrd->globalOrd mappings at the cost of having to scan the 
> temp array.






[jira] [Updated] (SOLR-8215) SolrCloud can select a core not in active state for querying

2015-11-04 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8215:

Attachment: SOLR-8215.patch

Some tweaks to the patch:

- Moved {{verifyReplicaStatus}} to {{AbstractDistribZkTestBase}} so that it can 
be reused.
- {{SolrCore core = 
cores.getCore(leader.getStr(ZkStateReader.CORE_NAME_PROP));}} in the previous 
patch caused the test code to leave the core open, which made the test fail. 
Oddly, running the tests from the IDE never hit this. That code is now fixed.
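The underlying pitfall is reference counting: getCore() bumps a refcount that the caller must release. A simplified standalone model (hypothetical names, not Solr's actual SolrCore API):

```java
// Simplified model of the refcount pitfall: every successful getCore()-style
// call must be paired with close() (e.g. via try/finally), or the core is
// left open and shutdown checks fail.
final class RefCountedCore {
    private int refs = 1;            // 1 == the container's own reference
    int open()  { return ++refs; }   // what a getCore() call does
    int close() { return --refs; }   // what callers must remember to do
    boolean leftOpen() { return refs > 1; }
}
```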

> SolrCloud can select a core not in active state for querying
> 
>
> Key: SOLR-8215
> URL: https://issues.apache.org/jira/browse/SOLR-8215
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
> Attachments: SOLR-8215.patch, SOLR-8215.patch
>
>
> A query can be served by a core which is not in active state if the request 
> hits the node which hosts these non active cores.
> We explicitly check for only active cores to search against  in 
> {{CloudSolrClient#sendRequest}} Line 1043 on trunk.
> But we don't check this if someone uses the REST APIs. 
> {{HttpSolrCall#getCoreByCollection}} should only pick cores which are active 
> on line 794 on trunk. 
> We however check it on line 882/883 in HttpSolrCall, when we try to find 
> cores on other nodes when it's not present locally.
> So let's fix {{HttpSolrCall#getCoreByCollection}} to make the active check as 
> well.






[jira] [Assigned] (SOLR-8147) FieldFacetAccumulator constructor throws too many exceptions

2015-11-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-8147:
-

Assignee: Christine Poerschke

> FieldFacetAccumulator constructor throws too many exceptions
> 
>
> Key: SOLR-8147
> URL: https://issues.apache.org/jira/browse/SOLR-8147
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 5.0, Trunk
>Reporter: Scott Stults
>Assignee: Christine Poerschke
>Priority: Trivial
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-8147.patch
>
>
> The constructor and static create method in FieldFacetAccumulator don't need 
> to throw IOException, just SolrException. 






[jira] [Commented] (SOLR-8147) FieldFacetAccumulator constructor throws too many exceptions

2015-11-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989953#comment-14989953
 ] 

Christine Poerschke commented on SOLR-8147:
---

Happy to take this JIRA and apply/commit the patch.

[~sstults] - let me know if you'd like to go with the original patch, or if 
you'd prefer to attach a revised patch that uses {{IOException}} instead of 
{{SolrException}}, based on our comments above. Thank you.

> FieldFacetAccumulator constructor throws too many exceptions
> 
>
> Key: SOLR-8147
> URL: https://issues.apache.org/jira/browse/SOLR-8147
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Affects Versions: 5.0, Trunk
>Reporter: Scott Stults
>Priority: Trivial
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-8147.patch
>
>
> The constructor and static create method in FieldFacetAccumulator don't need 
> to throw IOException, just SolrException. 






[jira] [Commented] (SOLR-8223) Take care not to accidentally swallow OOMErrors

2015-11-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989937#comment-14989937
 ] 

Christine Poerschke commented on SOLR-8223:
---

I think this is good to commit. (Won't get to it today though.)

> Take care not to accidentally swallow OOMErrors
> ---
>
> Key: SOLR-8223
> URL: https://issues.apache.org/jira/browse/SOLR-8223
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8223.patch
>
>
> This was first noticed with 4.10.3, but it looks like it still applies to 
> trunk. There are a few places in the code where we catch {{Throwable}} and 
> then don't check for OOM or rethrow it. This behaviour means that OOM kill 
> scripts won't run, and the JVM can get into an undesirable state.
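The pattern the issue argues for can be sketched as follows (an illustrative helper, not the actual patch): wherever {{Throwable}} is caught, Errors must be rethrown so JVM-level OOM handling still fires.

```java
// Illustrative: never swallow Errors such as OutOfMemoryError when catching
// Throwable -- rethrow them so OOM kill scripts (-XX:OnOutOfMemoryError) run.
final class ThrowableGuard {
    static void rethrowIfError(Throwable t) {
        if (t instanceof Error) {
            throw (Error) t;  // includes OutOfMemoryError; not recoverable in-process
        }
        // plain Exceptions may be logged/handled by the caller
    }
}
```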






[jira] [Assigned] (SOLR-8223) Take care not to accidentally swallow OOMErrors

2015-11-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-8223:
-

Assignee: Christine Poerschke

> Take care not to accidentally swallow OOMErrors
> ---
>
> Key: SOLR-8223
> URL: https://issues.apache.org/jira/browse/SOLR-8223
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3
>Reporter: Mike Drob
>Assignee: Christine Poerschke
> Fix For: Trunk
>
> Attachments: SOLR-8223.patch
>
>
> This was first noticed with 4.10.3, but it looks like it still applies to 
> trunk. There are a few places in the code where we catch {{Throwable}} and 
> then don't check for OOM or rethrow it. This behaviour means that OOM kill 
> scripts won't run, and the JVM can get into an undesirable state.






[jira] [Updated] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6885:

Attachment: LUCENE-6885.patch

Attaching revised/simplified patch against trunk.

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch, LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Commented] (SOLR-8223) Take care not to accidentally swallow OOMErrors

2015-11-04 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989925#comment-14989925
 ] 

Mike Drob commented on SOLR-8223:
-

[~cpoerschke] - do you think this is good to commit, or are there other changes 
you think it should have? I tried to figure out what kind of meaningful tests I 
could add, but couldn't come up with anything that was non-trivial and useful.

> Take care not to accidentally swallow OOMErrors
> ---
>
> Key: SOLR-8223
> URL: https://issues.apache.org/jira/browse/SOLR-8223
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8223.patch
>
>
> This was first noticed with 4.10.3, but it looks like it still applies to 
> trunk. There are a few places in the code where we catch {{Throwable}} and 
> then don't check for OOM or rethrow it. This behaviour means that OOM kill 
> scripts won't run, and the JVM can get into an undesirable state.






[jira] [Commented] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989927#comment-14989927
 ] 

Christine Poerschke commented on LUCENE-6885:
-

Hadn't considered {{Collections.emptyMap()}} - good idea, thanks!
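The suggestion amounts to something like this (an illustrative helper, not the actual Lucene patch): when the input is null or empty, return the shared immutable empty map instead of allocating a HashMap with a guessed initialCapacity.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative: avoid an allocation (and an initialCapacity guess) in the
// null/empty case by returning the shared immutable empty map.
final class MapInit {
    static <K, V> Map<K, V> copyOrEmpty(Map<K, V> src) {
        return (src == null || src.isEmpty())
                ? Collections.emptyMap()   // shared, allocation-free
                : new HashMap<>(src);      // sized by HashMap's copy constructor
    }
}
```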

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[GitHub] lucene-solr pull request: SOLR-8166 provide config for tika's Pars...

2015-11-04 Thread uschindler
Github user uschindler commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153793014
  
Hey, yes, exactly like that :-) I will review it later. Give me a day or 
two; I am looking into merging it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989914#comment-14989914
 ] 

ASF GitHub Bot commented on SOLR-8166:
--

Github user uschindler commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153793014
  
Hey, yes, exactly like that :-) I will review it later. Give me a day or 
two; I am looking into merging it.


> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR of embedded images from pdf. 
> It would be nice to be able to configure the created ParseContext via an 
> xml config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
>class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
> parseContext.config
>   
> parseContext.config:
> 
>value="org.apache.tika.parser.pdf.PDFParserConfig">
> 
>   
> 






[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989824#comment-14989824
 ] 

Erick Erickson commented on SOLR-7989:
--

How about just a comment in the test about "don't waste too much time making 
this test pass if LIR code changes" or something?

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989808#comment-14989808
 ] 

Adrien Grand commented on LUCENE-6885:
--

This looks better to me indeed (or even better Collections.emptyMap() in the 
null case).

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Created] (SOLR-8237) Invalid parsing with solr edismax operators

2015-11-04 Thread Mahmoud Almokadem (JIRA)
Mahmoud Almokadem created SOLR-8237:
---

 Summary: Invalid parsing with solr edismax operators
 Key: SOLR-8237
 URL: https://issues.apache.org/jira/browse/SOLR-8237
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8.1
 Environment: Windows 2008 R2 - Apache Tomcat 7
Reporter: Mahmoud Almokadem
Priority: Critical


Using edismax as the parser, we get undesirable parsed queries and results. 
The following two cases show the strange behavior. Searching with these 
parameters

 "mm":"2",
 "df":"TotalField",
 "debug":"true",
 "indent":"true",
 "fl":"Title",
 "start":"0",
 "q.op":"AND",
 "fq":"",
 "rows":"10",
 "wt":"json" 
and the query is

"q":"+(public libraries)",
retrieves 502 documents with this parsed query

"rawquerystring":"+(public libraries)",
"querystring":"+(public libraries)",
"parsedquery":"(+(+(DisjunctionMaxQuery((Title:public^200.0 | 
TotalField:public^0.1)) DisjunctionMaxQuery((Title:libraries^200.0 | 
TotalField:libraries^0.1)/no_coord",
"parsedquery_toString":"+(+((Title:public^200.0 | TotalField:public^0.1) 
(Title:libraries^200.0 | TotalField:libraries^0.1)))"
and if the query is

"q":" (public libraries) "
then it retrieves 8 documents with this parsed query

"rawquerystring":" (public libraries) ",
"querystring":" (public libraries) ",
"parsedquery":"(+((DisjunctionMaxQuery((Title:public^200.0 | 
TotalField:public^0.1)) DisjunctionMaxQuery((Title:libraries^200.0 | 
TotalField:libraries^0.1)))~2))/no_coord",
"parsedquery_toString":"+(((Title:public^200.0 | TotalField:public^0.1) 
(Title:libraries^200.0 | TotalField:libraries^0.1))~2)"
So adding "+" before the parentheses (to require all tokens) retrieves more 
results than omitting it.


Request Handler defaults:

  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <str name="df">TotalField</str>
    <str name="q.op">AND</str>
    <str name="defType">edismax</str>
    <str name="qf">Title^200 TotalField^1</str>
  </lst>










[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989798#comment-14989798
 ] 

Mark Miller commented on SOLR-7989:
---

I'd still lean toward trying to get that test in, even if just in some nightly 
form. I think these specific tests are great, and I'm hoping LIR is not going 
to have to change too much. As recovery code hardens, my hope is it will 
actually need to change less and less.

Up to you though, I get the current concern with the test. But while the test 
is specific, it doesn't seem too terribly complicated. And we are going to want 
to add more and more specific tests over time. Perhaps we should do some work 
to make a test like this more possible with built in support somehow?

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (SOLR-8166) Introduce possibility to configure ParseContext in ExtractingRequestHandler/ExtractingDocumentLoader

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989781#comment-14989781
 ] 

ASF GitHub Bot commented on SOLR-8166:
--

Github user abinet commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153773456
  
All done. Is it clean enough to merge?


> Introduce possibility to configure ParseContext in 
> ExtractingRequestHandler/ExtractingDocumentLoader
> 
>
> Key: SOLR-8166
> URL: https://issues.apache.org/jira/browse/SOLR-8166
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - Solr Cell (Tika extraction)
>Affects Versions: 5.3
>Reporter: Andriy Binetsky
>Assignee: Uwe Schindler
>
> Currently there is no way to hand over additional configuration when 
> extracting documents with ExtractingRequestHandler/ExtractingDocumentLoader.
> For example, I need to put an org.apache.tika.parser.pdf.PDFParserConfig with 
> "extractInlineImages" set to true into the ParseContext to trigger 
> extraction/OCR of images embedded in PDFs. 
> It would be nice to be able to configure the created ParseContext via an 
> xml-config file, like TikaConfig does.
> I would suggest the following:
> solrconfig.xml:
> <requestHandler name="/update/extract" 
> class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
>   <str name="parseContext.config">parseContext.config</str>
> </requestHandler>
> parseContext.config:
> <entries>
>   <entry value="org.apache.tika.parser.pdf.PDFParserConfig">
>     <property name="extractInlineImages" value="true"/>
>   </entry>
> </entries>






[GitHub] lucene-solr pull request: SOLR-8166 provide config for tika's Pars...

2015-11-04 Thread abinet
Github user abinet commented on the pull request:

https://github.com/apache/lucene-solr/pull/206#issuecomment-153773456
  
All done. Is it clean enough for merge?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989768#comment-14989768
 ] 

Ishan Chattopadhyaya commented on SOLR-7989:


bq. Can we catch this in a unit test using the proxy jetties to simulate the 
partitions?

Right, there's the DownLeaderTest. However, I don't think we should commit the 
test. The test is quite a complex way of proving that there is a problem, and 
that the problem is fixed once this patch goes in. But, it relies on LIR logic, 
and if LIR code changes later, the test will have to keep up with that, which 
is maintenance work. I couldn't find an easier way (apart from LIR) to trigger 
this particular situation (down replica becoming leader, staying down).


> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989741#comment-14989741
 ] 

Noble Paul commented on SOLR-7989:
--

That is a unit test. It never has any clusterstate; it only tests leader 
election.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Assigned] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-7989:


Assignee: Noble Paul

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989723#comment-14989723
 ] 

Mark Miller commented on SOLR-7989:
---

Never mind, I see the new test is broken out on its own; I was just looking at 
the patch files. Awesome.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989719#comment-14989719
 ] 

Christine Poerschke commented on LUCENE-6885:
-

If the {{segmentReaders == null}} check is a complexity concern then perhaps
{code}
final Map segmentReaders = (oldReaders == null ? new HashMap<>(1) : new HashMap<>(oldReaders.size()));
{code}
instead?

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989720#comment-14989720
 ] 

Mark Miller commented on SOLR-7989:
---

Great catch Ishan!

bq. Here's how I hit upon this:

Can we catch this in a unit test using the proxy jetties to simulate the 
partitions?

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989715#comment-14989715
 ] 

Christine Poerschke commented on LUCENE-6885:
-

Fair point re: code complexity. The intention was to avoid allocation of the 
{{segmentReaders}} when they will remain empty because {{oldReaders}} is null 
and to allocate as many elements as will be needed (usually that would be more 
than the default 10 initial elements).

Would it be clearer to do
{code}
final Map segmentReaders = (oldReaders == null ? null : new HashMap<>(oldReaders.size()));
{code}
i.e. no lazy initialisation as such but the {{segmentReaders == null}} check 
later on would remain?
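The two sizing variants under discussion can be sketched with a plain map. This is a hypothetical standalone example, not the actual StandardDirectoryReader code; the map's generic parameters do not survive in the quoted snippets, so {{String -> Integer}} here is an assumption purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class SegmentReadersSizing {
  // Variant 1: always allocate; size to the expected entry count
  // (initial capacity 1 when there are no old readers to carry over).
  static Map<String, Integer> eagerlySized(Map<String, Integer> oldReaders) {
    return (oldReaders == null) ? new HashMap<>(1) : new HashMap<>(oldReaders.size());
  }

  // Variant 2: skip allocation entirely when oldReaders is null; the
  // caller's existing `segmentReaders == null` check decides later
  // whether the map is consulted at all.
  static Map<String, Integer> nullWhenUnused(Map<String, Integer> oldReaders) {
    return (oldReaders == null) ? null : new HashMap<>(oldReaders.size());
  }

  public static void main(String[] args) {
    Map<String, Integer> old = new HashMap<>();
    old.put("_0", 0);
    System.out.println(eagerlySized(null).size());    // 0: empty but non-null
    System.out.println(nullWhenUnused(null) == null); // true: caller must null-check
    System.out.println(nullWhenUnused(old).size());   // 0: allocated, pre-sized
  }
}
```

Either way the win is the same: avoiding HashMap's default capacity of 16 when the eventual size is known; the variants differ only in whether the null case costs one tiny allocation or one extra null check.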

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b90) - Build # 14785 - Failure!

2015-11-04 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14785/
Java: 64bit/jdk1.9.0-ea-b90 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestAuthenticationFramework

Error Message:
16 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestAuthenticationFramework: 1) Thread[id=507, 
name=qtp536629280-507, state=TIMED_WAITING, 
group=TGRP-TestAuthenticationFramework] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at java.lang.Thread.run(Thread.java:747)2) Thread[id=504, 
name=qtp536629280-504-selector-ServerConnectorManager@3cc0588f/3, 
state=RUNNABLE, group=TGRP-TestAuthenticationFramework] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=563, 
name=OverseerStateUpdate-94807652546904076-127.0.0.1:52344_solr-n_00, 
state=WAITING, group=Overseer state updater.] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342) at 
org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1153) at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)  
   at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:350)  
   at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
 at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:350)
 at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.amILeader(Overseer.java:411) 
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:143)   
  at java.lang.Thread.run(Thread.java:747)4) Thread[id=508, 
name=qtp536629280-508, state=TIMED_WAITING, 
group=TGRP-TestAuthenticationFramework] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=545, 
name=jetty-launcher-64-thread-4-SendThread(127.0.0.1:56450), 
state=TIMED_WAITING, group=TGRP-TestAuthenticationFramework] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)6) 
Thread[id=510, 
name=org.eclipse.jetty.server.session.HashSessionManager@302958dbTimer, 
state=TIMED_WAITING, group=TGRP-TestAuthenticationFramework] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolEx

[jira] [Updated] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6885:
-
Assignee: Christine Poerschke  (was: Adrien Grand)

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Assigned] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand reassigned LUCENE-6885:


Assignee: Adrien Grand  (was: Christine Poerschke)

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Updated] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7989:
---
Attachment: SOLR-7989.patch

From that test, {{LeaderElectionTest}}, the clusterstate was obtained as null. 
Added a check for that, so now the test passes smoothly as before.

Attached the updated patch; running the full suite of tests now.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, 
> SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Commented] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989684#comment-14989684
 ] 

Adrien Grand commented on LUCENE-6885:
--

I'm concerned the lazy initialization of segmentReaders makes the code more 
complex but does not really buy us anything?

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Commented] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989679#comment-14989679
 ] 

Ishan Chattopadhyaya commented on SOLR-7989:


This check for state, and more precisely the use of zkStateReader to 
update/access the cluster state, is causing a hang/stall in 
{{LeaderElectionTest}}. I'm looking into the cause and a fix.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) become leader, but stays down.






[jira] [Updated] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-6885:

Attachment: LUCENE-6885.patch

> StandardDirectoryReader (initialCapacity) tweaks
> 
>
> Key: LUCENE-6885
> URL: https://issues.apache.org/jira/browse/LUCENE-6885
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-6885.patch
>
>
> proposed patch against trunk to follow






[jira] [Created] (LUCENE-6885) StandardDirectoryReader (initialCapacity) tweaks

2015-11-04 Thread Christine Poerschke (JIRA)
Christine Poerschke created LUCENE-6885:
---

 Summary: StandardDirectoryReader (initialCapacity) tweaks
 Key: LUCENE-6885
 URL: https://issues.apache.org/jira/browse/LUCENE-6885
 Project: Lucene - Core
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


proposed patch against trunk to follow






[jira] [Commented] (LUCENE-6884) Analyzer.tokenStream() shouldn't throw IOException

2015-11-04 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989628#comment-14989628
 ] 

David Smiley commented on LUCENE-6884:
--

It's fine.

> Analyzer.tokenStream() shouldn't throw IOException
> --
>
> Key: LUCENE-6884
> URL: https://issues.apache.org/jira/browse/LUCENE-6884
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6884.patch
>
>
> I'm guessing that in the past, calling Analyzer.tokenStream() would call 
> TokenStream.reset() somewhere downstream, meaning that we had to deal with 
> IOExceptions.  However, tokenstreams are created entirely lazily now, so this 
> is unnecessary.






[jira] [Updated] (LUCENE-6884) Analyzer.tokenStream() shouldn't throw IOException

2015-11-04 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-6884:
--
Attachment: LUCENE-6884.patch

Patch, mostly just removing now-redundant try-catch blocks in tests.  This also 
removes IOExceptions from Tokenizer.setReader().

The only place that was actually throwing an IOException inside setReader was 
AbstractSpatialPrefixTreeFieldType, which I've changed to wrap as a 
RuntimeException.  It's apparently only used for the Solr analysis UI, so I 
think this should be fine, but it would be good if [~dsmiley] could 
double-check that.
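The wrapping pattern described above can be sketched in isolation. This is a hypothetical minimal example, not the actual AbstractSpatialPrefixTreeFieldType code; the {{ReaderConsumer}} interface and the "simulated" failure are stand-ins invented for illustration:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class WrapSetReader {
  // Stand-in for a setReader-style hook whose signature used to
  // declare a checked IOException.
  interface ReaderConsumer {
    void setReader(Reader reader) throws IOException;
  }

  // Once the public signature no longer throws IOException, any rare
  // internal failure is rethrown unchecked, preserving the cause.
  static void setReaderUnchecked(ReaderConsumer consumer, Reader reader) {
    try {
      consumer.setReader(reader);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    // A consumer that always fails, standing in for the one place
    // that actually threw inside setReader.
    ReaderConsumer failing = r -> { throw new IOException("simulated"); };
    try {
      setReaderUnchecked(failing, new StringReader("text"));
    } catch (RuntimeException e) {
      System.out.println(e.getCause().getMessage()); // prints "simulated"
    }
  }
}
```

The trade-off is the usual one: callers lose the compiler's reminder to handle the failure, which is acceptable when, as noted above, the only consumer is a diagnostic UI.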

> Analyzer.tokenStream() shouldn't throw IOException
> --
>
> Key: LUCENE-6884
> URL: https://issues.apache.org/jira/browse/LUCENE-6884
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-6884.patch
>
>
> I'm guessing that in the past, calling Analyzer.tokenStream() would call 
> TokenStream.reset() somewhere downstream, meaning that we had to deal with 
> IOExceptions.  However, tokenstreams are created entirely lazily now, so this 
> is unnecessary.






[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 578 - Failure

2015-11-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/578/

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
Captured an uncaught exception in thread: Thread[id=3362, 
name=RecoveryThread-source_collection_shard1_replica2, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=3362, 
name=RecoveryThread-source_collection_shard1_replica2, state=RUNNABLE, 
group=TGRP-CdcrReplicationHandlerTest]
at 
__randomizedtesting.SeedInfo.seed([A7C25BB36EFF5B30:86E31703444889]:0)
Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
at __randomizedtesting.SeedInfo.seed([A7C25BB36EFF5B30]:0)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:232)
Caused by: org.apache.solr.common.SolrException: java.io.FileNotFoundException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_A7C25BB36EFF5B30-001/jetty-002/cores/source_collection_shard1_replica2/data/tlog/tlog.006.1516917179223638016
 (No such file or directory)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:244)
at 
org.apache.solr.update.CdcrTransactionLog.incref(CdcrTransactionLog.java:173)
at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1079)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1579)
at 
org.apache.solr.update.UpdateLog.seedBucketsWithHighestVersion(UpdateLog.java:1610)
at org.apache.solr.core.SolrCore.seedVersionBuckets(SolrCore.java:877)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:534)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:225)
Caused by: java.io.FileNotFoundException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_A7C25BB36EFF5B30-001/jetty-002/cores/source_collection_shard1_replica2/data/tlog/tlog.006.1516917179223638016
 (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.(RandomAccessFile.java:243)
at 
org.apache.solr.update.CdcrTransactionLog.reopenOutputStream(CdcrTransactionLog.java:236)
... 7 more




Build Log:
[...truncated 9986 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrReplicationHandlerTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/solr/build/solr-core/test/J0/temp/solr.cloud.CdcrReplicationHandlerTest_A7C25BB36EFF5B30-001/init-core-data-001
   [junit4]   2> 366480 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[A7C25BB36EFF5B30]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true)
   [junit4]   2> 366480 INFO  
(SUITE-CdcrReplicationHandlerTest-seed#[A7C25BB36EFF5B30]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 366489 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 366495 INFO  (Thread-1621) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 366495 INFO  (Thread-1621) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 366595 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.ZkTestServer start zk server on port:35426
   [junit4]   2> 366596 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 366598 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 366602 INFO  (zkCallback-323-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@33e7adb8 
name:ZooKeeperConnection Watcher:127.0.0.1:35426 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 366602 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 366602 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 366602 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 366605 INFO  
(TEST-CdcrReplicationHandlerTest.doTest-seed#[A7C25BB36EFF5B30]) [] 
o.a.s.c.c.SolrZkCli

[jira] [Commented] (LUCENE-6276) Add matchCost() api to TwoPhaseDocIdSetIterator

2015-11-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989595#comment-14989595
 ] 

Adrien Grand commented on LUCENE-6276:
--

 - can you add to the javadocs of TwoPhaseIterator#matchCost that match costs 
need to be a positive number?
 - can you add some comments around the cost computation for 
disjunctions/conjunctions to explain the reasoning?
 - I would prefer termPositionsCost to be duplicated in PhraseWeight and 
SpanNearQuery than in TwoPhaseIterator, like we do for disjunctions 
(SpanOrQuery and DisjunctionScorer). I can understand the concerns around 
duplication but I think it's still cleaner than trying to share the logic by 
adding utility methods to TwoPhaseIterator.
 - I think SpanTermQuery.PHRASE_TO_SPAN_TERM_POSITIONS_COST should be static?

Otherwise the change looks good to me, I like the cost definition for 
conjunctions/disjunctions/phrases and we can tackle other queries in follow-up 
issues, but I think this is already a great start and will help execute slow 
queries more efficiently!

> Add matchCost() api to TwoPhaseDocIdSetIterator
> ---
>
> Key: LUCENE-6276
> URL: https://issues.apache.org/jira/browse/LUCENE-6276
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-6276-ExactPhraseOnly.patch, 
> LUCENE-6276-NoSpans.patch, LUCENE-6276-NoSpans2.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, LUCENE-6276.patch, 
> LUCENE-6276.patch
>
>
> We could add a method like TwoPhaseDISI.matchCost() defined as something like 
> estimate of nanoseconds or similar. 
> ConjunctionScorer could use this method to sort its 'twoPhaseIterators' array 
> so that cheaper ones are called first. Today it has no idea if one scorer is 
> a simple phrase scorer on a short field vs another that might do some geo 
> calculation or more expensive stuff.
> PhraseScorers could implement this based on index statistics (e.g. 
> totalTermFreq/maxDoc)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6884) Analyzer.tokenStream() shouldn't throw IOException

2015-11-04 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-6884:
-

 Summary: Analyzer.tokenStream() shouldn't throw IOException
 Key: LUCENE-6884
 URL: https://issues.apache.org/jira/browse/LUCENE-6884
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward
Priority: Minor


I'm guessing that in the past, calling Analyzer.tokenStream() would call 
TokenStream.reset() somewhere downstream, meaning that we had to deal with 
IOExceptions.  However, tokenstreams are created entirely lazily now, so this 
is unnecessary.






[jira] [Commented] (LUCENE-6875) New Serbian Filter

2015-11-04 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989556#comment-14989556
 ] 

Robert Muir commented on LUCENE-6875:
-

In general most are 1-1, but in this case I think the factory setup is fine; I 
think there should be an exception list in the test?

> New Serbian Filter
> --
>
> Key: LUCENE-6875
> URL: https://issues.apache.org/jira/browse/LUCENE-6875
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Nikola Smolenski
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: Trunk, 5.4
>
> Attachments: Lucene-Serbian-Regular-1.patch
>
>
> This is a new Serbian filter that works with regular Latin text (the current 
> filter works with "bald" Latin). I described in detail what it does and 
> why it is necessary on the wiki.






[jira] [Updated] (SOLR-8236) Federated Search (new) - NumFound

2015-11-04 Thread Tom Winch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Winch updated SOLR-8236:

Description: 
This issue describes a search component for estimating numFounds in federated 
search - that is, distributed search over documents stored in separated 
instances of SOLR (for example, one server per continent), where a single 
document (identified by an agreed, common unique id) may be stored in more than 
one server instance, with (possibly) differing fields and data.

When documents are present on more than one distributed server, which is 
normally the case in the federated search situation, then the numFound reported 
by the search is incorrect. For small result sets we may return all the 
document ids matching the query from each server, in order to compute an exact 
numFound. For large result sets this is impractical, and the numFound may be 
estimated using statistical techniques.

Statistical techniques may be driven by the following heuristic: if two shards 
always return the same numFound for queries, then they contain the same 
document ids, and the combined numFound is the same as for each. On the other 
hand, if two shards always return different numFounds for queries, then they 
likely contain independent document ids, and the numFounds should be summed.

This issue combines with others to provide full federated search support. See 
also SOLR-8234 and SOLR-8235.

–

Note that this is part of a new implementation of federated search as opposed 
to the older issues SOLR-3799 through SOLR-3805.

> Federated Search (new) - NumFound
> -
>
> Key: SOLR-8236
> URL: https://issues.apache.org/jira/browse/SOLR-8236
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tom Winch
>Priority: Minor
>
> This issue describes a search component for estimating numFounds in federated 
> search - that is, distributed search over documents stored in separated 
> instances of SOLR (for example, one server per continent), where a single 
> document (identified by an agreed, common unique id) may be stored in more 
> than one server instance, with (possibly) differing fields and data.
> When documents are present on more than one distributed server, which is 
> normally the case in the federated search situation, then the numFound 
> reported by the search is incorrect. For small result sets we may return all 
> the document ids matching the query from each server, in order to compute an 
> exact numFound. For large result sets this is impractical, and the numFound 
> may be estimated using statistical techniques.
> Statistical techniques may be driven by the following heuristic: if two 
> shards always return the same numFound for queries, then they contain the 
> same document ids, and the combined numFound is the same as for each. On the 
> other hand, if two shards always return different numFounds for queries, then 
> they likely contain independent document ids, and the numFounds should be 
> summed.
> This issue combines with others to provide full federated search support. See 
> also SOLR-8234 and SOLR-8235.
> –
> Note that this is part of a new implementation of federated search as opposed 
> to the older issues SOLR-3799 through SOLR-3805.
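The agreement heuristic above can be sketched as a toy estimator. The sampling scheme and the linear interpolation below are illustrative assumptions of mine, not the proposed component:

```java
public class NumFoundEstimator {
    // Sketch of the heuristic: compare the numFound reported by two shards
    // over a sample of queries. Always equal -> assume identical documents;
    // always different -> assume independent documents. Interpolate between
    // the two extremes by the observed agreement rate.
    static long estimate(long[] samplesA, long[] samplesB, long a, long b) {
        int agree = 0;
        for (int i = 0; i < samplesA.length; i++) {
            if (samplesA[i] == samplesB[i]) agree++;
        }
        double overlap = (double) agree / samplesA.length; // crude overlap proxy
        // overlap == 1 -> max(a, b); overlap == 0 -> a + b
        return Math.round(a + b - overlap * Math.min(a, b));
    }

    public static void main(String[] args) {
        long[] shardA   = {10, 25, 3, 7};
        long[] sameDocs = {10, 25, 3, 7}; // agrees on every sampled query
        long[] disjoint = {11, 20, 5, 9}; // never agrees
        System.out.println(estimate(shardA, sameDocs, 100, 100)); // 100
        System.out.println(estimate(shardA, disjoint, 100, 100)); // 200
    }
}
```

Full agreement collapses the combined count to a single shard's count; full disagreement sums them, matching the two ends of the heuristic.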






[jira] [Updated] (SOLR-8235) Federated Search (new) - Merge

2015-11-04 Thread Tom Winch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Winch updated SOLR-8235:

Description: 
This issue describes a SearchComponent for merging search results obtained from 
a DJoin distributed search (see SOLR-8234) as part of federated search - that 
is, distributed search over documents stored in separated instances of SOLR 
(for example, one server per continent), where a single document (identified by 
an agreed, common unique id) may be stored in more than one server instance, 
with (possibly) differing fields and data.

In the use of this search component, it is assumed that there is a single SOLR 
server (the "aggregator") that uses distributed search (shards=) to collect 
documents from other SOLR servers using DJoin (see SOLR-8234). The DJoin 
generates a result set containing parent documents each with child documents 
having the same unique id. This merge component turns each set of child 
documents into a single document conforming to the aggregator schema.

For example, suppose the aggregator schema defines a multi-valued integer 
field, 'num', and three shards return field values "48", 23, and "strawberry". 
Then the resulting merged field value would be [48, 23] and an error would be 
included for the NumberFormatException.

Custom field merge behaviour may be specified by defining custom field types in 
the usual way and this is supported via a MergeAbstractFieldType class.

This issue combines with others to provide full federated search support. See 
also SOLR-8234 and SOLR-8236.

–

Note that this is part of a new implementation of federated search as opposed 
to the older issues SOLR-3799 through SOLR-3805.

  was:
This issue describes a SearchComponent for merging search results obtained from 
a DJoin distributed search (see SOLR-8234) as part of federated search - that 
is, distributed search over documents stored in separated instances of SOLR 
(for example, one server per continent), where a single document (identified by 
an agreed, common unique id) may be stored in more than one server instance, 
with (possibly) differing fields and data.

In the use of this search component, it is assumed that there is a single SOLR 
server (the "aggregator") that uses distributed search (shards=) to collect 
documents from other SOLR servers using DJoin (see SOLR-8234). The DJoin 
generates a result set containing parent documents each with child documents 
having the same unique id. This merge component turns each set of child 
documents into a single document conforming to the aggregator schema.

For example, suppose the aggregator schema defines a multi-valued integer 
field, 'num', and three shards return field values "48", 23, and "strawberry". 
Then the resulting merged field value would be [48, 23] and an error would be 
included for the NumberFormatException.

Custom field merge behaviour may be specified by defining custom field types in 
the usual way and this is supported via a MergeAbstractFieldType class.

This issue combines with others to provide full federated search support. See 
also SOLR-8234 and SOLR-8236.
–
Note that this is part of a new implementation of federated search as opposed 
to the older issues SOLR-3799 through SOLR-3805.


> Federated Search (new) - Merge
> --
>
> Key: SOLR-8235
> URL: https://issues.apache.org/jira/browse/SOLR-8235
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tom Winch
>Priority: Minor
>  Labels: federated_search
> Fix For: 4.10.3
>
>
> This issue describes a SearchComponent for merging search results obtained 
> from a DJoin distributed search (see SOLR-8234) as part of federated search - 
> that is, distributed search over documents stored in separated instances of 
> SOLR (for example, one server per continent), where a single document 
> (identified by an agreed, common unique id) may be stored in more than one 
> server instance, with (possibly) differing fields and data.
> In the use of this search component, it is assumed that there is a single 
> SOLR server (the "aggregator") that uses distributed search (shards=) to 
> collect documents from other SOLR servers using DJoin (see SOLR-8234). The 
> DJoin generates a result set containing parent documents each with child 
> documents having the same unique id. This merge component turns each set of 
> child documents into a single document conforming to the aggregator schema.
> For example, suppose the aggregator schema defines a multi-valued integer 
> field, 'num', and three shards return field values "48", 23, and 
> "strawberry". Then the resulting merged field value would be [48, 23] and an 
> error would included for the NumberFormatException.
> Custom field merge behaviour may be specified by defining custom field types 
> in the usual way and 

[jira] [Commented] (LUCENE-6878) TopDocs.merge should use updateTop instead of pop / add

2015-11-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989501#comment-14989501
 ] 

Adrien Grand commented on LUCENE-6878:
--

These results make sense to me given the change. Thank you Daniel!

> TopDocs.merge should use updateTop instead of pop / add
> ---
>
> Key: LUCENE-6878
> URL: https://issues.apache.org/jira/browse/LUCENE-6878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: Trunk
>Reporter: Daniel Jelinski
>Assignee: Adrien Grand
>Priority: Trivial
> Fix For: 6.0, 5.4
>
> Attachments: LUCENE-6878.patch, speedtest.tar.gz
>
>
> The function TopDocs.merge uses PriorityQueue in a pattern: pop, update value 
> (ref.hitIndex++), add. JavaDocs for PriorityQueue.updateTop say that using 
> this function instead should be at least twice as fast.
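The pattern can be illustrated with a minimal int min-heap standing in for Lucene's `org.apache.lucene.util.PriorityQueue` (a sketch, not the real class): replacing the top in place needs one sift-down, while pop() followed by add() does a sift-down and then a sift-up.

```java
public class UpdateTopDemo {
    // Minimal int min-heap mimicking o.a.l.util.PriorityQueue's shape.
    static class MinHeap {
        final int[] heap;
        int size;
        MinHeap(int capacity) { heap = new int[capacity]; }
        void add(int v) { heap[size++] = v; siftUp(size - 1); }
        int top() { return heap[0]; }
        int pop() { int t = heap[0]; heap[0] = heap[--size]; siftDown(0); return t; }
        // updateTop: the caller changed the top's key; a single sift-down
        // restores order, instead of pop() (sift-down) plus add() (sift-up).
        void updateTop(int v) { heap[0] = v; siftDown(0); }
        void siftUp(int i) {
            while (i > 0 && heap[i] < heap[(i - 1) / 2]) {
                swap(i, (i - 1) / 2);
                i = (i - 1) / 2;
            }
        }
        void siftDown(int i) {
            while (true) {
                int l = 2 * i + 1, r = l + 1, m = i;
                if (l < size && heap[l] < heap[m]) m = l;
                if (r < size && heap[r] < heap[m]) m = r;
                if (m == i) return;
                swap(i, m);
                i = m;
            }
        }
        void swap(int a, int b) { int t = heap[a]; heap[a] = heap[b]; heap[b] = t; }
    }

    static String run() {
        MinHeap h = new MinHeap(8);
        for (int v : new int[] {5, 1, 9, 3}) h.add(v);
        // merge-style step: the smallest cursor advances to a new key
        // (like ref.hitIndex++), so only the top entry's key changed.
        h.updateTop(h.top() + 10);
        StringBuilder sb = new StringBuilder();
        while (h.size > 0) {
            if (sb.length() > 0) sb.append(',');
            sb.append(h.pop());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // heap order preserved after the update
    }
}
```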






[jira] [Updated] (LUCENE-6883) Getting exception _t.si (No such file or directory)

2015-11-04 Thread Tejas Jethva (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tejas Jethva updated LUCENE-6883:
-
Description: 
We are getting the following exception when trying to update the cache. 
Following are the two scenarios in which we get this error:

scenario 1:

2015-11-03 06:45:18,213 [main] ERROR java.io.FileNotFoundException: 
/app/cache/index-persecurity/PERSECURITY_INDEX-QCH/_mb.si (No such file or 
directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
at 
org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65)
.

scenario 2:

java.io.FileNotFoundException: 
/app/1.0.5_loadtest/index-persecurity/PERSECURITY_INDEX-ITQ/_t.si (No such file 
or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
at 
org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:347)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
at 
org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:284)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:247)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:169)
..



What might be the possible reasons for this?

  was:
We are getting the following exception when trying to update the cache. 
Following are the two scenarios in which we get this error:

scenario 1:

2015-11-03 06:45:18,213 [main] ERROR 
com.goldensource.activegateway.backoffice.launcher.BackOfficeJobLauncher  - 
Error while launching the Back Office job : java.io.FileNotFoundException: 
/ii010/app/cache/index-persecurity/PERSECURITY_INDEX-QUOTE_COMPOSITE_HISTORY/_mb.si
 (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
at 
org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexAccessor.isDummyCommitRequired(GSLuceneIndexAccessor.java:1114)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexAccessor.createIndexSearcher(GSLuceneIndexAccessor.java:1065)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexerSearcherImpl.createIndexSearcher(GSLuceneIndexerSearcherImpl.java:224)
at 
com.goldensource.activegateway.backoffice.cache.GSBackOfficeCacheUpdateManagerImpl.recreateIndexSearchers(GSBackOfficeCacheUpdateManagerImpl.java:249)
at 
com.goldensource.activegateway.backoffice.launcher.BackOfficeJobLauncher.main(BackOfficeJobLauncher.java:433)

scenario 2:

2015-08-19 21:31:37,788 [Camel (gsource) thread #3 - seda://updateCacheQueue] 
ERROR 
com.goldensource.activegateway.services.cache.impl.GSBBCacheUpdateManagerImpl  
- Error in updating cache java.io.FileNotFoundException: 
/app/1.0.5_loadtest/index-pe

[jira] [Updated] (SOLR-8235) Federated Search (new) - Merge

2015-11-04 Thread Tom Winch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Winch updated SOLR-8235:

 Labels: federated_search  (was: )
Description: 
This issue describes a SearchComponent for merging search results obtained from 
a DJoin distributed search (see SOLR-8234) as part of federated search - that 
is, distributed search over documents stored in separated instances of SOLR 
(for example, one server per continent), where a single document (identified by 
an agreed, common unique id) may be stored in more than one server instance, 
with (possibly) differing fields and data.

In the use of this search component, it is assumed that there is a single SOLR 
server (the "aggregator") that uses distributed search (shards=) to collect 
documents from other SOLR servers using DJoin (see SOLR-8234). The DJoin 
generates a result set containing parent documents each with child documents 
having the same unique id. This merge component turns each set of child 
documents into a single document conforming to the aggregator schema.

For example, suppose the aggregator schema defines a multi-valued integer 
field, 'num', and three shards return field values "48", 23, and "strawberry". 
Then the resulting merged field value would be [48, 23] and an error would be 
included for the NumberFormatException.

Custom field merge behaviour may be specified by defining custom field types in 
the usual way and this is supported via a MergeAbstractFieldType class.

This issue combines with others to provide full federated search support. See 
also SOLR-8234 and SOLR-8236.
–
Note that this is part of a new implementation of federated search as opposed 
to the older issues SOLR-3799 through SOLR-3805.

> Federated Search (new) - Merge
> --
>
> Key: SOLR-8235
> URL: https://issues.apache.org/jira/browse/SOLR-8235
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tom Winch
>Priority: Minor
>  Labels: federated_search
> Fix For: 4.10.3
>
>
> This issue describes a SearchComponent for merging search results obtained 
> from a DJoin distributed search (see SOLR-8234) as part of federated search - 
> that is, distributed search over documents stored in separated instances of 
> SOLR (for example, one server per continent), where a single document 
> (identified by an agreed, common unique id) may be stored in more than one 
> server instance, with (possibly) differing fields and data.
> In the use of this search component, it is assumed that there is a single 
> SOLR server (the "aggregator") that uses distributed search (shards=) to 
> collect documents from other SOLR servers using DJoin (see SOLR-8234). The 
> DJoin generates a result set containing parent documents each with child 
> documents having the same unique id. This merge component turns each set of 
> child documents into a single document conforming to the aggregator schema.
> For example, suppose the aggregator schema defines a multi-valued integer 
> field, 'num', and three shards return field values "48", 23, and 
> "strawberry". Then the resulting merged field value would be [48, 23] and an 
> error would included for the NumberFormatException.
> Custom field merge behaviour may be specified by defining custom field types 
> in the usual way and this is supported via a MergeAbstractFieldType class.
> This issue combines with others to provide full federated search support. See 
> also SOLR-8234 and SOLR-8236.
> –
> Note that this is part of a new implementation of federated search as opposed 
> to the older issues SOLR-3799 through SOLR-3805.
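The 'num' example above can be sketched as a type-coercion step. This is a hypothetical illustration of the merge rule only; the method name `merge` and the error format are my assumptions, not the actual MergeAbstractFieldType code:

```java
import java.util.ArrayList;
import java.util.List;

public class MergeSketch {
    // Coerce each shard value to the aggregator field type (a multi-valued
    // integer field): keep the values that convert, and record an error
    // for each value that raises NumberFormatException.
    static String merge(Object... shardValues) {
        List<Integer> merged = new ArrayList<>();
        List<String> errors = new ArrayList<>();
        for (Object v : shardValues) {
            try {
                merged.add(Integer.parseInt(String.valueOf(v)));
            } catch (NumberFormatException e) {
                errors.add("cannot convert to int: " + v);
            }
        }
        return merged + " errors=" + errors.size();
    }

    public static void main(String[] args) {
        // the example from the description: two usable values, one bad one
        System.out.println(merge("48", 23, "strawberry")); // [48, 23] errors=1
    }
}
```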






[jira] [Created] (LUCENE-6883) Getting exception _t.si (No such file or directory)

2015-11-04 Thread Tejas Jethva (JIRA)
Tejas Jethva created LUCENE-6883:


 Summary: Getting exception _t.si (No such file or directory)
 Key: LUCENE-6883
 URL: https://issues.apache.org/jira/browse/LUCENE-6883
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 4.2
Reporter: Tejas Jethva


We are getting the following exception when trying to update the cache. 
Following are the two scenarios in which we get this error:

scenario 1:

2015-11-03 06:45:18,213 [main] ERROR 
com.goldensource.activegateway.backoffice.launcher.BackOfficeJobLauncher  - 
Error while launching the Back Office job : java.io.FileNotFoundException: 
/ii010/app/cache/index-persecurity/PERSECURITY_INDEX-QUOTE_COMPOSITE_HISTORY/_mb.si
 (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
at 
org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:56)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:52)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:65)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexAccessor.isDummyCommitRequired(GSLuceneIndexAccessor.java:1114)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexAccessor.createIndexSearcher(GSLuceneIndexAccessor.java:1065)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexerSearcherImpl.createIndexSearcher(GSLuceneIndexerSearcherImpl.java:224)
at 
com.goldensource.activegateway.backoffice.cache.GSBackOfficeCacheUpdateManagerImpl.recreateIndexSearchers(GSBackOfficeCacheUpdateManagerImpl.java:249)
at 
com.goldensource.activegateway.backoffice.launcher.BackOfficeJobLauncher.main(BackOfficeJobLauncher.java:433)

scenario 2:

2015-08-19 21:31:37,788 [Camel (gsource) thread #3 - seda://updateCacheQueue] 
ERROR 
com.goldensource.activegateway.services.cache.impl.GSBBCacheUpdateManagerImpl  
- Error in updating cache java.io.FileNotFoundException: 
/app/1.0.5_loadtest/index-persecurity/PERSECURITY_INDEX-INTRADAY_TRADES_AND_QUOTES/_t.si
 (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241)
at 
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
at 
org.apache.lucene.codecs.lucene40.Lucene40SegmentInfoReader.read(Lucene40SegmentInfoReader.java:50)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:301)
at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:347)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:783)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:630)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:343)
at 
org.apache.lucene.index.StandardDirectoryReader.isCurrent(StandardDirectoryReader.java:326)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenNoWriter(StandardDirectoryReader.java:284)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:247)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:235)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:169)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexAccessor.createIndexSearcher(GSLuceneIndexAccessor.java:1074)
at 
com.goldensource.activegateway.cache.indexersearcher.impl.GSLuceneIndexerSearcherImpl.createIndexSearcher(GSLuceneIndexerSearcherImpl.java:224)
at 
com.goldensource.activegateway.services.cache.impl.GSBBCacheUpdateManagerImpl.updateCache(GSBBCacheUpdateManagerImpl.java:777)
at 
com.goldensource.activegateway.services.cache.impl.GSBBCacheUpdateManagerImpl.updateDataInCache(GSBBCacheUpdateManagerImpl.java:336)
at 
com.goldensource.activegateway.camel.processor.CacheUpdateProcessor.process(CacheUpdateProcessor.java:102)
at 
org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at 
org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)
at 
org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsync

[jira] [Updated] (SOLR-7989) Down replica elected leader, stays down after successful election

2015-11-04 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-7989:
---
Attachment: SOLR-7989.patch

Forgot the check for current state in the last patch. Added it now.

> Down replica elected leader, stays down after successful election
> -
>
> Key: SOLR-7989
> URL: https://issues.apache.org/jira/browse/SOLR-7989
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
> Attachments: DownLeaderTest.java, DownLeaderTest.java, 
> SOLR-7989.patch, SOLR-7989.patch, SOLR-7989.patch, SOLR-8233.patch
>
>
> It is possible that a down replica gets elected as a leader, and that it 
> stays down after the election.
> Here's how I hit upon this:
> * There are 3 replicas: leader, notleader0, notleader1
> * Introduced network partition to isolate notleader0, notleader1 from leader 
> (leader puts these two in LIR via zk).
> * Kill leader, remove partition. Now leader is dead, and both of notleader0 
> and notleader1 are down. There is no leader.
> * Remove LIR znodes in zk.
> * Wait a while, and there happens a (flawed?) leader election.
> * Finally, the state is such that one of notleader0 or notleader1 (which were 
> down before) becomes leader but stays down.





