[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+147) - Build # 18661 - Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18661/
Java: 32bit/jdk-9-ea+147 -server -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testBalanceShardUnique

Error Message:
Error from server at https://127.0.0.1:43079/solr: create the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:43079/solr: create the collection time out:180s
    at __randomizedtesting.SeedInfo.seed([CE243C078004CB9B:869C8A4A3AA46358]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.solr.cloud.CollectionsAPISolrJTest.testBalanceShardUnique(CollectionsAPISolrJTest.java:335)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:538)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-Tests-6.x - Build # 636 - Unstable

2016-12-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/636/

1 tests failed.
FAILED:  org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds

Error Message:
soft530 after hard529 but no hard530: 21544033565363257 !<= 21544033564229486

Stack Trace:
java.lang.AssertionError: soft530 after hard529 but no hard530: 
21544033565363257 !<= 21544033564229486
at 
__randomizedtesting.SeedInfo.seed([F8544A8E92E2702A:A980B30E2391408D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.SoftAutoCommitTest.testSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11533 lines...]
   [junit4] Suite: org.apache.solr.update.SoftAutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788897#comment-15788897
 ] 

David Smiley commented on SOLR-9684:


"priority" is way better than "schedule" IMO.

bq. (quoting me) We've already got a merge() streaming expression that seems 
remarkably close to this... the only difference here is favoring one stream's 
tuples over another. Maybe you could call the feature here mergePrioritized or 
something like that?

What do you think of my statement there?  Is it at least similar conceptually 
to merge()?  Then shouldn't it be named similarly?  No matter what name is 
chosen, the docs for merge() should point to the one created in this issue as 
it's awfully similar, even if the code might be fairly different.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, q="priority:low"))))
> {code}
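The scheduling behavior described above (emit every high-priority tuple before any low-priority tuple) can be sketched outside Solr. Below is a minimal Python sketch of the idea; the function and stream names are hypothetical stand-ins, not Solr's actual streaming API:

```python
def schedule(high, low):
    """Emit all tuples from the high-priority stream first, then fall
    back to the low-priority stream, mirroring the scheduler() idea."""
    yield from high   # drain the high-priority stream completely
    yield from low    # only then emit low-priority tuples

# An executor wrapping the scheduler would see tasks in priority order:
tasks = schedule(iter(["reindex", "rebalance"]), iter(["cleanup"]))
assert list(tasks) == ["reindex", "rebalance", "cleanup"]
```

The sketch shows why wrapping two topic() streams is enough: no priority field comparison is needed at emit time, because priority is encoded in which stream a task came from.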



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 222 - Failure

2016-12-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/222/

No tests ran.

Build Log:
[...truncated 41934 lines...]
prepare-release-no-sign:
[mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (33.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.4.0-src.tgz...
   [smoker] 30.5 MB in 0.03 sec (1185.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.4.0.tgz...
   [smoker] 65.0 MB in 0.05 sec (1190.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.4.0.zip...
   [smoker] 75.9 MB in 0.08 sec (919.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.4.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6184 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.4.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6184 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.4.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (56.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.4.0-src.tgz...
   [smoker] 40.1 MB in 0.52 sec (76.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.4.0.tgz...
   [smoker] 140.3 MB in 1.70 sec (82.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.4.0.zip...
   [smoker] 149.7 MB in 1.37 sec (109.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.4.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.4.0.tgz...
   [smoker]   **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.4.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker]   **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.4.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.4.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.4.0-java8
   [smoker] Creating Solr home directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.4.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|] [/] [-] [\]
   [smoker] Started Solr server on port 8983 (pid=3299). Happy searching!
   [smoker] 
   [smoker] 
   

[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2016-12-30 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788811#comment-15788811
 ] 

Cao Manh Dat commented on SOLR-9835:


Currently, PeerSync syncs on the tlog, so it is not a problem if the indexes on all replicas are the same. Leader election will be the same as today, except that when the new leader syncs successfully with the other replicas, it must replay its tlog to make all the necessary changes to its index.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas start in the same initial state, and each input is distributed across the replicas so that all replicas end up in the same next state. But this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its downtime, it has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state transfer, which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the leader applies the update to the IndexWriter; the other replicas just store the update in the UpdateLog (as in replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and updates.
> - Very fast recovery: replicas just have to download the missing segments.
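To make the proposed flow concrete, here is a minimal Python sketch of the state-transfer idea. All class and method names are hypothetical stand-ins for illustration, not SolrCloud's actual classes:

```python
class Replica:
    def __init__(self):
        self.index = []   # stand-in for the local Lucene index
        self.tlog = []    # stand-in for the UpdateLog

    def poll(self, leader):
        # Periodically pull the leader's latest "segments" wholesale.
        self.index = list(leader.index)

class Leader(Replica):
    def update(self, doc, replicas):
        self.index.append(doc)   # only the leader applies the update
        self.tlog.append(doc)
        for r in replicas:
            r.tlog.append(doc)   # followers merely log it

leader, follower = Leader(), Replica()
leader.update({"id": 1}, [follower])
assert follower.index == []            # not applied yet, only logged
follower.poll(leader)
assert follower.index == leader.index  # caught up by copying segments
```

The sketch makes the recovery claim visible: a follower that missed updates never replays them into its own index; it just copies the leader's current segments on the next poll.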






Re: Installing PyLucene

2016-12-30 Thread Andi Vajda

> On Dec 30, 2016, at 15:07, marco turchi  wrote:
> 
> Dear Andi,
> thanks a lot for your answers!
> 
> 
>> You do not need root privileges if you don't modify the system python. One
>> way to achieve that is to set up a python virtualenv first and install jcc
>> and pylucene into it instead of the system python.
>> 
>> 
> Do you mean to install a new version of python in one of my folders and use
> it for installing JCC and pyLucene?

No, I mean to set up a python virtualenv.

Andi..


> 
> 
>>> I'm using a
>>> version of python (2.7.5) available in anaconda and our cluster is not
>>> connected to the WEB, so I cannot use setuptools.
>> 
>> You can use setuptools without a web connection, why not ?
>> 
> 
> Sorry, you are right. I thought that setuptools needed to be connected to the
> Web to download the required libraries.
> 
> 
>> 
>> 
>> Ah, here, to build Java Lucene, ivy is required and without a web
>> connection, it's going to be more difficult. You need to somehow make sure
>> that all things ivy is going to download during the Lucene build (a one
>> time setup only) are already there when you build Lucene.
>> You could do this on an equivalent machine that has a web connection and
>> then copy the local ivy tree to the machine that doesn't.
>> 
> 
> This is a great suggestion, thanks a lot! I'm going to try this in the next
> days!!
> 
> Best,
> Marco
> 
> 
>> 
>> Andi..
>> 
>>> 
>>> resolve:
>>> 
>>> Am I doing anything wrong? do you have any suggestions to help me to
>>> proceed with the installation?
>>> 
>>> Thanks a lot in advance for your help!
>>> 
>>> Best Regards,
>>> Marco
>> 
>> 


[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+147) - Build # 2552 - Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2552/
Java: 32bit/jdk-9-ea+147 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrGangliaReporterTest.testReporter

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
    at __randomizedtesting.SeedInfo.seed([7771410E074DBDD4:28956C396C412E91]:0)
    at java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:938)
    at java.base/java.util.ArrayList$Itr.next(ArrayList.java:888)
    at org.apache.solr.metrics.reporters.SolrGangliaReporterTest.testReporter(SolrGangliaReporterTest.java:76)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:538)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.base/java.lang.Thread.run(Thread.java:844)
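The ConcurrentModificationException above is the classic symptom of a collection being structurally modified while an iterator is walking it. The same bug class is easy to see in miniature; this is a Python analogue (Python raises RuntimeError for dicts where Java's ArrayList iterator throws ConcurrentModificationException), not the Solr test code itself:

```python
metrics = {"requests": 1, "errors": 0}

# Mutating a collection while iterating it is the same failure mode that
# produces java.util.ConcurrentModificationException in Java.
try:
    for name in metrics:
        metrics["timeouts"] = 0   # structural change mid-iteration
except RuntimeError as e:
    print("caught:", e)           # dictionary changed size during iteration

# Common fix: iterate over a snapshot, then mutate the live collection freely.
for name in list(metrics):
    metrics.setdefault("timeouts", 0)
```

In the Java test the usual remedies are the same shape: iterate over a copy, or guard the shared list with synchronization.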




Build Log:
[...truncated 12751 lines...]
   [junit4] Suite: org.apache.solr.metrics.reporters.SolrGangliaReporterTest
   [junit4]   2> Creating dataDir: 

[jira] [Comment Edited] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788744#comment-15788744
 ] 

Joel Bernstein edited comment on SOLR-9495 at 12/31/16 2:13 AM:


Thanks [~gus_heck]!


was (Author: joel.bernstein):
Thanks [~gus_heck]!]

> AIOBE with confusing message for incomplete sort spec in Streaming Expression
> -
>
> Key: SOLR-9495
> URL: https://issues.apache.org/jira/browse/SOLR-9495
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2
> Environment: 6.2.0_RC1
>Reporter: Gus Heck
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9495.patch
>
>
> I was thinking of using streaming expressions for something, and started to 
> play around with it, but I made a bonehaded mistake, and got an error that's 
> pretty confusing: 
> {code}{"result-set":{"docs":[
> {"EXCEPTION":"1","EOF":true}]}}{code}
> This turns out to be due to: 
> {code}
>   at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.solr.client.solrj.io.stream.expr.StreamFactory.createInstance(StreamFactory.java:316)
>   ... 33 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.solr.client.solrj.io.stream.CloudSolrStream.parseComp(CloudSolrStream.java:334)
>   at org.apache.solr.client.solrj.io.stream.CloudSolrStream.init(CloudSolrStream.java:274)
>   at org.apache.solr.client.solrj.io.stream.CloudSolrStream.<init>(CloudSolrStream.java:181)
>   ... 38 more
> {code}
> The mistake I made was omitting a direction from the sort spec. Attaching 
> trivial patch to provide a better error message...
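The quoted trace bottoms out in CloudSolrStream.parseComp with an ArrayIndexOutOfBoundsException: 1, which fits the report: the parser presumably splits each sort clause and reads the direction at index 1, so a clause with the direction omitted has no index 1. A rough Python analogue of the failure and of the friendlier message the patch aims for (hypothetical names, not the actual Solr code):

```python
def parse_sort_clause(clause):
    """Parse a 'field direction' sort clause, e.g. 'price asc'.
    Hypothetical analogue of the parsing in CloudSolrStream.parseComp."""
    parts = clause.split()
    if len(parts) != 2:
        # The friendlier behavior: a readable error instead of an
        # index-out-of-bounds when reading parts[1].
        raise ValueError(
            f"Invalid sort spec {clause!r}: expected '<field> asc|desc'")
    field, direction = parts
    return field, direction

assert parse_sort_clause("price asc") == ("price", "asc")
try:
    parse_sort_clause("price")   # direction omitted, as in the report
except ValueError as err:
    print(err)
```

Validating the clause length up front is exactly the kind of trivial check the attached patch describes: the parse still fails, but with a message that names the mistake.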






[jira] [Commented] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788744#comment-15788744
 ] 

Joel Bernstein commented on SOLR-9495:
--

Thanks [~gus_heck]!]







[jira] [Resolved] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9495.
--
   Resolution: Resolved
Fix Version/s: 6.4
   master (7.0)

> AIOBE with confusing message for incomplete sort spec in Streaming Expression
> -
>
> Key: SOLR-9495
> URL: https://issues.apache.org/jira/browse/SOLR-9495
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2
> Environment: 6.2.0_RC1
>Reporter: Gus Heck
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9495.patch
>
>
> I was thinking of using streaming expressions for something, and started to 
> play around with it, but I made a bonehaded mistake, and got an error that's 
> pretty confusing: 
> {code}{"result-set":{"docs":[
> {"EXCEPTION":"1","EOF":true}]}}{code}
> This turns out to be due to: 
> {code}
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.solr.client.solrj.io.stream.expr.StreamFactory.createInstance(StreamFactory.java:316)
>   ... 33 more
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.parseComp(CloudSolrStream.java:334)
>   at 
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.init(CloudSolrStream.java:274)
>   at 
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.(CloudSolrStream.java:181)
>   ... 38 more
> {code}
> The mistake I made was omitting a direction from the sort spec. Attaching 
> trivial patch to provide a better error message...
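For illustration, here is a minimal sketch of the kind of fix described above. This is not the actual SOLR-9495 patch; the class name, method name, and exception type are hypothetical. The idea is simply to validate a sort clause before indexing into the split result, so that a missing direction produces a descriptive message instead of an `ArrayIndexOutOfBoundsException`:

```java
public class SortSpecParser {
    // Parse one sort clause such as "price_f asc" into {field, direction}.
    // Splitting on whitespace and blindly reading parts[1] is what triggers
    // the AIOBE when the direction is omitted; checking the length first
    // lets us fail with a useful message instead.
    public static String[] parseClause(String clause) {
        String[] parts = clause.trim().split("\\s+");
        if (parts.length != 2) {
            throw new IllegalArgumentException("Invalid sort spec: '" + clause
                + "' (expected '<field> asc|desc')");
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] ok = parseClause("price_f asc");
        System.out.println(ok[0] + " / " + ok[1]); // prints price_f / asc
        try {
            parseClause("price_f"); // direction omitted
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // clear error, no AIOBE
        }
    }
}
```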



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788739#comment-15788739
 ] 

ASF subversion and git services commented on SOLR-9495:
---

Commit ecac79b4e5ab75261bd604f8a874a4c38653146a in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ecac79b ]

SOLR-9495: AIOBE with confusing message for incomplete sort spec in Streaming 
Expression








[jira] [Commented] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788740#comment-15788740
 ] 

ASF subversion and git services commented on SOLR-9495:
---

Commit a7bb14b6cd9bcb91b2a53d30d6463b86afd39c52 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a7bb14b ]

SOLR-9495: Update CHANGES.txt








[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2016-12-30 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788716#comment-15788716
 ] 

Pushkar Raste commented on SOLR-9835:
-

How are we handling leader failure here? If replicas are somewhat out of sync 
with the original leader, how would we elect a new leader?

When the leader fails and a new leader gets elected, the new leader asks all 
the replicas to sync with it. My understanding is: since we are replicating 
the index by fetching segments from the leader, most of the segments on all 
the replicas should look the same, hence the replicas will not go into full 
index copying. Is that correct?
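The segment-polling recovery under discussion can be sketched roughly as follows. This is a simplified illustration, not Solr's actual replication code; the class and method names are made up. A replica compares its set of segment files with the leader's and fetches only what it is missing, which is why recovery avoids a full index copy:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SegmentPoller {
    // Return the segment files the replica still needs from the leader.
    static List<String> missingSegments(Set<String> leader, Set<String> replica) {
        List<String> missing = new ArrayList<>(leader);
        missing.removeAll(replica); // keep only segments the replica lacks
        missing.sort(null);         // natural order, just for stable output
        return missing;
    }

    public static void main(String[] args) {
        Set<String> leader  = new HashSet<>(Arrays.asList("_0.cfs", "_1.cfs", "_2.cfs"));
        Set<String> replica = new HashSet<>(Arrays.asList("_0.cfs", "_1.cfs"));
        // Only the newly flushed segment is transferred, not the whole index.
        System.out.println(missingSegments(leader, replica)); // prints [_2.cfs]
    }
}
```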

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is state-machine replication: 
> replicas start in the same initial state, and each input is distributed 
> across the replicas so that all replicas end up in the same next state. But 
> this type of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates during its 
> downtime, it has to download the entire index from its leader.
> So we propose another replication mode for SolrCloud, called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to its IndexWriter; the other replicas just store 
> the update in their UpdateLog (as in master/slave replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.






[jira] [Commented] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788703#comment-15788703
 ] 

ASF subversion and git services commented on SOLR-9495:
---

Commit 832d02bf494c8fea02398db31b55de4314f2be8a in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=832d02b ]

SOLR-9495: Update CHANGES.txt








[jira] [Commented] (SOLR-9495) AIOBE with confusing message for incomplete sort spec in Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788697#comment-15788697
 ] 

ASF subversion and git services commented on SOLR-9495:
---

Commit 61676188d7f592f697933b6051806c0bc55b406a in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6167618 ]

SOLR-9495: AIOBE with confusing message for incomplete sort spec in Streaming 
Expression








[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 242 - Still Unstable

2016-12-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/242/

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor153.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:729)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:791)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1042)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:907)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:799)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:877)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:529)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor153.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:729)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:791)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1042)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:907)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:799)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:877)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:529)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([E84C126075BAB855]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_112) - Build # 6321 - Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6321/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
[C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\snapshot_metadata,
 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\,
 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\index.20161230165436675,
 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\index.20161230165436784]
 expected:<3> but was:<4>

Stack Trace:
java.lang.AssertionError: 
[C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\snapshot_metadata,
 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\,
 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\index.20161230165436675,
 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_D60F14DA91FDF1AE-001\solr-instance-007\.\collection1\data\index.20161230165436784]
 expected:<3> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([D60F14DA91FDF1AE:217CFA8257155E48]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:907)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1339)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Comment Edited] (SOLR-9867) The schemaless example can not be started after being stopped.

2016-12-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788522#comment-15788522
 ] 

Mark Miller edited comment on SOLR-9867 at 12/30/16 11:07 PM:
--

Thanks for looking into this Varun.

I think this behavior was built into CoreContainer previously and must have 
been removed in a refactoring. Now I see no code that actually waits for a core 
to load, just the methods on CoreContainer for it.

So we can restore the expected behavior here and wait for the core to load 
rather than throw an exception:

{noformat}
if (core != null) {
  path = path.substring(idx);
} else if (cores.isCoreLoading(corename)) { // extra mem barriers, so 
don't look at this before trying to get core
  throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "SolrCore is 
loading");
} else {
  // the core may have just finished loading
  core = cores.getCore(corename);
  if (core != null) {
path = path.substring(idx);
  } 
}
{noformat}



was (Author: markrmil...@gmail.com):
Thanks for looking into this Varun.

I think this behavior was built into CoreContainer previously and must have 
been removed in a refactoring. Now I see no code that actually waits for a core 
to load, just the methods on CoreContainer for it.

So we can restore the expected behavior here and wait for the core to load 
rather than throw an exception:

{noformat}
if (core != null) {
  path = path.substring(idx);
} else if (cores.isCoreLoading(corename)) { // extra mem barriers, so 
don't look at this before trying to get core
  throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "SolrCore is 
loading");
} else {
  // the core may have just finished loading
  core = cores.getCore(corename);
  if (core != null) {
path = path.substring(idx);
  } 
}
{noformat}

I think it now causes a loading exception to be thrown rather than waiting for 
the core to load though.

> The schemaless example can not be started after being stopped.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Fix For: master (7.0), 6.4
>
>
> I'm having trouble when I start up the schemaless example after shutting 
> down.
> I first tracked this down to the fact that the run example tool gets an 
> error when it tries to create the SolrCore (again: it already exists), and 
> so it deletes the core's instance dir, which leads to tlog and index lock 
> errors in Solr.
> The reason it tries to create the core when it already exists is that the 
> run example tool uses a core status call to check existence, and because the 
> core is loading, we don't consider it as existing. I added a check to look 
> for core.properties.
> That seemed to let me start up, but my first requests failed because the 
> core was still loading. It appears CoreContainer#getCore is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.
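The core.properties check mentioned above could look roughly like this. It is a hypothetical sketch, not the run example tool's actual code: treat a core instance dir as existing if its core.properties file is present on disk, rather than relying on a status call that ignores still-loading cores.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CoreExistsCheck {
    // A core instance dir "exists" for our purposes if core.properties is
    // present, even when the core itself is still loading.
    static boolean coreExists(Path instanceDir) {
        return Files.exists(instanceDir.resolve("core.properties"));
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("core-test");
        System.out.println(coreExists(dir));          // false: no core.properties yet
        Files.createFile(dir.resolve("core.properties"));
        System.out.println(coreExists(dir));          // true
    }
}
```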






Re: Installing PyLucene

2016-12-30 Thread marco turchi
Dear Andi,
thanks a lot for your answers!


> You do not need root privileges if you don't modify the system python. One
> way to achieve that is to setup a python virtualenv first and install jcc
> and pylucene into it instead of the system python.
>
>
Do you mean to install a new version of Python in one of my folders and use
it for installing JCC and PyLucene?


> > I'm using a
> > version of python (2.7.5) available in anaconda and our cluster is not
> > connected to the WEB, so I cannot use setuptools.
>
> You can use setuptools without a web connection, why not ?
>

Sorry, you are right. I thought that setuptools needed to be connected to the
Web to download the required libraries.


>
>
> Ah, here, to build Java Lucene, ivy is required and without a web
> connection, it's going to be more difficult. You need to somehow make sure
> that all things ivy is going to download during the Lucene build (a one
> time setup only) are already there when you build Lucene.
> You could do this on an equivalent machine that has a web connection and
> then copy the local ivy tree to the machine that doesn't.
>

This is a great suggestion, thanks a lot! I'm going to try this in the next
days!!

Best,
Marco


>
> Andi..
>
> >
> > resolve:
> >
> > Am I doing anything wrong? do you have any suggestions to help me to
> > proceed with the installation?
> >
> > Thanks a lot in advance for your help!
> >
> > Best Regards,
> > Marco
>
>


[jira] [Commented] (SOLR-9867) The schemaless example can not be started after being stopped.

2016-12-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788522#comment-15788522
 ] 

Mark Miller commented on SOLR-9867:
---

Thanks for looking into this Varun.

I think this behavior was built into CoreContainer previously and must have 
been removed in a refactoring. Now I see no code that actually waits for a core 
to load, just the methods on CoreContainer for it.

So we can restore the expected behavior here and wait for the core to load 
rather than throw an exception:

{noformat}
if (core != null) {
  path = path.substring(idx);
} else if (cores.isCoreLoading(corename)) { // extra mem barriers, so 
don't look at this before trying to get core
  throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "SolrCore is 
loading");
} else {
  // the core may have just finished loading
  core = cores.getCore(corename);
  if (core != null) {
path = path.substring(idx);
  } 
}
{noformat}

I think it now causes a loading exception to be thrown rather than waiting for 
the core to load though.
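A minimal sketch of the blocking behavior proposed above, assuming a container that exposes an isCoreLoading method (the real CoreContainer API is more involved; the names here are illustrative): poll until the core finishes loading, with a timeout, instead of immediately returning a "SolrCore is loading" error.

```java
public class CoreWaiter {
    // Hypothetical view of the container; stands in for CoreContainer.
    interface Cores {
        boolean isCoreLoading(String name);
        Object getCore(String name);
    }

    // Wait (up to timeoutMs) for a loading core instead of failing fast.
    static Object getCoreBlocking(Cores cores, String name, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (cores.isCoreLoading(name) && System.currentTimeMillis() < deadline) {
            Thread.sleep(20); // back off briefly between checks
        }
        return cores.getCore(name); // may be null if loading never finished
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated core that finishes loading after three polls.
        final int[] polls = {0};
        Cores cores = new Cores() {
            public boolean isCoreLoading(String name) { return ++polls[0] < 3; }
            public Object getCore(String name) { return "core:" + name; }
        };
        System.out.println(getCoreBlocking(cores, "collection1", 1000)); // prints core:collection1
    }
}
```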







[jira] [Updated] (SOLR-9867) The schemaless example can not be started after being stopped.

2016-12-30 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-9867:
--
Fix Version/s: 6.4
   master (7.0)

> The schemaless example can not be started after being stopped.
> --
>
> Key: SOLR-9867
> URL: https://issues.apache.org/jira/browse/SOLR-9867
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Fix For: master (7.0), 6.4
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an 
> error when it tries to create the SolrCore (again, it already exists), and so 
> it deletes the core's instance dir, which leads to tlog and index lock errors 
> in Solr.
> The reason it seems to be trying to create the core when it already exists is 
> that the run example tool uses a core status call to check existence, and 
> because the core is still loading, we don't consider it to exist. I added a 
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core 
> was still loading. It appears CoreContainer#getCore is supposed to be 
> blocking so you don't have this problem, but there must be an issue, because 
> it is not blocking.
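The core.properties check described above can be sketched as a plain filesystem test. This is a minimal illustration only: `coreExists` and the temp-directory layout are hypothetical, not the actual RunExampleTool code.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: detect core existence via the core.properties marker file instead
// of a core status call, which reports a still-loading core as absent.
public class CoreExistsCheck {
    // Hypothetical helper; illustrative of the idea, not Solr's real code.
    static boolean coreExists(Path instanceDir) {
        return Files.exists(instanceDir.resolve("core.properties"));
    }

    public static void main(String[] args) throws Exception {
        Path instanceDir = Files.createTempDirectory("core-check");
        System.out.println(coreExists(instanceDir)); // no marker file yet
        Files.createFile(instanceDir.resolve("core.properties"));
        System.out.println(coreExists(instanceDir)); // marker file present
    }
}
```

The point of the marker-file check is that core.properties exists on disk regardless of whether the core has finished loading.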






[jira] [Commented] (SOLR-8362) Add docValues support for TextField

2016-12-30 Thread Yago Riveiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788494#comment-15788494
 ] 

Yago Riveiro commented on SOLR-8362:


Streams only work with fields that have docValues configured. Since TextField 
doesn't support docValues, I thought that maybe if the field type had 
docValues, the streams would work.

We want the stored value instead; your explanation makes sense :)

> Add docValues support for TextField
> ---
>
> Key: SOLR-8362
> URL: https://issues.apache.org/jira/browse/SOLR-8362
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> At the last Lucene/Solr Revolution, Toke asked a question about why TextField 
> doesn't support docValues.  The short answer is because no one ever added it, 
> but the longer answer was because we would have to think through carefully 
> the _intent_ of supporting docValues for a "tokenized" field like TextField, 
> and how to support various conflicting use cases where they could be handy.






[jira] [Commented] (SOLR-2646) Integrate Solr benchmarking support into the Benchmark module

2016-12-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788487#comment-15788487
 ] 

Mark Miller commented on SOLR-2646:
---

I'm mainly waiting on SOLR-9867 to commit this.

> Integrate Solr benchmarking support into the Benchmark module
> -
>
> Key: SOLR-2646
> URL: https://issues.apache.org/jira/browse/SOLR-2646
> Project: Solr
>  Issue Type: New Feature
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: Dev-SolrBenchmarkModule.pdf, SOLR-2646.patch, 
> SOLR-2646.patch, SOLR-2646.patch, SOLR-2646.patch, SOLR-2646.patch, 
> SOLR-2646.patch, SOLR-2646.patch, SOLR-2646.patch, SOLR-2646.patch, 
> SOLR-2646.patch, SOLR-2646.patch, SOLR-2646.patch, 
> SolrIndexingPerfHistory.pdf, chart.jpg
>
>
> As part of my Buzzwords Solr perf talk, I did some work to allow some Solr 
> benchmarking with the benchmark module.
> I'll attach a patch with the current work I've done soon - there is still a 
> fair amount to clean up and fix - a couple of hacks or three - but it's 
> already fairly useful.






[jira] [Reopened] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reopened SOLR-9684:
--

Re-opening to possibly change the expression name.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}
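The high-before-low emission described in the issue can be sketched without any Solr dependencies. In this minimal sketch, `Queue<String>` stands in for a real TupleStream and `read()` mirrors the stream's read() contract; the class and task names are illustrative, not the actual SchedulerStream.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of scheduler() semantics: emit everything from the high-priority
// stream before anything from the low-priority stream. null = end of stream.
public class ScheduleSketch {
    static Queue<String> high = new ArrayDeque<>();
    static Queue<String> low = new ArrayDeque<>();

    // Drain the high-priority queue first, then the low-priority queue.
    static String read() {
        if (!high.isEmpty()) return high.poll();
        return low.poll();
    }

    public static void main(String[] args) {
        high.add("task-h1");
        high.add("task-h2");
        low.add("task-l1");
        StringBuilder order = new StringBuilder();
        String t;
        while ((t = read()) != null) order.append(t).append(' ');
        System.out.println(order.toString().trim());
    }
}
```

An executor wrapping such a stream would naturally pick up tasks in priority order, which is the composition the issue describes.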






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788456#comment-15788456
 ] 

Joel Bernstein commented on SOLR-9684:
--

We could also consider naming the expression 'priority':
{code}
executor(priority(topic(), topic()))
{code}

I'll reopen the ticket until this is decided.


> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (LUCENE-7608) Add a git .mailmap file to dedupe authors.

2016-12-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788439#comment-15788439
 ] 

Mark Miller commented on LUCENE-7608:
-

It's not something I really knew about; I was just trying to address the dupe 
author issue with a tool, and the tool's FAQ said git has a feature for this 
already.
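For reference, git's .mailmap maps the name/email variants that appear in commits onto one canonical identity per author. A minimal sketch with hypothetical entries (the names and addresses here are illustrative, not the project's actual contributors):

```
# Canonical identity first, then the variant as it appears in `git log`.
Jane Developer <jane@apache.org> <jane@oldhost.example>
Jane Developer <jane@apache.org> J. Developer <jane@apache.org>
```

With this file at the repository root, tools such as `git shortlog -se` fold both variants into the single canonical author line.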

> Add a git .mailmap file to dedupe authors.
> --
>
> Key: LUCENE-7608
> URL: https://issues.apache.org/jira/browse/LUCENE-7608
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: .mailmap, .mailmap
>
>







[jira] [Commented] (LUCENE-7608) Add a git .mailmap file to dedupe authors.

2016-12-30 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788435#comment-15788435
 ] 

Mark Miller commented on LUCENE-7608:
-

Yeah, sorry, I've only done Solr so far; I still have to get to Lucene.

> Add a git .mailmap file to dedupe authors.
> --
>
> Key: LUCENE-7608
> URL: https://issues.apache.org/jira/browse/LUCENE-7608
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Attachments: .mailmap, .mailmap
>
>







[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788401#comment-15788401
 ] 

Joel Bernstein commented on SOLR-9684:
--

I think it makes sense for the *executor* to wrap a *scheduler*. The 
semantics of this are nice. We can also use the schedule function as a facade to 
build out more scheduling capabilities by passing a scheduling algorithm. For 
example:

executor(schedule(COST, topic()))
executor(schedule(CRON, search()))
executor(schedule(PRIORITY, topic(), topic()))

The initial release is simple, but a nice first step.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-8362) Add docValues support for TextField

2016-12-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788393#comment-15788393
 ] 

David Smiley commented on SOLR-8362:


The semantics of using DocValues on tokenized text to re-index using 
UpdateStream are, I think, not at all a fit.  Instead... it'd be great if 
streaming expressions had a mechanism to consume stored-value fields for all 
docs, ideally in Lucene docId order for performance.  Definitely a separate 
issue from this one :-)

> Add docValues support for TextField
> ---
>
> Key: SOLR-8362
> URL: https://issues.apache.org/jira/browse/SOLR-8362
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> At the last Lucene/Solr Revolution, Toke asked a question about why TextField 
> doesn't support docValues.  The short answer is because no one ever added it, 
> but the longer answer was because we would have to think through carefully 
> the _intent_ of supporting docValues for a "tokenized" field like TextField, 
> and how to support various conflicting use cases where they could be handy.






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788373#comment-15788373
 ] 

David Smiley commented on SOLR-9684:


Using cost() for this sorta thing sounds great... then you could decorate a 
stream if you want to fix the cost, and then merge() could perhaps use cost.  
In any case, I really don't like the name "schedule" for this stream.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Comment Edited] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788344#comment-15788344
 ] 

Joel Bernstein edited comment on SOLR-9684 at 12/30/16 9:21 PM:


One of the things I think would be interesting would be to include a cost-based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method, which could be used to determine 
which tasks to schedule together. But this is going to take more thought and 
will probably involve walking the parse tree to find which collections are 
involved in the expression.

Also, the cost() method is not currently implemented, so we'd have to put some 
thought into how expressions calculate cost. Fairly soon we'll have to 
calculate cost for many expressions to support the Calcite cost-based join 
optimizer.


was (Author: joel.bernstein):
One of the things I think would be interesting would be to include a cost based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method which could be used to determine 
which tasks to schedule together. But this is going to take more thought and 
probably involve walking the parse tree to find which collections are involved 
in the expression.

Currently also the cost() method is not implemented so we'd have to put some 
thought into how expressions calculate cost.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1044 - Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1044/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([BDE24E105BFF7712:D55D7B3A8B6565FE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:289)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788344#comment-15788344
 ] 

Joel Bernstein edited comment on SOLR-9684 at 12/30/16 9:10 PM:


One of the things I think would be interesting would be to include a cost-based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method, which could be used to determine 
which tasks to schedule together. But this is going to take more thought and 
will probably involve walking the parse tree to find which collections are 
involved in the expression.

Also, the cost() method is not currently implemented, so we'd have to put some 
thought into how expressions calculate cost.


was (Author: joel.bernstein):
One of the things I think would be interesting would be to include a cost based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method which could be used to determine 
which tasks to schedule together. But this going to take more thought and 
probably involve walking the parse tree to find which collections are involved 
in that expression.

Currently also the cost() method is not implemented so we'd have to put some 
thought into how expression calculate cost.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-8362) Add docValues support for TextField

2016-12-30 Thread Yago Riveiro (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788348#comment-15788348
 ] 

Yago Riveiro commented on SOLR-8362:


Without docValues support for text fields, reindexing a collection using the 
Update Stream decorator is also not possible.

Streams are great for reindexing data with decent throughput. 

> Add docValues support for TextField
> ---
>
> Key: SOLR-8362
> URL: https://issues.apache.org/jira/browse/SOLR-8362
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> At the last lucene/solr revolution, Toke asked a question about why TextField 
> doesn't support docValues.  The short answer is because no one ever added it, 
> but the longer answer was because we would have to think through carefully 
> the _intent_ of supporting docValues for  a "tokenized" field like TextField, 
> and how to support various conflicting usecases where they could be handy.






[jira] [Comment Edited] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788344#comment-15788344
 ] 

Joel Bernstein edited comment on SOLR-9684 at 12/30/16 9:09 PM:


One of the things I think would be interesting would be to include a cost-based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method, which could be used to determine 
which tasks to schedule together. But this is going to take more thought and 
will probably involve walking the parse tree to find which collections are 
involved in that expression.

Also, the cost() method is not currently implemented, so we'd have to put some 
thought into how expressions calculate cost.


was (Author: joel.bernstein):
One of the things I think would be interesting would be to include a cost based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method which we could be used to determine 
which tasks to schedule to schedule together. But this going to take more 
thought and probably involve walking the parse tree to find which collections 
are involved in that expression.

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788344#comment-15788344
 ] 

Joel Bernstein commented on SOLR-9684:
--

One of the things I think would be interesting would be to include a cost-based 
scheduler, which we could build into this implementation.

Each expression implements a cost() method, which could be used to determine 
which tasks to schedule together. But this is going to take more thought and 
will probably involve walking the parse tree to find which collections are 
involved in that expression.
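One way the cost-based idea could play out is batching pending tasks together only while their combined estimated cost stays under a budget. This is a hedged sketch only: `Task`, `nextBatch`, the budget, and the cost values are all illustrative, and nothing here is existing Solr or streaming-expression API.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative cost-based batching: take the cheapest tasks first and stop
// when the next task would push the batch over the cost budget.
public class CostSchedulerSketch {
    record Task(String name, int cost) {}

    static List<Task> nextBatch(List<Task> pending, int budget) {
        List<Task> sorted = new ArrayList<>(pending);
        sorted.sort(Comparator.comparingInt(Task::cost));
        List<Task> batch = new ArrayList<>();
        int used = 0;
        for (Task t : sorted) {
            if (used + t.cost() > budget) break;
            batch.add(t);
            used += t.cost();
        }
        return batch;
    }

    public static void main(String[] args) {
        List<Task> pending =
            List.of(new Task("a", 5), new Task("b", 2), new Task("c", 9));
        // Budget 8 admits b (2) and a (5), but not c (9).
        for (Task t : nextBatch(pending, 8)) System.out.println(t.name());
    }
}
```

A real implementation would derive each task's cost from the expression's cost() method (once implemented), likely after walking the parse tree as described above.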

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (LUCENE-7613) Make Surround use DisjunctionMaxQuery for multiple fields

2016-12-30 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788338#comment-15788338
 ] 

Paul Elschot commented on LUCENE-7613:
--

I would not mind making a similar update for LUCENE-5205, but I am not 
familiar enough with the code there.

> Make Surround use DisjunctionMaxQuery for multiple fields
> -
>
> Key: LUCENE-7613
> URL: https://issues.apache.org/jira/browse/LUCENE-7613
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7613.patch
>
>







[jira] [Comment Edited] (LUCENE-7613) Make Surround use DisjunctionMaxQuery for multiple fields

2016-12-30 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788332#comment-15788332
 ] 

Paul Elschot edited comment on LUCENE-7613 at 12/30/16 9:01 PM:


Patch of 30 Dec 2016.

This does not affect the syntax of surround; it only adapts the Lucene side 
to make better use of Lucene facilities that are newer than the initial version 
of surround.

This uses DisjunctionMaxQuery when a query specifies multiple fields.
The method to convert to a lucene query also allows multiple default fields.

This adds methods to BasicQueryFactory to create a new SpanNearQuery and to 
create a new DisjunctionMaxQuery.

This uses SpanBoostQuery when proximity (sub)queries are boosted. There is no 
effect on the scores yet, LUCENE-7580 can change that.

This updates the test code to use CheckHits, and one test case is added.
The changes to the test code form the larger part of the patch.



was (Author: paul.elsc...@xs4all.nl):
Patch of 30 Dec 2016.

This does not affect the syntax of surround, this only adapts the lucene side 
to make better use of lucene facilities that are newer than the current version.

This uses DisjunctionMaxQuery when a query specifies multiple fields.
The method to convert to a lucene query also allows multiple default fields.

This adds methods to BasicQueryFactory to create a new SpanNearQuery and to 
create a new DisjunctionMaxQuery.

This uses SpanBoostQuery when proximity (sub)queries are boosted. There is no 
effect on the scores yet, LUCENE-7580 can change that.

This updates the test code to use CheckHits, and one test case is added.
The changes to the test code form the larger part of the patch.


> Make Surround use DisjunctionMaxQuery for multiple fields
> -
>
> Key: LUCENE-7613
> URL: https://issues.apache.org/jira/browse/LUCENE-7613
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7613.patch
>
>







[jira] [Updated] (LUCENE-7613) Make Surround use DisjunctionMaxQuery for multiple fields

2016-12-30 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-7613:
-
Attachment: LUCENE-7613.patch

Patch of 30 Dec 2016.

This does not affect the syntax of surround; it only adapts the Lucene side 
to make better use of Lucene facilities that are newer than those currently used.

This uses DisjunctionMaxQuery when a query specifies multiple fields.
The method to convert to a Lucene query also allows multiple default fields.

This adds methods to BasicQueryFactory to create a new SpanNearQuery and to 
create a new DisjunctionMaxQuery.

This uses SpanBoostQuery when proximity (sub)queries are boosted. There is no 
effect on the scores yet; LUCENE-7580 may change that.

This updates the test code to use CheckHits, and one test case is added.
The changes to the test code form the larger part of the patch.
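For readers unfamiliar with how DisjunctionMaxQuery combines per-field scores, here is a minimal sketch of its scoring rule (the function name is illustrative and not part of the patch): the best-matching field dominates, and the other fields contribute only through a tie-breaker multiplier.

```python
def dismax_score(field_scores, tie_breaker=0.0):
    """Combine per-field scores the way Lucene's DisjunctionMaxQuery does:
    take the maximum, and let the remaining fields contribute only through
    the tie-breaker multiplier (0.0 = pure max, 1.0 = plain sum)."""
    if not field_scores:
        return 0.0
    best = max(field_scores)
    # Tie-breaker scales the sum of all non-maximal field scores.
    return best + tie_breaker * (sum(field_scores) - best)
```

With `tie_breaker=0.0`, a document matching one field strongly is not outscored by a document matching several fields weakly: `dismax_score([2.0, 1.0])` is `2.0`, while `dismax_score([2.0, 1.0], 0.5)` is `2.5`.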


> Make Surround use DisjunctionMaxQuery for multiple fields
> -
>
> Key: LUCENE-7613
> URL: https://issues.apache.org/jira/browse/LUCENE-7613
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7613.patch
>
>







[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788328#comment-15788328
 ] 

Joel Bernstein commented on SOLR-9684:
--

We can think about the naming of this some more.

The reason I called it 'schedule' is that it *schedules* higher-priority 
tasks ahead of lower-priority tasks. More scheduling features could possibly be 
added in the future. 

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}
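The wrapping behaviour described above can be sketched as a generator. This is a hedged illustration of the semantics only, not Solr's actual stream implementation: drain the high-priority stream first, then the low-priority one.

```python
import itertools

def schedule(high_priority, low_priority):
    """Emit every tuple from the high-priority stream first, then every
    tuple from the low-priority stream, mirroring the schedule() decorator."""
    return itertools.chain(high_priority, low_priority)
```

Wrapped by executor(), this ordering is what makes high-priority tasks run first.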






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788321#comment-15788321
 ] 

Joel Bernstein commented on SOLR-9684:
--

Ah, missed it. Reading now..

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788316#comment-15788316
 ] 

David Smiley commented on SOLR-9684:


[~joel.bernstein] did you not see my feedback?

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Created] (LUCENE-7613) Make Surround use DisjunctionMaxQuery for multiple fields

2016-12-30 Thread Paul Elschot (JIRA)
Paul Elschot created LUCENE-7613:


 Summary: Make Surround use DisjunctionMaxQuery for multiple fields
 Key: LUCENE-7613
 URL: https://issues.apache.org/jira/browse/LUCENE-7613
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Paul Elschot
Priority: Minor









[jira] [Resolved] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9684.
--
   Resolution: Resolved
Fix Version/s: 6.4
   master (7.0)

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788305#comment-15788305
 ] 

ASF subversion and git services commented on SOLR-9684:
---

Commit d396f2d81e8ff52e65a8c2743ec3d4cafca13bc5 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d396f2d ]

SOLR-9684: Update CHANGES.txt


> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788302#comment-15788302
 ] 

ASF subversion and git services commented on SOLR-9684:
---

Commit 36a691c50d680d1c6977e6185448e06cb21f653d in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=36a691c ]

SOLR-9684: Update CHANGES.txt


> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Updated] (SOLR-9905) Add NullStream to isolate the performance of the ExportWriter

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9905:
-
Fix Version/s: 6.4
   master (7.0)

> Add NullStream to isolate the performance of the ExportWriter
> -
>
> Key: SOLR-9905
> URL: https://issues.apache.org/jira/browse/SOLR-9905
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9905.patch
>
>
> The NullStream is a utility function to test the raw performance of the 
> ExportWriter. This is a nice utility to have to diagnose bottlenecks in 
> streaming MapReduce operations. The NullStream will allow developers to test 
> the performance of the shuffling (Sorting, Partitioning, Exporting) in 
> isolation from the reduce operation (Rollup, Join, Group, etc..). 
> The NullStream simply iterates its internal stream and eats the tuples. It 
> returns a single Tuple from each worker with the number of Tuples processed. 
> The idea is to iterate the stream without additional overhead so the 
> performance of the underlying stream can be isolated.
> Sample syntax:
> {code}
> parallel(collection2, workers=7, sort="nullCount desc", 
>   null(search(collection1, 
>               q=*:*, 
>               fl="id", 
>               sort="id desc", 
>               qt="/export", 
>               wt="javabin", 
>               partitionKeys=id)))
> {code}
> In the example above the NullStream is sent to 7 workers. Each worker will 
> iterate the search() expression and the NullStream will eat the tuples so the 
> raw performance of the search() can be understood.
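As a hedged sketch (not the actual NullStream class), the behaviour described above amounts to consuming the wrapped stream and emitting a single count tuple:

```python
def null_stream(stream):
    """Eat every tuple of the wrapped stream and emit one summary tuple
    with the number of tuples processed, so only the cost of producing
    the underlying stream (e.g. a /export search) is measured."""
    count = 0
    for _ in stream:   # iterate with no per-tuple work beyond counting
        count += 1
    yield {"nullCount": count}
```

Because the body does nothing with each tuple, timing this loop isolates the shuffle (sort/partition/export) cost from any reduce operation.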






[jira] [Resolved] (SOLR-9905) Add NullStream to isolate the performance of the ExportWriter

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9905.
--
Resolution: Resolved

> Add NullStream to isolate the performance of the ExportWriter
> -
>
> Key: SOLR-9905
> URL: https://issues.apache.org/jira/browse/SOLR-9905
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9905.patch
>
>
> The NullStream is a utility function to test the raw performance of the 
> ExportWriter. This is a nice utility to have to diagnose bottlenecks in 
> streaming MapReduce operations. The NullStream will allow developers to test 
> the performance of the shuffling (Sorting, Partitioning, Exporting) in 
> isolation from the reduce operation (Rollup, Join, Group, etc..). 
> The NullStream simply iterates its internal stream and eats the tuples. It 
> returns a single Tuple from each worker with the number of Tuples processed. 
> The idea is to iterate the stream without additional overhead so the 
> performance of the underlying stream can be isolated.
> Sample syntax:
> {code}
> parallel(collection2, workers=7, sort="nullCount desc", 
>   null(search(collection1, 
>               q=*:*, 
>               fl="id", 
>               sort="id desc", 
>               qt="/export", 
>               wt="javabin", 
>               partitionKeys=id)))
> {code}
> In the example above the NullStream is sent to 7 workers. Each worker will 
> iterate the search() expression and the NullStream will eat the tuples so the 
> raw performance of the search() can be understood.






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788296#comment-15788296
 ] 

ASF subversion and git services commented on SOLR-9684:
---

Commit be119d2aa082e176c88dd72c674dbd406d5ec9a2 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be119d2 ]

SOLR-9684: Add schedule Streaming Expression


> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788277#comment-15788277
 ] 

ASF subversion and git services commented on SOLR-9684:
---

Commit f3fe487970f1e21300bd556d226461a2d51b00f9 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f3fe487 ]

SOLR-9684: Add schedule Streaming Expression


> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2016-12-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788263#comment-15788263
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user mattweber commented on the issue:

https://github.com/apache/lucene-solr/pull/129
  
Thanks @dsmiley!  I have just pushed up code with your suggestions, except 
for using `BytesRefHash`, because we might have the same `BytesRef` but 
need a different id when there is a position gap.

This has been great, love the feedback!


> Support Graph Token Streams in QueryBuilder
> ---
>
> Key: LUCENE-7603
> URL: https://issues.apache.org/jira/browse/LUCENE-7603
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, core/search
>Reporter: Matt Weber
>
> With [LUCENE-6664|https://issues.apache.org/jira/browse/LUCENE-6664] we can 
> use multi-term synonyms at query time.  A "graph token stream" will be created, 
> which is nothing more than using the position length attribute on 
> stacked tokens to indicate how many positions a token should span.  Currently 
> the position length attribute on tokens is ignored during query parsing.  
> This issue will add support for handling these graph token streams inside the 
> QueryBuilder utility class used by query parsers.
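To make the position-length idea concrete, here is a small sketch (the token layout is an assumed example, not taken from the patch): a synonym token such as "ny" stacked over "new york" gets position length 2, and enumerating paths through the resulting token graph recovers both phrasings.

```python
def graph_paths(tokens):
    """tokens: (term, position_increment, position_length) triples, as a
    Lucene TokenStream would report them. Returns every complete term
    path through the token graph."""
    edges, pos, end = {}, -1, 0
    for term, inc, length in tokens:
        pos += inc                       # increment 0 stacks on the same position
        edges.setdefault(pos, []).append((term, pos + length))
        end = max(end, pos + length)

    def walk(p):
        if p == end:
            return [[]]
        return [[t] + rest for t, nxt in edges.get(p, []) for rest in walk(nxt)]

    return [" ".join(path) for path in walk(0)]

# "ny" stacks on "new" (increment 0) and spans 2 positions.
tokens = [("ny", 1, 2), ("new", 0, 1), ("york", 1, 1)]
```

`graph_paths(tokens)` yields both "ny" and "new york"; ignoring position length would instead treat "ny" and "new" as alternatives at a single position.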






[GitHub] lucene-solr issue #129: LUCENE-7603: Support Graph Token Streams in QueryBui...

2016-12-30 Thread mattweber
Github user mattweber commented on the issue:

https://github.com/apache/lucene-solr/pull/129
  
Thanks @dsmiley!  I have just pushed up code with your suggestions, except 
for using `BytesRefHash`, because we might have the same `BytesRef` but 
need a different id when there is a position gap.

This has been great, love the feedback!





[jira] [Updated] (SOLR-9684) Add schedule Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9684:
-
Summary: Add schedule Streaming Expression  (was: Add scheduler Streaming 
Expression)

> Add schedule Streaming Expression
> -
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9684) Add scheduler Streaming Expression

2016-12-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788137#comment-15788137
 ] 

David Smiley commented on SOLR-9684:


When I saw the title of this issue, I thought this was something quite 
different than what it was -- I thought this was about executing something (or 
emitting tuples) at a certain time or in a periodic fashion.  

We've already got a {{merge()}} streaming expression that seems remarkably 
close to this... the only difference here is favoring one stream's tuples over 
another.  Maybe you could call the feature here mergePrioritized or something 
like that?

> Add scheduler Streaming Expression
> --
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






[jira] [Commented] (SOLR-9668) Support cursor paging in SolrEntityProcessor

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788136#comment-15788136
 ] 

ASF subversion and git services commented on SOLR-9668:
---

Commit b2d54f645db6e365497660cee1b3e059c6c2b4ca in lucene-solr's branch 
refs/heads/branch_6x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b2d54f6 ]

SOLR-9668: introduce cursorMark='true' for SolrEntityProcessor


> Support cursor paging in SolrEntityProcessor
> 
>
> Key: SOLR-9668
> URL: https://issues.apache.org/jira/browse/SOLR-9668
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Reporter: Yegor Kozlov
>Assignee: Mikhail Khludnev
>Priority: Minor
>  Labels: dataimportHandler
> Fix For: master (7.0)
>
> Attachments: SOLR-9668.patch, SOLR-9668.patch
>
>
> SolrEntityProcessor paginates using the start and rows parameters, which can 
> be very inefficient at large offsets. In fact, the current implementation is 
> impracticable for importing large amounts of data (10M+ documents): the 
> data import rate degrades from 1000 docs/second to 10 docs/second and the 
> import gets stuck.
> This patch introduces support for cursor paging, which offers more or less 
> predictable performance. In my tests the time to fetch the 1st and 1000th 
> pages was about the same, and the data import rate was stable throughout the 
> entire import. 
> To enable cursor paging a user needs to:
>  * add a {{cursorMark='true'}} attribute in the entity configuration;
>  * add a {{sort}} attribute in the entity configuration (see the note about 
> sorting at https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results);
>  * remove the {{timeout}} attribute.
> {code}
> <dataConfig>
>   <document>
>     <entity processor="SolrEntityProcessor"
>             query="*:*"
>             rows="1000"
>             cursorMark='true'
>             sort="id asc"
>             url="http://localhost:8983/solr/collection1"/>
>   </document>
> </dataConfig>
> {code}
> If the {{cursorMark}} attribute is missing or is not {{'true'}} then the 
> default start/rows pagination is used.
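The difference between the two pagination styles can be sketched as follows. `fetch_page` stands in for one SolrEntityProcessor request; the end-of-results convention (Solr returns the same cursorMark once the result set is exhausted) matches Solr's documented deep-paging behaviour.

```python
def fetch_all_with_cursor(fetch_page, page_size=1000):
    """Deep paging with cursorMark: each request passes the mark returned
    by the previous one, so page N costs about the same as page 1
    (unlike start=N*rows, which must skip over N*rows documents)."""
    cursor, docs = "*", []          # "*" is Solr's initial cursorMark
    while True:
        page, next_cursor = fetch_page(cursor, page_size)
        docs.extend(page)
        if next_cursor == cursor:   # a repeated mark signals the end
            return docs
        cursor = next_cursor
```

A real implementation would issue HTTP requests with `cursorMark=<mark>` and a stable `sort` including the uniqueKey; the loop structure is the point here.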






[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9684:
-
Description: 
SOLR-9559 adds a general purpose *parallel task executor* for streaming 
expressions. The executor() function executes a stream of tasks and doesn't 
have any concept of task priority.

The scheduler() function wraps two streams, a high priority stream and a low 
priority stream. The scheduler function emits tuples from the high priority 
stream first, and then the low priority stream.

The executor() function can then wrap the scheduler function to see tasks in 
priority order.

Pseudo syntax:
{code}
daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, 
q="priority:low"))))
{code}








  was:
SOLR-9559 adds a general purpose *parallel task executor* for streaming 
expressions. The executor() function executes a stream of tasks and doesn't 
have any concept of task priority.

The scheduler() function wraps two streams, a high priority stream and a low 
priority stream. The scheduler function emits tuples from the high priority 
stream first, and then the low priority stream.

The executor() function can then wrap the scheduler function to see tasks in 
priority order.

Pseudo syntax:
{code}
daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, 
"priority:low"))))
{code}









> Add scheduler Streaming Expression
> --
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, 
> q="priority:low"))))
> {code}






Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Martin Gainty
welcome Mikhail!


Martin
__



From: Erick Erickson 
Sent: Friday, December 30, 2016 11:31 AM
To: dev@lucene.apache.org
Subject: Re: Welcome Mikhail Khludnev to the PMC

Congrats Mikhail! Well deserved...

On Fri, Dec 30, 2016 at 8:27 AM, Shalin Shekhar Mangar
 wrote:
> Welcome Mikhail!
>
> On Fri, Dec 30, 2016 at 8:45 PM, Adrien Grand  wrote:
>> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
>> invitation to join.
>>
>> Welcome Mikhail!
>>
>> Adrien
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>




[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9684:
-
Description: 
SOLR-9559 adds a general purpose *parallel task executor* for streaming 
expressions. The executor() function executes a stream of tasks and doesn't 
have any concept of task priority.

The scheduler() function wraps two streams, a high priority stream and a low 
priority stream. The scheduler function emits tuples from the high priority 
stream first, and then the low priority stream.

The executor() function can then wrap the scheduler function to see tasks in 
priority order.

Pseudo syntax:
{code}
daemon(executor(schedule(topic(tasks, q="priority:high"), topic(tasks, 
q="priority:low"))))
{code}








  was:
SOLR-9559 adds a general purpose *parallel task executor* for streaming 
expressions. The executor() function executes a stream of tasks and doesn't 
have any concept of task priority.

The scheduler() function wraps two streams, a high priority stream and a low 
priority stream. The scheduler function emits tuples from the high priority 
stream first, and then the low priority stream.

The executor() function can then wrap the scheduler function to see tasks in 
priority order.

Pseudo syntax:
{code}
daemon(executor(scheduler(topic(tasks, q="priority:high"),
                          topic(tasks, q="priority:low"))))
{code}









> Add scheduler Streaming Expression
> --
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(schedule(topic(tasks, q="priority:high"),
>                          topic(tasks, q="priority:low"))))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-9668) Support cursor paging in SolrEntityProcessor

2016-12-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788126#comment-15788126
 ] 

ASF subversion and git services commented on SOLR-9668:
---

Commit cc862d8e67f32d5447599d265f5d126541ed92c9 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cc862d8 ]

SOLR-9668: introduce cursorMark='true' for SolrEntityProcessor


> Support cursor paging in SolrEntityProcessor
> 
>
> Key: SOLR-9668
> URL: https://issues.apache.org/jira/browse/SOLR-9668
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Reporter: Yegor Kozlov
>Assignee: Mikhail Khludnev
>Priority: Minor
>  Labels: dataimportHandler
> Fix For: master (7.0)
>
> Attachments: SOLR-9668.patch, SOLR-9668.patch
>
>
> SolrEntityProcessor paginates using the start and rows parameters, which can 
> be very inefficient at large offsets. In fact, with the current implementation 
> it is impractical to import large amounts of data (10M+ documents): the data 
> import rate degrades from 1000 docs/second to 10 docs/second and the 
> import gets stuck.
> This patch introduces support for cursor paging, which offers more or less 
> predictable performance. In my tests the time to fetch the 1st and the 1000th 
> page was about the same, and the data import rate was stable throughout the 
> entire import. 
> To enable cursor paging a user needs to:
>  * add a {{cursorMark='true'}} attribute in the entity configuration;
>  * add a {{sort}} attribute in the entity configuration (see the note about 
> sorting at https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results);
>  * remove the {{timeout}} attribute.
> {code}
> <dataConfig>
>   <document>
>     <entity processor="SolrEntityProcessor"
>             query="*:*"
>             rows="1000"
>             cursorMark='true'
>             sort="id asc"
>             url="http://localhost:8983/solr/collection1">
>     </entity>
>   </document>
> </dataConfig>
> {code}
> If the {{cursorMark}} attribute is missing or is not {{'true'}} then the 
> default start/rows pagination is used.
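For intuition, the difference between the two paging modes can be modeled outside Solr. This sketch (my own illustration, not the DIH code) shows the cursor idea: instead of skipping `start` documents on every page, each request resumes strictly after the last sort key seen, so every page costs about the same:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of cursor-style paging over a sorted id list,
// mirroring the required sort="id asc". In Lucene the cursor seeks rather
// than scans, which is what keeps deep pages cheap.
public class CursorPagingSketch {
    // 'ids' must be sorted ascending; 'cursor' is the last id already seen.
    public static List<Integer> pageAfter(List<Integer> ids, int cursor, int rows) {
        List<Integer> page = new ArrayList<>();
        for (int id : ids) {
            if (id > cursor) {            // resume strictly after the cursor
                page.add(id);
                if (page.size() == rows) break;
            }
        }
        return page;
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3, 4, 5);
        System.out.println(pageAfter(ids, 0, 2)); // [1, 2]
        System.out.println(pageAfter(ids, 2, 2)); // [3, 4]
    }
}
```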







Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Joel Bernstein
Welcome Mikhail!

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Dec 30, 2016 at 12:35 PM, Uwe Schindler  wrote:

> Welcome Mikhail!
>
> Uwe
>
>
> Am 30. Dezember 2016 16:15:57 MEZ schrieb Adrien Grand  >:
>>
>> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
>> invitation to join.
>>
>> Welcome Mikhail!
>>
>> Adrien
>>
>
> --
> Uwe Schindler
> Achterdiek 19, 28357 Bremen
> https://www.thetaphi.de
>


[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression

2016-12-30 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9684:
-
Attachment: SOLR-9684.patch

New patch with tests

> Add scheduler Streaming Expression
> --
>
> Key: SOLR-9684
> URL: https://issues.apache.org/jira/browse/SOLR-9684
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9684.patch, SOLR-9684.patch, SOLR-9684.patch
>
>
> SOLR-9559 adds a general purpose *parallel task executor* for streaming 
> expressions. The executor() function executes a stream of tasks and doesn't 
> have any concept of task priority.
> The scheduler() function wraps two streams, a high priority stream and a low 
> priority stream. The scheduler function emits tuples from the high priority 
> stream first, and then the low priority stream.
> The executor() function can then wrap the scheduler function to see tasks in 
> priority order.
> Pseudo syntax:
> {code}
> daemon(executor(scheduler(topic(tasks, q="priority:high"),
>                           topic(tasks, q="priority:low"))))
> {code}







[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3743 - Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3743/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([5870257F08ED7759:6FEBD1613021AAFD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.renewDelegationToken(TestDelegationWithHadoopAuth.java:118)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.verifyDelegationTokenRenew(TestDelegationWithHadoopAuth.java:301)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew(TestDelegationWithHadoopAuth.java:318)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+147) - Build # 2549 - Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2549/
Java: 32bit/jdk-9-ea+147 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrGangliaReporterTest.testReporter

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([81B50BEB03A6F18B:DE5126DC68AA62CE]:0)
at 
java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:938)
at java.base/java.util.ArrayList$Itr.next(ArrayList.java:888)
at 
org.apache.solr.metrics.reporters.SolrGangliaReporterTest.testReporter(SolrGangliaReporterTest.java:76)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:538)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12768 lines...]
   [junit4] Suite: org.apache.solr.metrics.reporters.SolrGangliaReporterTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1197 - Failure

2016-12-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1197/

10 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:753)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:767)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3173)
at 
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit(TestIndexingSequenceNumbers.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded


FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([BEA162A4B70D103B:6C512E47E9A2B609]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 

Re: Installing PyLucene

2016-12-30 Thread Andi Vajda

> On Dec 30, 2016, at 08:47, marco turchi  wrote:
> 
> Dear All,
> I'm new to PyLucene and I'm trying to install it in my home directory on a
> cluster. In this environment I do not have root privileges,

You do not need root privileges if you don't modify the system python. One way 
to achieve that is to set up a python virtualenv first and install jcc and 
pylucene into it instead of into the system python.

> I'm using a
> version of python (2.7.5) available in anaconda and our cluster is not
> connected to the WEB, so I cannot use setuptools.

You can use setuptools without a web connection, why not?

> I have followed the instructions for JCC and I have used:
> python setup.py build
> python setup.py install --users
> 
> and JCC is installed in $home/.local
> 
> Then I started to install PyLucene. I changed the Makefile (using the
> Linux (Debian Jessie 64-bit, Python 2.7.9, Oracle Java 1.8)
> configuration). The installation starts, but it gets stuck (for ages) here:
> ivy-configure:
> 
> [ivy:configure] :: loading settings :: file =
> /hltsrv0/turchi/Projects/QT21/WorkingFolder/NMT/software/pylucene-6.2.0/lucene-java-6.2.0/lucene/top-level-ivy-settings.xml

Ah, here: to build Java Lucene, ivy is required, and without a web connection 
it's going to be more difficult. You need to make sure that everything ivy 
would download during the Lucene build (a one-time setup) is already present 
when you build Lucene.
You could do this on an equivalent machine that has a web connection and then 
copy the local ivy tree to the machine that doesn't.

Andi..

> 
> resolve:
> 
> Am I doing anything wrong? do you have any suggestions to help me to
> proceed with the installation?
> 
> Thanks a lot in advance for your help!
> 
> Best Regards,
> Marco



Re: Welcome Christine Poerschke to the PMC

2016-12-30 Thread Uwe Schindler
Welcome Christine!

Greetings from Germany,
Uwe

Am 30. Dezember 2016 14:32:08 MEZ schrieb Dawid Weiss :
>Welcome Christine!
>
>On Fri, Dec 30, 2016 at 2:29 PM, Shalin Shekhar Mangar
> wrote:
>> Welcome Christine!
>>
>> On Fri, Dec 30, 2016 at 6:16 PM, Adrien Grand 
>wrote:
>>> I am pleased to announce that Christine Poerschke has accepted the
>PMC's
>>> invitation to join.
>>>
>>> Welcome Christine!
>>>
>>> Adrien
>>
>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Uwe Schindler
Welcome Mikhail!

Uwe

Am 30. Dezember 2016 16:15:57 MEZ schrieb Adrien Grand :
>I am pleased to announce that Mikhail Khludnev has accepted the PMC's
>invitation to join.
>
>Welcome Mikhail!
>
>Adrien

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

[jira] [Resolved] (SOLR-9907) in solr 6.2 select query with edismax and bf=rord(datecreated) is not working

2016-12-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-9907.
--
Resolution: Duplicate

Please raise issues like this on the user's list; many more people will see it 
there and you'll likely get help much more quickly.

If it's determined that this is a new problem with Solr code, _then_ raise a 
JIRA. 

This is a duplicate of SOLR-7495.

The current workaround is to declare the field multiValued="true". If you need 
to sort, use copyField to a single-valued field and sort on that.
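A minimal schema sketch of that workaround (the field and type names here are placeholders of my own, not taken from the report):

```xml
<!-- placeholder names; adjust to the actual schema -->
<field name="datecreated" type="date" indexed="true" stored="true"
       multiValued="true"/>
<field name="datecreated_sort" type="date" indexed="true" stored="false"/>
<copyField source="datecreated" dest="datecreated_sort"/>
```

Function queries such as bf=rord(datecreated) then run against the multiValued field, while sorting uses the copied single-valued field.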

> in solr 6.2 select query with edismax and bf=rord(datecreated) is not working 
> --
>
> Key: SOLR-9907
> URL: https://issues.apache.org/jira/browse/SOLR-9907
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
> Environment: x86_64 GNU/Linux
>Reporter: pramod kishore
>
> We have a SolrCloud cluster with 3 shards and 3 replicas running Solr 6.2. A 
> select query with edismax and bf=rord(datecreated), where datecreated is a 
> date field, gives an error.
> Error Details:
> --
> "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   
> "root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
> "msg":"org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[http://xyz:8983/solr/client_sku_shard3_replica3, 
> http://xyz:8983/solr/client_sku_shard2_replica2, 
> http://xyz:8983/solr/client_sku_shard2_replica1, 
> http://xyz:8983/solr/client_sku_shard2_replica3, 
> http://xyz:8983/solr/client_sku_shard1_replica1, 
> http://xyz:8983/solr/client_sku_shard3_replica2, 
> http://yxz:8983/solr/client_sku_shard1_replica2, 
> http://xyz:8983/solr/client_sku_shard3_replica1, 
> http://xyz:8983/solr/client_sku_shard1_replica3];,
> "trace":"org.apache.solr.common.SolrException: 
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[http://xyz:8983/solr/client_sku_shard3_replica3, 
> http://xyz:8983/solr/client_sku_shard2_replica2, 
> http://xyz:8983/solr/client_sku_shard2_replica1, 
> http://xyz:8983/solr/client_sku_shard2_replica3, 
> http://xyz:8983/solr/client_sku_shard1_replica1, 
> http://xyz:8983/solr/client_sku_shard3_replica2, 
> http://yxz:8983/solr/client_sku_shard1_replica2, 
> http://xyz:8983/solr/client_sku_shard3_replica1, 
> http://xyz:8983/solr/client_sku_shard1_replica3]\n\tat 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:415)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2089)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> 

PyLucene Subscribe

2016-12-30 Thread marco turchi
Hi
I'm new to PyLucene and I'd like to subscribe to the mailing list to post
some questions about the installation.

Best Regards,
Marco


[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2016-12-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788037#comment-15788037
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/129#discussion_r94244009
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/util/graph/GraphTokenStreamFiniteStrings.java
 ---
@@ -80,22 +77,41 @@ public boolean incrementToken() throws IOException {
 }
   }
 
+  private GraphTokenStreamFiniteStrings() {
+this.builder = new Automaton.Builder();
+  }
+
   /**
* Gets the list of finite string token streams from the given input 
graph token stream.
*/
-  public List getTokenStreams(final TokenStream in) throws 
IOException {
-// build automation
+  public static List getTokenStreams(final TokenStream in) 
throws IOException {
+GraphTokenStreamFiniteStrings gfs = new 
GraphTokenStreamFiniteStrings();
+return gfs.process(in);
+  }
+
+  /**
+   * Builds automaton and builds the finite string token streams.
+   */
+  private List process(final TokenStream in) throws 
IOException {
 build(in);
 
 List tokenStreams = new ArrayList<>();
 final FiniteStringsIterator finiteStrings = new 
FiniteStringsIterator(det);
 for (IntsRef string; (string = finiteStrings.next()) != null; ) {
   final BytesRef[] tokens = new BytesRef[string.length];
--- End diff --

Hmm; rather than materializing an array of tokens and increments, maybe you 
could simply give the IntsRef string to BytesRefArrayTokenStream (and make 
BRATS not static) so that it could do this on the fly? Not a big deal either 
way (current or my proposal). If you do as I suggest then BRATS would no 
longer be a suitable name; maybe simply FiniteStringTokenStream or 
CustomTokenStream.


> Support Graph Token Streams in QueryBuilder
> ---
>
> Key: LUCENE-7603
> URL: https://issues.apache.org/jira/browse/LUCENE-7603
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser, core/search
>Reporter: Matt Weber
>
> With [LUCENE-6664|https://issues.apache.org/jira/browse/LUCENE-6664] we can 
> use multi-term synonyms at query time.  A "graph token stream" will be created, 
> which is nothing more than using the position length attribute on 
> stacked tokens to indicate how many positions a token should span.  Currently 
> the position length attribute on tokens is ignored during query parsing.  
> This issue will add support for handling these graph token streams inside the 
> QueryBuilder utility class used by query parsers.
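The "finite strings" idea can be illustrated with a toy model (simplified token records of my own, not Lucene's TokenStream API): a token whose position length is greater than one spans several positions, and enumerating all paths through the resulting graph yields the distinct term sequences.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a graph token stream. "wtc" has position length 3, so the
// graph contains two finite strings: ["wtc"] and ["world", "trade", "center"].
public class GraphTokensSketch {
    static final class Token {
        final String term;
        final int pos;     // start position
        final int posLen;  // how many positions the token spans
        Token(String term, int pos, int posLen) {
            this.term = term; this.pos = pos; this.posLen = posLen;
        }
    }

    // Depth-first enumeration of every term path from position 'from' to 'end'.
    static void paths(List<Token> tokens, int from, int end,
                      List<String> prefix, List<List<String>> out) {
        if (from == end) {
            out.add(new ArrayList<>(prefix));
            return;
        }
        for (Token t : tokens) {
            if (t.pos == from) {
                prefix.add(t.term);
                paths(tokens, from + t.posLen, end, prefix, out);
                prefix.remove(prefix.size() - 1);
            }
        }
    }

    public static List<List<String>> finiteStrings(List<Token> tokens, int end) {
        List<List<String>> out = new ArrayList<>();
        paths(tokens, 0, end, new ArrayList<>(), out);
        return out;
    }

    public static void main(String[] args) {
        List<Token> graph = List.of(
            new Token("wtc", 0, 3),   // multi-term synonym spanning 3 positions
            new Token("world", 0, 1),
            new Token("trade", 1, 1),
            new Token("center", 2, 1));
        System.out.println(finiteStrings(graph, 3));
        // [[wtc], [world, trade, center]]
    }
}
```

Query building then turns each finite string into its own (phrase or boolean) query instead of ignoring the position length.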







[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2016-12-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788038#comment-15788038
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/129#discussion_r94241922
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/util/graph/GraphTokenStreamFiniteStrings.java
 ---
@@ -80,22 +77,41 @@ public boolean incrementToken() throws IOException {
 }
   }
 
+  private GraphTokenStreamFiniteStrings() {
+this.builder = new Automaton.Builder();
--- End diff --

The other fields are initialized at the declaration; might as well move 
this one there too?





[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2016-12-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788035#comment-15788035
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/129#discussion_r94243375
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/util/graph/GraphTokenStreamFiniteStrings.java
 ---
@@ -210,85 +199,41 @@ private void finish() {
*/
   private void finish(int maxDeterminizedStates) {
 Automaton automaton = builder.finish();
-
--- End diff --

So all this code here removed wasn't needed after all?  It's nice to see it 
all go away (less to maintain / less complexity) :-)





[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2016-12-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15788036#comment-15788036
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/129#discussion_r94243010
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/util/graph/GraphTokenStreamFiniteStrings.java
 ---
@@ -210,85 +199,41 @@ private void finish() {
*/
   private void finish(int maxDeterminizedStates) {
 Automaton automaton = builder.finish();
-
-// System.out.println("before det:\n" + automaton.toDot());
-
-Transition t = new Transition();
-
-// TODO: should we add "eps back to initial node" for all states,
-// and det that?  then we don't need to revisit initial node at
-// every position?  but automaton could blow up?  And, this makes it
-// harder to skip useless positions at search time?
-
-if (anyTermID != -1) {
-
-  // Make sure there are no leading or trailing ANY:
-  int count = automaton.initTransition(0, t);
-  for (int i = 0; i < count; i++) {
-automaton.getNextTransition(t);
-if (anyTermID >= t.min && anyTermID <= t.max) {
-  throw new IllegalStateException("automaton cannot lead with an 
ANY transition");
-}
-  }
-
-  int numStates = automaton.getNumStates();
-  for (int i = 0; i < numStates; i++) {
-count = automaton.initTransition(i, t);
-for (int j = 0; j < count; j++) {
-  automaton.getNextTransition(t);
-  if (automaton.isAccept(t.dest) && anyTermID >= t.min && 
anyTermID <= t.max) {
-throw new IllegalStateException("automaton cannot end with an 
ANY transition");
-  }
-}
-  }
-
-  int termCount = termToID.size();
-
-  // We have to carefully translate these transitions so automaton
-  // realizes they also match all other terms:
-  Automaton newAutomaton = new Automaton();
-  for (int i = 0; i < numStates; i++) {
-newAutomaton.createState();
-newAutomaton.setAccept(i, automaton.isAccept(i));
-  }
-
-  for (int i = 0; i < numStates; i++) {
-count = automaton.initTransition(i, t);
-for (int j = 0; j < count; j++) {
-  automaton.getNextTransition(t);
-  int min, max;
-  if (t.min <= anyTermID && anyTermID <= t.max) {
-// Match any term
-min = 0;
-max = termCount - 1;
-  } else {
-min = t.min;
-max = t.max;
-  }
-  newAutomaton.addTransition(t.source, t.dest, min, max);
-}
-  }
-  newAutomaton.finishState();
-  automaton = newAutomaton;
-}
-
 det = Operations.removeDeadStates(Operations.determinize(automaton, 
maxDeterminizedStates));
   }
 
-  private int getTermID(BytesRef term) {
-Integer id = termToID.get(term);
-if (id == null) {
-  id = termToID.size();
-  if (term != null) {
-term = BytesRef.deepCopyOf(term);
-  }
-  termToID.put(term, id);
+  /**
+   * Gets an integer id for a given term.
+   *
+   * If there is no position gaps for this token then we can reuse the id 
for the same term if it appeared at another
+   * position without a gap.  If we have a position gap generate a new id 
so we can keep track of the position
+   * increment.
+   */
+  private int getTermID(int incr, int prevIncr, BytesRef term) {
+assert term != null;
+boolean isStackedGap = incr == 0 && prevIncr > 1;
+boolean hasGap = incr > 1;
+term = BytesRef.deepCopyOf(term);
--- End diff --

The deepCopyOf is only needed if you generate a new ID, not for an existing 
one.  

BTW... have you seen BytesRefHash?  I think re-using that could minimize 
the code here to deal with this stuff.
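
The reviewer's point can be sketched outside Lucene: copy the incoming bytes only on the insert path, since a term that already has an id was copied when it was first seen. This is a hypothetical model (ByteBuffer keys standing in for BytesRef/BytesRefHash), not the patch's code.

```java
import java.nio.ByteBuffer;
import java.util.*;

public class TermIds {
    private final Map<ByteBuffer, Integer> termToId = new HashMap<>();

    // The incoming byte[] may be reused by the caller (like a tokenizer's
    // term buffer), so it must be defensively copied -- but only when a
    // new id is minted; an existing entry already owns a stable copy.
    int getTermId(byte[] reusableBytes, int off, int len) {
        ByteBuffer probe = ByteBuffer.wrap(reusableBytes, off, len);
        Integer id = termToId.get(probe);
        if (id != null) {
            return id;  // seen before: no copy needed
        }
        byte[] copy = Arrays.copyOfRange(reusableBytes, off, off + len);
        int newId = termToId.size();
        termToId.put(ByteBuffer.wrap(copy), newId);
        return newId;
    }

    public static void main(String[] args) {
        TermIds ids = new TermIds();
        byte[] buf = {1, 2, 3};
        System.out.println(ids.getTermId(buf, 0, 3));  // new term
        buf[0] = 9;  // caller reuses the buffer
        System.out.println(ids.getTermId(new byte[]{1, 2, 3}, 0, 3));
        System.out.println(ids.getTermId(buf, 0, 3));  // {9,2,3} is new
        // prints 0, 0, 1
    }
}
```

ByteBuffer's content-based equals/hashCode plays the role BytesRefHash would play in the real code, avoiding the copy on the lookup path.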












[GitHub] lucene-solr pull request #129: LUCENE-7603: Support Graph Token Streams in Q...

2016-12-30 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/129#discussion_r94244009
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/util/graph/GraphTokenStreamFiniteStrings.java
 ---
@@ -80,22 +77,41 @@ public boolean incrementToken() throws IOException {
 }
   }
 
+  private GraphTokenStreamFiniteStrings() {
+this.builder = new Automaton.Builder();
+  }
+
   /**
* Gets the list of finite string token streams from the given input 
graph token stream.
*/
-  public List getTokenStreams(final TokenStream in) throws 
IOException {
-// build automation
+  public static List getTokenStreams(final TokenStream in) 
throws IOException {
+GraphTokenStreamFiniteStrings gfs = new 
GraphTokenStreamFiniteStrings();
+return gfs.process(in);
+  }
+
+  /**
+   * Builds automaton and builds the finite string token streams.
+   */
+  private List process(final TokenStream in) throws 
IOException {
 build(in);
 
 List tokenStreams = new ArrayList<>();
 final FiniteStringsIterator finiteStrings = new 
FiniteStringsIterator(det);
 for (IntsRef string; (string = finiteStrings.next()) != null; ) {
   final BytesRef[] tokens = new BytesRef[string.length];
--- End diff --

Hmm; rather than materializing an array of tokens and increments, maybe you 
could simply give the IntsRefString  to BytesRefArrayTokenStream (and make 
BRATS not static) so that it could do this on the fly?  Not a big deal either 
way (current or my proposal).  If you do as I suggest then BRATS would no 
longer be a suitable name; maybe simply FiniteStringTokenStream or 
CustomTokenStream.
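
The suggestion above — consume the finite string lazily instead of first materializing token and increment arrays — could look roughly like this outside Lucene (FiniteStringTokenStream, the id array, and the idToTerm table are all hypothetical stand-ins, with an Iterator in place of a real TokenStream):

```java
import java.util.*;

// Hypothetical lazy variant of the suggestion: iterate terms straight off
// the finite string's ids instead of building term/increment arrays up front.
public class FiniteStringTokenStream implements Iterator<String> {
    private final int[] ids;             // one finite string from the automaton
    private final List<String> idToTerm; // id -> term lookup table
    private int upto = 0;

    public FiniteStringTokenStream(int[] ids, List<String> idToTerm) {
        this.ids = ids;
        this.idToTerm = idToTerm;
    }

    @Override public boolean hasNext() { return upto < ids.length; }

    @Override public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return idToTerm.get(ids[upto++]);  // resolve each id on the fly
    }

    public static void main(String[] args) {
        List<String> table = Arrays.asList("fast", "wifi", "network");
        FiniteStringTokenStream ts =
            new FiniteStringTokenStream(new int[]{0, 1, 2}, table);
        StringBuilder sb = new StringBuilder();
        while (ts.hasNext()) sb.append(ts.next()).append(' ');
        System.out.println(sb.toString().trim());
        // prints fast wifi network
    }
}
```

The trade-off is the one the reviewer names: less intermediate allocation per finite string, at the cost of the stream holding a reference back to the id table.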


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (LUCENE-7603) Support Graph Token Streams in QueryBuilder

2016-12-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15787983#comment-15787983
 ] 

ASF GitHub Bot commented on LUCENE-7603:


Github user mattweber commented on the issue:

https://github.com/apache/lucene-solr/pull/129
  
@mikemccand I addressed your comments.  I also added some more tests and 
fixed a bug that would yield the wrong increment when a term that had 
previously been seen was found again with an increment of 0.  I have 
squashed these changes with the previous commit so it is clear to see the 
difference between the original PR, which did not support position 
increments, and the new one that does.








Installing PyLucene

2016-12-30 Thread marco turchi
Dear All,
I'm new to PyLucene and I'm trying to install it in my home directory on a
cluster. In this environment I do not have root privileges, I'm using a
version of Python (2.7.5) available in Anaconda, and our cluster is not
connected to the web, so I cannot use setuptools.

I have followed the instructions for JCC and I have used:
python setup.py build
python setup.py install --user

and JCC is installed in $HOME/.local

Then I have started to install PyLucene. I have changed the Makefile (using
the Linux (Debian Jessie 64-bit, Python 2.7.9, Oracle Java 1.8)
configuration). The installation starts but it gets stuck (for ages) here:
ivy-configure:

[ivy:configure] :: loading settings :: file =
/hltsrv0/turchi/Projects/QT21/WorkingFolder/NMT/software/pylucene-6.2.0/lucene-java-6.2.0/lucene/top-level-ivy-settings.xml

resolve:

Am I doing anything wrong? Do you have any suggestions to help me proceed
with the installation?

Thanks a lot in advance for your help!

Best Regards,
Marco


Re: Welcome Christine Poerschke to the PMC

2016-12-30 Thread Erick Erickson
Congrats Christine! Welcome!

On Fri, Dec 30, 2016 at 7:12 AM, Joel Bernstein  wrote:
> Welcome Christine!
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Dec 30, 2016 at 10:05 AM, Tomás Fernández Löbbe
>  wrote:
>>
>> Welcome Christine!
>>
>> On Fri, Dec 30, 2016 at 11:32 AM, Yonik Seeley  wrote:
>>>
>>> Congrats Christine!
>>>
>>> -Yonik
>>>
>>>
>>> On Fri, Dec 30, 2016 at 7:46 AM, Adrien Grand  wrote:
>>> > I am pleased to announce that Christine Poerschke has accepted the
>>> > PMC's
>>> > invitation to join.
>>> >
>>> > Welcome Christine!
>>> >
>>> > Adrien
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>




Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Erick Erickson
Congrats Mikhail! Well deserved...

On Fri, Dec 30, 2016 at 8:27 AM, Shalin Shekhar Mangar
 wrote:
> Welcome Mikhail!
>
> On Fri, Dec 30, 2016 at 8:45 PM, Adrien Grand  wrote:
>> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
>> invitation to join.
>>
>> Welcome Mikhail!
>>
>> Adrien
>
>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Shalin Shekhar Mangar
Welcome Mikhail!

On Fri, Dec 30, 2016 at 8:45 PM, Adrien Grand  wrote:
> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
> invitation to join.
>
> Welcome Mikhail!
>
> Adrien



-- 
Regards,
Shalin Shekhar Mangar.




Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Dawid Weiss
Welcome Mikhail!

Dawid

On Fri, Dec 30, 2016 at 4:59 PM, Yonik Seeley  wrote:
> Congrats Mikhail!
>
> -Yonik
>
>
> On Fri, Dec 30, 2016 at 10:15 AM, Adrien Grand  wrote:
>> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
>> invitation to join.
>>
>> Welcome Mikhail!
>>
>> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_112) - Build # 655 - Still Unstable!

2016-12-30 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/655/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Mismatch in counts between replicas

Stack Trace:
java.lang.AssertionError: Mismatch in counts between replicas
at 
__randomizedtesting.SeedInfo.seed([F18DFDDDE9A25A0A:79D9C207475E37F2]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11857 lines...]
   [junit4] Suite: org.apache.solr.cloud.RecoveryZkTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.RecoveryZkTest_F18DFDDDE9A25A0A-001\init-core-data-001
   

Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Yonik Seeley
Congrats Mikhail!

-Yonik


On Fri, Dec 30, 2016 at 10:15 AM, Adrien Grand  wrote:
> I am pleased to announce that Mikhail Khludnev has accepted the PMC's
> invitation to join.
>
> Welcome Mikhail!
>
> Adrien




Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Alan Woodward
Welcome Mikhail!

Alan Woodward
www.flax.co.uk


> On 30 Dec 2016, at 15:15, Adrien Grand  wrote:
> 
> I am pleased to announce that Mikhail Khludnev has accepted the PMC's 
> invitation to join.
> 
> Welcome Mikhail!
> 
> Adrien



Re: Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Steve Rowe
Welcome Mikhail!

--
Steve
www.lucidworks.com

> On Dec 30, 2016, at 10:15 AM, Adrien Grand  wrote:
> 
> I am pleased to announce that Mikhail Khludnev has accepted the PMC's 
> invitation to join.
> 
> Welcome Mikhail!
> 
> Adrien





[jira] [Updated] (LUCENE-7612) Remove suggester dependency on misc

2016-12-30 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7612:

Description: 
AnalyzingInfixSuggester uses IndexSorter, which was in the misc module when the 
dependency was added in LUCENE-5477.  IndexSorter is in core now, though, so 
this dependency can be removed.

{{lucene/misc/src/java/org/apache/lucene/index/Sorter.java}} became 
{{lucene/core/src/java/org/apache/lucene/index/Sorter.java}} as part of 
LUCENE-6766

  was:AnalyzingInfixSuggester uses IndexSorter, which was in the misc module 
when the dependency was added in LUCENE-5477.  IndexSorter is in core now, 
though, so this dependency can be removed.


> Remove suggester dependency on misc
> ---
>
> Key: LUCENE-7612
> URL: https://issues.apache.org/jira/browse/LUCENE-7612
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7612.patch
>
>
> AnalyzingInfixSuggester uses IndexSorter, which was in the misc module when 
> the dependency was added in LUCENE-5477.  IndexSorter is in core now, though, 
> so this dependency can be removed.
> {{lucene/misc/src/java/org/apache/lucene/index/Sorter.java}} became 
> {{lucene/core/src/java/org/apache/lucene/index/Sorter.java}} as part of 
> LUCENE-6766






Welcome Mikhail Khludnev to the PMC

2016-12-30 Thread Adrien Grand
I am pleased to announce that Mikhail Khludnev has accepted the PMC's
invitation to join.

Welcome Mikhail!

Adrien


Re: Welcome Christine Poerschke to the PMC

2016-12-30 Thread Joel Bernstein
Welcome Christine!

Joel Bernstein
http://joelsolr.blogspot.com/

On Fri, Dec 30, 2016 at 10:05 AM, Tomás Fernández Löbbe <
tomasflo...@gmail.com> wrote:

> Welcome Christine!
>
> On Fri, Dec 30, 2016 at 11:32 AM, Yonik Seeley  wrote:
>
>> Congrats Christine!
>>
>> -Yonik
>>
>>
>> On Fri, Dec 30, 2016 at 7:46 AM, Adrien Grand  wrote:
>> > I am pleased to announce that Christine Poerschke has accepted the PMC's
>> > invitation to join.
>> >
>> > Welcome Christine!
>> >
>> > Adrien
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


Re: Welcome Christine Poerschke to the PMC

2016-12-30 Thread Tomás Fernández Löbbe
Welcome Christine!

On Fri, Dec 30, 2016 at 11:32 AM, Yonik Seeley  wrote:

> Congrats Christine!
>
> -Yonik
>
>
> On Fri, Dec 30, 2016 at 7:46 AM, Adrien Grand  wrote:
> > I am pleased to announce that Christine Poerschke has accepted the PMC's
> > invitation to join.
> >
> > Welcome Christine!
> >
> > Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (LUCENE-7612) Remove suggester dependency on misc

2016-12-30 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15787821#comment-15787821
 ] 

Christine Poerschke commented on LUCENE-7612:
-

bq. ... I think the maven and eclipse templates will pick this up automatically?

Yes, for maven that looks to be the case, not sure about eclipse though.

{code}
ant clean-maven-build
ant get-maven-poms
cp maven-build/lucene/suggest/pom.xml before-lucene-suggest-pom.xml

git apply LUCENE-7612.patch

ant clean-maven-build
ant get-maven-poms
cp maven-build/lucene/suggest/pom.xml after-lucene-suggest-pom.xml

diff -c before-lucene-suggest-pom.xml after-lucene-suggest-pom.xml
...
  </dependency>
  <dependency>
    <groupId>org.apache.lucene</groupId>
-   <artifactId>lucene-misc</artifactId>
- </dependency>
- <dependency>
-   <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queries</artifactId>
  </dependency>
...
{code}




[jira] [Updated] (LUCENE-7612) Remove suggester dependency on misc

2016-12-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7612:
--
Attachment: LUCENE-7612.patch

Patch, editing the suggest module's build.xml and the IDEA config.  I think the 
maven and eclipse templates will pick this up automatically?

> Remove suggester dependency on misc
> ---
>
> Key: LUCENE-7612
> URL: https://issues.apache.org/jira/browse/LUCENE-7612
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7612.patch
>
>
> AnalyzingInfixSuggester uses IndexSorter, which was in the misc module when 
> the dependency was added in LUCENE-5477.  IndexSorter is in core now, though, 
> so this dependency can be removed.






Re: Welcome Christine Poerschke to the PMC

2016-12-30 Thread Yonik Seeley
Congrats Christine!

-Yonik


On Fri, Dec 30, 2016 at 7:46 AM, Adrien Grand  wrote:
> I am pleased to announce that Christine Poerschke has accepted the PMC's
> invitation to join.
>
> Welcome Christine!
>
> Adrien




Re: Welcome Christine Poerschke to the PMC

2016-12-30 Thread Steve Rowe
Welcome Christine!

--
Steve
www.lucidworks.com

> On Dec 30, 2016, at 7:46 AM, Adrien Grand  wrote:
> 
> I am pleased to announce that Christine Poerschke has accepted the PMC's 
> invitation to join.
> 
> Welcome Christine!
> 
> Adrien





[jira] [Created] (LUCENE-7612) Remove suggester dependency on misc

2016-12-30 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-7612:
-

 Summary: Remove suggester dependency on misc
 Key: LUCENE-7612
 URL: https://issues.apache.org/jira/browse/LUCENE-7612
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward


AnalyzingInfixSuggester uses IndexSorter, which was in the misc module when the 
dependency was added in LUCENE-5477.  IndexSorter is in core now, though, so 
this dependency can be removed.






[jira] [Updated] (LUCENE-7611) Make suggester module use LongValuesSource

2016-12-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7611:
--
Attachment: LUCENE-7611.patch

Patch, to be applied after the patches on LUCENE-7609 and LUCENE-7610

> Make suggester module use LongValuesSource
> --
>
> Key: LUCENE-7611
> URL: https://issues.apache.org/jira/browse/LUCENE-7611
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7611.patch
>
>
> This allows us to remove the suggester module's dependency on the queries 
> module.






[jira] [Created] (LUCENE-7611) Make suggester module use LongValuesSource

2016-12-30 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-7611:
-

 Summary: Make suggester module use LongValuesSource
 Key: LUCENE-7611
 URL: https://issues.apache.org/jira/browse/LUCENE-7611
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward
Assignee: Alan Woodward


This allows us to remove the suggester module's dependency on the queries 
module.






[jira] [Updated] (LUCENE-7610) Migrate facets module from ValueSource to Double/LongValuesSource

2016-12-30 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7610:
--
Attachment: LUCENE-7610.patch

Patch, to be applied after the patch on LUCENE-7609

> Migrate facets module from ValueSource to Double/LongValuesSource
> -
>
> Key: LUCENE-7610
> URL: https://issues.apache.org/jira/browse/LUCENE-7610
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7610.patch
>
>
> Unfortunately this doesn't allow us to break the facets dependency on the 
> queries module, because facets also uses TermsQuery - perhaps this should 
> move to core as well?





