[jira] [Updated] (SOLR-9649) Distributed grouping can return 'too many' results?

2016-10-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9649:
--
Attachment: SOLR-9649.patch

Minimal test to demonstrate the unexpected(?) behavior.

> Distributed grouping can return 'too many' results?
> ---
>
> Key: SOLR-9649
> URL: https://issues.apache.org/jira/browse/SOLR-9649
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9649.patch
>
>
> Stumbled across this whilst looking at SOLR-6203 and trying to factor 
> {{GroupingSpecification.[group](sort|offset|limit)}} into 
> {{GroupingSpecification.[group](sortSpec)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9649) Distributed grouping can return 'too many' results?

2016-10-15 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-9649:
-

 Summary: Distributed grouping can return 'too many' results?
 Key: SOLR-9649
 URL: https://issues.apache.org/jira/browse/SOLR-9649
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


Stumbled across this whilst looking at SOLR-6203 and trying to factor 
{{GroupingSpecification.[group](sort|offset|limit)}} into 
{{GroupingSpecification.[group](sortSpec)}}.
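
For readers unfamiliar with the refactoring being described, here is a minimal sketch of the idea: bundling the separate per-group sort, offset and limit fields into a single sort-spec holder. All class and field names below are illustrative stand-ins, not the actual Solr source.

{code}
// Illustrative sketch only (hypothetical names, not the actual Solr code):
// replace three parallel per-group fields with one holder object.
public class GroupingSpecificationSketch {

  /** Bundles a sort order with its paging info, similar in spirit to Solr's SortSpec. */
  public static final class SortSpecLike {
    final String sort;  // e.g. "score desc"
    final int offset;   // offset within each group
    final int limit;    // rows to return within each group

    SortSpecLike(String sort, int offset, int limit) {
      this.sort = sort;
      this.offset = offset;
      this.limit = limit;
    }
  }

  // Before: groupSort, groupOffset and groupLimit kept as separate fields
  // that every caller has to read and pass around consistently.
  private String groupSort;
  private int groupOffset;
  private int groupLimit;

  // After: one field carrying the same information as a unit.
  private SortSpecLike groupSortSpec;

  public void setGroupSortSpec(SortSpecLike spec) {
    this.groupSortSpec = spec;
    // During a transition, the legacy accessors can simply delegate:
    this.groupSort = spec.sort;
    this.groupOffset = spec.offset;
    this.groupLimit = spec.limit;
  }
}
{code}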



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 481 - Unstable

2016-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/481/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, SolrCore, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:426)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:756)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:688)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:66)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:586)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:762)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:688)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:850)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:688)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:711)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:688)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:324)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:554)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:762)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:688)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at 

[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2016-10-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579286#comment-15579286
 ] 

David Smiley commented on SOLR-8396:


Is anything changing with DocValues in this issue?  i.e. if I went from 
LongTrieField -> LongPointField (or whatever the naming is) as proposed in this 
issue and I hypothetically had index=false but docValues=true, then is there 
any real change?  I anticipate none.
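
To make the index-structure vs. doc-values distinction concrete, here is a small Lucene-level sketch; the field name is made up, but LongPoint and NumericDocValuesField are the actual Lucene classes involved. A range query reads the point index, while sorting reads only the doc-values column, which is the intuition behind expecting no real change for a docValues-only (non-indexed) field.

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class PointsVsDocValuesSketch {
  public static void main(String[] args) {
    Document doc = new Document();
    // Index structure (value -> docs): a point field, used for range/exact queries.
    doc.add(new LongPoint("popularity", 42L));
    // Doc values (doc -> value): a separate per-document column, used for sorting/faceting.
    doc.add(new NumericDocValuesField("popularity", 42L));

    // Range queries go through the point index...
    Query range = LongPoint.newRangeQuery("popularity", 10L, 100L);
    // ...while sorting reads only the doc-values column, so a docValues-only
    // field is not touched by how (or whether) the values are indexed.
    Sort byPopularity = new Sort(new SortField("popularity", SortField.Type.LONG));

    System.out.println(range + " / " + byPopularity);
  }
}
{code}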

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2016-10-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579266#comment-15579266
 ] 

Yonik Seeley commented on SOLR-8396:


The docvalues format is heavily related when looking at the larger picture 
(i.e. from the user's perspective, we're creating a new numeric field type as a 
whole).
Whether it's done in a different JIRA or not doesn't matter... what does matter 
is whether points are released/exposed in an official release with the old 
docvalues format. That definitely impacts future support, back-compat, 
interfaces, etc.

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1959 - Still Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1959/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DeleteReplicaTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.DeleteReplicaTest:
 1) Thread[id=88059, 
name=OverseerHdfsCoreFailoverThread-96769688707334156-127.0.0.1:39956_solr-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.DeleteReplicaTest: 
   1) Thread[id=88059, 
name=OverseerHdfsCoreFailoverThread-96769688707334156-127.0.0.1:39956_solr-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([2902DF2B1D8E958]:0)




Build Log:
[...truncated 11525 lines...]
   [junit4] Suite: org.apache.solr.cloud.DeleteReplicaTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.DeleteReplicaTest_2902DF2B1D8E958-001/init-core-data-001
   [junit4]   2> 1034250 INFO  
(SUITE-DeleteReplicaTest-seed#[2902DF2B1D8E958]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1034250 INFO  
(SUITE-DeleteReplicaTest-seed#[2902DF2B1D8E958]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 4 servers in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.DeleteReplicaTest_2902DF2B1D8E958-001/tempDir-001
   [junit4]   2> 1034250 INFO  
(SUITE-DeleteReplicaTest-seed#[2902DF2B1D8E958]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1034250 INFO  (Thread-839) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1034250 INFO  (Thread-839) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1034350 INFO  
(SUITE-DeleteReplicaTest-seed#[2902DF2B1D8E958]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:41760
   [junit4]   2> 1034355 INFO  (jetty-launcher-23728-thread-1) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 1034355 INFO  (jetty-launcher-23728-thread-3) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 1034355 INFO  (jetty-launcher-23728-thread-2) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 1034356 INFO  (jetty-launcher-23728-thread-4) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 1034356 INFO  (jetty-launcher-23728-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@214beeaf{/solr,null,AVAILABLE}
   [junit4]   2> 1034356 INFO  (jetty-launcher-23728-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@766afd4c{/solr,null,AVAILABLE}
   [junit4]   2> 1034357 INFO  (jetty-launcher-23728-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@75805c01{/solr,null,AVAILABLE}
   [junit4]   2> 1034358 INFO  (jetty-launcher-23728-thread-4) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@55b51ed3{/solr,null,AVAILABLE}
   [junit4]   2> 1034363 INFO  (jetty-launcher-23728-thread-3) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@4b79a164{HTTP/1.1,[http/1.1]}{127.0.0.1:๓๙๙๕๖}
   [junit4]   2> 1034363 INFO  (jetty-launcher-23728-thread-3) [] 
o.e.j.s.Server Started @๑๐๓๖๘๓๑ms
   [junit4]   2> 1034363 INFO  (jetty-launcher-23728-thread-3) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=39956}
   [junit4]   2> 1034363 ERROR (jetty-launcher-23728-thread-3) [] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1034363 INFO  (jetty-launcher-23728-thread-3) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
6.3.0
   [junit4]   2> 1034363 INFO  (jetty-launcher-23728-thread-3) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1034364 INFO  (jetty-launcher-23728-thread-3) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1034364 INFO  (jetty-launcher-23728-thread-3) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2016-10-16T03:24:27.742Z
   [junit4]   2> 1034364 INFO  (jetty-launcher-23728-thread-1) 

[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2016-10-15 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579235#comment-15579235
 ] 

David Smiley commented on SOLR-8396:


bq. the plan is that PointFields will use SortedNumericDocValues instead of 
SortedSetDocValues for multi-valued cases. Doing that involves much more work, 
since we need to change all the consumers. This patch is already getting large, 
so I think it may be better to tackle that in a followup Jira. 

Yes; definitely a separate issue... it doesn't seem related to this one, as 
this issue is about the index structure (value/range -> docs), not the doc 
values (doc -> value).

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 911 - Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/911/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState

Error Message:
expected:<29> but was:<30>

Stack Trace:
java.lang.AssertionError: expected:<29> but was:<30>
at 
__randomizedtesting.SeedInfo.seed([9DA66FD612ADD799:C3DBCD28B6B09E64]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState(TestLeaderInitiatedRecoveryThread.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Updated] (SOLR-9417) Allow daemons to terminate

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9417:
-
Description: 
The daemon expression currently runs until it's killed. This ticket will add a 
new *terminate* parameter to the daemon expression that will allow the daemon 
to shut itself down when it's finished processing a topic queue.

There are a couple of small changes that need to be made to allow the daemon to 
terminate on its own:

1) The daemon will need to be passed the Map of all daemons in the /stream 
handler. This will allow the DaemonStream to remove itself from the Map when it 
terminates.
2) Logic needs to be added for the daemon to exit its run loop if the topic 
signals it had a zero Tuple run. The *sleepMillis* value in the EOF Tuple can 
be used for this purpose. If sleepMillis is greater than 0, this signals a 
zero Tuple run (see the sketch below).
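
A minimal, runnable sketch of the termination logic described above. This is not the actual DaemonStream code; the class, the Tuple stand-in, and the map handling are simplified assumptions used only to show the intended control flow.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a daemon that removes itself from the shared daemon
// map and exits its run loop once the wrapped topic signals a zero-Tuple run
// via the sleepMillis value on the EOF tuple.
class DaemonSketch implements Runnable {

  static final class Tuple {
    final boolean eof;
    final long sleepMillis; // > 0 signals the topic returned no new tuples
    Tuple(boolean eof, long sleepMillis) { this.eof = eof; this.sleepMillis = sleepMillis; }
  }

  private final String id;
  private final boolean terminate;                  // the proposed new parameter
  private final Map<String, DaemonSketch> daemons;  // map held by the /stream handler

  DaemonSketch(String id, boolean terminate, Map<String, DaemonSketch> daemons) {
    this.id = id;
    this.terminate = terminate;
    this.daemons = daemons;
    daemons.put(id, this);
  }

  /** Stand-in for running the wrapped topic until its EOF tuple. */
  private Tuple runTopicOnce() {
    return new Tuple(true, 1000L); // pretend the topic had a zero-Tuple run
  }

  @Override
  public void run() {
    while (true) {
      Tuple eofTuple = runTopicOnce();
      if (terminate && eofTuple.eof && eofTuple.sleepMillis > 0) {
        daemons.remove(id);  // remove itself from the handler's map ...
        return;              // ... and shut the run loop down
      }
      try {
        Thread.sleep(eofTuple.sleepMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }

  public static void main(String[] args) {
    Map<String, DaemonSketch> daemons = new ConcurrentHashMap<>();
    new DaemonSketch("daemon-1", true, daemons).run();
    System.out.println("daemons left: " + daemons.size()); // prints 0
  }
}
{code}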

  was:The daemon expression currently runs until it's killed. This ticket will 
add a new *terminate* parameter to the daemon expression that will allow the 
daemon to shut itself down when it's finished processing a topic queue.


> Allow daemons to terminate
> --
>
> Key: SOLR-9417
> URL: https://issues.apache.org/jira/browse/SOLR-9417
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
>
> The daemon expression currently runs until it's killed. This ticket will add 
> a new *terminate* parameter to the daemon expression that will allow the 
> daemon to shut itself down when it's finished processing a topic queue.
> There are a couple of small changes that need to be made to allow the daemon to 
> terminate on its own:
> 1) The daemon will need to be passed the Map of all daemons in the /stream 
> handler. This will allow the DaemonStream to remove itself from the Map when 
> it terminates.
> 2) Logic needs to be added for the daemon to exit its run loop if the topic 
> signals it had a zero Tuple run. The *sleepMillis* value in the EOF Tuple can 
> be used for this purpose. If sleepMillis is greater than 0, this signals 
> a zero Tuple run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9417) Allow daemons to terminate

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9417:
-
Fix Version/s: 6.3

> Allow daemons to terminate
> --
>
> Key: SOLR-9417
> URL: https://issues.apache.org/jira/browse/SOLR-9417
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
>
> The daemon expression currently runs until it's killed. This ticket will add 
> a new *terminate* parameter to the daemon expression that will allow the 
> daemon to shut itself down when it's finished processing a topic queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 176 - Still unstable

2016-10-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/176/

7 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([54785754D624E476]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([54785754D624E476]:0)


FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testOps

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([BAADB79CF21D9217:A6D52B7FBD2E99CA]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testOps(CdcrReplicationDistributedZkTest.java:463)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-9648) Wrap all solr merge policies with SolrMergePolicy

2016-10-15 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579083#comment-15579083
 ] 

Keith Laban edited comment on SOLR-9648 at 10/16/16 1:39 AM:
-

Adding a naive implementation that will do the upgrade of segments on startup 
(no tests). As of now this doesn't allow any configuration options to be 
passed, but they can be easily added. The initial patch is intended as a POC to 
start the dialogue.


was (Author: k317h):
Adding a naive implementation that will do the upgrade of segments on startup 
(no tests). As of now this doesn't allow any configuration options to be 
passed, but can be easily added.

> Wrap all solr merge policies with SolrMergePolicy
> -
>
> Key: SOLR-9648
> URL: https://issues.apache.org/jira/browse/SOLR-9648
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
> Attachments: SOLR-9648-WIP.patch
>
>
> Wrap the entry point for all merge policies with a single entry-point merge 
> policy, for more fine-grained control over merging with minimal configuration. 
> The main benefit will be to allow upgrading of segments on startup when the 
> Lucene version changes. Ideally we can use the same approach for adding and 
> removing doc values when the schema changes, and hopefully for other 
> index-type changes such as Trie -> Point types, or even analyzer changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9648) Wrap all solr merge policies with SolrMergePolicy

2016-10-15 Thread Keith Laban (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Laban updated SOLR-9648:
--
Attachment: SOLR-9648-WIP.patch

Adding a naive implementation that will do the upgrade of segments on startup 
(no tests). As of now this doesn't allow any configuration options to be 
passed, but they can be easily added.

> Wrap all solr merge policies with SolrMergePolicy
> -
>
> Key: SOLR-9648
> URL: https://issues.apache.org/jira/browse/SOLR-9648
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
> Attachments: SOLR-9648-WIP.patch
>
>
> Wrap the entry point for all merge policies with a single entry-point merge 
> policy, for more fine-grained control over merging with minimal configuration. 
> The main benefit will be to allow upgrading of segments on startup when the 
> Lucene version changes. Ideally we can use the same approach for adding and 
> removing doc values when the schema changes, and hopefully for other 
> index-type changes such as Trie -> Point types, or even analyzer changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9648) Wrap all solr merge policies with SolrMergePolicy

2016-10-15 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579083#comment-15579083
 ] 

Keith Laban edited comment on SOLR-9648 at 10/16/16 1:37 AM:
-

Adding a naive implementation that will do the upgrade of segments on startup 
(no tests). As of now this doesn't allow any configuration options to be 
passed, but can be easily added.


was (Author: k317h):
Adding a naive implementation that will upgrade of segments on startup (no 
tests). As of now this doesn't allow any configuration options to be passed, 
but can be easily added.

> Wrap all solr merge policies with SolrMergePolicy
> -
>
> Key: SOLR-9648
> URL: https://issues.apache.org/jira/browse/SOLR-9648
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
> Attachments: SOLR-9648-WIP.patch
>
>
> Wrap the entry point for all merge policies with a single entry-point merge 
> policy, for more fine-grained control over merging with minimal configuration. 
> The main benefit will be to allow upgrading of segments on startup when the 
> Lucene version changes. Ideally we can use the same approach for adding and 
> removing doc values when the schema changes, and hopefully for other 
> index-type changes such as Trie -> Point types, or even analyzer changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9648) Wrap all solr merge policies with SolrMergePolicy

2016-10-15 Thread Keith Laban (JIRA)
Keith Laban created SOLR-9648:
-

 Summary: Wrap all solr merge policies with SolrMergePolicy
 Key: SOLR-9648
 URL: https://issues.apache.org/jira/browse/SOLR-9648
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Keith Laban


Wrap the entry point for all merge policies with a single entry-point merge 
policy, for more fine-grained control over merging with minimal configuration.

The main benefit will be to allow upgrading of segments on startup when the 
Lucene version changes. Ideally we can use the same approach for adding and 
removing doc values when the schema changes, and hopefully for other index-type 
changes such as Trie -> Point types, or even analyzer changes.
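
To illustrate the wrapping idea, here is a small sketch of a single entry-point policy that delegates to the configured merge policy and, when enabled, schedules old segments for rewriting. All types, method signatures, and the version check are hypothetical stand-ins, not Lucene's or Solr's actual MergePolicy API.

{code}
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the "single entry point" idea: every configured merge
// policy is routed through one wrapper, so cross-cutting behaviour such as
// "upgrade old segments on startup" lives in a single place.
class SolrMergePolicySketch {

  interface MergePolicyLike {
    /** Returns the groups of segments the wrapped policy wants merged. */
    List<List<String>> findMerges(List<String> segments);
  }

  private final MergePolicyLike delegate;  // e.g. the configured tiered policy
  private final boolean upgradeOnStartup;  // hypothetical knob

  SolrMergePolicySketch(MergePolicyLike delegate, boolean upgradeOnStartup) {
    this.delegate = delegate;
    this.upgradeOnStartup = upgradeOnStartup;
  }

  List<List<String>> findMerges(List<String> segments, String currentLuceneVersion) {
    if (upgradeOnStartup) {
      // Pretend each segment name carries the Lucene version that wrote it;
      // schedule anything older for a rewrite before normal merging resumes.
      List<String> oldSegments = segments.stream()
          .filter(s -> !s.endsWith(currentLuceneVersion))
          .collect(Collectors.toList());
      if (!oldSegments.isEmpty()) {
        return Collections.singletonList(oldSegments);
      }
    }
    // Normal operation: defer to whatever merge policy was configured.
    return delegate.findMerges(segments);
  }
}
{code}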



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-10-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579072#comment-15579072
 ] 

ASF subversion and git services commented on SOLR-6203:
---

Commit e87072bc5abb6b7c6f7a6e494d9360814689dffd in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e87072b ]

SOLR-6203: in QueryComponent rename groupSortStr to sortWithinGroupStr (so that 
name and meaning match)


> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> # Create a sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Fix Version/s: 6.3

> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.3
>
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

*Sample syntax*:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.
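
As a rough sketch of the internal thread pool idea (not the actual ExecutorStream implementation), the executor can be thought of as draining expression strings from the wrapped stream and submitting each one to a fixed-size pool. The expression strings and the runExpression placeholder below are assumptions for illustration only.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: read the "expr" value from each incoming tuple and
// run it on an internal thread pool, so expressions execute in parallel on a
// single worker.
class ExecutorStreamSketch {

  public static void main(String[] args) throws InterruptedException {
    // Stand-in for tuples read from the wrapped stream (e.g. a topic()).
    List<String> exprs = Arrays.asList(
        "search(collection1, q=\"*:*\", fl=\"id\", sort=\"id asc\")",
        "update(collection2, search(collection1, q=\"*:*\", fl=\"id\", sort=\"id asc\"))");

    ExecutorService pool = Executors.newFixedThreadPool(10); // threads=10
    for (String expr : exprs) {
      pool.submit(() -> runExpression(expr));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }

  // Placeholder for compiling and running a streaming expression.
  private static void runExpression(String expr) {
    System.out.println(Thread.currentThread().getName() + " executing: " + expr);
  }
}
{code}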





  was:
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.






> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> *Sample syntax*:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute stored Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Summary: Add ExecutorStream to execute stored Streaming Expressions  (was: 
Add ExecutorStream to execute a stream of Streaming Expressions)

> Add ExecutorStream to execute stored Streaming Expressions
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.





  was:
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.






> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.





  was:
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.






> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The *ExecutorStream* will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading a stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.





  was:
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.




> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading a stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2016-10-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579056#comment-15579056
 ] 

ASF subversion and git services commented on SOLR-6203:
---

Commit a4a314d1602458cd7427b337d32eca60049c72da in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a4a314d ]

SOLR-6203: in QueryComponent rename groupSortStr to sortWithinGroupStr (so that 
name and meaning match)


> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203-unittest.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> # Create a sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=2&replicationFactor=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.
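
For illustration, a minimal SolrJ sketch of the kind of request that triggers 
the exception described above. The ZooKeeper address and collection name are 
assumptions; the function sort and group field mirror the quoted repro.
{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class GroupedFunctionSortRepro {
  public static void main(String[] args) throws Exception {
    // Assumed ZooKeeper address and collection name.
    try (CloudSolrClient client =
             new CloudSolrClient.Builder().withZkHost("localhost:9983").build()) {
      client.setDefaultCollection("test");

      SolrQuery q = new SolrQuery("*:*");
      q.setSort("sqrt(popularity)", SolrQuery.ORDER.desc); // function sort serialized per shard
      q.set("group", "true");
      q.set("group.field", "manu");

      // On affected versions the shard response serialization is where the
      // Double-to-BytesRef ClassCastException above is thrown.
      System.out.println(client.query(q));
    }
  }
}
{code}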



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel on a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.



  was:
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel in a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.




> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel on a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading a stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel in a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
{code}

In the example above a *daemon* wraps an *executor* which wraps a *topic* that 
is reading a stored Streaming Expressions. The daemon will call the executor at 
intervals which will execute all the expressions retrieved by the topic.



  was:
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel in a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
executor(topic(storedExpressions, fl="expr", ...))
{code}


> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel in a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> daemon(executor(threads=10, topic(storedExpressions, fl="expr", ...)))
> {code}
> In the example above a *daemon* wraps an *executor* which wraps a *topic* 
> that is reading a stored Streaming Expressions. The daemon will call the 
> executor at intervals which will execute all the expressions retrieved by the 
> topic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel in a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
{code}
executor(topic(storedExpressions, fl="expr", ...))
{code}

  was:
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel in a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
*code*
executor(topic(storedExpressions, fl="expr", ...))
*code*


> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel in a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> {code}
> executor(topic(storedExpressions, fl="expr", ...))
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Streaming Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Summary: Add ExecutorStream to execute a stream of Streaming Expressions  
(was: Add ExecutorStream to execute a stream of Expressions)

> Add ExecutorStream to execute a stream of Streaming Expressions
> ---
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel in a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> *code*
> executor(topic(storedExpressions, fl="expr", ...))
> *code*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Description: 
The ExecutorStream will wrap a stream which contains Tuples with Streaming 
Expressions to execute. By default the ExecutorStream will look for the 
expression in the *expr* field in the Tuples.

The ExecutorStream will have an internal thread pool so expressions can be 
executed in parallel in a single worker. The ExecutorStream can also be wrapped 
by the parallel function to partition the Streaming Expressions that need to be 
executed across a cluster of worker nodes.

Sample syntax:
*code*
executor(topic(storedExpressions, fl="expr", ...))
*code*

  was:
The ExecutorStream will execute the stored topics and macros from SOLR-9387.

The ExecutorStream can be pointed at a SolrCloud collection where the topics 
are stored and it will execute the topics and macros in batches.

The ExecutorStream will support parallel execution of topics/macros as well. 
This will allow the workload to be spread across a cluster of worker nodes.


> Add ExecutorStream to execute a stream of Expressions
> -
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will wrap a stream which contains Tuples with Streaming 
> Expressions to execute. By default the ExecutorStream will look for the 
> expression in the *expr* field in the Tuples.
> The ExecutorStream will have an internal thread pool so expressions can be 
> executed in parallel in a single worker. The ExecutorStream can also be 
> wrapped by the parallel function to partition the Streaming Expressions that 
> need to be executed across a cluster of worker nodes.
> Sample syntax:
> *code*
> executor(topic(storedExpressions, fl="expr", ...))
> *code*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9387) Allow topic expression to store queries and macros

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9387.
--
Resolution: Won't Fix

Closing this in favor of an alternative design described in SOLR-9559.

> Allow topic expression to store queries and macros
> --
>
> Key: SOLR-9387
> URL: https://issues.apache.org/jira/browse/SOLR-9387
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The topic expression already stores the checkpoints for a topic. This ticket 
> will allow the topic to store the topic query and a *macro* to be performed 
> with the topic. 
> Macros will be run using Solr's built-in parameter substitution:
> Sample syntax:
> {code}
> topic(collection1, q="*:*", macro="update(classify(model, ${topic}))")
> {code}
> The query and macro will be stored with the topic. Topics can be retrieved 
> and executed as part of the larger macro using Solr's built in parameter 
> substitution.
> {code}
> http://localhost:8983/solr/collection1/stream?expr=update(classify(model, 
> ${topic}))=topic(collection1,)
> {code}
> Because topics are stored in a SolrCloud collection this will allow for 
> storing millions of topics and macros.
> The parallel function can then be used to run the topics/macros in parallel 
> across a large number of workers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9559) Add ExecutorStream to execute a stream of Expressions

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9559:
-
Summary: Add ExecutorStream to execute a stream of Expressions  (was: Add 
ExecutorStream to execute stored topics and macros)

> Add ExecutorStream to execute a stream of Expressions
> -
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will execute the stored topics and macros from SOLR-9387.
> The ExecutorStream can be pointed at a SolrCloud collection where the topics 
> are stored and it will execute the topics and macros in batches.
> The ExecutorStream will support parallel execution of topics/macros as well. 
> This will allow the workload to be spread across a cluster of worker nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9559) Add ExecutorStream to execute stored topics and macros

2016-10-15 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-9559:


Assignee: Joel Bernstein

> Add ExecutorStream to execute stored topics and macros
> --
>
> Key: SOLR-9559
> URL: https://issues.apache.org/jira/browse/SOLR-9559
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> The ExecutorStream will execute the stored topics and macros from SOLR-9387.
> The ExecutorStream can be pointed at a SolrCloud collection where the topics 
> are stored and it will execute the topics and macros in batches.
> The ExecutorStream will support parallel execution of topics/macros as well. 
> This will allow the workload to be spread across a cluster of worker nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+138) - Build # 1958 - Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1958/
Java: 64bit/jdk-9-ea+138 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:40071","node_name":"127.0.0.1:40071_","state":"active","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/19)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:34730;,   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:34730_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:44733;,   "node_name":"127.0.0.1:44733_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:40071;,   "node_name":"127.0.0.1:40071_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:40071","node_name":"127.0.0.1:40071_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/19)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:34730;,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:34730_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:44733;,
  "node_name":"127.0.0.1:44733_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:40071;,
  "node_name":"127.0.0.1:40071_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([4B29F43B19CAD479:C37DCBE1B736B981]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Resolved] (SOLR-9625) Add HelloWorldSolrCloudTestCase class

2016-10-15 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-9625.
---
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.x

> Add HelloWorldSolrCloudTestCase class
> -
>
> Key: SOLR-9625
> URL: https://issues.apache.org/jira/browse/SOLR-9625
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 6.x, master (7.0)
>
> Attachments: SOLR-9625.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9643) Pagination issue occurs in solr cloud when results are grouped on a field

2016-10-15 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15579009#comment-15579009
 ] 

Christine Poerschke commented on SOLR-9643:
---

Let's consider why co-locating documents with the same group works.

The easiest way to co-locate is to have all documents on one shard:
{code}
"shard1" : [ { "family":"A" ... } ... {"family":"N", "state":"nj", ... }, 
{"family":"N", "state":"ny", ... } ... { "family":"Z" ... } ]
# 26 groups [A ... Z] overall
{code}

Alternatively, across multiple shards, documents with the same group can be 
co-located e.g. {{"nj"}} and {{"ny"}} in group/family {{"N"}} on shard2:
{code}
"shard1" : [ { "family":"A" ... } ...   
 ... { "family":"Y" ... } ]
"shard2" : [ { "family":"B" ... } ... {"family":"N", "state":"nj", ... }, 
{"family":"N", "state":"ny", ... } ... { "family":"Z" ... } ]
# shard1 has 13 groups, shard2 has 13 groups, overall we have 13+13=26 groups
{code}

Lastly, if documents with the same group are _not_ co-located ...
{code}
# documents distributed across (say) two shards with documents in the same 
group _not_ co-located on the same shard
"shard1" : [ { "family":"A" ... } ... {"family":"N", "state":"nj", ... } ... { 
"family":"Y" ... } ]
"shard2" : [ { "family":"B" ... } ... {"family":"N", "state":"ny", ... } ... { 
"family":"Z" ... } ]
# shard1 has 14 groups [A C E G I K M N O Q S U W Y]
# shard2 has 13 groups [B D F H J L N P R T V X Z]
# overall:
# approximate result: shard1 has 14 groups, shard2 has 13 groups, overall we 
have approximately 14+13=27 groups
# accurate result: intersect([A C E G I K M N O Q S U W Y],[B D F H J L N P R T 
V X Z]) = [A ... Z] = 26 groups
{code}
... then the calculation of accurate group counts would be expensive, requiring 
intersection of the {{A...N...Y}} and {{B...N...Z}} lists.

I am not aware of any plans to change the existing behaviour.
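
As an aside, one way to co-locate documents from the same group, assuming the 
collection uses the default compositeId router, is to prefix the uniqueKey with 
the group value at index time; documents whose ids share the prefix before the 
{{!}} are routed to the same shard. A minimal SolrJ sketch, where the ZooKeeper 
address, collection name and field names are assumptions:
{code}
import java.util.Arrays;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class CoLocateByFamily {
  public static void main(String[] args) throws Exception {
    // Assumed ZooKeeper address and collection name (compositeId router assumed).
    try (CloudSolrClient client =
             new CloudSolrClient.Builder().withZkHost("localhost:9983").build()) {
      client.setDefaultCollection("families");

      SolrInputDocument nj = new SolrInputDocument();
      nj.addField("id", "N!doc-nj");   // the "N!" prefix routes by family
      nj.addField("family", "N");
      nj.addField("state", "nj");

      SolrInputDocument ny = new SolrInputDocument();
      ny.addField("id", "N!doc-ny");   // same prefix, therefore same shard as doc-nj
      ny.addField("family", "N");
      ny.addField("state", "ny");

      client.add(Arrays.asList(nj, ny));
      client.commit();
    }
  }
}
{code}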

> Pagination issue occurs in solr cloud when results are grouped on a field
> -
>
> Key: SOLR-9643
> URL: https://issues.apache.org/jira/browse/SOLR-9643
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
> Environment: Solr cloud is deployed on AWS linux server. 4 Solr 
> servers and apache zookeeper is setup
>Reporter: Paras Diwan
>Priority: Critical
> Fix For: 6.1.1
>
>
> Either the value of ngroups in a grouped query is inaccurate or there is some 
> issue in returning documents of later pages. 
> select?q=*:*&group=true&group.field=family&group.ngroups=true&start=0&rows=1
> For the above-mentioned query I get ngroups = 396324,
> but for the same query, when I modify start to 396320, it returns 0 docs, an 
> empty page.
> Instead the last result is at 386887.
> Please look into this issue or offer some solution 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8396) Add support for PointFields in Solr

2016-10-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15578981#comment-15578981
 ] 

Tomás Fernández Löbbe edited comment on SOLR-8396 at 10/16/16 12:09 AM:


After talking with Adrien, and later with some more people at the Lucene 
Revolution, the plan is that PointFields will use {{SortedNumericDocValues}} 
instead of {{SortedSetDocValues}} for multi-valued cases. Doing that involves 
much more work, since we need to change all the consumers. This patch is 
already getting large, so I think it may be better to tackle that in a followup 
Jira. 
In the recent branch commits:
* I removed the use of {{SortedSetDocValues}} from the PointFields and I throw 
an exception if the user tries to create a PointField with MultiValued DV (I'm 
now ignoring the tests which required MV fields with DV). 
* I fixed the issue with returning DV as stored fields that I was hitting. 
* Added LongPointField.

[~steve_rowe] had some concerns about naming, since Solr already has a 
{{PointType}} and in the schemas I'm using "pTYPE", which could be confused 
with the old "Plain numeric fields" (Solr 1.4-ish?). I'm open to suggestions. 


was (Author: tomasflobbe):
After talking with Adrian, and later with some more people at the Lucene 
Revolution, the plan is that PointFields will use {{SortedNumericDocValues}} 
instead of {{SortedSetDocValues}} for multi-valued cases. Doing that involves 
much more work, since we need to change all the consumers. This patch is 
already getting large, so I think it may be better to tackle that in a followup 
Jira. 
In the recent branch commits:
* I removed the use of {{SortedSetDocValues}} from the PointFields and I throw 
an exception if the user tries to create a PointField with MultiValued DV (I'm 
now ignoring the tests which required MV fields with DV). 
* I fixed the issue with returning DV as stored fields that I was hitting. 
* Added LongPointField.

[~steve_rowe] had some concerns about naming, since Solr already has a 
{{PointType}} and in the schemas I'm using "pTYPE", which could be confused 
with the old "Plain numeric fields" (Solr 1.4-ish?). I'm open to suggestions. 

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8396) Add support for PointFields in Solr

2016-10-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15578981#comment-15578981
 ] 

Tomás Fernández Löbbe commented on SOLR-8396:
-

After talking with Adrian, and later with some more people at the Lucene 
Revolution, the plan is that PointFields will use {{SortedNumericDocValues}} 
instead of {{SortedSetDocValues}} for multi-valued cases. Doing that involves 
much more work, since we need to change all the consumers. This patch is 
already getting large, so I think it may be better to tackle that in a followup 
Jira. 
In the recent branch commits:
* I removed the use of {{SortedSetDocValues}} from the PointFields and I throw 
an exception if the user tries to create a PointField with MultiValued DV (I'm 
now ignoring the tests which required MV fields with DV). 
* I fixed the issue with returning DV as stored fields that I was hitting. 
* Added LongPointField.

[~steve_rowe] had some concerns about naming, since Solr already has a 
{{PointType}} and in the schemas I'm using "pTYPE", which could be confused 
with the old "Plain numeric fields" (Solr 1.4-ish?). I'm open to suggestions. 
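
For illustration of the data-structure choice only (this is not code from the 
attached patches), a minimal Lucene-level sketch of a multi-valued integer point 
field backed by SortedNumericDocValues; the field name is made up:
{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.document.StoredField;

public class MultiValuedIntPointExample {
  // Builds a document carrying several values for one hypothetical field.
  static Document docWithValues(int... values) {
    Document doc = new Document();
    for (int v : values) {
      doc.add(new IntPoint("popularity_i", v));                    // BKD-indexed for point/range queries
      doc.add(new SortedNumericDocValuesField("popularity_i", v)); // per-doc numeric values, no BytesRef encoding
      doc.add(new StoredField("popularity_i", v));                 // optional stored copy for retrieval
    }
    return doc;
  }
}
{code}
With SortedNumericDocValues the consumers read numeric values directly, which is 
the advantage over encoding each value into a sorted set of BytesRefs.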

> Add support for PointFields in Solr
> ---
>
> Key: SOLR-8396
> URL: https://issues.apache.org/jira/browse/SOLR-8396
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, SOLR-8396.patch, 
> SOLR-8396.patch
>
>
> In LUCENE-6917, [~mikemccand] mentioned that DimensionalValues are better 
> than NumericFields in most respects. We should explore the benefits of using 
> it in Solr and hence, if appropriate, switch over to using them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3608 - Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3608/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=42379, 
name=SocketProxy-Response-54359:54732, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=42379, name=SocketProxy-Response-54359:54732, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
at 
__randomizedtesting.SeedInfo.seed([39D9B932AF95744C:B18D86E8016919B4]:0)
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([39D9B932AF95744C]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)




Build Log:
[...truncated 12028 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_39D9B932AF95744C-001/init-core-data-001
   [junit4]   2> 2816854 INFO  
(SUITE-HttpPartitionTest-seed#[39D9B932AF95744C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
 w/ MAC_OS_X supressed clientAuth
   [junit4]   2> 2816855 INFO  
(SUITE-HttpPartitionTest-seed#[39D9B932AF95744C]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2816857 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2816858 INFO  (Thread-3987) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2816858 INFO  (Thread-3987) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2816861 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.ZkTestServer start zk server on port:54355
   [junit4]   2> 2816942 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2816960 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2816977 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2816981 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2816987 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 2816991 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 2816996 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 2817007 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 2817023 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 
o.a.s.c.AbstractZkTestCase put 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 2817059 INFO  
(TEST-HttpPartitionTest.test-seed#[39D9B932AF95744C]) [] 

[jira] [Closed] (SOLR-6687) eDisMax query parser does not parse phrases with facet filtering enabled

2016-10-15 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson closed SOLR-6687.

Resolution: Incomplete

The user never got back with any clarification; there's not enough information 
here to debug.

> eDisMax query parser does not parse phrases with facet filtering enabled
> 
>
> Key: SOLR-6687
> URL: https://issues.apache.org/jira/browse/SOLR-6687
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, SolrJ
>Affects Versions: 4.10
> Environment: SolrJ Library, Eclipse IDE
>Reporter: Tim H
>
> I am writing a search bar application with Solr which I'd like to have the 
> following two features:
> phrase matching for user queries - results which match user phrase are 
> boosted.
> Field faceting based on 'tags' field.  
> When I execute this query:
> q=steve jobs&
> fq=storeid:527bd613e4b0564cc755460a&
> sort=score desc&
> start=50&
> rows=2&
> fl=*,score&
> qt=/query&
> defType=edismax&
> pf=concept_name^15 note_text^5 file_text^2.5&
> pf3=1&
> pf2=1&
> ps=1&
> group=true&
> group.field=conceptid&
> group.limit=10&
> group.ngroups=true
> The phrase boosting feature operates correctly and boosts results which 
> closer match the phrase query "Steve Jobs"
> However, when I execute the query after the user has selected a facet field 
> (The facet fields are bought up from a seperate query) and execute the 
> following query:
> q=steve jobs&
> fq=storeid:527bd613e4b0564cc755460a&
> fq=tag:Person&
> sort=score desc&
> start=0&
> rows=50&
> fl=*,score&
> qt=/query&
> defType=edismax&
> pf=concept_name^15 note_text^5 file_text^2.5&
> pf3=1&
> pf2=1&
> ps=1&
> group=true&
> group.field=conceptid&
> group.limit=10&
> group.ngroups=true
> The phrase boosting does not work, even though the facet filtering does.  
> I'm not sure if this is a bug, but if it is not can someone point me to the 
> relevant documentation that will help me fix this issue?  All queries were 
> written using the SolrJ Library.
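
For reference only, a minimal SolrJ sketch of the second request quoted above. 
The base URL is an assumption and the parameter values are copied from the 
report; whether the pf/pf2/pf3 boosts actually contribute is easier to verify 
from the debugQuery output than from the scores alone.
{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EdismaxPhraseBoostWithFilter {
  public static void main(String[] args) throws Exception {
    // Assumed core/collection URL; parameter values mirror the quoted query.
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/concepts").build()) {
      SolrQuery q = new SolrQuery("steve jobs");
      q.set("defType", "edismax");
      q.set("pf", "concept_name^15 note_text^5 file_text^2.5");
      q.set("pf2", "1");
      q.set("pf3", "1");
      q.set("ps", "1");
      q.addFilterQuery("storeid:527bd613e4b0564cc755460a", "tag:Person");
      q.set("group", "true");
      q.set("group.field", "conceptid");
      q.set("group.limit", "10");
      q.set("group.ngroups", "true");
      q.setFields("*", "score");
      q.setRows(50);
      q.set("debugQuery", "true"); // inspect parsedquery/explain for the phrase boosts

      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getDebugMap().get("parsedquery"));
    }
  }
}
{code}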



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9647) CollectionsAPIDistributedZkTest got stuck, reproduces failure

2016-10-15 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15578736#comment-15578736
 ] 

Mikhail Khludnev commented on SOLR-9647:


Here are excerpts from the failure log tail.
{code}
 2> 90   INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[355E7B68C1B5A5B6]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
...
  2> 263082 INFO  (zkCallback-32-thread-2-processing-n:127.0.0.1:49743_) 
[n:127.0.0.1:49743_] o.a.s.c.Overseer Overseer 
(id=96767662755807251-127.0.0.1:49743_-n_03) starting
  2> 263083 INFO  (zkCallback-39-thread-4-processing-n:127.0.0.1:49770_) 
[n:127.0.0.1:49770_ c:collection1 s:shard1 r:core_node4 x:collection1] 
o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader 
parent node, won't remove previous leader registration.
  2> 263087 INFO  (zkCallback-39-thread-4-processing-n:127.0.0.1:49770_) 
[n:127.0.0.1:49770_ c:collection1 s:shard1 r:core_node4 x:collection1] 
o.a.s.c.ActionThrottle The last leader attempt started 21ms ago.
  2> 263087 INFO  (zkCallback-39-thread-4-processing-n:127.0.0.1:49770_) 
[n:127.0.0.1:49770_ c:collection1 s:shard1 r:core_node4 x:collection1] 
o.a.s.c.ActionThrottle Throttling leader attempts - waiting for 4978ms
  2> 264298 ERROR (zkCallback-15-thread-2-EventThread) [] 
o.a.s.c.c.ZkStateReader Error reading cluster properties from zookeeper
  2> org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for /clusterprops.json
  2>at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
...
{code}

{code}
268216 WARN  (Thread-1) [] o.a.s.c.ZkTestServer Watch limit violations: 
  2> Maximum concurrent create/delete watches above limit:
  2> 
  2>12  /solr/aliases.json
  2>5   /solr/security.json
  2>5   /solr/configs/conf1
  2>4   /solr/collections/collection1/state.json
  2> 
  2> Maximum concurrent data watches above limit:
  2> 
  2>12  /solr/clusterstate.json
  2>12  /solr/clusterprops.json
  2> 
  2> Maximum concurrent children watches above limit:
  2> 
  2>109 /solr/overseer/collection-queue-work
  2>39  /solr/overseer/queue
  2>12  /solr/live_nodes
  2>12  /solr/collections
  2>11  /solr/overseer/queue-work
  2> 
{code}

I don't know the details, but what is "ActionThrottle Throttling leader attempts 
- waiting for 4978ms" about? Is the test aware of such throttling? 
Even if the concurrent watch limits mean nothing by themselves, isn't there a 
leak of watches? 

> CollectionsAPIDistributedZkTest got stuck, reproduces failure
> -
>
> Key: SOLR-9647
> URL: https://issues.apache.org/jira/browse/SOLR-9647
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
>
>  I have to shoot 
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1129/ just 
> because "Took 1 day 12 hr on lucene".
>[junit4] HEARTBEAT J0 PID(30506@lucene1-us-west): 2016-10-15T00:08:30, 
> stalled for 48990s at: CollectionsAPIDistributedZkTest.test
>[junit4] HEARTBEAT J0 PID(30506@lucene1-us-west): 2016-10-15T00:09:30, 
> stalled for 49050s at: CollectionsAPIDistributedZkTest.test
>  It just got stuck. Then I ran it locally: it passes from Eclipse, but 
> fails when I run it from the command line with ant. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9647) CollectionsAPIDistributedZkTest got stuck, reproduces failure

2016-10-15 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9647:
--

 Summary: CollectionsAPIDistributedZkTest got stuck, reproduces 
failure
 Key: SOLR-9647
 URL: https://issues.apache.org/jira/browse/SOLR-9647
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mikhail Khludnev


 I have to shoot 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1129/ just 
because "Took 1 day 12 hr on lucene".
   [junit4] HEARTBEAT J0 PID(30506@lucene1-us-west): 2016-10-15T00:08:30, 
stalled for 48990s at: CollectionsAPIDistributedZkTest.test
   [junit4] HEARTBEAT J0 PID(30506@lucene1-us-west): 2016-10-15T00:09:30, 
stalled for 49050s at: CollectionsAPIDistributedZkTest.test

 It just got stuck. Then I ran it locally: it passes from Eclipse, but fails 
when I run it from the command line with ant. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9643) Pagination issue occurs in solr cloud when results are grouped on a field

2016-10-15 Thread Paras Diwan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15578730#comment-15578730
 ] 

Paras Diwan commented on SOLR-9643:
---

I'm sorry, I forgot to mention that the collection is sharded. Thanks for your 
help; I guess I will have to co-locate documents from the same group on one 
shard. Nonetheless, will this issue be fixed anytime soon? 

> Pagination issue occurs in solr cloud when results are grouped on a field
> -
>
> Key: SOLR-9643
> URL: https://issues.apache.org/jira/browse/SOLR-9643
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
> Environment: Solr cloud is deployed on AWS linux server. 4 Solr 
> servers and apache zookeeper is setup
>Reporter: Paras Diwan
>Priority: Critical
> Fix For: 6.1.1
>
>
> Either the value of ngroups in a grouped query is inaccurate or there is some 
> issue in returning documents of later pages. 
> select?q=*:*&group=true&group.field=family&group.ngroups=true&start=0&rows=1
> For the above-mentioned query I get ngroups = 396324,
> but for the same query, when I modify start to 396320, it returns 0 docs, an 
> empty page.
> Instead the last result is at 386887.
> Please look into this issue or offer some solution 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18056 - Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18056/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistributedQueueTest.testPeekElements

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1F9BD285842C746D:E2B568A454152070]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.DistributedQueueTest.testPeekElements(DistributedQueueTest.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12068 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistributedQueueTest
   [junit4]   2> Creating dataDir: 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+138) - Build # 1956 - Still Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1956/
Java: 32bit/jdk-9-ea+138 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerTaskQueueTest.testPeekElements

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([533C15679CC8CFC3:AE12AF464CF19BDE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.DistributedQueueTest.testPeekElements(DistributedQueueTest.java:177)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)




Build Log:
[...truncated 11063 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1955 - Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1955/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<204> but was:<187>

Stack Trace:
java.lang.AssertionError: expected:<204> but was:<187>
at __randomizedtesting.SeedInfo.seed([236A8C932ECFF850:AB3EB349803395A8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:280)
at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-9625) Add HelloWorldSolrCloudTestCase class

2016-10-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15578203#comment-15578203
 ] 

ASF subversion and git services commented on SOLR-9625:
---

Commit c620fc2954090d7cfc38988c7e490113ee3ce4a4 in lucene-solr's branch 
refs/heads/branch_6x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c620fc2 ]

SOLR-9625: Add HelloWorldSolrCloudTestCase class (Christine Poerschke, Alan 
Woodward, Alexandre Rafalovitch)
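
For context, a minimal editorial sketch of what a SolrCloudTestCase-based "hello world" test might look like. This is not the committed class from the attached patch; the collection name, configset name and assertions are assumptions, and it presumes a one-node MiniSolrCloudCluster plus a bundled test configset:

{code:java}
// Editorial sketch only (not the class committed for SOLR-9625).
// Assumes the SolrCloudTestCase base class, the MiniSolrCloudCluster builder,
// and a test configset named "cloud-minimal"; names are illustrative.
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.cloud.SolrCloudTestCase;
import org.apache.solr.common.SolrInputDocument;
import org.junit.BeforeClass;
import org.junit.Test;

public class HelloWorldSolrCloudTestCaseSketch extends SolrCloudTestCase {

  private static final String COLLECTION = "hello_collection"; // assumed name

  @BeforeClass
  public static void setupCluster() throws Exception {
    // start a one-node mini cluster and register a configset under the name "conf"
    configureCluster(1)
        .addConfig("conf", configset("cloud-minimal"))
        .configure();
    // create a 1-shard, 1-replica collection for the test to use
    CollectionAdminRequest.createCollection(COLLECTION, "conf", 1, 1)
        .process(cluster.getSolrClient());
  }

  @Test
  public void testIndexAndQueryOneDocument() throws Exception {
    CloudSolrClient client = cluster.getSolrClient();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    client.add(COLLECTION, doc);
    client.commit(COLLECTION);
    // the single document just indexed should be the only match for *:*
    assertEquals(1, client.query(COLLECTION, params("q", "*:*"))
        .getResults().getNumFound());
  }
}
{code}

The sketch starts the cluster once per suite, creates the assumed collection, then indexes and queries a single document - roughly the smallest useful SolrCloud test shape.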


> Add HelloWorldSolrCloudTestCase class
> -
>
> Key: SOLR-9625
> URL: https://issues.apache.org/jira/browse/SOLR-9625
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9625.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9625) Add HelloWorldSolrCloudTestCase class

2016-10-15 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15578175#comment-15578175
 ] 

ASF subversion and git services commented on SOLR-9625:
---

Commit 5261eb0acd54848a5aafb542990d7391d77f94c1 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5261eb0 ]

SOLR-9625: Add HelloWorldSolrCloudTestCase class (Christine Poerschke, Alan 
Woodward, Alexandre Rafalovitch)


> Add HelloWorldSolrCloudTestCase class
> -
>
> Key: SOLR-9625
> URL: https://issues.apache.org/jira/browse/SOLR-9625
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9625.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9643) Pagination issue occurs in solr cloud when results are grouped on a field

2016-10-15 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15578140#comment-15578140
 ] 

Christine Poerschke commented on SOLR-9643:
---

Hello [~parasdiwan] - you don't mention whether or not your collection is 
sharded. If it is sharded, then the [Distributed Result Grouping 
Caveats|https://cwiki.apache.org/confluence/display/solr/Result+Grouping#ResultGrouping-DistributedResultGroupingCaveats]
 might apply to the behaviour you observe, i.e.
bq. ... {{group.ngroups}} and {{group.facet}} require that all documents in 
each group must be co-located on the same shard in order for accurate counts to 
be returned. ...
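
For illustration, a hedged sketch of one way to satisfy that caveat by co-locating every document of a group on the same shard through compositeId prefix routing. The base URL, collection name and field values below are assumptions, not taken from this issue:

{code:java}
// Editorial sketch: route all documents of one "family" group to the same shard
// by prefixing the document id with the grouping key (compositeId routing).
// The Solr URL, collection name and field values are illustrative assumptions.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class GroupCoLocationSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      // "familyA!" prefix => every member of familyA lands on the same shard,
      // which is what accurate group.ngroups counts require.
      doc.addField("id", "familyA!doc-1");
      doc.addField("family", "familyA");
      client.add(doc);
      client.commit();
    }
  }
}
{code}

Alternatively, the collection can be created with {{router.field}} pointing at the grouping field, so routing is driven by that field's value rather than an id prefix.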


> Pagination issue occurs in solr cloud when results are grouped on a field
> -
>
> Key: SOLR-9643
> URL: https://issues.apache.org/jira/browse/SOLR-9643
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
> Environment: SolrCloud is deployed on AWS Linux servers: 4 Solr 
> servers and Apache ZooKeeper are set up
>Reporter: Paras Diwan
>Priority: Critical
> Fix For: 6.1.1
>
>
> Either the value of ngroups in a grouped query is inaccurate or there is some 
> issue in returning documents of later pages.
> select?q=*:*&group=true&group.field=family&group.ngroups=true&start=0&rows=1
> For the above query I get ngroups = 396324, but for the same query, when I 
> modify start to 396320, it returns 0 docs, an empty page.
> Instead, the last result is at 386887.
> Please look into this issue or offer some solution.
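
For reference, a hedged SolrJ sketch of the grouped paging query described above. The base URL and collection name are assumptions, and the parameter names are inferred from the report (group on {{family}}, request {{group.ngroups}}, page with {{start}} and {{rows}}):

{code:java}
// Editorial sketch of the reported reproduction, not taken verbatim from the issue.
// Base URL and collection name are assumptions.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.GroupParams;

public class GroupedPagingSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.set(GroupParams.GROUP, true);             // group=true
      q.set(GroupParams.GROUP_FIELD, "family");   // group.field=family
      q.set(GroupParams.GROUP_TOTAL_COUNT, true); // group.ngroups=true
      q.setStart(0);   // later pages: e.g. start=396320, as in the report
      q.setRows(1);
      QueryResponse rsp = client.query(q);
      // ngroups as returned for the "family" grouping command
      System.out.println(rsp.getGroupResponse().getValues().get(0).getNGroups());
    }
  }
}
{code}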



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 453 - Still Unstable!

2016-10-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/453/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at __randomizedtesting.SeedInfo.seed([1F40A2A85396EE36:77FF9782830CFCDA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:144)
at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:298)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at