[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1157 - Still Failing

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1157/

No tests ran.

Build Log:
[...truncated 23268 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2435 links (1987 relative) to 3184 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:


[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk1.8.0_172) - Build # 109 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/109/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at 
https://127.0.0.1:35251/solr/collection1_shard2_replica_n2: Expected mime type 
application/octet-stream but got text/html. Error 404 
Can not find: /solr/collection1_shard2_replica_n2/update  
HTTP ERROR 404 Problem accessing 
/solr/collection1_shard2_replica_n2/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:35251/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)




at 
__randomizedtesting.SeedInfo.seed([B662572CBE8419AF:74D56B44BDC4E9D7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 841 - Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/841/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([455DCDD2198BC9B9:CD09F208B777A441]:0)
at 
org.apache.solr.handler.TestSolrConfigHandlerConcurrent.test(TestSolrConfigHandlerConcurrent.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14596 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerConcurrent
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-4146) Error handling 'status' action, cannot access GUI

2018-10-19 Thread Vasily Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657698#comment-16657698
 ] 

Vasily Volkov commented on SOLR-4146:
-

Occurred on Solr 7.2.1

> Error handling 'status' action, cannot access GUI
> -
>
> Key: SOLR-4146
> URL: https://issues.apache.org/jira/browse/SOLR-4146
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI, SolrCloud
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: 6.0
>
> Attachments: 2016-04-26_1547.png, solr.png
>
>
> We sometimes see a node not responding to GUI requests. It then generates the 
> stack trace below. It does respond to search requests.
> {code}
> 2012-12-05 15:53:24,329 ERROR [solr.core.SolrCore] - [http-8080-exec-7] - : 
> org.apache.solr.common.SolrException: Error handling 'status' action 
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:725)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:158)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:372)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
> at 
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:744)
> at 
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2274)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: 
> java.util.concurrent.RejectedExecutionException
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1674)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1330)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1265)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:997)
> at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:711)
> ... 18 more
> Caused by: java.util.concurrent.RejectedExecutionException
> at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1768)
> at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
> at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
> at 
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
> at 
> java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
> at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1605)
> ... 22 more
> {code}






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1672 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1672/

3 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at http://127.0.0.1:45575/solr/collection1_shard2_replica_n2: 
Expected mime type application/octet-stream but got text/html.   
 
Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update  HTTP ERROR 
404 Problem accessing /solr/collection1_shard2_replica_n2/update. 
Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:45575/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)




at 
__randomizedtesting.SeedInfo.seed([78F7700AE3159A3C:BA404C62E0556A44]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7573 - Failure!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7573/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 14674 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp\junit4-J1-20181020_003310_3746630827865750030358.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at 
pc=0x69ca0b2f, pid=13260, tid=2564
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (9.0+11) (build 9.0.4+11)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (9.0.4+11, mixed mode, tiered, 
serial gc, windows-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [jvm.dll+0x460b2f]
   [junit4] #
   [junit4] # No core dump will be written. Minidumps are not enabled by 
default on client versions of Windows
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\hs_err_pid13260.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\replay_pid13260.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 767 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
C:\Users\jenkins\tools\java\64bit\jdk-9.0.4\bin\java.exe -XX:-UseCompressedOops 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps
 -ea -esa --illegal-access=deny -Dtests.prefix=tests 
-Dtests.seed=9ADFE3F9FD5EE61C -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene 
-Dclover.db.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\clover\db
 
-Djava.security.policy=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\solr-tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1
 
-Djunit4.tempDir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dfile.encoding=UTF-8 
-Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[JENKINS] Lucene-Solr-repro - Build # 1727 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1727/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1671/consoleText

[repro] Revision: fd9164801e703b278922dae6cc3c53e0578fa1d6

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=DeleteNodeTest -Dtests.method=test 
-Dtests.seed=BBDF5DA8FB090389 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=fr-FR -Dtests.timezone=Europe/Moscow -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CdcrReplicationHandlerTest 
-Dtests.method=testReplicationWithBufferedUpdates -Dtests.seed=BBDF5DA8FB090389 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=da-DK -Dtests.timezone=Asia/Samarkand -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.method=test -Dtests.seed=BBDF5DA8FB090389 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-NZ -Dtests.timezone=America/Halifax -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=LIROnShardRestartTest 
-Dtests.method=testSeveralReplicasInLIR -Dtests.seed=BBDF5DA8FB090389 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=pl -Dtests.timezone=Antarctica/Rothera -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=LIROnShardRestartTest 
-Dtests.method=testAllReplicasInLIR -Dtests.seed=BBDF5DA8FB090389 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=pl -Dtests.timezone=Antarctica/Rothera -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e1da5f953731b4e2990e054d09ec0bcb2e5146b8
[repro] git fetch
[repro] git checkout fd9164801e703b278922dae6cc3c53e0578fa1d6

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CdcrReplicationHandlerTest
[repro]   HdfsRestartWhileUpdatingTest
[repro]   DeleteNodeTest
[repro]   LIROnShardRestartTest
[repro] ant compile-test

[...truncated 3423 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.CdcrReplicationHandlerTest|*.HdfsRestartWhileUpdatingTest|*.DeleteNodeTest|*.LIROnShardRestartTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=BBDF5DA8FB090389 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=da-DK -Dtests.timezone=Asia/Samarkand -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 7150 lines...]
   [junit4]   2> 553804 ERROR (Finalizer) [] o.a.s.c.SolrCore REFCOUNT 
ERROR: unreferenced org.apache.solr.core.SolrCore@eace80b 
(collection1_shard1_replica_n21) has a reference count of -1
   [junit4]   2> 553943 INFO  
(coreLoadExecutor-878-thread-1-processing-n:127.0.0.1:42513_solr) 
[n:127.0.0.1:42513_solr] o.a.s.s.IndexSchema Loaded schema 
default-config/1.6 with uniqueid field id
   [junit4]   2> 554022 INFO  
(coreLoadExecutor-878-thread-1-processing-n:127.0.0.1:42513_solr) 
[n:127.0.0.1:42513_solr c:severalReplicasInLIR s:shard1 r:core_node3 
x:severalReplicasInLIR_shard1_replica_n1] o.a.s.c.RequestParams conf resource 
params.json loaded . version : 0 
   [junit4]   2> 554022 INFO  
(coreLoadExecutor-878-thread-1-processing-n:127.0.0.1:42513_solr) 
[n:127.0.0.1:42513_solr c:severalReplicasInLIR s:shard1 r:core_node3 
x:severalReplicasInLIR_shard1_replica_n1] o.a.s.c.RequestParams request params 
refreshed to version 0
   [junit4]   2> 554022 WARN  
(coreLoadExecutor-878-thread-1-processing-n:127.0.0.1:42513_solr) 
[n:127.0.0.1:42513_solr c:severalReplicasInLIR s:shard1 r:core_node3 

[jira] [Updated] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Elizabeth Haubert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elizabeth Haubert updated SOLR-12243:
-
Attachment: multiword-synonyms.txt
schema.xml
solrconfig.xml

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch, multiword-synonyms.txt, schema.xml, 
> solrconfig.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  






[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657638#comment-16657638
 ] 

Elizabeth Haubert commented on SOLR-12243:
--

I think something is not right, but I am not sure what.

I am running current master without the patch applied in the debugger, with a core 
built from the attached configs; the sanity check is curl -XGET 
"http://localhost:8983/solr/new_core/test_qparse_error?debugQuery=on&defType=edismax&q=aspirin%20dose%20in%20rats"

where aspirin has the same "aspirin, acetylsalicylic acid" synonyms as 
previously.

Query is coming through with the original bug of the empty parens where clauses 
should be:

+(((text:"acetylsalicylic acid" text:aspirin)^100.0) ((text:dose)^100.0) 
((text:in)^100.0) ((text:rats)^100.0)) () ((text:"dose in"~11) (text:"in 
rats"~11)) ((text:"dose in rats"~22)^1000.0)

That is roughly the expected behavior, since my understanding of the Lucene patch 
was that a SpanQuery object was no longer going to be coming through. 

I put breakpoints in ExtendedDismaxQParser.java at getQuery, and it looks like it 
is getting a NullPointerException and falling out at line 1449.
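
For reference, the same sanity check can be scripted. The sketch below is 
illustrative only: it assumes a local core named new_core loaded with the 
attached schema.xml, solrconfig.xml and multiword-synonyms.txt, and it uses the 
stock /select handler (relying on that handler's edismax defaults) rather than a 
custom one. It simply prints the parsed query from the debug section so missing 
pf/pf2/pf3 clauses are easy to spot:

{code:python}
# Sketch of the sanity check: fetch the parsed edismax query via debugQuery.
# Core name, handler and field defaults are assumptions, not taken from the issue.
import requests

resp = requests.get(
    "http://localhost:8983/solr/new_core/select",
    params={
        "q": "aspirin dose in rats",
        "defType": "edismax",
        "debugQuery": "on",
        "wt": "json",
    },
)
resp.raise_for_status()
debug = resp.json()["debug"]
print(debug["parsedquery"])           # full parsed query, phrase clauses included
print(debug["parsedquery_toString"])  # compact form, handy for diffing two runs
{code}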

 

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2944 - Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2944/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC

7 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at http://127.0.0.1:46383/solr/collection1_shard2_replica_n2: 
Expected mime type application/octet-stream but got text/html.   
 
Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update  HTTP ERROR 
404 Problem accessing /solr/collection1_shard2_replica_n2/update. 
Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:46383/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)




at 
__randomizedtesting.SeedInfo.seed([BD1184ECE1E47682:7FA6B884E2A486FA]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:269)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657603#comment-16657603
 ] 

Elizabeth Haubert commented on SOLR-12243:
--

Pulling to check.

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23059 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23059/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned

Error Message:
Error from server at http://127.0.0.1:43839/solr/collection1_shard2_replica_n2: 
Expected mime type application/octet-stream but got text/html.   
 
Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update  HTTP ERROR 
404 Problem accessing /solr/collection1_shard2_replica_n2/update. 
Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:43839/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n2/update. Powered by Jetty:// 9.4.11.v20180605 (http://eclipse.org/jetty)




at 
__randomizedtesting.SeedInfo.seed([2DFE376FC34E9F9D:D538CE5A50570755]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned(CloudSolrClientTest.java:725)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)

[ANNOUNCE] Apache PyLucene 7.5.0

2018-10-19 Thread Andi Vajda



I am pleased to announce the availability of Apache PyLucene 7.5.0.

Apache PyLucene, a subproject of Apache Lucene, is a Python extension for
accessing Apache Lucene Core. Its goal is to allow you to use Lucene's text
indexing and searching capabilities from Python. It is API compatible with
the latest version of Lucene 7.x Core, 7.5.0.
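
As a rough idea of what that looks like in practice, here is a minimal sketch 
(an illustration only, not taken from the release notes) that builds an 
in-memory index with one document and searches it, using the Lucene 7.x classes 
as exposed through JCC:

{code:python}
# Minimal PyLucene sketch: index one document in memory, then search it.
# Assumes PyLucene 7.5.0 is installed and can start a JVM.
import lucene
from org.apache.lucene.analysis.standard import StandardAnalyzer
from org.apache.lucene.document import Document, Field, TextField
from org.apache.lucene.index import DirectoryReader, IndexWriter, IndexWriterConfig
from org.apache.lucene.queryparser.classic import QueryParser
from org.apache.lucene.search import IndexSearcher
from org.apache.lucene.store import RAMDirectory

lucene.initVM()  # start the embedded JVM once per process

directory = RAMDirectory()  # in-memory index, fine for a quick demo
writer = IndexWriter(directory, IndexWriterConfig(StandardAnalyzer()))
doc = Document()
doc.add(Field("content", "Lucene text indexing and searching from Python",
              TextField.TYPE_STORED))
writer.addDocument(doc)
writer.close()

searcher = IndexSearcher(DirectoryReader.open(directory))
query = QueryParser("content", StandardAnalyzer()).parse("indexing")
print(searcher.search(query, 10).totalHits)  # prints 1
{code}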

For changes in this release, please review:
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_7_5_0/CHANGES
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_7_5_0/jcc/CHANGES
http://lucene.apache.org/core/7_5_0/changes/Changes.html

Apache PyLucene is available from the following download page:
http://www.apache.org/dyn/closer.cgi/lucene/pylucene/pylucene-7.5.0-src.tar.gz

When downloading from a mirror site, please remember to verify the downloads
using signatures found on the Apache site:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS

For more information on Apache PyLucene, visit the project home page:
  http://lucene.apache.org/pylucene

Andi..


[JENKINS] Lucene-Solr-repro - Build # 1726 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1726/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1156/consoleText

[repro] Revision: 1a8188d92b8148f2d937bd038f48f103526fcbcc

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=F0867C1CF539211F 
-Dtests.multiplier=2 -Dtests.locale=en-AU -Dtests.timezone=America/Shiprock 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e1da5f953731b4e2990e054d09ec0bcb2e5146b8
[repro] git fetch
[repro] git checkout 1a8188d92b8148f2d937bd038f48f103526fcbcc

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3423 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=F0867C1CF539211F -Dtests.multiplier=2 -Dtests.locale=en-AU 
-Dtests.timezone=America/Shiprock -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 13635 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout e1da5f953731b4e2990e054d09ec0bcb2e5146b8

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 33 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/33/

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsChaosMonkeyNothingIsSafeTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.solr.cloud.hdfs.HdfsChaosMonkeyNothingIsSafeTest: 1) 
Thread[id=223856, 
name=TEST-HdfsChaosMonkeyNothingIsSafeTest.test-seed#[9E21EF6315D422A4]-EventThread,
 state=WAITING, group=TGRP-HdfsChaosMonkeyNothingIsSafeTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
2) Thread[id=223854, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-HdfsChaosMonkeyNothingIsSafeTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)3) Thread[id=223857, 
name=zkConnectionManagerCallback-10531-thread-1, state=WAITING, 
group=TGRP-HdfsChaosMonkeyNothingIsSafeTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=223855, 
name=TEST-HdfsChaosMonkeyNothingIsSafeTest.test-seed#[9E21EF6315D422A4]-SendThread(127.0.0.1:33431),
 state=TIMED_WAITING, group=TGRP-HdfsChaosMonkeyNothingIsSafeTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.solr.cloud.hdfs.HdfsChaosMonkeyNothingIsSafeTest: 
   1) Thread[id=223856, 
name=TEST-HdfsChaosMonkeyNothingIsSafeTest.test-seed#[9E21EF6315D422A4]-EventThread,
 state=WAITING, group=TGRP-HdfsChaosMonkeyNothingIsSafeTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   2) Thread[id=223854, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-HdfsChaosMonkeyNothingIsSafeTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(Thread.java:748)
   3) Thread[id=223857, name=zkConnectionManagerCallback-10531-thread-1, 
state=WAITING, group=TGRP-HdfsChaosMonkeyNothingIsSafeTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
   4) Thread[id=223855, 
name=TEST-HdfsChaosMonkeyNothingIsSafeTest.test-seed#[9E21EF6315D422A4]-SendThread(127.0.0.1:33431),
 state=TIMED_WAITING, group=TGRP-HdfsChaosMonkeyNothingIsSafeTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)
at __randomizedtesting.SeedInfo.seed([9E21EF6315D422A4]:0)


FAILED:  

[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657479#comment-16657479
 ] 

Uwe Schindler commented on SOLR-12243:
--

Nevertheless there should be a test for slop=0 and slop!=0 in Edismax tests.
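
Until such a test exists, a rough manual check can be run against a local core. 
The sketch below is illustrative only (core name new_core, field text and the 
handler parameters are assumptions, not part of this issue); it prints the 
parsed edismax query for slop=0 and a non-zero slop so the two code paths can 
be compared:

{code:python}
# Manual check only (not the Edismax JUnit test): compare the parsed query for
# phrase slop 0 vs. a non-zero slop, which take different code paths.
import requests

def parsed_query(ps):
    r = requests.get(
        "http://localhost:8983/solr/new_core/select",
        params={
            "q": "aspirin dose in rats",
            "defType": "edismax",
            "qf": "text",
            "pf": "text",
            "pf2": "text",
            "pf3": "text",
            "ps": ps,  # phrase slop under test
            "debugQuery": "on",
            "wt": "json",
        },
    )
    r.raise_for_status()
    return r.json()["debug"]["parsedquery"]

for slop in (0, 2):
    print("ps=%d:\n%s\n" % (slop, parsed_query(slop)))
{code}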

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657477#comment-16657477
 ] 

Uwe Schindler commented on SOLR-12243:
--

Hi, the Lucene issue was committed, so I think we can now test this. As I 
understand it, for slop!=0 it no longer creates span queries, so the bug is 
fixed there anyway. For slop=0 it still creates (faster) span queries, so the 
fixes here should apply.
[~ehaubert]: can you check the patch with the recent Lucene commits included? I 
can try it manually tomorrow, but maybe you're faster.
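
For reference, here is a minimal, self-contained sketch of the slop=0 vs. slop!=0 
distinction using Lucene's QueryBuilder directly. The synonym entry, field name and 
analyzer wiring are made up for illustration; this is not the Edismax test itself.

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;
import org.apache.lucene.util.QueryBuilder;

public class SlopQueryShapeDemo {
  public static void main(String[] args) throws Exception {
    // "dog" expands to the multi-term synonym "canis familiris" (as in the issue's synonyms.txt)
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    builder.add(new CharsRef("dog"),
        SynonymMap.Builder.join(new String[] {"canis", "familiris"}, new CharsRefBuilder()),
        true);
    SynonymMap synonyms = builder.build();

    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        TokenStream graph = new SynonymGraphFilter(source, synonyms, true);
        return new TokenStreamComponents(source, graph);
      }
    };

    QueryBuilder qb = new QueryBuilder(analyzer);
    // With the LUCENE-8531 change, slop=0 and slop!=0 should produce different query shapes.
    System.out.println("slop=0: " + qb.createPhraseQuery("title", "allergic reaction dog", 0));
    System.out.println("slop=2: " + qb.createPhraseQuery("title", "allergic reaction dog", 2));
  }
}
{code}

Printing the two queries makes it easy to eyeball whether the sloppy case still comes 
back as a span query or now as per-path phrase queries.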

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657475#comment-16657475
 ] 

Uwe Schindler commented on LUCENE-8531:
---

Thanks Jim!

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2018-10-19 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657467#comment-16657467
 ] 

Shawn Heisey commented on SOLR-7642:


bq.  Instead of including the chroot in ZK_HOST, it could be set separately, 
so it would have to be a conscious decision.

The format of ZK_HOST is dictated by the ZooKeeper project.  We did not come up 
with that format; we simply pass the string into the ZooKeeper client code, so 
it is not ours to modify.
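
To make that concrete, here is a minimal sketch (hypothetical host names) showing 
that the chroot is nothing more than a suffix the ZooKeeper client itself parses off 
the connect string:

{code:java}
import org.apache.zookeeper.ZooKeeper;

public class ChrootConnectDemo {
  public static void main(String[] args) throws Exception {
    // ZooKeeper, not Solr, defines this syntax: a host list followed by an optional chroot path.
    String zkHost = "zk1:2181,zk2:2181,zk3:2181/solr";
    ZooKeeper zk = new ZooKeeper(zkHost, 30_000, event -> { /* ignore watch events */ });
    try {
      System.out.println("connected with chroot /solr, session 0x"
          + Long.toHexString(zk.getSessionId()));
    } finally {
      zk.close();
    }
  }
}
{code}

The Solr-specific question is only who creates the /solr znode, not how it is expressed.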

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2018-10-19 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657458#comment-16657458
 ] 

Isabelle Giguere commented on SOLR-7642:


My 2 cents: creating the chroot only if it is /solr doesn't solve the original 
issue.  Imposing /solr is not equivalent to allowing the creation of an 
arbitrary chroot; it's just less messy than having everything at the root '/'.

How about a specific property in solr.in.sh / solr.in.cmd?  Instead of 
including the chroot in ZK_HOST, it could be set separately, so it would have 
to be a conscious decision.

And about typos...
Using zkCLI, the user creates the chroot (possibly making a typo on the command 
line) and then has to write the same (typo'd) chroot into ZK_HOST in solr.in.sh 
to avoid creating two ZK paths.
I would rather just use zkCLI to clean up something added by mistake than have 
to systematically use it once to create the chroot and then repeat the same 
chroot in solr.in.sh.

I think if the chroot is specified only once, that's less likely to create 
confusion.



> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 888 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/888/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:58238/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:58238/solr
at 
__randomizedtesting.SeedInfo.seed([3B77B4AB27EAD1B:C24702E69F2E67BC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:58552/solr

Stack Trace:
java.lang.AssertionError: 

[jira] [Created] (SOLR-12888) NestedUpdateProcessor code should activate automatically in 8.0

2018-10-19 Thread David Smiley (JIRA)
David Smiley created SOLR-12888:
---

 Summary: NestedUpdateProcessor code should activate automatically 
in 8.0
 Key: SOLR-12888
 URL: https://issues.apache.org/jira/browse/SOLR-12888
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
 Fix For: master (8.0)


If the schema supports it, the NestedUpdateProcessor URP should be registered 
automatically somehow.  The factory for this already looks for the existence of 
certain special fields in the schema, so that part is covered.  But today the 
URP factory still needs to be added to your chain through one of the mechanisms 
we support for that.  _In 8.0 the user shouldn't have to do anything to their 
solrconfig._

We might un-URP this and call it directly somewhere.  Or perhaps we might add a 
special named URP chain (which needn't be documented), defined automatically, 
that activates at RunURP.  Other things could be added to that chain in the future.
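
One possible shape of the "registered automatically" check, purely as a sketch: the 
field names follow the nested-docs convention from SOLR-12768, and the wiring itself 
is an assumption, not existing Solr code.

{code:java}
import org.apache.solr.schema.IndexSchema;
import org.apache.solr.update.processor.NestedUpdateProcessorFactory;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

class NestedUrpAutoWiring {
  /** Returns the nested URP factory only when the schema opts in via its nest fields (sketch). */
  static UpdateRequestProcessorFactory maybeNestedUrp(IndexSchema schema) {
    boolean nestedSchema = schema.getFieldOrNull("_nest_path_") != null
        || schema.getFieldOrNull("_nest_parent_") != null;
    return nestedSchema ? new NestedUpdateProcessorFactory() : null;
  }
}
{code}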



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11738) Add singular value decomposition Stream Evaluator

2018-10-19 Thread Michael Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Suzuki updated SOLR-11738:
--
Attachment: SOLR-11738.patch

> Add singular value decomposition Stream Evaluator
> -
>
> Key: SOLR-11738
> URL: https://issues.apache.org/jira/browse/SOLR-11738
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11738.patch
>
>
> This ticket adds support for the singular value matrix decomposition to the 
> Stream Expression machine learning library. Implementation provided by Apache 
> Commons Math.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8921) Potential NPE in pivot facet

2018-10-19 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657331#comment-16657331
 ] 

Isabelle Giguere edited comment on SOLR-8921 at 10/19/18 7:49 PM:
--

I'm attaching unit tests with 2 collections and a few fields in the schema.
But this issue has nothing to do with the number of collections, and not much 
to do with the field types.
An NPE can happen whenever the string used as the facet pivot value is a 
stopword or an empty string.

PivotFacetProcessor.getDocSet(DocSet base, SchemaField field, String pivotValue)

// if pivotValue = "a", or if pivotValue = "in" (stopword)
ft.getFieldQuery(null, field, pivotValue)
  -> returns null
 searcher.getDocSet(query, base);
  -> throws NPE
  
// if pivotValue = ""  (empty str)
 ft.getFieldQuery(null, field, pivotValue)
  -> returned query= name:
 searcher.getDocSet(query, base);
  -> returns DocSet size -1
  -> NPE could be thrown from PivotFacetProcessor.getSubsetSize(DocSet base, 
SchemaField field, String pivotValue) (? - to validate)

***
Maybe it is more likely to happen with multiple collections if the collections 
are meant for different languages, and a non-stopword in one language is a 
stopword in another?
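
For illustration only, a guard along these lines would avoid the first NPE by treating 
a fully-swallowed pivot value as matching nothing. This is not a committed fix; the 
helper class/method names are stand-ins, and the empty-string case (where a query is 
produced but the subset size comes back as -1) would still need the separate check in 
getSubsetSize() mentioned above.

{code:java}
import org.apache.lucene.search.Query;
import org.apache.solr.schema.FieldType;
import org.apache.solr.schema.SchemaField;
import org.apache.solr.search.DocSet;
import org.apache.solr.search.SolrIndexSearcher;
import org.apache.solr.search.SortedIntDocSet;

class PivotValueGuardSketch {
  // Sketch of PivotFacetProcessor.getDocSet()-style logic with a null check added.
  static DocSet subsetForPivotValue(SolrIndexSearcher searcher, DocSet base,
                                    SchemaField field, String pivotValue)
      throws java.io.IOException {
    FieldType ft = field.getType();
    Query q = ft.getFieldQuery(null, field, pivotValue);
    if (q == null) {
      // Stopword or otherwise fully-analyzed-away pivot value: no query at all,
      // so return an empty subset instead of letting the cache lookup NPE.
      return new SortedIntDocSet(new int[0]);
    }
    return searcher.getDocSet(q, base);
  }
}
{code}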





was (Author: igiguere):
I'm attaching unit tests with 2 collections and a few fields in the schema.
But this issue has nothing to do with the number of collections, and not much 
to do with the field types.
An NPE can happen whenever the string used as the facet pivot value is a 
stopword or an empty string.

PivotFacetProcessor.getDocSet(DocSet base, SchemaField field, String pivotValue)

// if pivotValue = "a", or if pivotValue = "in" (stopword)
ft.getFieldQuery(null, field, pivotValue)
  -> returns null
 searcher.getDocSet(query, base);
  -> throws NPE
  
// if pivotValue = ""  (empty str)
 ft.getFieldQuery(null, field, pivotValue)
  -> returned query= name:
 searcher.getDocSet(query, base);
  -> returns DocSet size -1
  -> NPE could be thrown from PivotFacetProcessor.getSubsetSize(DocSet base, 
SchemaField field, String pivotValue) (? - to validate)




> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8921.patch, SOLR-8921.patch, 
> SOLR-8921_tag_7.5.0.patch, SOLR-8921_unit-tests_tag_7.5.0.patch
>
>
> For some queries distributed over multiple collections, I've hit a NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, query 
> generated to compute pivot on document sub set is null, causing the NPE on 
> lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> 

[jira] [Commented] (SOLR-8921) Potential NPE in pivot facet

2018-10-19 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657331#comment-16657331
 ] 

Isabelle Giguere commented on SOLR-8921:


I'm attaching unit tests with 2 collections and a few fields in the schema.
But this issue has nothing to do with the number of collections, and not much 
to do with the field types.
An NPE can happen whenever the string used as the facet pivot value is a 
stopword or an empty string.

PivotFacetProcessor.getDocSet(DocSet base, SchemaField field, String pivotValue)

// if pivotValue = "a", or if pivotValue = "in" (stopword)
ft.getFieldQuery(null, field, pivotValue)
  -> returns null
 searcher.getDocSet(query, base);
  -> throws NPE
  
// if pivotValue = ""  (empty str)
 ft.getFieldQuery(null, field, pivotValue)
  -> returned query= name:
 searcher.getDocSet(query, base);
  -> returns DocSet size -1
  -> NPE could be thrown from PivotFacetProcessor.getSubsetSize(DocSet base, 
SchemaField field, String pivotValue) (? - to validate)




> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8921.patch, SOLR-8921.patch, 
> SOLR-8921_tag_7.5.0.patch, SOLR-8921_unit-tests_tag_7.5.0.patch
>
>
> For some queries distributed over multiple collections, I've hit a NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, query 
> generated to compute pivot on document sub set is null, causing the NPE on 
> lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   

[jira] [Updated] (SOLR-8921) Potential NPE in pivot facet

2018-10-19 Thread Isabelle Giguere (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-8921:
---
Attachment: SOLR-8921_unit-tests_tag_7.5.0.patch

> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8921.patch, SOLR-8921.patch, 
> SOLR-8921_tag_7.5.0.patch, SOLR-8921_unit-tests_tag_7.5.0.patch
>
>
> For some queries distributed over multiple collections, I've hit a NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, query 
> generated to compute pivot on document sub set is null, causing the NPE on 
> lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10465) setIdField should be deprecated in favor of SolrClientBuilder methods

2018-10-19 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657322#comment-16657322
 ] 

Jason Gerlowski commented on SOLR-10465:


Thanks for the cross-reference, Erick!  I definitely would have missed that, 
and I agree on the rename.

And thanks for taking an interest in this, Charles.  I didn't realize it when I 
created the JIRA, but I think the fate of this setter should be tied to that of 
{{setDefaultCollection}} (see SOLR-10466).  If we lock down only the routing 
field, we're kind of undercutting {{setDefaultCollection}} (that method becomes 
useless unless several of your collections all share the same routing-field 
name).  The same point extends to the SolrClient methods that take an 
overriding collection: if we lock down the routing field, we make 
{{SolrClient.add(String collection, SolrInputDocument doc)}} tougher to use.

I think the end goal of thread-safety is still something to aim for, but I'm 
not sure moving this setter to the builder is the right way to go given the 
current obstacles.  Sorry to only catch this after you'd put work into it, 
Charles; I should have caught some of these problems earlier and closed this 
JIRA out (or at least posted a word of warning here).  That's on me and I'm 
sorry!

If you're still interested in helping out and improving this area of the code, 
there are some other steps we _can_ take.  We can deprecate/rename the method 
to {{setRoutingField}} or something similar, as Erick suggested.  There are 
also other setters that cause thread-safety issues where the simple 
move-it-to-the-builder approach still makes sense; examples are SOLR-10467, 
SOLR-10468, SOLR-10462, and SOLR-10461.  Sorry again for the confusion/trouble.
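
For anyone following along, the thread-safety trap being referred to looks roughly 
like this (the ZK address and field name are made up; the builder shown is the 
existing 7.x one, not a proposed new method):

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class MutableSetterHazard {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      // The setter mutates shared state after construction: if one thread changes the
      // id/routing field while another thread is adding documents, routing for that
      // other thread silently changes too.
      client.setIdField("route_key_s");
      // ... client.add(...) calls from several threads would all see whichever value won the race
    }
  }
}
{code}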

> setIdField should be deprecated in favor of SolrClientBuilder methods
> -
>
> Key: SOLR-10465
> URL: https://issues.apache.org/jira/browse/SOLR-10465
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10465.patch, SOLR-10465.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs, and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the {{setIdField}} setter 
> on all {{SolrClient}} implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12638) Support atomic updates of nested/child documents for nested-enabled schema

2018-10-19 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657301#comment-16657301
 ] 

David Smiley commented on SOLR-12638:
-

[~caomanhdat] would you mind looking over the goals of this issue and sharing 
any thoughts you may have on its relationship to the UpdateLog or SolrCloud 
(e.g. gotchas, advice, pointers)?  Sorry that's a little vague... I'm trying to 
feel out the unknown-unknowns that prevent me from asking a specific question 
at this stage.  I'm sure we can get our tests to pass on our own, but that's no 
guarantee against edge cases we don't even know about (whether due to 
real-world behavior, timings, or safety) or things we should have tested for 
but didn't.

In summary, we want atomic updates to support nested documents.  That way you 
could not only update existing parent/child documents but also add or remove 
entire nested trees.
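
As a sketch of that end state from the SolrJ side: the collection, field names, and 
especially the nested "add" modifier syntax below are illustrative assumptions, since 
defining that syntax is exactly what this issue still has to do.

{code:java}
import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NestedAtomicUpdateSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument child = new SolrInputDocument();
      child.addField("id", "book1/review9");
      child.addField("comment_t", "added long after the parent was indexed");

      // Atomic-update style: send only the parent's id plus a modifier map that
      // grafts a new child onto an existing nested tree.
      SolrInputDocument parent = new SolrInputDocument();
      parent.addField("id", "book1");
      parent.addField("reviews", Collections.singletonMap("add", child));

      client.add("techproducts", parent);
      client.commit("techproducts");
    }
  }
}
{code}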

> Support atomic updates of nested/child documents for nested-enabled schema
> --
>
> Key: SOLR-12638
> URL: https://issues.apache.org/jira/browse/SOLR-12638
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12638-delete-old-block-no-commit.patch, 
> SOLR-12638-nocommit.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> I have been toying with the thought of using this transformer in conjunction 
> with NestedUpdateProcessor and AtomicUpdate to allow SOLR to completely 
> re-index the entire nested structure. This is just a thought, I am still 
> thinking about implementation details. Hopefully I will be able to post a 
> more concrete proposal soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8537) ant test command fails under lucene/tools

2018-10-19 Thread Peter Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657291#comment-16657291
 ] 

Peter Somogyi commented on LUCENE-8537:
---

Thanks for reviewing [~thetaphi]!

> ant test command fails under lucene/tools
> -
>
> Key: LUCENE-8537
> URL: https://issues.apache.org/jira/browse/LUCENE-8537
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Assignee: Uwe Schindler
>Priority: Minor
> Attachments: LUCENE-8537.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{ant test}} command executed under the {{lucene/tools}} folder fails because 
> it does not have the {{junit.classpath}} property. Since the module does not have 
> any test folder, we could override the {{-test}} and {{-check-totals}} targets.
> {noformat}
> bash-3.2$ pwd
> /Users/peter.somogyi/repos/lucene-solr/lucene/tools
> bash-3.2$ ant test
> Buildfile: /Users/peter.somogyi/repos/lucene-solr/lucene/tools/build.xml
> ...
> -test:
>[junit4]  says ciao! Master seed: 9A2ACC9B4A3C8553
> BUILD FAILED
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1567: The 
> following error occurred while executing this line:
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1092: 
> Reference junit.classpath not found.
> Total time: 1 second
> {noformat}
> I ran into this issue when I uploaded a patch where I removed an import from 
> this module. This triggered a module-level build during precommit that failed 
> with this error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23058 - Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23058/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:34447_solr, 
127.0.0.1:40825_solr, 127.0.0.1:41595_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"https://127.0.0.1:46679/solr;,   
"node_name":"127.0.0.1:46679_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46679/solr;,   
"node_name":"127.0.0.1:46679_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:34447_solr, 127.0.0.1:40825_solr, 127.0.0.1:41595_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/12)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"https://127.0.0.1:46679/solr;,
  "node_name":"127.0.0.1:46679_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"https://127.0.0.1:46679/solr;,
  "node_name":"127.0.0.1:46679_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([6ACC5BEBF40E1913:DA3A3B9CFC53D9]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:224)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2885 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2885/

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MultiThreadedOCPTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, SolrIndexSearcher, MockDirectoryWrapper, 
MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:898)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1143)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1053)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.(SolrIndexSearcher.java:310)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2096)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2248)  at 
org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1097)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:986)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1143)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1053)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:358)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:737)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:960)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1143)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1053)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 

[jira] [Updated] (SOLR-12768) Determine how _nest_path_ should be analyzed to support various use-cases

2018-10-19 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12768:

 Priority: Blocker  (was: Major)
Fix Version/s: master (8.0)

> Determine how _nest_path_ should be analyzed to support various use-cases
> -
>
> Key: SOLR-12768
> URL: https://issues.apache.org/jira/browse/SOLR-12768
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Blocker
> Fix For: master (8.0)
>
>
> We know we need {{\_nest\_path\_}} in the schema for the new nested documents 
> support, and we loosely know what goes in it.  From a DocValues perspective, 
> we've got it down; though we might tweak it.  From an indexing (text 
> analysis) perspective, we're not quite sure yet, though we've got a test 
> schema, {{schema-nest.xml}} with a decent shot at it.  Ultimately, how we 
> index it will depend on the query/filter use-cases we need to support.  So 
> we'll review some of them here.
> TBD: Not sure if the outcome of this task is just a "decide" or whether we 
> also potentially add a few tests for some of these cases, and/or if we also 
> add a FieldType to make declaring it as easy as a one-liner.  A FieldType 
> would have other benefits too once we're ready to make querying on the path 
> easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12633) JSON Loader: remove anonChildDoc option

2018-10-19 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12633:

Priority: Blocker  (was: Major)

> JSON Loader: remove anonChildDoc option
> ---
>
> Key: SOLR-12633
> URL: https://issues.apache.org/jira/browse/SOLR-12633
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Blocker
> Fix For: master (8.0)
>
>
> In 8.0/master, we should drop "anonChildDocs" that we added.  It was a 
> temporary flag.  Assume it's not anonymous unless the field name is 
> {{\_childDocuments\_}}.  That exception to the rule should have been added 
> already but was overlooked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657274#comment-16657274
 ] 

ASF subversion and git services commented on LUCENE-8531:
-

Commit 36ce83bc9add02a900e38b396b42c3c729846598 in lucene-solr's branch 
refs/heads/branch_7x from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=36ce83b ]

LUCENE-8531: QueryBuilder#analyzeGraphPhrase now creates one phrase query per 
finite strings in the graph if the slop is greater than 0.
Span queries cannot be used in this case because they don't handle slop the 
same way than phrase queries.


> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-8531.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.6

Thanks [~steve_rowe] and [~thetaphi].

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657266#comment-16657266
 ] 

ASF subversion and git services commented on LUCENE-8531:
-

Commit e1da5f953731b4e2990e054d09ec0bcb2e5146b8 in lucene-solr's branch 
refs/heads/master from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1da5f9 ]

LUCENE-8531: QueryBuilder#analyzeGraphPhrase now creates one phrase query per 
finite string in the graph if the slop is greater than 0.
Span queries cannot be used in this case because they don't handle slop the 
same way as phrase queries.
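
As a rough illustration of the new behaviour described in the commit message (a sketch only: the multi-word synonym is taken from SOLR-12243's example list, the field name and query text are hypothetical, and a Lucene version containing this change is assumed):

{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;
import org.apache.lucene.util.QueryBuilder;

public class GraphPhraseSlopSketch {
  public static void main(String[] args) throws Exception {
    // Multi-word synonym "k 9" -> "dog" turns the analyzed text into a token graph.
    SynonymMap.Builder smb = new SynonymMap.Builder(true);
    CharsRef input = SynonymMap.Builder.join(new String[] {"k", "9"}, new CharsRefBuilder());
    smb.add(input, new CharsRef("dog"), true);
    SynonymMap synonyms = smb.build();

    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        WhitespaceTokenizer source = new WhitespaceTokenizer();
        return new TokenStreamComponents(source, new SynonymGraphFilter(source, synonyms, true));
      }
    };

    QueryBuilder qb = new QueryBuilder(analyzer);
    // With phraseSlop > 0, the graph is now expanded into one sloppy phrase query
    // per finite path instead of an ordered SpanNearQuery.
    Query q = qb.createPhraseQuery("body", "k 9 food", 2);
    System.out.println(q);
  }
}
{code}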


> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #455: SOLR-12638

2018-10-19 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/455
  
As a general point, I feel we should prefer "nested" terminology over 
"block".  If we were working purely within Lucene then I think "block" might be 
okay, but at the Solr layer people see this stuff as "nested" docs.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 7.5.0 (rc2)

2018-10-19 Thread Andi Vajda



This vote has passed !
Thank you all who voted and made this release possible.

Andi..

On Wed, 17 Oct 2018, Tommaso Teofili wrote:


+1

Tommaso
On Tue, 16 Oct 2018 at 06:46, Andi Vajda 
wrote:



The PyLucene 7.5.0 (rc2) release tracking the recent release of
Apache Lucene 7.5.0 is ready.

A release candidate is available from:
   https://dist.apache.org/repos/dist/dev/lucene/pylucene/7.5.0-rc2/

PyLucene 7.5.0 is built with JCC 3.3 included in these release artifacts.

JCC 3.3 supports Python 3.3+ (in addition to Python 2.3+).
PyLucene may be built with Python 2 or Python 3.

Please vote to release these artifacts as PyLucene 7.5.0.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-19 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r226709714
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -59,39 +52,147 @@ public void before() {
 
   @Test
   public void testMergeChildDoc() throws Exception {
-SolrInputDocument doc = new SolrInputDocument();
-doc.setField("id", "1");
-doc.setField("cat_ss", new String[]{"aaa", "ccc"});
-doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+
+SolrInputDocument dummyBlock = sdoc("id", "1",
+"cat_ss", new ArrayList<>(Arrays.asList("aaa", "ccc")),
+"_root_", "1", "child", new ArrayList<>(sdocs(addedDoc)));
+dummyBlock.removeField(VERSION);
+
+SolrInputDocument preMergeDoc = new SolrInputDocument(dummyBlock);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, dummyBlock);
+assertEquals("merged document should have the same id", 
preMergeDoc.getFieldValue("id"), dummyBlock.getFieldValue("id"));
+assertDocContainsSubset(preMergeDoc, dummyBlock);
+assertDocContainsSubset(addedDoc, dummyBlock);
+assertDocContainsSubset(newChildDoc, (SolrInputDocument) ((List) 
dummyBlock.getFieldValues("child")).get(1));
+assertEquals(dummyBlock.getFieldValue("id"), 
dummyBlock.getFieldValue("id"));
+  }
+
+  @Test
+  public void testBlockAtomicQuantities() throws Exception {
+SolrInputDocument doc = sdoc("id", "1", "string_s", "root");
 addDoc(adoc(doc), "nested-rtg");
 
-BytesRef rootDocId = new BytesRef("1");
-SolrCore core = h.getCore();
-SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
-// assert block doc has child docs
-assertTrue(block.containsKey("child"));
+List<SolrInputDocument> docs = IntStream.range(10, 20).mapToObj(x -> 
sdoc("id", String.valueOf(x), "string_s", 
"child")).collect(Collectors.toList());
+doc = sdoc("id", "1", "children", Collections.singletonMap("add", 
docs));
+addAndGetVersion(doc, params("update.chain", "nested-rtg", "wt", 
"json"));
 
-assertJQ(req("q","id:1")
-,"/response/numFound==0"
+assertU(commit());
+
+assertJQ(req("q", "_root_:1"),
+"/response/numFound==11");
+
+assertJQ(req("q", "string_s:child", "fl", "*", "rows", "100"),
+"/response/numFound==10");
+
+// ensure updates work when block has more than 10 children
+for(int i = 10; i < 20; ++i) {
+  System.out.println("indexing " + i);
+  docs = IntStream.range(i * 10, (i * 10) + 5).mapToObj(x -> 
sdoc("id", String.valueOf(x), "string_s", 
"grandChild")).collect(Collectors.toList());
+  doc = sdoc("id", String.valueOf(i), "grandChildren", 
Collections.singletonMap("add", docs));
+  addAndGetVersion(doc, params("update.chain", "nested-rtg", "wt", 
"json"));
+  assertU(commit());
+}
+
+assertJQ(req("q", "id:114", "fl", "*", "rows", "100"),
--- End diff --

Why set the "fl" or "rows" in these queries?  Your assertion only checks 
numFound and not the content of those that were found.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-19 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r226709264
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -59,39 +52,147 @@ public void before() {
 
   @Test
   public void testMergeChildDoc() throws Exception {
-SolrInputDocument doc = new SolrInputDocument();
-doc.setField("id", "1");
-doc.setField("cat_ss", new String[]{"aaa", "ccc"});
-doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+
+SolrInputDocument dummyBlock = sdoc("id", "1",
+"cat_ss", new ArrayList<>(Arrays.asList("aaa", "ccc")),
+"_root_", "1", "child", new ArrayList<>(sdocs(addedDoc)));
+dummyBlock.removeField(VERSION);
+
+SolrInputDocument preMergeDoc = new SolrInputDocument(dummyBlock);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, dummyBlock);
+assertEquals("merged document should have the same id", 
preMergeDoc.getFieldValue("id"), dummyBlock.getFieldValue("id"));
+assertDocContainsSubset(preMergeDoc, dummyBlock);
+assertDocContainsSubset(addedDoc, dummyBlock);
+assertDocContainsSubset(newChildDoc, (SolrInputDocument) ((List) 
dummyBlock.getFieldValues("child")).get(1));
+assertEquals(dummyBlock.getFieldValue("id"), 
dummyBlock.getFieldValue("id"));
+  }
+
+  @Test
+  public void testBlockAtomicQuantities() throws Exception {
+SolrInputDocument doc = sdoc("id", "1", "string_s", "root");
 addDoc(adoc(doc), "nested-rtg");
 
-BytesRef rootDocId = new BytesRef("1");
-SolrCore core = h.getCore();
-SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
-// assert block doc has child docs
-assertTrue(block.containsKey("child"));
+List<SolrInputDocument> docs = IntStream.range(10, 20).mapToObj(x -> 
sdoc("id", String.valueOf(x), "string_s", 
"child")).collect(Collectors.toList());
+doc = sdoc("id", "1", "children", Collections.singletonMap("add", 
docs));
+addAndGetVersion(doc, params("update.chain", "nested-rtg", "wt", 
"json"));
 
-assertJQ(req("q","id:1")
-,"/response/numFound==0"
+assertU(commit());
+
+assertJQ(req("q", "_root_:1"),
+"/response/numFound==11");
+
+assertJQ(req("q", "string_s:child", "fl", "*", "rows", "100"),
+"/response/numFound==10");
+
+// ensure updates work when block has more than 10 children
--- End diff --

Why is 10 special?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7075) Clean up LegacyNumericUtils usage.

2018-10-19 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-7075.
--
   Resolution: Fixed
Fix Version/s: master (8.0)

Thanks Robert and everyone involved in this issue. Please reopen if I missed 
anything.

> Clean up LegacyNumericUtils usage.
> --
>
> Key: LUCENE-7075
> URL: https://issues.apache.org/jira/browse/LUCENE-7075
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Robert Muir
>Priority: Blocker
> Fix For: master (8.0)
>
>
> Tons of code is still on the deprecated LegacyNumericUtils. We will never be 
> able to remove these or even move them to somewhere better (like the 
> backwards jar) if we don't clean this up!
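
For context, a minimal sketch of the points-based replacement this cleanup moves toward, using the standard IntPoint APIs (the field name is illustrative, not taken from the issue):

{code:java}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.search.Query;

public class PointsMigrationSketch {
  public static void main(String[] args) {
    // Indexing: IntPoint replaces the deprecated LegacyIntField/LegacyNumericUtils encoding.
    Document doc = new Document();
    doc.add(new IntPoint("popularity", 42));
    doc.add(new StoredField("popularity", 42)); // points are index-only; store separately if needed

    // Querying: exact and range queries come straight from the point classes.
    Query exact = IntPoint.newExactQuery("popularity", 42);
    Query range = IntPoint.newRangeQuery("popularity", 10, 100);
    System.out.println(exact + " " + range);
  }
}
{code}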



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657094#comment-16657094
 ] 

Uwe Schindler commented on SOLR-12243:
--

The Lucene issue is about to be committed, so let's adapt the instanceof checks 
here (because it no longer creates SpanQueries for all types of phrases).

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657089#comment-16657089
 ] 

Uwe Schindler edited comment on LUCENE-8531 at 10/19/18 4:58 PM:
-

+1, please do this. I will then take care of the Solr issue. This is not fully 
related, but the Solr code depends on the structure of the Lucene queries produced 
and then reorders them with lots of instanceof checks, which is bad 
spaghetti code, but that's how it is.

I'd like to get a Lucene class that generates edismax-like queries: it would 
parse some text, create bigram and trigram shingles out of it, and let a 
"match" query assign a higher score to hits where the terms appear in order and 
close to each other (i.e., give a higher weight when bigrams or trigrams from the 
query string appear close together in the document). A lot of people use this, but 
currently it only works with Solr's edismax, and whenever you want to use it from 
another custom Solr query parser or a custom Elasticsearch query parser, you have 
to reimplement the shingling.


was (Author: thetaphi):
+1, please do this. I will then take care of the Solr issue. This is not fully 
related, but the Solr code depends on the structure of Lucene queries produced 
and then reorders them with lots of instanceof checks. Which is bad 
spaghetti-code, but that's how it is.

I'd like to get a Lucene class that allows you to generate edismax-like queries 
that parses some text, creates bigram and trigram shingles out of it to allow a 
"match" query to assign a higher score for hits when you have terms in order 
and close to each other (put a higher precedence if bigrams or trigrams in your 
query string are close together in the document). A lot of people use this, but 
currently it only works with Solr's edismax and whenever you want to use this 
for other query parser or elasticsearch, you have to reimplement the shingling.

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657089#comment-16657089
 ] 

Uwe Schindler commented on LUCENE-8531:
---

+1, please do this. I will then take care of the Solr issue. This is not fully 
related, but the Solr code depends on the structure of the Lucene queries produced 
and then reorders them with lots of instanceof checks, which is bad 
spaghetti code, but that's how it is.

I'd like to get a Lucene class that generates edismax-like queries: it would 
parse some text, create bigram and trigram shingles out of it, and let a 
"match" query assign a higher score to hits where the terms appear in order and 
close to each other (i.e., give a higher weight when bigrams or trigrams from the 
query string appear close together in the document). A lot of people use this, but 
currently it only works with Solr's edismax, and whenever you want to use it from 
another query parser or from Elasticsearch, you have to reimplement the shingling.
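
A minimal sketch of what such a reusable pf2-style helper might look like; this is an assumption about the proposed idea, not code that exists in Lucene today (field, text and slop values are placeholders):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;

public class BigramPhraseBoostSketch {

  /** Builds one sloppy PhraseQuery per adjacent token pair, pf2-style. */
  static Query bigramBoosts(Analyzer analyzer, String field, String text, int slop) throws Exception {
    List<String> terms = new ArrayList<>();
    try (TokenStream ts = analyzer.tokenStream(field, text)) {
      CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        terms.add(termAtt.toString());
      }
      ts.end();
    }
    BooleanQuery.Builder bq = new BooleanQuery.Builder();
    for (int i = 0; i + 1 < terms.size(); i++) {
      PhraseQuery.Builder pq = new PhraseQuery.Builder();
      pq.add(new Term(field, terms.get(i)));
      pq.add(new Term(field, terms.get(i + 1)));
      pq.setSlop(slop);
      bq.add(pq.build(), BooleanClause.Occur.SHOULD);
    }
    return bq.build();
  }
}
{code}

It deliberately ignores the graph/multi-term-synonym cases discussed above; handling those consistently is exactly what a shared builder in Lucene would have to solve.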

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657074#comment-16657074
 ] 

Uwe Schindler edited comment on SOLR-12243 at 10/19/18 4:51 PM:


That's what I mean, it's still linked together. The main bug is still in 
Lucene, because the Lucene query builder creates a query that does not 
correctly implement span queries on multi-term synonyms; it uses the 
wrong query type. The issues here come from the fact that dismax relies 
on the internal implementation of the Lucene code, which is not a good thing. 
The Solr code should not do this; instead we should add something to 
Lucene that can create those pf auto-phrase queries. I was missing that in a 
query parser of my own, too. So basically it would be good to have an additional 
query builder method in Lucene that analyzes some text and then builds 
configurable shingles that are connected with span/phrase queries using a slop. 
This code should not depend on the structure of a span/boolean query that was 
parsed before.

I'd like to wait a few days until the Lucene issue is solved and then review 
the changes here and adapt them as necessary. In the longer term, I'd like to 
get rid of the query-instanceof spaghetti code and move the query construction 
for dismax-like queries using term shingles (bigrams, trigrams) to a separate 
builder class, so it's more reusable.


was (Author: thetaphi):
That's waht I mean, it's still linked together. The main bug is still in 
Lucene, because the Lucene Query builder creates a query that does not 
correctly implement span queries on multi-term synonyms, because it uses the 
wrong query type. The issues here are coming from the fact that dismax relies 
on the interal implementation of the lucene code, which is not a good thing. 
The solr code should not do this and instead we should add something into 
Lucene that can create those pf auto-phrase queries. I was missing that in an 
own query parser, too. So basically it would be good to have some additional 
query builder method in Lucene that analyzes some text and then builds 
configureable shingles that are connected with span/phrase using a slop. This 
code should not depend on the structure of a span/boolean query that was parsed 
before.

I'd like to wait a few days until the Lucene issue is solved and then review 
the changes here and adapt them as necessary. On the longer term, I'd like to 
get rid of the query instance of shingling and move the query construction for 
dismax-like queries to a separate builder class. So it's better resuseable.

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657074#comment-16657074
 ] 

Uwe Schindler commented on SOLR-12243:
--

That's what I mean, it's still linked together. The main bug is still in 
Lucene, because the Lucene query builder creates a query that does not 
correctly implement span queries on multi-term synonyms; it uses the 
wrong query type. The issues here come from the fact that dismax relies 
on the internal implementation of the Lucene code, which is not a good thing. 
The Solr code should not do this; instead we should add something to 
Lucene that can create those pf auto-phrase queries. I was missing that in a 
query parser of my own, too. So basically it would be good to have an additional 
query builder method in Lucene that analyzes some text and then builds 
configurable shingles that are connected with span/phrase queries using a slop. 
This code should not depend on the structure of a span/boolean query that was 
parsed before.

I'd like to wait a few days until the Lucene issue is solved and then review 
the changes here and adapt them as necessary. In the longer term, I'd like to 
get rid of the query instanceof checks and move the query construction for 
dismax-like queries to a separate builder class, so it's more reusable.

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2942 - Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2942/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:42309_solr, 
127.0.0.1:43203_solr, 127.0.0.1:46137_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"raceDeleteReplica_true_shard1_replica_n2", 
  "base_url":"https://127.0.0.1:46497/solr;,   
"node_name":"127.0.0.1:46497_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46497/solr;,   
"node_name":"127.0.0.1:46497_solr",   "state":"down",   
"type":"NRT"}, "core_node3":{   
"core":"raceDeleteReplica_true_shard1_replica_n1",   
"base_url":"https://127.0.0.1:46137/solr;,   
"node_name":"127.0.0.1:46137_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:42309_solr, 127.0.0.1:43203_solr, 127.0.0.1:46137_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"https://127.0.0.1:46497/solr;,
  "node_name":"127.0.0.1:46497_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"https://127.0.0.1:46497/solr;,
  "node_name":"127.0.0.1:46497_solr",
  "state":"down",
  "type":"NRT"},
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"https://127.0.0.1:46137/solr;,
  "node_name":"127.0.0.1:46137_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([B750B988AB893558:DD46D858C37B7F92]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:229)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (LUCENE-8531) QueryBuilder hard-codes inOrder=true for generated sloppy span near queries

2018-10-19 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657043#comment-16657043
 ] 

Jim Ferenczi commented on LUCENE-8531:
--

Since this is a bug I am planning to commit the proposed patch soon unless 
there are objections. It will be a bit slower than the current version as 
[~thetaphi] outlined but I think consistency is more important here.

> QueryBuilder hard-codes inOrder=true for generated sloppy span near queries
> ---
>
> Key: LUCENE-8531
> URL: https://issues.apache.org/jira/browse/LUCENE-8531
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: LUCENE-8531.patch
>
>
> QueryBuilder.analyzeGraphPhrase() generates SpanNearQuery-s with passed-in 
> phraseSlop, but hard-codes inOrder ctor param as true.
> Before multi-term synonym support and graph token streams introduced the 
> possibility of generating SpanNearQuery-s, QueryBuilder generated 
> (Multi)PhraseQuery-s, which always interpret slop as allowing reordering 
> edits.  Solr's eDismax query parser generates phrase queries when its 
> pf/pf2/pf3 params are specified, and when multi-term synonyms are used with a 
> graph-aware synonym filter, SpanNearQuery-s are generated that require 
> clauses to be in order; unlike with (Multi)PhraseQuery-s, reordering edits 
> are not allowed, so this is a kind of regression.  See SOLR-12243 for edismax 
> pf/pf2/pf3 context.  (Note that the patch on SOLR-12243 also addresses 
> another problem that blocks eDismax from generating queries *at all* under 
> the above-described circumstances.)
> I propose adding a new analyzeGraphPhrase() method that allows configuration 
> of inOrder, which would allow eDismax to specify inOrder=false.  The existing 
> analyzeGraphPhrase() method would remain with its hard-coded inOrder=true, so 
> existing client behavior would remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1725 - Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1725/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/190/consoleText

[repro] Revision: 2f61f96bfae9d97e3536305e49865433e28737c2

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testVersionsAreReturned -Dtests.seed=7D130DC52E46D34A 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=is-IS -Dtests.timezone=America/Phoenix -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
1a8188d92b8148f2d937bd038f48f103526fcbcc
[repro] git fetch
[repro] git checkout 2f61f96bfae9d97e3536305e49865433e28737c2

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2572 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=7D130DC52E46D34A -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=America/Phoenix 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 861 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 1a8188d92b8148f2d937bd038f48f103526fcbcc

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-7.x - Build # 960 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/960/

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerRolesTest.testOverseerRole

Error Message:
Timed out waiting for overseer state change

Stack Trace:
java.lang.AssertionError: Timed out waiting for overseer state change
at 
__randomizedtesting.SeedInfo.seed([FC64A1B454A42F26:1DAF5C206F1719F7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerRolesTest.waitForNewOverseer(OverseerRolesTest.java:63)
at 
org.apache.solr.cloud.OverseerRolesTest.testOverseerRole(OverseerRolesTest.java:143)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13895 lines...]
   [junit4] Suite: org.apache.solr.cloud.OverseerRolesTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J1/temp/solr.cloud.OverseerRolesTest_FC64A1B454A42F26-001/init-core-data-001
   

[jira] [Commented] (SOLR-12259) Robustly upgrade indexes

2018-10-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16657001#comment-16657001
 ] 

Tomás Fernández Löbbe commented on SOLR-12259:
--

Thanks for opening this Jira, Erick; upgrading is a major issue for us too. 
+1 for not mixing things with the optimize command. I think that one has a 
different objective (regardless of whether it in fact re-writes segments). How do you 
feel about making a new request handler instead, one that we could include in 
solrconfig (or make implicit, like /update)? That way we don't pollute the 
UpdateHandler with upgrading logic and don't mix the APIs with the different kinds 
of parameters that this handler could accept. Something like:

{{/solr/collection_or_core/upgrade}}

In this endpoint we could include logic for upgrading Lucene/Solr versions (for now 
maybe only upgrading the index, but maybe more things in the future; things 
like the “MIGRATESTATEFORMAT” collection API could belong here too). I’m also 
thinking this endpoint could provide information about upgrades (as a sort 
of dry-run option), like:

{{/solr/collection_or_core/upgrade?action=status}}
And return something like (this is just a random example):
{code}
index:
    creationVersion:Lucene 6.3
    codecVersion:Lucene 6.1
segment versions:[...]
canUpgrade: YES
Solr:
runningVersions: [7.3,6.3] 
clusterStateFormat: legacy
...
{code}

Then the user could do 
{{/solr/collection_or_core/upgrade?action=upgradeIndex}} or something like that 
to start an upgrade of the index; Solr would then check whatever it can to see 
if the upgrade is possible, make core/collection calls, or do whatever else it needs 
to do to upgrade.
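
A hedged SolrJ sketch of how a client might call the proposed endpoint; {{/upgrade}} and {{action=status}} are hypothetical here and do not exist in Solr today:

{code:java}
import java.util.Collections;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.MapSolrParams;

public class UpgradeStatusSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection_or_core").build()) {
      // Hypothetical dry-run style call: GET /solr/collection_or_core/upgrade?action=status
      GenericSolrRequest status = new GenericSolrRequest(
          SolrRequest.METHOD.GET, "/upgrade",
          new MapSolrParams(Collections.singletonMap("action", "status")));
      System.out.println(status.process(client).getResponse());
    }
  }
}
{code}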


> Robustly upgrade indexes
> 
>
> Key: SOLR-12259
> URL: https://issues.apache.org/jira/browse/SOLR-12259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> The general problem statement is that the current upgrade path is trappy and 
> cumbersome.  It would be a great help "in the field" to make the upgrade 
> process less painful.
> Additionally one of the most common things users want to do is enable 
> docValues, but currently they often have to re-index.
> Issues:
> 1> if I upgrade from 5x to 6x and then 7x, there's no guarantee that when I go 
> to 7x all the segments have been rewritten in 6x format. Say I have a segment 
> at max size that has no deletions. It'll never be rewritten until it has 
> deleted docs. And perhaps 50% deleted docs currently.
> 2> IndexUpgraderTool explicitly does a forcemerge to 1 segment, which is bad.
> 3> in a large distributed system, running IndexUpgraderTool on all the nodes 
> is cumbersome even if <2> is acceptable.
> 4> Users who realize specifying docValues on a field would be A Good Thing 
> have to re-index. We have UninvertDocValuesMergePolicyFactory. Wouldn't it be 
> nice to be able to have this done all at once without forceMerging to one 
> segment.
> Proposal:
> Somehow avoid the above. Currently LUCENE-7976 is a start in that direction. 
> It will make TMP respect max segments size so can avoid forceMerges that 
> result in one segment. What it does _not_ do is rewrite segments with zero 
> (or a small percentage) deleted documents.
> So it  doesn't seem like a huge stretch to be able to specify to TMP the 
> option to rewrite segments that have no deleted documents. Perhaps a new 
> parameter to optimize?
> This would likely require another change to TMP or whatever.
> So upgrading to a new solr would look like
> 1> install the new Solr
> 2> execute 
> "http://node:port/solr/collection_or_core/update?optimize=true=true;
> What's not clear to me is whether we'd require 
> UninvertDocValuesMergePolicyFactory to be specified and wrap TMP or not.
> Anyway, let's discuss. I'll create yet another LUCENE JIRA for TMP to rewrite 
> all segments, which I'll link.
> I'll also link several other JIRAs in here, they're coalescing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8921) Potential NPE in pivot facet

2018-10-19 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646667#comment-16646667
 ] 

Isabelle Giguere edited comment on SOLR-8921 at 10/19/18 4:05 PM:
--

Solr 7.5.0 :
Reproduced with a query on a text field, with an alias, even if each 
collection in the alias responds without error individually:
'fileName' : text field, split on '.', single valued
'author' : text field, full analysis, multivalued 
'fileType' : text field, lower cased only, single valued
- collection=de_alias=author=fileName = NPE
- collection=lang_de=author=fileName = response OK
- collection=emptyText=author=fileName = response OK
- collection=de_alias=author=fileType = response OK

I'll try to find time to devise a unit test to illustrate.

Alternatively to this patch on PivotFacetProcessor, 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could 
return DocSet.EMPTY if input Query is null, but that would have repercussions 
everywhere.


was (Author: igiguere):
Solr 7.5.0 :
Reproduced with a query on a text field, with an alias, even if each 
collections in the alias respond without error individually
'fileName' : text field, single valued
'author' : text field, multivalued 
'fileType' : string field, single valued
- collection=de_alias=author=fileName = NPE
- collection=lang_de=author=fileName = response OK
- collection=emptyText=author=fileName = response OK
- collection=de_alias=author=fileType = response OK

I'll try to find time to devise a unit test to illustrate.

Alternatively to this patch on PivotFacetProcessor, 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could 
return DocSet.EMPTY if input Query is null, but that would have repercussions 
everywhere.

> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8921.patch, SOLR-8921.patch, 
> SOLR-8921_tag_7.5.0.patch
>
>
> For some queries distributed over multiple collections, I've hit a NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, query 
> generated to compute pivot on document sub set is null, causing the NPE on 
> lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> 

[jira] [Commented] (SOLR-12884) Admin UI, admin/luke and *Point fields

2018-10-19 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656937#comment-16656937
 ] 

Alexandre Rafalovitch commented on SOLR-12884:
--

I am also seeing similar behavior for Solr 7.5's treatment of _str fields 
generated by the schemaless mode. I am not sure if the cause is related or not.

> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that if you go to the schema browser, 
> click on, say, a pint field, and then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much of interest; here's the response for a 
> pint vs. a tint field:
> "popularity":\{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information, though? Say, for the Admin UI 
> for points-based fields, "terms cannot be shown for points-based fields" or some such?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12887) TRA: document re-dating (question, test, docs)

2018-10-19 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656929#comment-16656929
 ] 

Christine Poerschke commented on SOLR-12887:


Attached draft illustrative patch. Intuitively:
* an add/update to redate the document within its existing collection should be 
just fine, but
* an add/update that changes the timestamp to a value outside the existing 
collection would result in two documents in two collections.
* If two documents are not what is intended by the client then they can do a 
delete followed by an add instead of just an add/update alone.

Interestingly, the draft patch in its current form suggests that query behaviour 
varies when two TRA collections have a document with the same key, i.e. 
sometimes it says "found 1" and sometimes "found 2", even if 
"cache=false" was sent in the query. It could be related to which URL the test's 
CloudSolrClient used for the requests; I haven't yet looked into it further.
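
A hedged SolrJ sketch of the "delete followed by an add" option from the list above; the alias name, document id and timestamp field are hypothetical:

{code:java}
import java.util.Collections;
import java.util.Date;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class TraRedateSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      // Remove the old copy first so the re-dated add cannot leave two documents
      // with the same id in two different underlying TRA collections.
      client.deleteById("myTimeRoutedAlias", "doc-1");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      doc.addField("timestamp_dt", new Date()); // new timestamp routes the doc to the right collection
      client.add("myTimeRoutedAlias", doc);
      client.commit("myTimeRoutedAlias");
    }
  }
}
{code}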


> TRA: document re-dating (question, test, docs)
> --
>
> Key: SOLR-12887
> URL: https://issues.apache.org/jira/browse/SOLR-12887
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12887.patch
>
>
> This ticket is sort of a combination of a question with small test and 
> documentation additions.
> After a document is added, can subsequent updates to it include a change of 
> its timestamp? What happens if a timestamp change logically 'moves' the 
> document out of its original collection?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12887) TRA: document re-dating (question, test, docs)

2018-10-19 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12887:
---
Attachment: SOLR-12887.patch

> TRA: document re-dating (question, test, docs)
> --
>
> Key: SOLR-12887
> URL: https://issues.apache.org/jira/browse/SOLR-12887
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12887.patch
>
>
> This ticket is sort of a combination of a question with small test and 
> documentation additions.
> After a document is added, can subsequent updates to it include a change of 
> its timestamp? What happens if a timestamp change logically 'moves' the 
> document out of its original collection?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12887) TRA: document re-dating (question, test, docs)

2018-10-19 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12887:
--

 Summary: TRA: document re-dating (question, test, docs)
 Key: SOLR-12887
 URL: https://issues.apache.org/jira/browse/SOLR-12887
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke


This ticket is sort of a combination of a question with small test and 
documentation additions.

After a document is added, can subsequent updates to it include a change of its 
timestamp? What happens if a timestamp change logically 'moves' the document 
out of its original collection?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-19 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5004:
---
Attachment: (was: SOLR-5004.04.patch)

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also put an upper bound on it.
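For illustration only, the call shape under discussion might look like the sketch below. 
SPLITSHARD with collection and shard is the existing Collections API; the numSubShards 
parameter is just a placeholder name for the proposed n-way option, not an existing one:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SplitShardSketch {
  public static void main(String[] args) throws Exception {
    // action/collection/shard are the existing SPLITSHARD parameters;
    // "numSubShards=3" stands in for the proposed (hypothetical) n-way split option.
    URL url = new URL("http://localhost:8983/solr/admin/collections"
        + "?action=SPLITSHARD&collection=collection1&shard=shard1&numSubShards=3");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      in.lines().forEach(System.out::println); // print the Collections API response
    }
  }
}
{code}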



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-10-19 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-5004:
---
Attachment: SOLR-5004.04.patch

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, 
> SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also put an upper bound on it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 348 - Failure

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/348/

No tests ran.

Build Log:
[...truncated 23295 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2434 links (1986 relative) to 3182 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.6.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Comment Edited] (SOLR-8921) Potential NPE in pivot facet

2018-10-19 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646667#comment-16646667
 ] 

Isabelle Giguere edited comment on SOLR-8921 at 10/19/18 2:58 PM:
--

Solr 7.5.0 :
Reproduced with a query on a text field, with an alias, even though each 
collection in the alias responds without error individually
'fileName' : text field, single valued
'author' : text field, multivalued 
'fileType' : string field, single valued
- collection=de_alias=author=fileName = NPE
- collection=lang_de=author=fileName = response OK
- collection=emptyText=author=fileName = response OK
- collection=de_alias=author=fileType = response OK

I'll try to find time to devise a unit test to illustrate.

As an alternative to this patch on PivotFacetProcessor, 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could 
return DocSet.EMPTY if the input Query is null, but that would have repercussions 
everywhere.
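A minimal sketch of that alternative, purely for illustration (it assumes the DocSet.EMPTY 
constant named above and is not the attached patch):

{code:java}
import java.io.IOException;
import org.apache.lucene.search.Query;
import org.apache.solr.search.DocSet;
import org.apache.solr.search.SolrIndexSearcher;

/** Null-safe wrapper around the lookup that NPEs in the stack trace below (sketch only). */
public class NullSafePositiveDocSet {
  public static DocSet get(SolrIndexSearcher searcher, Query q) throws IOException {
    if (q == null) {
      return DocSet.EMPTY; // treat a null pivot query as matching nothing, per the suggestion above
    }
    return searcher.getPositiveDocSet(q); // unchanged behaviour for non-null queries
  }
}
{code}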


was (Author: igiguere):
Solr 7.5.0 :
Reproduced with a query on a text field, with an alias, even if each 
collections in the alias respond without error individually
'fileName' and 'author' are text field, 'fileType' is a string field
- collection=de_alias=author=fileName = NPE
- collection=lang_de=author=fileName = response OK
- collection=emptyText=author=fileName = response OK
- collection=de_alias=author=fileType = response OK

I'll try to find time to devise a unit test to illustrate.

Alternatively to this patch on PivotFacetProcessor, 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could 
return DocSet.EMPTY if input Query is null, but that would have repercussions 
everywhere.

> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8921.patch, SOLR-8921.patch, 
> SOLR-8921_tag_7.5.0.patch
>
>
> For some queries distributed over multiple collections, I've hit a NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, query 
> generated to compute pivot on document sub set is null, causing the NPE on 
> lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> 

[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656893#comment-16656893
 ] 

Elizabeth Haubert commented on SOLR-12243:
--

Why do we need to wait for the Lucene patch if that has been backed out of
this JIRA?

I thought the ticket was split because there are two distinct issues: one is that
the clauses are missing entirely, which would be handled here, and the
second is that, when the span clauses are generated with the attached
patch, the semantics of phrase clauses with and without multi-term
synonyms differ without the Lucene change.

Depending on how reordering in span queries is implemented in Lucene, there
may need to be additional logic in edismax to take advantage of it, but
presumably that would need another issue?





> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656862#comment-16656862
 ] 

Uwe Schindler commented on SOLR-12243:
--

We have to wait for the Lucene issue to be solved. It also affects 
Elasticsearch.

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8921) Potential NPE in pivot facet

2018-10-19 Thread Isabelle Giguere (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646667#comment-16646667
 ] 

Isabelle Giguere edited comment on SOLR-8921 at 10/19/18 2:25 PM:
--

Solr 7.5.0 :
Reproduced with a query on a text field, with an alias, even though each 
collection in the alias responds without error individually
'fileName' and 'author' are text field, 'fileType' is a string field
- collection=de_alias=author=fileName = NPE
- collection=lang_de=author=fileName = response OK
- collection=emptyText=author=fileName = response OK
- collection=de_alias=author=fileType = response OK

I'll try to find time to devise a unit test to illustrate.

As an alternative to this patch on PivotFacetProcessor, 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could 
return DocSet.EMPTY if the input Query is null, but that would have repercussions 
everywhere.


was (Author: igiguere):
Solr 7.5.0 :
Reproduced with a query on an alias and text field, even if each collections in 
the alias respond without error individually
'name' and 'author' are text field, 'fileType' is a string field
- collection=de_alias=author=name = NPE
- collection=lang_de=author=name = respone OK
- collection=emptyText=author=name = respone OK
- collection=de=author=fileType = respone OK

I'll try to find time to devise a unit test to illustrate.

Alternatively to this patch on PivotFacetProcessor, 
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(Query q) could 
return DocSet.EMPTY if input Query is null, but that would have repercussions 
everywhere.

> Potential NPE in pivot facet
> 
>
> Key: SOLR-8921
> URL: https://issues.apache.org/jira/browse/SOLR-8921
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.4.1
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8921.patch, SOLR-8921.patch, 
> SOLR-8921_tag_7.5.0.patch
>
>
> For some queries distributed over multiple collections, I've hit a NPE when 
> SolrIndexSearcher tries to fetch results from cache. Basically, query 
> generated to compute pivot on document sub set is null, causing the NPE on 
> lookup.
> 2016-03-28 11:34:58.361 ERROR (qtp268141378-751) [c:otif_fr s:shard1 
> r:core_node1 x:otif_fr_shard1_replica1] o.a.s.h.RequestHandlerBase 
> java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>   at 
> org.apache.solr.util.ConcurrentLFUCache.get(ConcurrentLFUCache.java:92)
>   at org.apache.solr.search.LFUCache.get(LFUCache.java:153)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:940)
>   at 
> org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:2098)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubsetSize(PivotFacetProcessor.java:356)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:219)
>   at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:167)
>   at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:263)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:273)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:223)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7572 - Still unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7572/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testZipFDistribution

Error Message:
Zipf distribution not descending!!!

Stack Trace:
java.lang.Exception: Zipf distribution not descending!!!
at 
__randomizedtesting.SeedInfo.seed([C6C581C49ED1EB27:E270ECE08979E30F]:0)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testZipFDistribution(MathExpressionTest.java:2928)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16343 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-solrj\test\J1\temp\solr.client.solrj.io.stream.MathExpressionTest_C6C581C49ED1EB27-001\init-core-data-001
   [junit4]   

[jira] [Commented] (SOLR-12259) Robustly upgrade indexes

2018-10-19 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656849#comment-16656849
 ] 

David Smiley commented on SOLR-12259:
-

I like the proposal in your last comment here -- a new "rewriteIndex=true" rather 
than conflating this with "optimize".  I'm not sure it should be a parameter, 
since it's so rare to do this and it's not something you'd do on the fly.  It 
seems to me it's logically an admin operation.  Heck, perhaps optimize ought 
to be an admin operation too :-)

> Robustly upgrade indexes
> 
>
> Key: SOLR-12259
> URL: https://issues.apache.org/jira/browse/SOLR-12259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> The general problem statement is that the current upgrade path is trappy and 
> cumbersome.  It would be a great help "in the field" to make the upgrade 
> process less painful.
> Additionally one of the most common things users want to do is enable 
> docValues, but currently they often have to re-index.
> Issues:
> 1> if I upgrade from 5x to 6x and then 7x, there's no guarantee that when I go 
> to 7x all the segments have been rewritten in 6x format. Say I have a segment 
> at max size that has no deletions. It'll never be rewritten until it has 
> deleted docs (and currently perhaps 50% deleted docs).
> 2> IndexUpgraderTool explicitly does a forcemerge to 1 segment, which is bad.
> 3> in a large distributed system, running IndexUpgraderTool on all the nodes 
> is cumbersome even if <2> is acceptable.
> 4> Users who realize specifying docValues on a field would be A Good Thing 
> have to re-index. We have UninvertDocValuesMergePolicyFactory. Wouldn't it be 
> nice to be able to have this done all at once without forceMerging to one 
> segment?
> Proposal:
> Somehow avoid the above. Currently LUCENE-7976 is a start in that direction. 
> It will make TMP respect max segments size so can avoid forceMerges that 
> result in one segment. What it does _not_ do is rewrite segments with zero 
> (or a small percentage) deleted documents.
> So it  doesn't seem like a huge stretch to be able to specify to TMP the 
> option to rewrite segments that have no deleted documents. Perhaps a new 
> parameter to optimize?
> This would likely require another change to TMP or whatever.
> So upgrading to a new solr would look like
> 1> install the new Solr
> 2> execute 
> "http://node:port/solr/collection_or_core/update?optimize=true=true;
> What's not clear to me is whether we'd require 
> UninvertDocValuesMergePolicyFactory to be specified and wrap TMP or not.
> Anyway, let's discuss. I'll create yet another LUCENE JIRA for TMP to rewrite 
> all segments, which I'll link.
> I'll also link several other JIRAs in here, they're coalescing.
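(For context on <2>/<3> above: today the per-index upgrade boils down to something like 
the sketch below, run separately against every core's index directory; the path is 
illustrative.)

{code:java}
import java.nio.file.Paths;
import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class UpgradeOneIndex {
  public static void main(String[] args) throws Exception {
    // Equivalent of running the IndexUpgrader tool on a single core's index directory.
    // Internally this force-merges, which is the behaviour <2> calls out as bad.
    try (Directory dir = FSDirectory.open(
        Paths.get("/var/solr/data/collection1_shard1_replica1/data/index"))) {
      new IndexUpgrader(dir).upgrade();
    }
  }
}
{code}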



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12884) Admin UI, admin/luke and *Point fields

2018-10-19 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656815#comment-16656815
 ] 

David Smiley commented on SOLR-12884:
-

bq. WDYT about putting in some useful information though. 

+1   Your suggested text is fine.

> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that if you go to the schema browser, 
> click on, say, a pint field, and then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much of interest; here's the response for a 
> pint vs. a tint field:
> "popularity":\{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields do, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information though? Say, for the Admin UI 
> for points-based fields, "terms cannot be shown for points-based fields" or some such?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Elizabeth Haubert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656748#comment-16656748
 ] 

Elizabeth Haubert commented on SOLR-12243:
--

There are also tests included with the patch.




> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.0

2018-10-19 Thread Erick Erickson
+1, this gives us all a chance to prioritize getting the blockers out
of the way in a careful manner.
On Fri, Oct 19, 2018 at 7:56 AM jim ferenczi  wrote:
>
> +1 too. With this new perspective we could create the branch just after the 
> 7.6 release and target the 8.0 release for January 2019, which gives almost 3 
> months to finish the blockers?
>
> On Thu, Oct 18, 2018 at 11:56 PM David Smiley  wrote:
>>
>> +1 to a 7.6 —lots of stuff in there
>> On Thu, Oct 18, 2018 at 4:47 PM Nicholas Knize  wrote:
>>>
>>> If we're planning to postpone cutting an 8.0 branch until a few weeks from 
>>> now then I'd like to propose (and volunteer to RM) a 7.6 release targeted 
>>> for late November or early December (following the typical 2 month release 
>>> pattern). It feels like this might give a little breathing room for 
>>> finishing up 8.0 blockers? And looking at the change log there appears to be 
>>> a healthy list of features, bug fixes, and improvements to both Solr and 
>>> Lucene that warrant a 7.6 release? Personally I wouldn't mind releasing the 
>>> LatLonShape encoding changes in LUCENE-8521 and selective indexing work 
>>> done in LUCENE-8496. Any objections or thoughts?
>>>
>>> - Nick
>>>
>>>
>>> On Thu, Oct 18, 2018 at 5:32 AM Đạt Cao Mạnh  
>>> wrote:

 Thanks Cassandra and Jim,

 I created a blocker issue for Solr 8.0, SOLR-12883. Currently the jira/http2 
 branch has a draft, immature implementation of SPNEGO authentication 
 which is just enough to make the tests pass; this implementation will be removed 
 when SOLR-12883 gets resolved. Therefore I don't see any problem with 
 merging jira/http2 into the master branch in the next week.

 On Thu, Oct 18, 2018 at 2:33 AM jim ferenczi  
 wrote:
>
> > But if you're working with a different assumption - that just the 
> > existence of the branch does not stop Dat from still merging his work 
> > and the work being included in 8.0 - then I agree, waiting for him to 
> > merge doesn't need to stop the creation of the branch.
>
> Yes that's my reasoning. This issue is a blocker so we won't release 
> without it but we can work on the branch in the meantime and let other 
> people work on new features that are not targeted to 8.
>
> On Wed, Oct 17, 2018 at 8:51 PM Cassandra Targett  wrote:
>>
>> OK - I was making an assumption that the timeline for the first 8.0 RC 
>> would be ASAP after the branch is created.
>>
>> It's a common perception that making a branch freezes adding new 
>> features to the release, perhaps in an unofficial way (more of a 
>> courtesy than a rule). But if you're working with a different 
>> assumption - that just the existence of the branch does not stop Dat 
>> from still merging his work and the work being included in 8.0 - then I 
>> agree, waiting for him to merge doesn't need to stop the creation of the 
>> branch.
>>
>> If, however, once the branch is there people object to Dat merging his 
>> work because it's "too late", then the branch shouldn't be created yet 
>> because we want to really try to clear that blocker for 8.0.
>>
>> Cassandra
>>
>> On Wed, Oct 17, 2018 at 12:13 PM jim ferenczi  
>> wrote:
>>>
>>> Ok thanks for answering.
>>>
>>> > - I think Solr needs a couple more weeks since the work Dat is doing 
>>> > isn't quite done yet.
>>>
>>> We can wait a few more weeks to create the branch but I don't think 
>>> that one action (creating the branch) prevents the other (the work Dat 
>>> is doing).
>>> HTTP/2 is one of the blockers for the release but it can be done in 
>>> master and backported to the appropriate branch like any other feature? 
>>> We just need an issue with the blocker label to ensure that
>>> we don't miss it ;). Creating the branch early would also help in case 
>>> you don't want to release all the work at once in 8.0.0.
>>> Next week was just a proposal; what I meant was soon, because we target 
>>> a release in a few months.
>>>
>>>
>>> On Wed, Oct 17, 2018 at 5:52 PM Cassandra Targett  wrote:

 IMO next week is a bit too soon for the branch - I think Solr needs a 
 couple more weeks since the work Dat is doing isn't quite done yet.

 Solr needs the HTTP/2 work Dat has been doing, and he told me 
 yesterday he feels it is nearly ready to be merged into master. 
 However, it does require a new release of Jetty so Solr is able to 
 retain Kerberos authentication support (Dat has been working with that 
 team to help test the changes Jetty needs to support Kerberos with 
 HTTP/2). They should get that release out soon, but we are dependent 
 on them a little bit.

 He can hopefully reply with more details on his status and what else 

[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-10-19 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656723#comment-16656723
 ] 

Shawn Heisey commented on SOLR-12243:
-

The central problem in this issue was unclear to me, so I asked [~ehaubert] if 
she could explain it.  With that information, I was able to do a test that 
makes it pretty clear.

With a 7.5.0 example setup, I created a "title" field using the default 
text_general fieldType (which uses SynonymGraphFilter at query time), and 
included the two configs provided in the issue description (synonyms and 
handler).  Here are the parsed queries for a couple of examples.  The difference 
here is that one includes "dog", which has a multi-term synonym, and the other 
includes "rat", which has only single-term synonyms:

with q=allergic reaction dog
{noformat}
+Synonym(title:allergic title:hypersensitive))^100.0)~0.4
((title:reaction)^100.0)~0.4 ((title:canine (+title:canis +title:familiris)
(+title:k +title:9) title:dog)^100.0)~0.4)~3) () (title:\"(hypersensitive 
allergic) reaction\"~11)~0.4 ()
{noformat}

with q=allergic reaction rat
{noformat}
+Synonym(title:allergic title:hypersensitive))^100.0)~0.4
((title:reaction)^100.0)~0.4 ((Synonym(title:rat title:rattus))^100.0)~0.4)~3)
((title:\"(hypersensitive allergic) reaction (rattus rat)\"~20)^5000.0)~0.4
((title:\"(hypersensitive allergic) reaction\"~11)~0.4 (title:\"reaction 
(rattus rat)\"~11)~0.4)
((title:\"(hypersensitive allergic) reaction (rattus rat)\"~22)^1000.0)~0.4
{noformat}
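For anyone who wants to reproduce this comparison from SolrJ, a rough sketch follows; the 
qf/pf/pf2/pf3/tie values are inferred from the parsed queries above, and the collection 
name is just a placeholder:

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EdismaxPhraseDebugSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      SolrQuery q = new SolrQuery("allergic reaction dog"); // compare with "allergic reaction rat"
      q.set("defType", "edismax");
      q.set("qf", "title^100");
      q.set("pf", "title~20^5000");
      q.set("pf2", "title~11");
      q.set("pf3", "title~22^1000");
      q.set("tie", "0.4");
      q.set("debugQuery", "true");
      QueryResponse rsp = client.query("mycollection", q);
      // The parsed query (as shown above) appears in the debug section of the response.
      System.out.println(rsp.getDebugMap().get("parsedquery"));
    }
  }
}
{code}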


> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Uwe Schindler
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.0

2018-10-19 Thread jim ferenczi
+1 too. With this new perspective we could create the branch just after the
7.6 release and target the 8.0 release for January 2019, which gives almost
3 months to finish the blockers?

On Thu, Oct 18, 2018 at 11:56 PM David Smiley  wrote:

> +1 to a 7.6 —lots of stuff in there
> On Thu, Oct 18, 2018 at 4:47 PM Nicholas Knize  wrote:
>
>> If we're planning to postpone cutting an 8.0 branch until a few weeks
>> from now then I'd like to propose (and volunteer to RM) a 7.6 release
>> targeted for late November or early December (following the typical 2 month
>> release pattern). It feels like this might give a little breathing room for
>> finishing up 8.0 blockers? And looking at the change log there appears to be
>> a healthy list of features, bug fixes, and improvements to both Solr and
>> Lucene that warrant a 7.6 release? Personally I wouldn't mind releasing the
>> LatLonShape encoding changes in LUCENE-8521
>>  and selective
>> indexing work done in LUCENE-8496
>> . Any objections or
>> thoughts?
>>
>> - Nick
>>
>>
>> On Thu, Oct 18, 2018 at 5:32 AM Đạt Cao Mạnh 
>> wrote:
>>
>>> Thanks Cassandra and Jim,
>>>
>>> I created a blocker issue for Solr 8.0, SOLR-12883. Currently the
>>> jira/http2 branch has a draft, immature implementation of SPNEGO
>>> authentication which is just enough to make the tests pass; this implementation
>>> will be removed when SOLR-12883 gets resolved. Therefore I don't see any
>>> problem with merging jira/http2 into the master branch in the next week.
>>>
>>> On Thu, Oct 18, 2018 at 2:33 AM jim ferenczi 
>>> wrote:
>>>
 > But if you're working with a different assumption - that just the
 existence of the branch does not stop Dat from still merging his work and
 the work being included in 8.0 - then I agree, waiting for him to merge
 doesn't need to stop the creation of the branch.

 Yes that's my reasoning. This issue is a blocker so we won't release
 without it but we can work on the branch in the meantime and let other
 people work on new features that are not targeted to 8.

 On Wed, Oct 17, 2018 at 8:51 PM Cassandra Targett  wrote:

> OK - I was making an assumption that the timeline for the first 8.0 RC
> would be ASAP after the branch is created.
>
> It's a common perception that making a branch freezes adding new
> features to the release, perhaps in an unofficial way (more of a courtesy
> than a rule). But if you're working with a different assumption -
> that just the existence of the branch does not stop Dat from still merging
> his work and the work being included in 8.0 - then I agree, waiting for 
> him
> to merge doesn't need to stop the creation of the branch.
>
> If, however, once the branch is there people object to Dat merging his
> work because it's "too late", then the branch shouldn't be created yet
> because we want to really try to clear that blocker for 8.0.
>
> Cassandra
>
> On Wed, Oct 17, 2018 at 12:13 PM jim ferenczi 
> wrote:
>
>> Ok thanks for answering.
>>
>> > - I think Solr needs a couple more weeks since the work Dat is
>> doing isn't quite done yet.
>>
>> We can wait a few more weeks to create the branch but I don't think
>> that one action (creating the branch) prevents the other (the work Dat is
>> doing).
>> HTTP/2 is one of the blockers for the release but it can be done in
>> master and backported to the appropriate branch like any other feature? We
>> just need an issue with the blocker label to ensure that
>> we don't miss it ;). Creating the branch early would also help in
>> case you don't want to release all the work at once in 8.0.0.
>> Next week was just a proposal; what I meant was soon, because we
>> target a release in a few months.
>>
>>
>> On Wed, Oct 17, 2018 at 5:52 PM Cassandra Targett <
>> casstarg...@gmail.com> wrote:
>>
>>> IMO next week is a bit too soon for the branch - I think Solr needs
>>> a couple more weeks since the work Dat is doing isn't quite done yet.
>>>
>>> Solr needs the HTTP/2 work Dat has been doing, and he told me
>>> yesterday he feels it is nearly ready to be merged into master. 
>>> However, it
>>> does require a new release of Jetty so Solr is able to retain Kerberos
>>> authentication support (Dat has been working with that team to help test
>>> the changes Jetty needs to support Kerberos with HTTP/2). They should 
>>> get
>>> that release out soon, but we are dependent on them a little bit.
>>>
>>> He can hopefully reply with more details on his status and what else
>>> needs to be done.
>>>
>>> Once Dat merges his work, IMO we should leave it in master for a
>>> little bit. 

[jira] [Created] (SOLR-12886) QParserPlugin to support SolrCoreAware

2018-10-19 Thread Markus Jelsma (JIRA)
Markus Jelsma created SOLR-12886:


 Summary: QParserPlugin to support SolrCoreAware
 Key: SOLR-12886
 URL: https://issues.apache.org/jira/browse/SOLR-12886
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Markus Jelsma
 Fix For: master (8.0)
 Attachments: SOLR-12886.patch

Currently QParserPlugin does not support SolrCoreAware due to SOLR-8311. This 
adds support similar to SOLR-11735.
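For illustration, a core-aware parser plugin might eventually look something like the 
sketch below (class name and behaviour are made up; the attached patch defines the actual 
mechanism):

{code:java}
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;
import org.apache.solr.util.plugin.SolrCoreAware;

/** Sketch of a query parser plugin that would benefit from SolrCoreAware support. */
public class CoreAwareQParserPlugin extends QParserPlugin implements SolrCoreAware {
  private volatile SolrCore core;

  @Override
  public void inform(SolrCore core) {
    // With the proposed support, Solr would call this once the core is available.
    this.core = core;
  }

  @Override
  public QParser createParser(String qstr, SolrParams localParams, SolrParams params,
                              SolrQueryRequest req) {
    return new QParser(qstr, localParams, params, req) {
      @Override
      public Query parse() {
        // Trivial parser: real logic could consult the injected core (schema, resources, ...).
        return new MatchAllDocsQuery();
      }
    };
  }
}
{code}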




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12886) QParserPlugin to support SolrCoreAware

2018-10-19 Thread Markus Jelsma (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-12886:
-
Attachment: SOLR-12886.patch

> QParserPlugin to support SolrCoreAware
> --
>
> Key: SOLR-12886
> URL: https://issues.apache.org/jira/browse/SOLR-12886
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Markus Jelsma
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12886.patch
>
>
> Currently QParserPlugin does not support SolrCoreAware due to SOLR-8311. This 
> adds support similar to SOLR-11735.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23056 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23056/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost

Error Message:
org.apache.solr.common.SolrException: 

Stack Trace:
org.apache.solr.common.SolrException: org.apache.solr.common.SolrException: 
at 
__randomizedtesting.SeedInfo.seed([B8141FB1454B7787:701D14FC6A11201]:0)
at 
org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:78)
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139)
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.printState(ComputePlanActionTest.java:162)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:993)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: org.apache.solr.common.SolrException: 
at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:300)
at 

Re: Solr

2018-10-19 Thread Shawn Heisey

On 10/19/2018 1:41 AM, wulf wrote:

Hi, Mr./Ms. Can I ask you some questions about Solr?


This mailing list is designed for messages related to the *development* 
of Lucene and Solr.  If your question is a general one, or relates to 
*using* Solr, then it's off topic for this list.


The solr-user list is where most user questions belong, including 
questions about the development of user code related to Solr. There are 
about five times as many people subscribed to solr-user, compared to the 
subscriber count on this list.


If your question is about the source code for Lucene or Solr, then just 
jump right in and ask your question.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1722 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1722/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/350/consoleText

[repro] Revision: 804afbfd47cc8d86ceda6ea66f0afe304af1ad1b

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=B050E0AF0C6B0B20 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=pt-BR -Dtests.timezone=CET -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
1a8188d92b8148f2d937bd038f48f103526fcbcc
[repro] git fetch
[repro] git checkout 804afbfd47cc8d86ceda6ea66f0afe304af1ad1b

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2573 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=B050E0AF0C6B0B20 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=pt-BR -Dtests.timezone=CET -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 758 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 1a8188d92b8148f2d937bd038f48f103526fcbcc

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12873) A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST

2018-10-19 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656626#comment-16656626
 ] 

Lucene/Solr QA commented on SOLR-12873:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 13s{color} 
| {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 15s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.api.collections.TestHdfsCloudBackupRestore |
|   | solr.cloud.api.collections.TestLocalFSCloudBackupRestore |
|   | solr.client.solrj.impl.CloudSolrClientTest |
|   | solr.client.solrj.io.graph.GraphTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12873 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943994/SOLR-12873.patch |
| Optional Tests |  compile  javac  unit  ratsources  validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 1a8188d |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/206/artifact/out/patch-unit-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/206/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/206/testReport/ |
| modules | C: solr/core solr/solrj U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/206/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> A few solrconfig.xml still use LUCENE_CURRENT instead of LATEST
> ---
>
> Key: SOLR-12873
> URL: https://issues.apache.org/jira/browse/SOLR-12873
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12873.patch
>
>
> There are a few config files still referring to {{LUCENE_CURRENT}} instead of 
> {{LATEST}}. This is to remove them, following on from LUCENE-5901 a while 
> back.
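
For context, a minimal sketch (not the actual patch contents) of the kind of edit
SOLR-12873 describes, assuming the affected files use the usual
{{luceneMatchVersion}} element in solrconfig.xml; the exact files touched are
listed in the attached SOLR-12873.patch:

{noformat}
<!-- before: still references the deprecated LUCENE_CURRENT alias -->
<luceneMatchVersion>LUCENE_CURRENT</luceneMatchVersion>

<!-- after: uses the LATEST alias, in line with LUCENE-5901 -->
<luceneMatchVersion>LATEST</luceneMatchVersion>
{noformat}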



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2940 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2940/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC

56 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([310F502AC8F6EF20]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([310F502AC8F6EF20]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (LUCENE-8537) ant test command fails under lucene/tools

2018-10-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16656458#comment-16656458
 ] 

Uwe Schindler commented on LUCENE-8537:
---

I think we can ignore the Yetus build results. Let's commit this.

Uwe

> ant test command fails under lucene/tools
> -
>
> Key: LUCENE-8537
> URL: https://issues.apache.org/jira/browse/LUCENE-8537
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Priority: Minor
> Attachments: LUCENE-8537.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{ant test}} command executed under the {{lucene/tools}} folder fails
> because it does not have a {{junit.classpath}} property. Since the module has
> no test folder, we could override the {{-test}} and {{-check-totals}} targets.
> {noformat}
> bash-3.2$ pwd
> /Users/peter.somogyi/repos/lucene-solr/lucene/tools
> bash-3.2$ ant test
> Buildfile: /Users/peter.somogyi/repos/lucene-solr/lucene/tools/build.xml
> ...
> -test:
>[junit4]  says ciao! Master seed: 9A2ACC9B4A3C8553
> BUILD FAILED
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1567: The 
> following error occurred while executing this line:
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1092: 
> Reference junit.classpath not found.
> Total time: 1 second
> {noformat}
> I ran into this issue when I uploaded a patch that removed an import from
> this module. This triggered a module-level build during precommit that failed
> with this error.
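
As a minimal sketch of the override described above (assuming it lands in
{{lucene/tools/build.xml}}; the actual change is in the attached
LUCENE-8537.patch), empty targets defined in the module's own buildfile shadow
the same-named targets imported from {{common-build.xml}}, turning {{ant test}}
into a no-op for this test-less module:

{noformat}
<!-- lucene/tools has no tests, so neutralize the inherited test targets -->
<target name="-test"/>
<target name="-check-totals"/>
{noformat}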



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8537) ant test command fails under lucene/tools

2018-10-19 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned LUCENE-8537:
-

Assignee: Uwe Schindler

> ant test command fails under lucene/tools
> -
>
> Key: LUCENE-8537
> URL: https://issues.apache.org/jira/browse/LUCENE-8537
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Assignee: Uwe Schindler
>Priority: Minor
> Attachments: LUCENE-8537.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{ant test}} command executed under the {{lucene/tools}} folder fails
> because it does not have a {{junit.classpath}} property. Since the module has
> no test folder, we could override the {{-test}} and {{-check-totals}} targets.
> {noformat}
> bash-3.2$ pwd
> /Users/peter.somogyi/repos/lucene-solr/lucene/tools
> bash-3.2$ ant test
> Buildfile: /Users/peter.somogyi/repos/lucene-solr/lucene/tools/build.xml
> ...
> -test:
>[junit4]  says ciao! Master seed: 9A2ACC9B4A3C8553
> BUILD FAILED
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1567: The 
> following error occurred while executing this line:
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1092: 
> Reference junit.classpath not found.
> Total time: 1 second
> {noformat}
> I ran into this issue when I uploaded a patch that removed an import from
> this module. This triggered a module-level build during precommit that failed
> with this error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8537) ant test command fails under lucene/tools

2018-10-19 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16656451#comment-16656451
 ] 

Lucene/Solr QA commented on LUCENE-8537:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m  2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} tools in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8537 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944606/LUCENE-8537.patch |
| Optional Tests |  compile  javac  unit  ratsources  validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 1a8188d |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/107/testReport/ |
| modules | C: lucene lucene/tools U: lucene |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/107/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ant test command fails under lucene/tools
> -
>
> Key: LUCENE-8537
> URL: https://issues.apache.org/jira/browse/LUCENE-8537
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0)
>Reporter: Peter Somogyi
>Priority: Minor
> Attachments: LUCENE-8537.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{ant test}} command executed under the {{lucene/tools}} folder fails
> because it does not have a {{junit.classpath}} property. Since the module has
> no test folder, we could override the {{-test}} and {{-check-totals}} targets.
> {noformat}
> bash-3.2$ pwd
> /Users/peter.somogyi/repos/lucene-solr/lucene/tools
> bash-3.2$ ant test
> Buildfile: /Users/peter.somogyi/repos/lucene-solr/lucene/tools/build.xml
> ...
> -test:
>[junit4]  says ciao! Master seed: 9A2ACC9B4A3C8553
> BUILD FAILED
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1567: The 
> following error occurred while executing this line:
> /Users/peter.somogyi/repos/lucene-solr/lucene/common-build.xml:1092: 
> Reference junit.classpath not found.
> Total time: 1 second
> {noformat}
> I ran into this issue when I uploaded a patch that removed an import from
> this module. This triggered a module-level build during precommit that failed
> with this error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1671 - Still Unstable

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1671/

5 tests failed.
FAILED:  org.apache.solr.cloud.DeleteNodeTest.test

Error Message:
Could not load collection from ZK: deletenodetest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
deletenodetest_coll
at 
__randomizedtesting.SeedInfo.seed([BBDF5DA8FB090389:338B627255F56E71]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1321)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:737)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:148)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:131)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:117)
at org.apache.solr.cloud.DeleteNodeTest.test(DeleteNodeTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23055 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23055/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

16 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest

Error Message:
duplicate clusterId cloud1

Stack Trace:
java.lang.AssertionError: duplicate clusterId cloud1
at __randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93)
at 
org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.MultiSolrCloudTestCaseTest

Error Message:
duplicate clusterId cloud1

Stack Trace:
java.lang.AssertionError: duplicate clusterId cloud1
at __randomizedtesting.SeedInfo.seed([6DADDAF691F08EF7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.MultiSolrCloudTestCase.doSetupClusters(MultiSolrCloudTestCase.java:93)
at 
org.apache.solr.cloud.MultiSolrCloudTestCaseTest.setupClusters(MultiSolrCloudTestCaseTest.java:53)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

Solr

2018-10-19 Thread wulf
Hi Mr./Ms., may I ask you some questions about Solr?

--
Best Regards
_
Wu Lifa (吴立法)
T : +86 591 87303106
F : +86 591 87303103
M : +86 18120803980
Mail: w...@vcomcn.co
Jishitong (Fujian) Information Technology Co., Ltd. (集时通(福建)信息科技有限公司)
14/F, Building 4, Zone F, Software Park, Tongpan Road, Fuzhou (福州市铜盘路软件园F区4座14层)


 

 


[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1156 - Failure

2018-10-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1156/

No tests ran.

Build Log:
[...truncated 23268 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2435 links (1987 relative) to 3184 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:


[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2939 - Still Unstable!

2018-10-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2939/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseG1GC

35 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 1) 
Thread[id=4697, name=test-2061-thread-2-EventThread, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)2) 
Thread[id=4689, name=test-2061-thread-2, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)3) 
Thread[id=4696, name=test-2061-thread-2-SendThread(127.0.0.1:35457), 
state=TIMED_WAITING, group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1054)4) 
Thread[id=4688, name=test-2061-thread-1, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)5) 
Thread[id=4698, name=zkConnectionManagerCallback-1368-thread-1, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 
   1) Thread[id=4697, name=test-2061-thread-2-EventThread, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest]
at java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   2) Thread[id=4689, name=test-2061-thread-2, state=WAITING, 
group=TGRP-TimeRoutedAliasUpdateProcessorTest]
at 

[jira] [Created] (SOLR-12885) BinaryResponseWriter (javabin format) should directly copy from BytesRef to output

2018-10-19 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12885:
-

 Summary: BinaryResponseWriter (javabin format) should directly 
copy from BytesRef to output
 Key: SOLR-12885
 URL: https://issues.apache.org/jira/browse/SOLR-12885
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


The format in which bytes are stored in {{BytesRef}} and the javabin string 
format are the same, so we don't need to convert string/text fields from 
{{BytesRef}} to String and back to UTF-8.
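
To illustrate the idea (a sketch only, not the patch; the {{writeStringHeader}}
helper below is a hypothetical stand-in for however the javabin codec writes its
string tag and length prefix):

{noformat}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import org.apache.lucene.util.BytesRef;

class JavabinStringWriteSketch {

  // Today (simplified): decode the stored UTF-8 bytes to a String, then
  // re-encode that String back to UTF-8 just to write it out.
  static void writeViaString(BytesRef ref, OutputStream out) throws IOException {
    String s = ref.utf8ToString();                       // UTF-8 -> String
    byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);    // String -> UTF-8 again
    writeStringHeader(out, utf8.length);
    out.write(utf8);
  }

  // Proposed idea (simplified): the BytesRef already holds UTF-8, so copy the
  // stored byte range straight to the output with no String round trip.
  static void writeDirect(BytesRef ref, OutputStream out) throws IOException {
    writeStringHeader(out, ref.length);
    out.write(ref.bytes, ref.offset, ref.length);
  }

  // Hypothetical placeholder: real javabin writes a tagged, variable-length size.
  static void writeStringHeader(OutputStream out, int numBytes) throws IOException {
    out.write(numBytes);
  }
}
{noformat}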



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org