[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #425: POMs out of sync

2019-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/425/

No tests ran.

Build Log:
[...truncated 19473 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:672: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:209: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/build.xml:411: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:2268: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:657: Error deploying artifact 'org.apache.lucene:lucene-highlighter:jar': Error installing artifact's metadata: Error while deploying metadata: Error transferring file

Total time: 17 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Closed] (SOLR-5401) In Solr's ResourceLoader, add a check for @Deprecated annotation in the plugin/analysis/... class loading code, so we print a warning in the log if a deprecated factory cla

2019-01-29 Thread Mikhail Khludnev (JIRA)


 [ https://issues.apache.org/jira/browse/SOLR-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev closed SOLR-5401.
--

> In Solr's ResourceLoader, add a check for @Deprecated annotation in the 
> plugin/analysis/... class loading code, so we print a warning in the log if a 
> deprecated factory class is used
> --
>
> Key: SOLR-5401
> URL: https://issues.apache.org/jira/browse/SOLR-5401
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Affects Versions: 3.6, 4.5
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 4.6, 6.0
>
> Attachments: SOLR-5401.patch
>
>
> While changing an antique 3.6 schema.xml to Solr 4.5, I noticed that some 
> factories were deprecated in 3.x and were no longer available in 4.x (e.g. 
> "solr._Language_PorterStemFilterFactory"). Had the user been warned about 
> this earlier, the breakage could have been prevented and the user could have 
> upgraded in time.
> In fact the factories were @Deprecated in 3.6, but the Solr loader does not 
> print any warning. My proposal is to add some simple code to 
> SolrResourceLoader so that it prints a warning about the deprecated class 
> whenever a configuration setting loads a class carrying the @Deprecated 
> annotation. That way we can prevent this problem in the future.
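The proposed check is simple to sketch. Below is a minimal, self-contained illustration (not Solr's actual SolrResourceLoader code; the class and method names here are hypothetical) showing that a @Deprecated marker is visible via reflection at load time, so a loader can log a warning before handing the class back:

```java
// Hedged sketch (illustrative only, not Solr's real code): warn when a
// plugin/analysis class loaded by name carries the @Deprecated annotation.
import java.util.logging.Logger;

public class DeprecationCheckDemo {
    private static final Logger LOG =
        Logger.getLogger(DeprecationCheckDemo.class.getName());

    // Stand-in for a deprecated analysis factory class.
    @Deprecated
    static class OldFilterFactory {}

    /** Returns true (and logs a warning) if the class is marked @Deprecated. */
    static boolean warnIfDeprecated(Class<?> clazz) {
        // java.lang.Deprecated has RUNTIME retention, so it is visible here.
        if (clazz.isAnnotationPresent(Deprecated.class)) {
            LOG.warning("Loaded a deprecated plugin/analysis class: "
                + clazz.getName() + ". Please consult the docs for a replacement.");
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(warnIfDeprecated(OldFilterFactory.class)); // prints true
        System.out.println(warnIfDeprecated(String.class));           // prints false
    }
}
```

Since @Deprecated is retained at runtime, the check costs a single reflection call per class load.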



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 3475 - Unstable!

2019-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3475/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudSearcherWarming

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.TestCloudSearcherWarming: 
   1) Thread[id=13343, name=ProcessThread(sid:0 cport:35789):, state=WAITING, group=TGRP-TestCloudSearcherWarming]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:123)
   2) Thread[id=13339, name=ZkTestServer Run Thread, state=WAITING, group=TGRP-TestCloudSearcherWarming]
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1252)
        at java.lang.Thread.join(Thread.java:1326)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:313)
        at org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:343)
        at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:564)
   3) Thread[id=13340, name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, state=RUNNABLE, group=TGRP-TestCloudSearcherWarming]
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:196)
        at java.lang.Thread.run(Thread.java:748)
   4) Thread[id=13341, name=SessionTracker, state=TIMED_WAITING, group=TGRP-TestCloudSearcherWarming]
        at java.lang.Object.wait(Native Method)
        at org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:147)
   5) Thread[id=13342, name=SyncThread:0, state=WAITING, group=TGRP-TestCloudSearcherWarming]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE scope at org.apache.solr.cloud.TestCloudSearcherWarming: 
   1) Thread[id=13343, name=ProcessThread(sid:0 cport:35789):, state=WAITING, group=TGRP-TestCloudSearcherWarming]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:123)
   2) Thread[id=13339, name=ZkTestServer Run Thread, state=WAITING, group=TGRP-TestCloudSearcherWarming]
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1252)
        at java.lang.Thread.join(Thread.java:1326)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:313)
        at org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:343)
        at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:564)
   3) Thread[id=13340, name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, state=RUNNABLE, group=TGRP-TestCloudSearcherWarming]
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:196)
        at java.lang.Thread.run(Thread.java:748)
   4) Thread[id=13341, name=SessionTracker, state=TIMED_WAITING, group=TGRP-TestCloudSearcherWarming]
        at java.lang.Object.wait(Native Method)
        at org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:147)
   5) Thread[id=13342, name=SyncThread:0, state=WAITING, 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_172) - Build # 104 - Failure!

2019-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/104/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDocAbsent

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:38665/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:38665/solr
        at __randomizedtesting.SeedInfo.seed([40147AEB7EC6E7CD:9EA3A1639B97A708]:0)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:661)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
        at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
        at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
        at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
        at org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:89)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at

[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-29 Thread ASF subversion and git services (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755663#comment-16755663 ]

ASF subversion and git services commented on SOLR-13189:


Commit 73cfa810c7fcf8e5299a6b9c2fcecceee44d2846 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=73cfa81 ]

disable TestInjection in TestStressCloudBlindAtomicUpdates

work around for SOLR-13189

(cherry picked from commit 0a01b9e12787e56604aab3a0c3792d2aa060ae74)


> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-29 Thread ASF subversion and git services (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755664#comment-16755664 ]

ASF subversion and git services commented on SOLR-13189:


Commit 0a01b9e12787e56604aab3a0c3792d2aa060ae74 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0a01b9e ]

disable TestInjection in TestStressCloudBlindAtomicUpdates

work around for SOLR-13189


> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-29 Thread ASF subversion and git services (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755662#comment-16755662 ]

ASF subversion and git services commented on SOLR-13189:


Commit 21d2b024f4590175f97b82839ff69f96bd022df2 in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=21d2b02 ]

disable TestInjection in TestStressCloudBlindAtomicUpdates

work around for SOLR-13189

(cherry picked from commit 0a01b9e12787e56604aab3a0c3792d2aa060ae74)


> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23595 - Failure!

2019-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23595/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2034 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190130_033119_52516031188075785919696.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 6 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190130_033119_52511166928391324960591.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190130_033119_52515905438401959533649.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 304 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190130_033951_10513940108023935293104.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190130_033951_10517609347902162946618.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190130_033951_10513193698164577460353.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 1080 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190130_034120_0513893203093204500795.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 6 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190130_034120_0516499951759979529524.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190130_034120_0518699530747788740161.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 255 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J1-20190130_034310_21112442306567155238099.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J2-20190130_034310_2113753143658544334224.syserr
   

[jira] [Resolved] (SOLR-13104) Add natural and repeat Stream Evaluators

2019-01-29 Thread Joel Bernstein (JIRA)


 [ https://issues.apache.org/jira/browse/SOLR-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein resolved SOLR-13104.
---
   Resolution: Resolved
Fix Version/s: master (9.0)
   8.0

> Add natural and repeat Stream Evaluators
> 
>
> Key: SOLR-13104
> URL: https://issues.apache.org/jira/browse/SOLR-13104
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-13104.patch
>
>
> The *natural* Stream Evaluator returns a vector of natural numbers. This is 
> useful for creating a sequence of numbers 0...N for plotting an x-axis.
> Sample syntax:
> {code:java}
> let(a=natural(10)){code}
> The *repeat* Stream Evaluator creates a vector with a number repeated N 
> times. This is useful for plotting a straight line.
> Sample syntax:
> {code:java}
> let(a=repeat(5.5, 100)){code}
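Putting the two evaluators together: *natural* supplies the x-axis 0...N-1 and *repeat* supplies a constant y series of the same length, which together describe a horizontal line. The combined expression below is a sketch based on the sample syntax above; the {{tuple}} wrapper used to return both vectors is an assumption, not taken from this issue:
{code:java}
let(x=natural(100),
    y=repeat(5.5, 100),
    tuple(x=x, y=y)){code}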






[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 14 - Still Unstable

2019-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/14/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, InternalHttpClient, MMapDirectory, MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.core.SolrCore
        at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1056)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:876)
        at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)
        at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
        at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
        at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
        at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)
        at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
        at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:164)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
        at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.Server.handle(Server.java:502)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
        at org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:305)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
        at java.lang.Thread.run(Thread.java:748)
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.http.impl.client.InternalHttpClient
        at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
        at org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
        at org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
        at org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:225)
        at org.apache.solr.handler.IndexFetcher.<init>(IndexFetcher.java:267)
        at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:420)
        at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:237)
        at org.apache.solr.cloud.RecoveryStrategy.doReplicateOnlyRecovery(RecoveryStrategy.java:382)
        at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:328)
        at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:307)
        at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at

[jira] [Resolved] (SOLR-13134) Allow the knnRegress Stream Evaluator to more easily perform bivariate regression

2019-01-29 Thread Joel Bernstein (JIRA)


 [ https://issues.apache.org/jira/browse/SOLR-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein resolved SOLR-13134.
---
   Resolution: Resolved
Fix Version/s: master (9.0)
   8.0

> Allow the knnRegress Stream Evaluator to more easily perform bivariate 
> regression
> -
>
> Key: SOLR-13134
> URL: https://issues.apache.org/jira/browse/SOLR-13134
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-13134.patch, SOLR-13134.patch, Screen Shot 
> 2019-01-12 at 2.38.57 PM.png
>
>
> Currently the knnRegress function operates over an observations *matrix* to 
> support multivariate regression. This ticket will allow the knnRegress 
> function to operate over an observations *vector*, so that knnRegress can be 
> used more easily for bivariate regression.
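For context, a hedged sketch of what a bivariate call could look like after this change (the argument order, the neighbor count, and the {{predict}} step are assumptions inferred from the existing multivariate form, not confirmed syntax from this issue):
{code:java}
let(x=array(1, 2, 3, 4, 5),
    y=array(2.1, 3.9, 6.2, 8.0, 9.9),
    model=knnRegress(x, y, 2),
    p=predict(model, array(6, 7))){code}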






[jira] [Commented] (SOLR-13104) Add natural and repeat Stream Evaluators

2019-01-29 Thread ASF subversion and git services (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755625#comment-16755625 ]

ASF subversion and git services commented on SOLR-13104:


Commit 79901ae2eb214e554dd6b764e0a1d25bda2d0c75 in lucene-solr's branch 
refs/heads/branch_8_0 from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=79901ae ]

SOLR-13104: Update CHANGES.txt


> Add natural and repeat Stream Evaluators
> 
>
> Key: SOLR-13104
> URL: https://issues.apache.org/jira/browse/SOLR-13104
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13104.patch
>
>
> The *natural* Stream Evaluator returns a vector of natural numbers. This is 
> useful for creating a sequence of numbers 0...N for plotting an x-axis.
> Sample syntax:
> {code:java}
> let(a=natural(10)){code}
> The *repeat* Stream Evaluator creates a vector with a number repeated N 
> times. This is useful for plotting a straight line.
> Sample syntax:
> {code:java}
> let(a=repeat(5.5, 100)){code}






[jira] [Commented] (SOLR-13104) Add natural and repeat Stream Evaluators

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755623#comment-16755623
 ] 

ASF subversion and git services commented on SOLR-13104:


Commit b3c9082f779f17f76ecc328864574706b917d320 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b3c9082 ]

SOLR-13104: Update CHANGES.txt


> Add natural and repeat Stream Evaluators
> 
>
> Key: SOLR-13104
> URL: https://issues.apache.org/jira/browse/SOLR-13104
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13104.patch
>
>
> The *natural* Stream Evaluator returns a vector of natural numbers. This is 
> useful for creating a sequence of numbers 0...N for plotting an x-axis.
> Sample syntax:
> {code:java}
> let(a=natural(10)){code}
> The *repeat* Stream Evaluator creates a vector with a number repeated N 
> times. This is useful for plotting a straight line.
> Sample syntax:
> {code:java}
> let(a=repeat(5.5, 100)){code}






[jira] [Commented] (SOLR-13104) Add natural and repeat Stream Evaluators

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755622#comment-16755622
 ] 

ASF subversion and git services commented on SOLR-13104:


Commit 79d0dabed469c4e0e8967b4bb77fae7518930a9a in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=79d0dab ]

SOLR-13104: Update CHANGES.txt


> Add natural and repeat Stream Evaluators
> 
>
> Key: SOLR-13104
> URL: https://issues.apache.org/jira/browse/SOLR-13104
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13104.patch
>
>
> The *natural* Stream Evaluator returns a vector of natural numbers. This is 
> useful for creating a sequence of numbers 0...N for plotting an x-axis.
> Sample syntax:
> {code:java}
> let(a=natural(10)){code}
> The *repeat* Stream Evaluator creates a vector with a number repeated N 
> times. This is useful for plotting a straight line.
> Sample syntax:
> {code:java}
> let(a=repeat(5.5, 100)){code}






[jira] [Commented] (SOLR-13134) Allow the knnRegress Stream Evaluator to more easily perform bivariate regression

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755617#comment-16755617
 ] 

ASF subversion and git services commented on SOLR-13134:


Commit ad765074f71ffc73af814db9bc7c6157cac8166f in lucene-solr's branch 
refs/heads/branch_8_0 from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ad76507 ]

SOLR-13134: Update CHANGES.txt


> Allow the knnRegress Stream Evaluator to more easily perform bivariate 
> regression
> -
>
> Key: SOLR-13134
> URL: https://issues.apache.org/jira/browse/SOLR-13134
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13134.patch, SOLR-13134.patch, Screen Shot 
> 2019-01-12 at 2.38.57 PM.png
>
>
> Currently the knnRegress function operates over an observations *matrix* to 
> support multivariate regression. This ticket will also let knnRegress operate 
> over an observations vector, making it easier to use for bivariate regression.






[jira] [Commented] (SOLR-13134) Allow the knnRegress Stream Evaluator to more easily perform bivariate regression

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755616#comment-16755616
 ] 

ASF subversion and git services commented on SOLR-13134:


Commit 768a62702a7227c95dcf1a382cee257fbbc708c8 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=768a627 ]

SOLR-13134: Update CHANGES.txt


> Allow the knnRegress Stream Evaluator to more easily perform bivariate 
> regression
> -
>
> Key: SOLR-13134
> URL: https://issues.apache.org/jira/browse/SOLR-13134
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13134.patch, SOLR-13134.patch, Screen Shot 
> 2019-01-12 at 2.38.57 PM.png
>
>
> Currently the knnRegress function operates over an observations *matrix* to 
> support multivariate regression. This ticket will also let knnRegress operate 
> over an observations vector, making it easier to use for bivariate regression.






[jira] [Commented] (SOLR-13134) Allow the knnRegress Stream Evaluator to more easily perform bivariate regression

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755615#comment-16755615
 ] 

ASF subversion and git services commented on SOLR-13134:


Commit 25478979b1709abf619445c4a886d284df89a8de in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2547897 ]

SOLR-13134: Update CHANGES.txt


> Allow the knnRegress Stream Evaluator to more easily perform bivariate 
> regression
> -
>
> Key: SOLR-13134
> URL: https://issues.apache.org/jira/browse/SOLR-13134
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13134.patch, SOLR-13134.patch, Screen Shot 
> 2019-01-12 at 2.38.57 PM.png
>
>
> Currently the knnRegress function operates over an observations *matrix* to 
> support multivariate regression. This ticket will also let knnRegress operate 
> over an observations vector, making it easier to use for bivariate regression.






[jira] [Commented] (SOLR-13088) Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755604#comment-16755604
 ] 

ASF subversion and git services commented on SOLR-13088:


Commit a10a989263dc2469f8b4fd6d656cc999ec508533 in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a10a989 ]

SOLR-13088: Update CHANGES.txt


> Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
> --
>
> Key: SOLR-13088
> URL: https://issues.apache.org/jira/browse/SOLR-13088
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13088.patch, SOLR-13088.patch, SOLR-13088.patch, 
> Screen Shot 2018-12-21 at 5.53.18 PM.png, Screen Shot 2018-12-22 at 4.04.41 
> PM.png
>
>
> The Solr Zeppelin interpreter ([https://github.com/lucidworks/zeppelin-solr]) 
> can already execute Streaming Expressions and therefore Math Expressions.  
> The *zplot* function will export the results of Solr Math Expressions in a 
> format the Solr Zeppelin interpreter can work with. This will allow results 
> of Solr Math Expressions to be plotted by *Apache Zeppelin.*
> Sample syntax:
> {code:java}
> let(a=array(1,2,3),
> b=array(4,5,6),
> zplot(line1=a, line2=b, linec=array(7,8,9))){code}






[jira] [Resolved] (SOLR-13088) Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin

2019-01-29 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-13088.
---
   Resolution: Resolved
Fix Version/s: 7.7

> Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
> --
>
> Key: SOLR-13088
> URL: https://issues.apache.org/jira/browse/SOLR-13088
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.7
>
> Attachments: SOLR-13088.patch, SOLR-13088.patch, SOLR-13088.patch, 
> Screen Shot 2018-12-21 at 5.53.18 PM.png, Screen Shot 2018-12-22 at 4.04.41 
> PM.png
>
>
> The Solr Zeppelin interpreter ([https://github.com/lucidworks/zeppelin-solr]) 
> can already execute Streaming Expressions and therefore Math Expressions.  
> The *zplot* function will export the results of Solr Math Expressions in a 
> format the Solr Zeppelin interpreter can work with. This will allow results 
> of Solr Math Expressions to be plotted by *Apache Zeppelin.*
> Sample syntax:
> {code:java}
> let(a=array(1,2,3),
> b=array(4,5,6),
> zplot(line1=a, line2=b, linec=array(7,8,9))){code}






[jira] [Commented] (SOLR-13088) Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755613#comment-16755613
 ] 

ASF subversion and git services commented on SOLR-13088:


Commit 235db293d5bf9aa1862cafabe1697ad2d85ce6ef in lucene-solr's branch 
refs/heads/branch_8_0 from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=235db29 ]

SOLR-13088: Update CHANGES.txt


> Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
> --
>
> Key: SOLR-13088
> URL: https://issues.apache.org/jira/browse/SOLR-13088
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13088.patch, SOLR-13088.patch, SOLR-13088.patch, 
> Screen Shot 2018-12-21 at 5.53.18 PM.png, Screen Shot 2018-12-22 at 4.04.41 
> PM.png
>
>
> The Solr Zeppelin interpreter ([https://github.com/lucidworks/zeppelin-solr]) 
> can already execute Streaming Expressions and therefore Math Expressions.  
> The *zplot* function will export the results of Solr Math Expressions in a 
> format the Solr Zeppelin interpreter can work with. This will allow results 
> of Solr Math Expressions to be plotted by *Apache Zeppelin.*
> Sample syntax:
> {code:java}
> let(a=array(1,2,3),
> b=array(4,5,6),
> zplot(line1=a, line2=b, linec=array(7,8,9))){code}






[jira] [Commented] (SOLR-13088) Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755611#comment-16755611
 ] 

ASF subversion and git services commented on SOLR-13088:


Commit 767f1be7d545bc0bdcc37ab7613c3f0356c5498d in lucene-solr's branch 
refs/heads/branch_7_7 from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=767f1be ]

SOLR-13088: Update CHANGES.txt


> Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
> --
>
> Key: SOLR-13088
> URL: https://issues.apache.org/jira/browse/SOLR-13088
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13088.patch, SOLR-13088.patch, SOLR-13088.patch, 
> Screen Shot 2018-12-21 at 5.53.18 PM.png, Screen Shot 2018-12-22 at 4.04.41 
> PM.png
>
>
> The Solr Zeppelin interpreter ([https://github.com/lucidworks/zeppelin-solr]) 
> can already execute Streaming Expressions and therefore Math Expressions.  
> The *zplot* function will export the results of Solr Math Expressions in a 
> format the Solr Zeppelin interpreter can work with. This will allow results 
> of Solr Math Expressions to be plotted by *Apache Zeppelin.*
> Sample syntax:
> {code:java}
> let(a=array(1,2,3),
> b=array(4,5,6),
> zplot(line1=a, line2=b, linec=array(7,8,9))){code}






[jira] [Commented] (SOLR-13088) Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755612#comment-16755612
 ] 

ASF subversion and git services commented on SOLR-13088:


Commit d4342c4d78f0fb1aede10084ad9f7fcc3c408e37 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d4342c4 ]

SOLR-13088: Update CHANGES.txt


> Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
> --
>
> Key: SOLR-13088
> URL: https://issues.apache.org/jira/browse/SOLR-13088
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13088.patch, SOLR-13088.patch, SOLR-13088.patch, 
> Screen Shot 2018-12-21 at 5.53.18 PM.png, Screen Shot 2018-12-22 at 4.04.41 
> PM.png
>
>
> The Solr Zeppelin interpreter ([https://github.com/lucidworks/zeppelin-solr]) 
> can already execute Streaming Expressions and therefore Math Expressions.  
> The *zplot* function will export the results of Solr Math Expressions in a 
> format the Solr Zeppelin interpreter can work with. This will allow results 
> of Solr Math Expressions to be plotted by *Apache Zeppelin.*
> Sample syntax:
> {code:java}
> let(a=array(1,2,3),
> b=array(4,5,6),
> zplot(line1=a, line2=b, linec=array(7,8,9))){code}






[jira] [Commented] (SOLR-13088) Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755608#comment-16755608
 ] 

ASF subversion and git services commented on SOLR-13088:


Commit 880ef35c218a7ed542cdf6b2f08389cc7f2de632 in lucene-solr's branch 
refs/heads/branch_7x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=880ef35 ]

SOLR-13088: Update CHANGES.txt


> Add zplot Stream Evaluator to plot math expressions in Apache Zeppelin
> --
>
> Key: SOLR-13088
> URL: https://issues.apache.org/jira/browse/SOLR-13088
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13088.patch, SOLR-13088.patch, SOLR-13088.patch, 
> Screen Shot 2018-12-21 at 5.53.18 PM.png, Screen Shot 2018-12-22 at 4.04.41 
> PM.png
>
>
> The Solr Zeppelin interpreter ([https://github.com/lucidworks/zeppelin-solr]) 
> can already execute Streaming Expressions and therefore Math Expressions.  
> The *zplot* function will export the results of Solr Math Expressions in a 
> format the Solr Zeppelin interpreter can work with. This will allow results 
> of Solr Math Expressions to be plotted by *Apache Zeppelin.*
> Sample syntax:
> {code:java}
> let(a=array(1,2,3),
> b=array(4,5,6),
> zplot(line1=a, line2=b, linec=array(7,8,9))){code}






[jira] [Resolved] (SOLR-12984) The search Streaming Expression should properly support and push down paging when using the /select handler

2019-01-29 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-12984.
---
   Resolution: Resolved
Fix Version/s: 7.7 (was: 8.0)

> The search Streaming Expression should properly support and push down paging 
> when using the /select handler
> ---
>
> Key: SOLR-12984
> URL: https://issues.apache.org/jira/browse/SOLR-12984
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.7
>
> Attachments: SOLR-12984.patch, SOLR-12984.patch, SOLR-12984.patch, 
> SOLR-12984.patch
>
>
> Currently the search Streaming Expression doesn't properly support paging, 
> even when going to the /select handler. This is due to very old 
> implementation decisions that were geared towards streaming entire result 
> sets from the /export handler. It's time to change this behavior so that the 
> search expression can be used to handle typical paging scenarios.
> This ticket will maintain the same behavior when qt=/export, but will push 
> down the 'rows' and 'start' parameters when using the /select handler.
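> A hedged sketch of paged usage once the push-down lands (the collection and 
> field names are illustrative; only the 'rows' and 'start' parameters come 
> from the description above):
> {code:java}
> search(collection1,
>        q="*:*",
>        qt="/select",
>        rows="10",
>        start="20",
>        fl="id,a_s",
>        sort="a_s desc"){code}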






[jira] [Commented] (SOLR-12984) The search Streaming Expression should properly support and push down paging when using the /select handler

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755598#comment-16755598
 ] 

ASF subversion and git services commented on SOLR-12984:


Commit cba36bd8c4c953823d78b5fae482b207f644f45b in lucene-solr's branch 
refs/heads/branch_8_0 from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cba36bd ]

SOLR-12984: Update CHANGES.txt


> The search Streaming Expression should properly support and push down paging 
> when using the /select handler
> ---
>
> Key: SOLR-12984
> URL: https://issues.apache.org/jira/browse/SOLR-12984
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12984.patch, SOLR-12984.patch, SOLR-12984.patch, 
> SOLR-12984.patch
>
>
> Currently the search Streaming Expression doesn't properly support paging, 
> even when going to the /select handler. This is due to very old 
> implementation decisions that were geared towards streaming entire result 
> sets from the /export handler. It's time to change this behavior so that the 
> search expression can be used to handle typical paging scenarios.
> This ticket will maintain the same behavior when qt=/export, but will push 
> down the 'rows' and 'start' parameters when using the /select handler.






[jira] [Commented] (SOLR-12984) The search Streaming Expression should properly support and push down paging when using the /select handler

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755597#comment-16755597
 ] 

ASF subversion and git services commented on SOLR-12984:


Commit a1595a18aa661ce2a5a22b17efbf10b01903f6f4 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a1595a1 ]

SOLR-12984: Update CHANGES.txt


> The search Streaming Expression should properly support and push down paging 
> when using the /select handler
> ---
>
> Key: SOLR-12984
> URL: https://issues.apache.org/jira/browse/SOLR-12984
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12984.patch, SOLR-12984.patch, SOLR-12984.patch, 
> SOLR-12984.patch
>
>
> Currently the search Streaming Expression doesn't properly support paging, 
> even when going to the /select handler. This is due to very old 
> implementation decisions that were geared towards streaming entire result 
> sets from the /export handler. It's time to change this behavior so that the 
> search expression can be used to handle typical paging scenarios.
> This ticket will maintain the same behavior when qt=/export, but will push 
> down the 'rows' and 'start' parameters when using the /select handler.






[jira] [Commented] (SOLR-12984) The search Streaming Expression should properly support and push down paging when using the /select handler

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755596#comment-16755596
 ] 

ASF subversion and git services commented on SOLR-12984:


Commit 799e3cff3c5f2f906cba82e3a17fca63b53d197c in lucene-solr's branch 
refs/heads/branch_7_7 from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=799e3cf ]

SOLR-12984: Update CHANGES.txt


> The search Streaming Expression should properly support and push down paging 
> when using the /select handler
> ---
>
> Key: SOLR-12984
> URL: https://issues.apache.org/jira/browse/SOLR-12984
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12984.patch, SOLR-12984.patch, SOLR-12984.patch, 
> SOLR-12984.patch
>
>
> Currently the search Streaming Expression doesn't properly support paging, 
> even when going to the /select handler. This is due to very old 
> implementation decisions that were geared towards streaming entire result 
> sets from the /export handler. It's time to change this behavior so that the 
> search expression can be used to handle typical paging scenarios.
> This ticket will maintain the same behavior when qt=/export, but will push 
> down the 'rows' and 'start' parameters when using the /select handler.






[jira] [Commented] (SOLR-12984) The search Streaming Expression should properly support and push down paging when using the /select handler

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755593#comment-16755593
 ] 

ASF subversion and git services commented on SOLR-12984:


Commit aabbd8ab741ce4102811a37719fe45f4ebd0d916 in lucene-solr's branch 
refs/heads/branch_7x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=aabbd8a ]

SOLR-12984: Update CHANGES.txt


> The search Streaming Expression should properly support and push down paging 
> when using the /select handler
> ---
>
> Key: SOLR-12984
> URL: https://issues.apache.org/jira/browse/SOLR-12984
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12984.patch, SOLR-12984.patch, SOLR-12984.patch, 
> SOLR-12984.patch
>
>
> Currently the search Streaming Expression doesn't properly support paging, 
> even when going to the /select handler. This is due to very old 
> implementation decisions that were geared towards streaming entire result 
> sets from the /export handler. It's time to change this behavior so that the 
> search expression can be used to handle typical paging scenarios.
> This ticket will maintain the same behavior when qt=/export, but will push 
> down the 'rows' and 'start' parameters when using the /select handler.






[jira] [Commented] (SOLR-12984) The search Streaming Expression should properly support and push down paging when using the /select handler

2019-01-29 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755589#comment-16755589
 ] 

ASF subversion and git services commented on SOLR-12984:


Commit 239905edf7dbb0635237ec022fbb1ce3b45c6c8e in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=239905e ]

SOLR-12984: Update CHANGES.txt


> The search Streaming Expression should properly support and push down paging 
> when using the /select handler
> ---
>
> Key: SOLR-12984
> URL: https://issues.apache.org/jira/browse/SOLR-12984
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12984.patch, SOLR-12984.patch, SOLR-12984.patch, 
> SOLR-12984.patch
>
>
> Currently the search Streaming Expression doesn't properly support paging, 
> even when going to the /select handler. This is due to very old 
> implementation decisions that were geared towards streaming entire result 
> sets from the /export handler. It's time to change this behavior so that the 
> search expression can be used to handle typical paging scenarios.
> This ticket will maintain the same behavior when qt=/export, but will push 
> down the 'rows' and 'start' parameters when using the /select handler.






[GitHub] madrob opened a new pull request #554: SOLR-13190 Treat fuzzy term search errors as client errors

2019-01-29 Thread GitBox
madrob opened a new pull request #554: SOLR-13190 Treat fuzzy term search 
errors as client errors
URL: https://github.com/apache/lucene-solr/pull/554
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Created] (SOLR-13190) Fuzzy search treated as server error instead of client error when terms are too complex

2019-01-29 Thread Mike Drob (JIRA)
Mike Drob created SOLR-13190:


 Summary: Fuzzy search treated as server error instead of client 
error when terms are too complex
 Key: SOLR-13190
 URL: https://issues.apache.org/jira/browse/SOLR-13190
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Affects Versions: master (9.0)
Reporter: Mike Drob
Assignee: Mike Drob


We've seen a fuzzy search break the automaton determinization limit and get 
reported as a server error. This should be improved by:
1) reporting it as a client error, because, like a query with too many boolean 
clauses, it is something an operator should handle on the client side
2) reporting which field caused the error, since that currently must be 
deduced from adjacent query logs, which is difficult when the search has 
multiple terms

The limit was added to defend against adversarial regexes but somehow trips on 
fuzzy terms as well. I don't understand the automaton mechanisms well enough 
to know how to approach a fix there, but improving the operability is a good 
first step.

relevant stack trace:

{noformat}
org.apache.lucene.util.automaton.TooComplexToDeterminizeException: 
Determinizing automaton with 13632 states and 21348 transitions would result in 
more than 1 states.
at 
org.apache.lucene.util.automaton.Operations.determinize(Operations.java:746)
at 
org.apache.lucene.util.automaton.RunAutomaton.&lt;init&gt;(RunAutomaton.java:69)
at 
org.apache.lucene.util.automaton.ByteRunAutomaton.&lt;init&gt;(ByteRunAutomaton.java:32)
at 
org.apache.lucene.util.automaton.CompiledAutomaton.&lt;init&gt;(CompiledAutomaton.java:247)
at 
org.apache.lucene.util.automaton.CompiledAutomaton.&lt;init&gt;(CompiledAutomaton.java:133)
at 
org.apache.lucene.search.FuzzyTermsEnum.&lt;init&gt;(FuzzyTermsEnum.java:143)
at org.apache.lucene.search.FuzzyQuery.getTermsEnum(FuzzyQuery.java:154)
at 
org.apache.lucene.search.MultiTermQuery$RewriteMethod.getTermsEnum(MultiTermQuery.java:78)
at 
org.apache.lucene.search.TermCollectingRewrite.collectTerms(TermCollectingRewrite.java:58)
at 
org.apache.lucene.search.TopTermsRewrite.rewrite(TopTermsRewrite.java:67)
at 
org.apache.lucene.search.MultiTermQuery.rewrite(MultiTermQuery.java:310)
at 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:442)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1435)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:374)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1762 - Failure

2019-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1762/

1 tests failed.
FAILED:  
org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
    at __randomizedtesting.SeedInfo.seed([F101D8F7CA56E07C:C2B3F033C7E13ACB]:0)
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.lucene.store.ByteBuffersDataOutput$$Lambda$184/165617701.apply(Unknown Source)
    at org.apache.lucene.store.ByteBuffersDataOutput.appendBlock(ByteBuffersDataOutput.java:447)
    at org.apache.lucene.store.ByteBuffersDataOutput.writeBytes(ByteBuffersDataOutput.java:164)
    at org.apache.lucene.store.ByteBuffersIndexOutput.writeBytes(ByteBuffersIndexOutput.java:115)
    at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
    at org.apache.lucene.store.MockIndexOutputWrapper.writeByte(MockIndexOutputWrapper.java:126)
    at org.apache.lucene.store.DataOutput.writeInt(DataOutput.java:70)
    at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:391)
    at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.close(Lucene50PostingsWriter.java:504)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:88)
    at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
    at org.apache.lucene.codecs.blockterms.BlockTermsWriter.close(BlockTermsWriter.java:184)
    at org.apache.lucene.util.IOUtils.closeWhileHandlingException(IOUtils.java:122)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:175)
    at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:244)
    at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:139)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4459)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4054)
    at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
    at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2155)
    at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5116)
    at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1287)
    at org.apache.lucene.index.IndexWriter.addDocuments(IndexWriter.java:1257)
    at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:159)
    at org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit(TestDocTermOrdsUninvertLimit.java:65)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)




Build Log:
[...truncated 15806 lines...]
   [junit4] Suite: org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestDocTermOrdsUninvertLimit -Dtests.method=testTriggerUnInvertLimit 
-Dtests.seed=F101D8F7CA56E07C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mk -Dtests.timezone=Asia/Karachi -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   33.4s J2 | TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit <<<
   [junit4]> Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]>at __randomizedtesting.SeedInfo.seed([F101D8F7CA56E07C:C2B3F033C7E13ACB]:0)
   [junit4]>at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
   [junit4]>at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
   [junit4]>at org.apache.lucene.store.ByteBuffersDataOutput$$Lambda$184/165617701.apply(Unknown Source)
   [junit4]>at org.apache.lucene.store.ByteBuffersDataOutput.appendBlock(ByteBuffersDataOutput.java:447)
   [junit4]>at org.apache.lucene.store.ByteBuffersDataOutput.writeBytes(ByteBuffersDataOutput.java:164)
   [junit4]>at org.apache.lucene.store.ByteBuffersIndexOutput.writeBytes(ByteBuffersIndexOutput.java:115)
   [junit4]>at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
   [junit4]>at 

[jira] [Comment Edited] (SOLR-5480) Make MoreLikeThisHandler distributable

2019-01-29 Thread phoema (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754718#comment-16754718
 ] 

phoema edited comment on SOLR-5480 at 1/30/19 1:14 AM:
---

I have the same problem as Issue SOLR-5480 with Solr 7.6.0. When will this 
issue be solved?


was (Author: phoema):
is this issue will be process?

> Make MoreLikeThisHandler distributable
> --
>
> Key: SOLR-5480
> URL: https://issues.apache.org/jira/browse/SOLR-5480
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Assignee: Noble Paul
>Priority: Major
> Attachments: MoreLikeThisHandlerTestST.txt, SOLR-5480.patch, 
> SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch, 
> SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch, SOLR-5480.patch, 
> SOLR-5480.patch, SOLR-5480.patch
>
>
> The MoreLikeThis component, when used in the standard search handler, supports 
> distributed searches. But the MoreLikeThisHandler itself doesn't, which 
> prevents, say, passing in text to perform the query. I'll start looking 
> into adapting the SearchHandler logic to the MoreLikeThisHandler. If anyone 
> has some work done already and wants to share or contribute, any help 
> will be welcomed. 






[JENKINS] Lucene-Solr-repro - Build # 2751 - Still Unstable

2019-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2751/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/13/consoleText

[repro] Revision: 8413b105c200d7e602fb10935565a39e23a8c96b

[repro] Repro line:  ant test  -Dtestcase=TestLBHttpSolrClient 
-Dtests.method=testReliability -Dtests.seed=6F8B56201AC6F5E9 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=de-LU -Dtests.timezone=Europe/Prague -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
cf39708609ac9975cb462c0b1427fe2443d6d842
[repro] git fetch
[repro] git checkout 8413b105c200d7e602fb10935565a39e23a8c96b

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   TestLBHttpSolrClient
[repro] ant compile-test

[...truncated 2708 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLBHttpSolrClient" -Dtests.showOutput=onerror  
-Dtests.seed=6F8B56201AC6F5E9 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de-LU -Dtests.timezone=Europe/Prague 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 1664 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.client.solrj.TestLBHttpSolrClient
[repro] git checkout cf39708609ac9975cb462c0b1427fe2443d6d842

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Updated] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract

2019-01-29 Thread jefferyyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan updated LUCENE-8662:

Description: 
Recently in our production, we found that Solr uses a lot of memory (more than 
10 GB) during recovery or commit for a small index (3.5 GB).
 The stack trace is:

 
{code:java}
Thread 0x4d4b115c0
  at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125)
  at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V (SegmentTermsEnumFrame.java:157)
  at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus; (SegmentTermsEnumFrame.java:786)
  at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus; (SegmentTermsEnumFrame.java:538)
  at org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus; (SegmentTermsEnum.java:757)
  at org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus; (FilterLeafReader.java:185)
  at org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z (TermsEnum.java:74)
  at org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J (SolrIndexSearcher.java:823)
  at org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long; (VersionInfo.java:204)
  at org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long; (UpdateLog.java:786)
  at org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long; (VersionInfo.java:194)
  at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z (DistributedUpdateProcessor.java:1051)
{code}
We reproduced the problem locally with the following code using Lucene code.
{code:java}
public static void main(String[] args) throws IOException {
  FSDirectory index = FSDirectory.open(Paths.get("the-index"));
  try (IndexReader reader = new ExitableDirectoryReader(DirectoryReader.open(index),
      new QueryTimeoutImpl(1000 * 60 * 5))) {
    String id = "the-id";
    BytesRef text = new BytesRef(id);
    for (LeafReaderContext lf : reader.leaves()) {
      TermsEnum te = lf.reader().terms("id").iterator();
      System.out.println(te.seekExact(text));
    }
  }
}
{code}
 

I added System.out.println("ord: " + ord); in 
codecs.blocktree.SegmentTermsEnum.getFrame(int).

Please check the attached output of test program.txt. 

 

We found the root cause: FilterLeafReader.FilterTermsEnum doesn't override the 
seekExact(BytesRef) method, so it falls back to the base class 
TermsEnum.seekExact(BytesRef) implementation, which is very inefficient in this 
case.
{code:java}
public boolean seekExact(BytesRef text) throws IOException {
  return seekCeil(text) == SeekStatus.FOUND;
}
{code}
The fix is simple: override the seekExact(BytesRef) method in 
FilterLeafReader.FilterTermsEnum so that it delegates to the wrapped enum.
{code:java}
@Override
public boolean seekExact(BytesRef text) throws IOException {
  return in.seekExact(text);
}
{code}

  was:
Recently in our production, we found that Sole uses a lot of memory(more than 
10g) during recovery or commit for a small index (3.5gb)
 The stack trace is:

 
{code:java}
Thread 0x4d4b115c0 
  at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
  at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
(SegmentTermsEnumFrame.java:157) 
  at 
org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
 (SegmentTermsEnumFrame.java:786) 
  at 
org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
 (SegmentTermsEnumFrame.java:538) 
  at 
org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
 (SegmentTermsEnum.java:757) 
  at 
org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
 (FilterLeafReader.java:185) 
  at 
org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z 
(TermsEnum.java:74) 
  at 
org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
 (SolrIndexSearcher.java:823) 
  at 
org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
 (VersionInfo.java:204) 
  at 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_172) - Build # 102 - Unstable!

2019-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/102/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery2.test

Error Message:
Error from server at https://127.0.0.1:44779/solr: 100 Async exceptions during 
distributed update: java.net.ConnectException: Connection refused (the same 
"Connection refused" exception repeated for each of the ~100 failed requests)

Stack 

[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-29 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755410#comment-16755410
 ] 

Hoss Man commented on SOLR-13189:
-

{quote}As currently written, this test will fail very easily...
{quote}
To clarify, the test as _uploaded_ already has the TestInjection line commented 
out with a {{nocommit}} ... so it should reliably pass for anyone. Remove the 
nocommit, allowing {{TestInjection.failReplicaRequests}} to be set, and it 
should start failing very easily.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[jira] [Comment Edited] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755389#comment-16755389
 ] 

Ankit Jain edited comment on LUCENE-8635 at 1/29/19 9:20 PM:
-

{quote}Given that the performance hit is mostly on PK lookups, maybe a starting 
point could be to always put the FST off-heap except when docCount == 
sumDocFreq, which suggests the field is an ID field.{quote}
[~jpountz] - Does that exclude autogenerated id fields that are UUIDs, resulting 
in large FSTs? Elasticsearch for example has an _id field, which IMO is better 
off-heap.


was (Author: akjain):
{quote}Given that the performance hit is mostly on PK lookups, maybe a starting 
point could be to always put the FST off-heap except when docCount == 
sumDocFreq, which suggests the field is an ID field.{quote}
[~jpountz] - Does that exlude autogenerated id fields that are uuid, resulting 
in huge FST? Elasticsearch for example has _id field, that is better offheap.

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.
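The four-step plan quoted above might look roughly like the sketch below. Every 
type and name here (the FieldInfo stand-in, fstOffHeap, OffHeapFST, HeapFST) is 
a hypothetical illustration of the proposal, not Lucene's actual API.

```java
// Illustrative sketch of the proposed per-field off-heap FST flag.
// All names are hypothetical stand-ins, not Lucene's real API.
import java.util.List;
import java.util.Set;

public class FstOffHeapPlan {

    // Step 1: a boolean property on the per-field metadata.
    static class FieldInfo {
        final String name;
        boolean fstOffHeap;
        FieldInfo(String name) { this.name = name; }
    }

    // Steps 2-3: at index-open time, mark the requested fields ("ALL" = every field).
    static void initOffHeapFlags(List<FieldInfo> fields, Set<String> offHeapFields) {
        boolean all = offHeapFields.contains("ALL");
        for (FieldInfo fi : fields) {
            fi.fstOffHeap = all || offHeapFields.contains(fi.name);
        }
    }

    // Step 4: the field reader chooses a constructor based on the flag.
    static String openTermsIndex(FieldInfo fi) {
        return fi.fstOffHeap
            ? "OffHeapFST(" + fi.name + ")"  // lazily loaded via mmap
            : "HeapFST(" + fi.name + ")";    // current behavior: fully on heap
    }

    public static void main(String[] args) {
        List<FieldInfo> fields = List.of(new FieldInfo("id"), new FieldInfo("body"));
        initOffHeapFlags(fields, Set.of("body"));
        for (FieldInfo fi : fields) {
            System.out.println(openTermsIndex(fi)); // HeapFST(id), then OffHeapFST(body)
        }
    }
}
```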






[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755391#comment-16755391
 ] 

Mike Sokolov commented on LUCENE-8635:
--

I posted my latest patch including off-heap change + FST reversal + reading 
index forward by wrapping IndexInput directly (no random access, and no bug 
with using slow skipBytes) -- that's fst-offheap-rev.patch

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.






[jira] [Updated] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Mike Sokolov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-8635:
-
Attachment: fst-offheap-rev.patch

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, fst-offheap-rev.patch, 
> offheap.patch, optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.






[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Ankit Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755389#comment-16755389
 ] 

Ankit Jain commented on LUCENE-8635:


{quote}Given that the performance hit is mostly on PK lookups, maybe a starting 
point could be to always put the FST off-heap except when docCount == 
sumDocFreq, which suggests the field is an ID field.{quote}
[~jpountz] - Does that exlude autogenerated id fields that are uuid, resulting 
in huge FST? Elasticsearch for example has _id field, that is better offheap.

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, 
> optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.






[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755374#comment-16755374
 ] 

Michael McCandless commented on LUCENE-8635:


Oooh I like that proposal [~jpountz]!

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, 
> optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.






[jira] [Created] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-29 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13189:
---

 Summary: Need reliable example (Test) of how to use 
TestInjection.failReplicaRequests
 Key: SOLR-13189
 URL: https://issues.apache.org/jira/browse/SOLR-13189
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man



We need a test that reliably demonstrates the usage of 
{{TestInjection.failReplicaRequests}} and shows what steps a test needs to take 
after issuing updates to reliably "pass" (finding all index updates that 
succeeded from the client's perspective) even in the event of an (injected) 
replica failure.

As things stand now, it does not seem that any test using 
{{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear if 
this is due to poorly designed tests, or an indication of a bug in distributed 
updates / LIR*






[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755352#comment-16755352
 ] 

Adrien Grand commented on LUCENE-8635:
--

Given that the performance hit is mostly on PK lookups, maybe a starting point 
could be to always put the FST off-heap except when docCount == sumDocFreq, 
which suggests the field is an ID field.
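As a tiny illustration of that heuristic (the method name here is made up for this sketch; the real decision would live in the terms dictionary reader): a field whose docCount equals its sumDocFreq has exactly one term occurrence per document, which is characteristic of a unique-ID field.

```java
// Sketch of the docCount == sumDocFreq heuristic described above;
// shouldKeepFstOnHeap is a made-up name, not a Lucene method.
public class IdFieldHeuristic {
    static boolean shouldKeepFstOnHeap(long docCount, long sumDocFreq) {
        // One term occurrence per document suggests a primary-key field,
        // which is exactly the case where off-heap FSTs hurt (PK lookups).
        return docCount == sumDocFreq;
    }

    public static void main(String[] args) {
        // ID-like field: 1M docs, 1M total term occurrences.
        System.out.println(shouldKeepFstOnHeap(1_000_000, 1_000_000));  // true
        // Body-text field: far more term occurrences than documents.
        System.out.println(shouldKeepFstOnHeap(1_000_000, 55_000_000)); // false
    }
}
```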

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, 
> optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.






[jira] [Commented] (LUCENE-8635) Lazy loading Lucene FST offheap using mmap

2019-01-29 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755344#comment-16755344
 ] 

Michael McCandless commented on LUCENE-8635:


OK net/net it looks like there is a small performance impact for some queries, 
and a biggish (~7-8%) impact for {{PKLookup}}.

But this is a nice option to have for users who are heap-constrained by the 
FSTs, so I wonder how we could add this as an option that is off by default?  
E.g. users might want their {{id}} field to keep the FST on heap (like today), 
but all other fields off-heap.

There is no index format change required here, which is nice, but Lucene 
doesn't make it easy to have read-time codec behavior changes. So maybe the 
solution is that at write time we add an option, e.g. to 
{{BlockTreeTermsWriter}}, which stores it in the index, and then at read time 
{{BlockTreeTermsReader}} checks that option and loads the FST accordingly?  
Then users could customize their codecs to achieve this.

Or I suppose we could add a global system property, e.g. our default stored 
fields writer has a property to turn on/off bulk merge, but I think we are 
trying not to use Java properties going forward?

Can anyone think of any other approaches to make this option possible?

> Lazy loading Lucene FST offheap using mmap
> --
>
> Key: LUCENE-8635
> URL: https://issues.apache.org/jira/browse/LUCENE-8635
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/FSTs
> Environment: I used below setup for es_rally tests:
> single node i3.xlarge running ES 6.5
> es_rally was running on another i3.xlarge instance
>Reporter: Ankit Jain
>Priority: Major
> Attachments: fst-offheap-ra-rev.patch, offheap.patch, 
> optional_offheap_ra.patch, ra.patch, rally_benchmark.xlsx
>
>
> Currently, FST loads all the terms into heap memory during index open. This 
> causes frequent JVM OOM issues if the term size gets big. A better way of 
> doing this will be to lazily load FST using mmap. That ensures only the 
> required terms get loaded into memory.
>  
> Lucene can expose API for providing list of fields to load terms offheap. I'm 
> planning to take following approach for this:
>  # Add a boolean property fstOffHeap in FieldInfo
>  # Pass list of offheap fields to lucene during index open (ALL can be 
> special keyword for loading ALL fields offheap)
>  # Initialize the fstOffHeap property during lucene index open
>  # FieldReader invokes default FST constructor or OffHeap constructor based 
> on fstOffHeap field
>  
> I created a patch (that loads all fields offheap), did some benchmarks using 
> es_rally and results look good.






[jira] [Commented] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract

2019-01-29 Thread jefferyyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755323#comment-16755323
 ] 

jefferyyuan commented on LUCENE-8662:
-

Thanks for the comments and suggestions.
Changed TermsEnum.seekExact(BytesRef) to abstract.

Where needed, all subclasses call the default implementation for now.
https://github.com/apache/lucene-solr/pull/551/files#diff-bdfed242b7c2c62e7df628f47532dfd9

Maybe we can check which subclasses should have their own implementation of 
the seekExact method for the sake of better performance, and change them in 
follow-up PR(s).
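The delegation pattern at stake can be shown with toy stand-ins (BaseEnum and FilterEnum below are simplified sketches, not Lucene's TermsEnum/FilterTermsEnum): forcing each wrapper class to choose explicitly between the generic seekCeil-based fallback and forwarding to an optimized delegate.

```java
import java.util.NavigableSet;
import java.util.TreeSet;

// Toy stand-ins, not the Lucene API: the base class builds seekExact on a
// general-purpose seekCeil scan, and the filter overrides it to forward
// straight to its delegate instead of inheriting the slow fallback.
public class SeekExactDemo {
    static class BaseEnum {
        final NavigableSet<String> terms = new TreeSet<>();
        String seekCeil(String t) { return terms.ceiling(t); }        // general scan
        boolean seekExact(String t) { return t.equals(seekCeil(t)); } // generic fallback
    }

    static class FilterEnum extends BaseEnum {
        final BaseEnum in;
        FilterEnum(BaseEnum in) { this.in = in; }
        @Override
        boolean seekExact(String t) { return in.seekExact(t); } // delegate, don't fall back
    }

    public static void main(String[] args) {
        BaseEnum base = new BaseEnum();
        base.terms.add("apple");
        base.terms.add("pear");
        FilterEnum filtered = new FilterEnum(base);
        System.out.println(filtered.seekExact("apple")); // true
        System.out.println(filtered.seekExact("plum"));  // false
    }
}
```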

> Change TermsEnum.seekExact(BytesRef) to abstract
> 
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using the Lucene 
> API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new   
> ExitableDirectoryReader(DirectoryReader.open(index),
> new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found out the root cause:
> we didn't implement seekExact(BytesRef) method in 
> FilterLeafReader.FilterTerms, so it uses the base class 
> TermsEnum.seekExact(BytesRef) implementation which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple, just override seekExact(BytesRef) method in 
> FilterLeafReader.FilterTerms
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}






[jira] [Updated] (LUCENE-8662) Change TermsEnum.seekExact(BytesRef) to abstract

2019-01-29 Thread jefferyyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan updated LUCENE-8662:

Summary: Change TermsEnum.seekExact(BytesRef) to abstract  (was: Make 
TermsEnum.seekExact(BytesRef) abstract)

> Change TermsEnum.seekExact(BytesRef) to abstract
> 
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using the Lucene 
> API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new   
> ExitableDirectoryReader(DirectoryReader.open(index),
> new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found out the root cause:
> we didn't implement seekExact(BytesRef) method in 
> FilterLeafReader.FilterTerms, so it uses the base class 
> TermsEnum.seekExact(BytesRef) implementation which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple, just override seekExact(BytesRef) method in 
> FilterLeafReader.FilterTerms
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}






[jira] [Updated] (LUCENE-8662) Make TermsEnum.seekExact(BytesRef) abstract

2019-01-29 Thread jefferyyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jefferyyuan updated LUCENE-8662:

Summary: Make TermsEnum.seekExact(BytesRef) abstract  (was: Override 
seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum)

> Make TermsEnum.seekExact(BytesRef) abstract
> ---
>
> Key: LUCENE-8662
> URL: https://issues.apache.org/jira/browse/LUCENE-8662
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 5.5.5, 6.6.5, 7.6, 8.0
>Reporter: jefferyyuan
>Priority: Major
>  Labels: query
> Fix For: 8.0, 7.7
>
> Attachments: output of test program.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Recently in our production, we found that Solr uses a lot of memory (more 
> than 10 GB) during recovery or commit for a small index (3.5 GB).
>  The stack trace is:
>  
> {code:java}
> Thread 0x4d4b115c0 
>   at org.apache.lucene.store.DataInput.readVInt()I (DataInput.java:125) 
>   at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock()V 
> (SegmentTermsEnumFrame.java:157) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTermNonLeaf(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:786) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.scanToTerm(Lorg/apache/lucene/util/BytesRef;Z)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnumFrame.java:538) 
>   at 
> org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (SegmentTermsEnum.java:757) 
>   at 
> org.apache.lucene.index.FilterLeafReader$FilterTermsEnum.seekCeil(Lorg/apache/lucene/util/BytesRef;)Lorg/apache/lucene/index/TermsEnum$SeekStatus;
>  (FilterLeafReader.java:185) 
>   at 
> org.apache.lucene.index.TermsEnum.seekExact(Lorg/apache/lucene/util/BytesRef;)Z
>  (TermsEnum.java:74) 
>   at 
> org.apache.solr.search.SolrIndexSearcher.lookupId(Lorg/apache/lucene/util/BytesRef;)J
>  (SolrIndexSearcher.java:823) 
>   at 
> org.apache.solr.update.VersionInfo.getVersionFromIndex(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:204) 
>   at 
> org.apache.solr.update.UpdateLog.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (UpdateLog.java:786) 
>   at 
> org.apache.solr.update.VersionInfo.lookupVersion(Lorg/apache/lucene/util/BytesRef;)Ljava/lang/Long;
>  (VersionInfo.java:194) 
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(Lorg/apache/solr/update/AddUpdateCommand;)Z
>  (DistributedUpdateProcessor.java:1051)  
> {code}
> We reproduced the problem locally with the following code using the Lucene 
> API.
> {code:java}
> public static void main(String[] args) throws IOException {
>   FSDirectory index = FSDirectory.open(Paths.get("the-index"));
>   try (IndexReader reader = new   
> ExitableDirectoryReader(DirectoryReader.open(index),
> new QueryTimeoutImpl(1000 * 60 * 5))) {
> String id = "the-id";
> BytesRef text = new BytesRef(id);
> for (LeafReaderContext lf : reader.leaves()) {
>   TermsEnum te = lf.reader().terms("id").iterator();
>   System.out.println(te.seekExact(text));
> }
>   }
> }
> {code}
>  
> I added System.out.println("ord: " + ord); in 
> codecs.blocktree.SegmentTermsEnum.getFrame(int).
> Please check the attached output of test program.txt. 
>  
> We found out the root cause:
> we didn't implement seekExact(BytesRef) method in 
> FilterLeafReader.FilterTerms, so it uses the base class 
> TermsEnum.seekExact(BytesRef) implementation which is very inefficient in 
> this case.
> {code:java}
> public boolean seekExact(BytesRef text) throws IOException {
>   return seekCeil(text) == SeekStatus.FOUND;
> }
> {code}
> The fix is simple, just override seekExact(BytesRef) method in 
> FilterLeafReader.FilterTerms
> {code:java}
> @Override
> public boolean seekExact(BytesRef text) throws IOException {
>   return in.seekExact(text);
> }
> {code}






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3472 - Failure!

2019-01-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3472/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 15419 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp/junit4-J1-20190129_184217_84814776660232927593763.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7f8cfcb0a13c, pid=27685, tid=27730
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (11.0+28) (build 11+28)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (11+28, mixed mode, tiered, g1 
gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xd3e13c]  PhaseIdealLoop::split_up(Node*, Node*, 
Node*) [clone .part.39]+0x47c
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/hs_err_pid27685.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/replay_pid27685.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 187 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-11/bin/java -XX:-UseCompressedOops 
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=D13E6E73545CE6BC 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.8.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=7.8.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-7.x-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dfile.encoding=UTF-8 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 444 - Still Unstable

2019-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/444/

1 tests failed.
FAILED:  org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef

Error Message:
ReaderPool is already closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: ReaderPool is already closed
at 
__randomizedtesting.SeedInfo.seed([321E02CA62B086E7:DB8375F81479611A]:0)
at org.apache.lucene.index.ReaderPool.get(ReaderPool.java:367)
at 
org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3338)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:519)
at 
org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:394)
at 
org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:328)
at 
org.apache.lucene.index.TestIndexFileDeleter.testExcInDecRef(TestIndexFileDeleter.java:465)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 1374 lines...]
   [junit4] Suite: org.apache.lucene.index.TestIndexFileDeleter
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestIndexFileDeleter -Dtests.method=testExcInDecRef 
-Dtests.seed=321E02CA62B086E7 

[jira] [Created] (SOLR-13187) NullPointerException at o.a.solr.search.QParser.getParser

2019-01-29 Thread Cesar Rodriguez (JIRA)
Cesar Rodriguez created SOLR-13187:
--

 Summary: NullPointerException at o.a.solr.search.QParser.getParser
 Key: SOLR-13187
 URL: https://issues.apache.org/jira/browse/SOLR-13187
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Cesar Rodriguez
 Attachments: home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?fq={!a}
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.NullPointerException
at org.apache.solr.search.QParser.getParser(QParser.java:367)
at org.apache.solr.search.QParser.getParser(QParser.java:319)
at org.apache.solr.search.QParser.getParser(QParser.java:309)
at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:203)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
[...]
{noformat}

The call to {{getQueryPlugin}} from 
{{org.apache.solr.search.QParser.getParser()}}, at line 366, can return null, 
as witnessed by the URL above. The {{getParser}} method should probably check 
for this.
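A minimal sketch of the missing null check, with a heavily simplified stand-in for the real method (the surrounding logic is invented for illustration; only the null check mirrors what the report suggests):

```java
// Simplified stand-in for QParser.getParser; only the null check after the
// plugin lookup reflects the suggested fix, the rest is illustrative.
public class QParserSketch {
    static Object getQueryPlugin(String name) {
        // Unknown parser names (e.g. "a" from fq={!a}) yield null.
        return "lucene".equals(name) ? new Object() : null;
    }

    static Object getParser(String qstr, String parserName) {
        Object plugin = getQueryPlugin(parserName);
        if (plugin == null) {
            // Fail with a clear, catchable error instead of an NPE/HTTP 500.
            throw new IllegalArgumentException("Unknown query parser: " + parserName);
        }
        return plugin;
    }

    public static void main(String[] args) {
        System.out.println(getParser("*:*", "lucene") != null); // true
        try {
            getParser("*:*", "a");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Unknown query parser: a
        }
    }
}
```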


We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].







[jira] [Created] (SOLR-13188) NullPointerException in org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)

2019-01-29 Thread Marek (JIRA)
Marek created SOLR-13188:


 Summary: NullPointerException in 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
 Key: SOLR-13188
 URL: https://issues.apache.org/jira/browse/SOLR-13188
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Marek
 Attachments: home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:
{noformat}
http://localhost:8983/solr/films/select?q={!parent%20fq={!collapse%20field=id}}
{noformat}
The error response seems to be caused by the following uncaught exception:
{noformat}
ERROR (qtp689401025-21) [   x:films] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
at 
org.apache.lucene.search.join.QueryBitSetProducer.getBitSet(QueryBitSetProducer.java:73)
at 
org.apache.solr.search.join.BlockJoinParentQParser$BitDocIdSetFilterWrapper.getDocIdSet(BlockJoinParentQParser.java:135)
at 
org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.scorer(SolrConstantScoreQuery.java:99)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
[...]
{noformat}
At org/apache/lucene/search/join/QueryBitSetProducer.java line 73, the method 
'org.apache.lucene.search.IndexSearcher.rewrite' is called with the null value 
stored in the member 'query'; inside the called method, 'rewrite' is then 
invoked on the accepted argument.

The member 'query' of QueryBitSetProducer is initialised only once (i.e. only 
for the first query issued; it is not created again for subsequent queries) 
from 'org.apache.solr.search.join.BlockJoinParentQParser.getCachedFilter' 
(org/apache/solr/search/join/BlockJoinParentQParser.java line 98), which calls 
'createParentFilter' with null.
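The missing guard, combined with the create-once caching described above, could 
be sketched as follows. {{ParentFilterCache}} and its methods are hypothetical 
illustrations of the behaviour described, not the actual BlockJoinParentQParser 
code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: validate the parent query before caching a filter
// built around it, so null never reaches IndexSearcher.rewrite() -- and a
// bad first request cannot poison the cache for all later requests.
public class ParentFilterCache {
    private final Map<String, String> cache = new HashMap<>();

    String getCachedFilter(String parentQuery) {
        if (parentQuery == null) {
            throw new IllegalArgumentException("parent filter requires a query");
        }
        // computeIfAbsent mirrors the initialised-only-once behaviour:
        // the filter is created on the first request and reused afterwards.
        return cache.computeIfAbsent(parentQuery, q -> "filter(" + q + ")");
    }
}
```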

---
 We found this bug using [Diffblue 

[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-458639506
 
 
   ping @hgadre since you filed SOLR-9761 - I know I didn't get to all the 
subtasks under SOLR-9761 but would appreciate any thoughts you might have.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk edited a comment on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk edited a comment on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-458633207
 
 
   @uschindler sorry I definitely didn't explain it as well as I could have. 
Hadoop moved from old mortbay Jetty to Jetty 9.3. Solr is on Jetty 9.4. Session 
management changed between Jetty 9.3 and 9.4. For the integration tests with 
Hadoop, the classpath has Jetty 9.4. Adding Jetty 9.3 isn't possible since then 
the new HTTP2 tests don't work. The smallest change I could make was to copy 
HttpServer2 from Hadoop and fix the session management sections for Jetty 9.4. 
The changes are similar to https://issues.apache.org/jira/browse/HADOOP-14930 
which looked at upgrading Hadoop to Jetty 9.4.
   
   So yes your understanding is correct in that we can't have both Jetty 9.3 
and 9.4 on the Solr test classpath. The testing I have done shows that the 
Hadoop integration tests work with Jetty 9.4 with the copied/patched 
HttpServer2 code.





[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-458633207
 
 
   @uschindler sorry I definitely didn't explain it as well as I could have. 
Hadoop moved from old mortbay Jetty to Jetty 9.3. Solr is on Jetty 9.4. Session 
management changed between Jetty 9.3 and 9.4. For the integration tests with 
Hadoop, the classpath has Jetty 9.4. Adding Jetty 9.3 isn't possible since then 
the new HTTP2 tests don't work. The smallest change I could make was to copy 
HttpServer2 from Hadoop and fix the session management sections for Jetty 9.4. 
The changes are similar to https://issues.apache.org/jira/browse/HADOOP-14930 
which looked at upgrading Hadoop to Jetty 9.4.
   
   So yes your understanding is correct in that we can't have both Jetty 9.3 
and 9.4 on the Solr test classpath. The testing I have done shows that the 
Hadoop integration tests work with Jetty 9.4 with the copied/patched 
HttpServer2 code.





[GitHub] uschindler edited a comment on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
uschindler edited a comment on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-458626014
 
 
   Hi,
   I don't fully understand why you need the clone of the HttpServer2.java 
source code. Isn't this included in hadoop anyways? Or has this some special 
reason like replacing a shaded version inside hadoop or similar? It's good that 
we have removed the mortbay jetty now from test dependencies, so I have the 
feeling this does something like this to prevent another version of jetty to be 
included.
   
   So please explain!
   
   As long as Solr uses classpath instead of modulepath this won't bring any 
issues - if the classpath order for running tests is fine and our private 
version is used in preference.





[GitHub] uschindler commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
uschindler commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-458626014
 
 
   Hi,
   I don't fully understand why you need the clone of the HttpServer2.java 
source code. Isn't this included in hadoop anyways? Or has this some special 
reason like replacing a shaded version inside hadoop? It's good that we have 
removed the mortbay jetty now from test dependencies, so I have the feeling 
this does something like this to prevent another version of jetty to be 
included.
   
   So please explain!
   
   As long as Solr uses classpath instead of modules this won't bring any 
issues - if the classpath order for running tests is fine.





[jira] [Created] (SOLR-13186) When a node wins the Overseer election there is a race that can cause an invalid Overseer leader node to be registered.

2019-01-29 Thread Mark Miller (JIRA)
Mark Miller created SOLR-13186:
--

 Summary: When a node wins the Overseer election there is a race 
that can cause an invalid Overseer leader node to be registered.
 Key: SOLR-13186
 URL: https://issues.apache.org/jira/browse/SOLR-13186
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller


These can leave you without an Overseer until the bad ephemeral leader 
registration node is removed.






[jira] [Created] (SOLR-13185) NPE in query parsing because of missing null check

2019-01-29 Thread Johannes Kloos (JIRA)
Johannes Kloos created SOLR-13185:
-

 Summary: NPE in query parsing because of missing null check
 Key: SOLR-13185
 URL: https://issues.apache.org/jira/browse/SOLR-13185
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Johannes Kloos
 Attachments: home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?defType=complexphrase=AND
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.NullPointerException
at java.io.StringReader.<init>(StringReader.java:50)
at 
org.apache.lucene.queryparser.classic.QueryParserBase.parse(QueryParserBase.java:106)
at 
org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser.parse(ComplexPhraseQueryParser.java:125)
at 
org.apache.solr.search.ComplexPhraseQParserPlugin$ComplexPhraseQParser.parse(ComplexPhraseQParserPlugin.java:164)
at org.apache.solr.search.QParser.getQuery(QParser.java:173)
at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:158)
[...]
{noformat}

What happens here is that a query string (qstr) is passed into a StringReader. 
Ultimately, this query string comes from o.a.s.h.c.QueryComponent, in method 
prepare (line 157), where it is extracted using rb.queryString() (rb is of 
type ResponseBuilder). The query string stored in the response builder was 
extracted earlier from the request URL by looking for the "q" parameter; note 
that this parameter is absent in the example request, so qstr is null. The 
extracted qstr is then passed to QParser.getParser, which expects a non-null 
query string.
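A defensive check of the kind implied above might look like this sketch. 
{{QueryPrepare}} and {{prepareQuery}} are hypothetical stand-ins for the 
relevant step in QueryComponent.prepare, not Solr's actual code.

```java
import java.io.StringReader;

// Hypothetical sketch: reject a missing "q" parameter up front with a clear
// message, instead of letting new StringReader(null) throw an NPE later in
// the parse pipeline.
public class QueryPrepare {
    static StringReader prepareQuery(String qstr) {
        if (qstr == null) {
            throw new IllegalArgumentException("missing required parameter: q");
        }
        return new StringReader(qstr);
    }
}
```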

We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].







[jira] [Commented] (LUCENE-8667) NPE due to missing input checking in ValueSourceParser

2019-01-29 Thread Johannes Kloos (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755193#comment-16755193
 ] 

Johannes Kloos commented on LUCENE-8667:


Thanks, will do next time!

> NPE due to missing input checking in ValueSourceParser
> --
>
> Key: LUCENE-8667
> URL: https://issues.apache.org/jira/browse/LUCENE-8667
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Johannes Kloos
>Priority: Minor
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q={!frange%20l=10%20u=100}joindf(genre:comedy,$x)
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.lucene.queries.function.valuesource.JoinDocFreqValueSource.hashCode(JoinDocFreqValueSource.java:98)
> at 
> org.apache.solr.search.function.ValueSourceRangeFilter.hashCode(ValueSourceRangeFilter.java:139)
> at 
> org.apache.solr.search.SolrConstantScoreQuery.hashCode(SolrConstantScoreQuery.java:138)
> at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
> [...]
> {noformat}
> As far as I can tell, the bug comes about as follows:
> In org.apache.solr.search.ValueSourceParser, in the addParser(“joindf”, …) 
> statement (lines 335-342), we extract the arguments f0 and qf without 
> checking if these arguments could not be parsed. The test case produces a 
> null pointer for the qfield field in the JoinDocFreqValueSource instance. 
> This causes problems in hashcode (as evidenced in this bug), since it expects 
> qfield to be non-null.
> Looking at the usages of qfield, it is generally expected to be non-null, so 
> it seems we are missing input validation in the parser.
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].






[jira] [Created] (SOLR-13184) NPE due to missing input checking in ValueSourceParser

2019-01-29 Thread Johannes Kloos (JIRA)
Johannes Kloos created SOLR-13184:
-

 Summary: NPE due to missing input checking in ValueSourceParser
 Key: SOLR-13184
 URL: https://issues.apache.org/jira/browse/SOLR-13184
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SearchComponents - other
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Johannes Kloos
 Attachments: home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:
{noformat}
http://localhost:8983/solr/films/select?q={!frange%20l=10%20u=100}joindf(genre:comedy,$x)
{noformat}
The error response seems to be caused by the following uncaught exception:
{noformat}
java.lang.NullPointerException
at 
org.apache.lucene.queries.function.valuesource.JoinDocFreqValueSource.hashCode(JoinDocFreqValueSource.java:98)
at 
org.apache.solr.search.function.ValueSourceRangeFilter.hashCode(ValueSourceRangeFilter.java:139)
at 
org.apache.solr.search.SolrConstantScoreQuery.hashCode(SolrConstantScoreQuery.java:138)
at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)

{noformat}
As far as I can tell, this bug comes about as follows: In 
org.apache.solr.search.ValueSourceParser, in the addParser(“joindf”, …) 
statement (lines 335-342), we extract the arguments f0 and qf without checking 
whether they were actually parsed. The test case produces a null pointer for 
the qfield field in the JoinDocFreqValueSource instance. This causes problems 
in hashCode (as evidenced by this bug), since it expects qfield to be 
non-null.

Looking at the usages of qfield, it is generally expected to be non-null, so 
it seems we are missing input validation in the parser.
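The missing validation might be sketched as follows. {{JoinArgs}} and 
{{parseJoinDf}} are hypothetical names for illustration, not the real 
ValueSourceParser API; the real failure involves an unresolved {{$x}} parameter 
reference, which this sketch approximates as an empty argument.

```java
// Hypothetical sketch: validate both arguments of a joindf(field,qfield)
// style expression at parse time, so a later hashCode()/equals() call
// never sees a null or empty field name.
public class JoinArgs {
    final String field;
    final String qfield;

    JoinArgs(String field, String qfield) {
        if (field == null || field.isEmpty() || qfield == null || qfield.isEmpty()) {
            throw new IllegalArgumentException(
                    "joindf requires two non-empty field arguments");
        }
        this.field = field;
        this.qfield = qfield;
    }

    static JoinArgs parseJoinDf(String rawArgs) {
        // split with limit -1 keeps empty trailing parts so they can be rejected
        String[] parts = rawArgs.split(",", -1);
        if (parts.length != 2) {
            throw new IllegalArgumentException("joindf expects exactly two arguments");
        }
        return new JoinArgs(parts[0].trim(), parts[1].trim());
    }
}
```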

We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].






[jira] [Created] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2019-01-29 Thread Cesar Rodriguez (JIRA)
Cesar Rodriguez created SOLR-13183:
--

 Summary: NullPointerException at 
o.a.solr.servlet.SolrDispatchFilter.doFilter
 Key: SOLR-13183
 URL: https://issues.apache.org/jira/browse/SOLR-13183
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Cesar Rodriguez
 Attachments: home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/schema/%25
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.NullPointerException
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
[...]
{noformat}

Function SolrDispatchFilter.doFilter(), at line 403, calls method forward() on 
a null pointer. The problem happens because 
ServletRequestWrapper.getRequestDispatcher(), at line 338, returns null. That 
happens because 
org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher() 
returns a null pointer, which in turn happens because 
org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
{{/solr/films/schema/%}}, which is an invalid encoding.

I don't fully follow the logic of the code, but it seems that the 
percent-encoding of the URL has first been decoded and is then being decoded 
again?
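The double-decoding hypothesis can be illustrated with the JDK's own decoder: 
{{%25}} decodes to a bare {{%}}, and decoding that result a second time fails, 
which matches getDecodedPath() signalling invalid input. The {{tryDecode}} 
helper below is purely illustrative.

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Decoding "/solr/films/schema/%25" once yields ".../%"; decoding that
// result again fails, because a bare "%" is not a valid percent-escape.
public class DecodeCheck {
    static String tryDecode(String raw) {
        try {
            return URLDecoder.decode(raw, StandardCharsets.UTF_8);
        } catch (IllegalArgumentException e) {
            return null; // mirrors a decoder signalling invalid input
        }
    }
}
```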

We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].







[jira] [Commented] (LUCENE-8667) NPE due to missing input checking in ValueSourceParser

2019-01-29 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755184#comment-16755184
 ] 

Cassandra Targett commented on LUCENE-8667:
---

FYI, issues can be moved to a different project in the case of mistakes like 
these. Since Lucene and Solr are under the same PMC, a comment asking for a 
committer to move the issue would be preferred to closing and recreating it. 
And easier for you too, I suspect.

> NPE due to missing input checking in ValueSourceParser
> --
>
> Key: LUCENE-8667
> URL: https://issues.apache.org/jira/browse/LUCENE-8667
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Johannes Kloos
>Priority: Minor
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q={!frange%20l=10%20u=100}joindf(genre:comedy,$x)
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.lucene.queries.function.valuesource.JoinDocFreqValueSource.hashCode(JoinDocFreqValueSource.java:98)
> at 
> org.apache.solr.search.function.ValueSourceRangeFilter.hashCode(ValueSourceRangeFilter.java:139)
> at 
> org.apache.solr.search.SolrConstantScoreQuery.hashCode(SolrConstantScoreQuery.java:138)
> at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
> [...]
> {noformat}
> As far as I can tell, the bug comes about as follows:
> In org.apache.solr.search.ValueSourceParser, the addParser("joindf", ...) 
> statement (lines 335-342) extracts the arguments f0 and qf without checking 
> whether they were parsed successfully. The test case produces a null pointer 
> for the qfield field in the JoinDocFreqValueSource instance. This causes 
> problems in hashCode() (as evidenced by this bug), since that method expects 
> qfield to be non-null.
> Looking at the usages of qfield, it is generally expected to be non-null, so 
> it seems we are missing input validation in the parser.
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-8667) NPE due to missing input checking in ValueSourceParser

2019-01-29 Thread Johannes Kloos (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johannes Kloos closed LUCENE-8667.
--

Invalid issue, should have been submitted to SOLR instead.

> NPE due to missing input checking in ValueSourceParser
> --
>
> Key: LUCENE-8667
> URL: https://issues.apache.org/jira/browse/LUCENE-8667
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Johannes Kloos
>Priority: Minor
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?q={!frange%20l=10%20u=100}joindf(genre:comedy,$x)
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
> at 
> org.apache.lucene.queries.function.valuesource.JoinDocFreqValueSource.hashCode(JoinDocFreqValueSource.java:98)
> at 
> org.apache.solr.search.function.ValueSourceRangeFilter.hashCode(ValueSourceRangeFilter.java:139)
> at 
> org.apache.solr.search.SolrConstantScoreQuery.hashCode(SolrConstantScoreQuery.java:138)
> at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
> at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
> at 
> org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
> [...]
> {noformat}
> As far as I can tell, the bug comes about as follows:
> In org.apache.solr.search.ValueSourceParser, the addParser("joindf", ...) 
> statement (lines 335-342) extracts the arguments f0 and qf without checking 
> whether they were parsed successfully. The test case produces a null pointer 
> for the qfield field in the JoinDocFreqValueSource instance. This causes 
> problems in hashCode() (as evidenced by this bug), since that method expects 
> qfield to be non-null.
> Looking at the usages of qfield, it is generally expected to be non-null, so 
> it seems we are missing input validation in the parser.
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].






[jira] [Resolved] (LUCENE-8667) NPE due to missing input checking in ValueSourceParser

2019-01-29 Thread Johannes Kloos (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johannes Kloos resolved LUCENE-8667.

Resolution: Invalid

Sorry, submitted to the wrong bug tracker.







[jira] [Created] (LUCENE-8667) NPE due to missing input checking in ValueSourceParser

2019-01-29 Thread Johannes Kloos (JIRA)
Johannes Kloos created LUCENE-8667:
--

 Summary: NPE due to missing input checking in ValueSourceParser
 Key: LUCENE-8667
 URL: https://issues.apache.org/jira/browse/LUCENE-8667
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Johannes Kloos
 Attachments: home.zip


Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?q={!frange%20l=10%20u=100}joindf(genre:comedy,$x)
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.NullPointerException
at 
org.apache.lucene.queries.function.valuesource.JoinDocFreqValueSource.hashCode(JoinDocFreqValueSource.java:98)
at 
org.apache.solr.search.function.ValueSourceRangeFilter.hashCode(ValueSourceRangeFilter.java:139)
at 
org.apache.solr.search.SolrConstantScoreQuery.hashCode(SolrConstantScoreQuery.java:138)
at org.apache.solr.search.QueryResultKey.<init>(QueryResultKey.java:46)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328)
at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
[...]
{noformat}

As far as I can tell, the bug comes about as follows:
In org.apache.solr.search.ValueSourceParser, the addParser("joindf", ...) 
statement (lines 335-342) extracts the arguments f0 and qf without checking 
whether they were parsed successfully. The test case produces a null pointer 
for the qfield field in the JoinDocFreqValueSource instance. This causes 
problems in hashCode() (as evidenced by this bug), since that method expects 
qfield to be non-null.

Looking at the usages of qfield, it is generally expected to be non-null, so it 
seems we are missing input validation in the parser.
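The missing validation could be addressed by failing fast when an argument does not parse. The sketch below is hypothetical: JoinDfGuard and its nested JoinDocFreq class are illustrative stand-ins, not Solr's JoinDocFreqValueSource. It shows how a non-null guard in the constructor turns the late hashCode() NPE into an immediate, descriptive error at parse time.

```java
import java.util.Objects;

// Hypothetical stand-in for JoinDocFreqValueSource; not Solr's actual code.
public class JoinDfGuard {

    static final class JoinDocFreq {
        final String field;
        final String qfield;

        JoinDocFreq(String field, String qfield) {
            // Fail fast at construction instead of letting null reach hashCode().
            this.field = Objects.requireNonNull(field,
                    "joindf: first argument did not parse");
            this.qfield = Objects.requireNonNull(qfield,
                    "joindf: second argument did not parse");
        }

        @Override
        public int hashCode() {
            // Safe: qfield can no longer be null past the constructor.
            return field.hashCode() * 31 + qfield.hashCode();
        }
    }

    public static void main(String[] args) {
        try {
            // Simulates joindf(genre:comedy,$x) where $x did not resolve.
            new JoinDocFreq("genre:comedy", null);
        } catch (NullPointerException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

With this pattern the request would fail with a clear message naming the bad argument rather than a bare NullPointerException deep inside query caching.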

We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].






[jira] [Updated] (SOLR-13179) NullPointerException in org/apache/lucene/queries/function/FunctionScoreQuery.java [109]

2019-01-29 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13179:
-
Description: 
Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?facet.query=={!frange%20l=10%20u=100}boost({!v=+},3)
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
ERROR (qtp689401025-23) [   x:films] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.lucene.queries.function.FunctionScoreQuery.rewrite(FunctionScoreQuery.java:109)
at 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
at 
org.apache.lucene.queries.function.valuesource.QueryValueSource.createWeight(QueryValueSource.java:75)
at 
org.apache.solr.search.function.ValueSourceRangeFilter.createWeight(ValueSourceRangeFilter.java:105)
at 
org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.<init>(SolrConstantScoreQuery.java:94)
at 
org.apache.solr.search.SolrConstantScoreQuery.createWeight(SolrConstantScoreQuery.java:119)
at 
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:717)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
[...]
{noformat}

1. In org/apache/solr/search/ValueSourceParser.java[330], the query variable 'q' 
is assigned null; the null comes from 
org/apache/solr/search/LuceneQParser.java[39], because the variable 'qstr' is 
the empty string.

2. In org/apache/solr/search/ValueSourceParser.java[332], the null value of 'q' 
is passed to 'FunctionScoreQuery.boostByValue', which in turn initialises the 
member 'in' of org.apache.lucene.queries.function.FunctionScoreQuery to null at 
org/apache/lucene/queries/function/FunctionScoreQuery.java[56].

3. Later, during execution of the query, the member 'in' (still null) is 
dereferenced at org/apache/lucene/queries/function/FunctionScoreQuery.java[109].
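Assuming the chain above, a fail-fast guard would move the failure from step 3 back to step 2. The classes below are illustrative stand-ins (not Lucene's FunctionScoreQuery; a String stands in for the wrapped Query): the constructor rejects a null inner query so the later rewrite can never dereference null.

```java
import java.util.Objects;

// Illustrative sketch only; 'ScoredQuery' stands in for FunctionScoreQuery.
public class BoostGuard {

    static final class ScoredQuery {
        private final String in; // corresponds to the 'in' member from step 2

        ScoredQuery(String in) {
            // Guarding here prevents the step-3 dereference from ever seeing null.
            this.in = Objects.requireNonNull(in,
                    "boost(): inner query must not be null");
        }

        String rewrite() {
            return in.trim(); // safe dereference: 'in' is provably non-null
        }
    }

    public static void main(String[] args) {
        try {
            new ScoredQuery(null); // simulates boost({!v=+},3) parsing to null
        } catch (NullPointerException e) {
            System.out.println("rejected early: " + e.getMessage());
        }
    }
}
```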



We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].


  was:
Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?facet.query=={!frange%20l=10%20u=100}boost({!v=+},3)&~ama=on=true
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
ERROR (qtp1067599825-23) [   x:films] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.lucene.queries.function.FunctionScoreQuery.rewrite(FunctionScoreQuery.java:109)
at 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
at 
org.apache.lucene.queries.function.valuesource.QueryValueSource.createWeight(QueryValueSource.java:75)
at 
org.apache.solr.search.function.ValueSourceRangeFilter.createWeight(ValueSourceRangeFilter.java:105)
at 
org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.<init>(SolrConstantScoreQuery.java:94)
at 
org.apache.solr.search.SolrConstantScoreQuery.createWeight(SolrConstantScoreQuery.java:119)
at 
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:717)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
at 

[jira] [Updated] (SOLR-13179) NullPointerException in org/apache/lucene/queries/function/FunctionScoreQuery.java [109]

2019-01-29 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13179:
-
Description: 
Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?facet.query=={!frange%20l=10%20u=100}boost({!v=+},3)&~ama=on=true
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
ERROR (qtp1067599825-23) [   x:films] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
at 
org.apache.lucene.queries.function.FunctionScoreQuery.rewrite(FunctionScoreQuery.java:109)
at 
org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
at 
org.apache.lucene.queries.function.valuesource.QueryValueSource.createWeight(QueryValueSource.java:75)
at 
org.apache.solr.search.function.ValueSourceRangeFilter.createWeight(ValueSourceRangeFilter.java:105)
at 
org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.<init>(SolrConstantScoreQuery.java:94)
at 
org.apache.solr.search.SolrConstantScoreQuery.createWeight(SolrConstantScoreQuery.java:119)
at 
org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:717)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1711)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1416)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:306)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
[...]
{noformat}

1. In org/apache/solr/search/ValueSourceParser.java[330], the query variable 'q' 
is assigned null; the null comes from 
org/apache/solr/search/LuceneQParser.java[39], because the variable 'qstr' is 
the empty string.

2. In org/apache/solr/search/ValueSourceParser.java[332], the null value of 'q' 
is passed to 'FunctionScoreQuery.boostByValue', which in turn initialises the 
member 'in' of org.apache.lucene.queries.function.FunctionScoreQuery to null at 
org/apache/lucene/queries/function/FunctionScoreQuery.java[56].

3. Later, during execution of the query, the member 'in' (still null) is 
dereferenced at org/apache/lucene/queries/function/FunctionScoreQuery.java[109].



We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].


  was:
Execution of the URL query:

*http://localhost:8983/solr/films/select?q=\{!frange%20l=10%20u=100}boost(\{!v=+},3)*

leads to a NullPointerException:

2019-01-29 13:42:04.662 ERROR (qtp689401025-21) [ x:films] o.a.s.s.HttpSolrCall 
null:java.lang.NullPointerException
 at 
org.apache.lucene.queries.function.FunctionScoreQuery.rewrite(FunctionScoreQuery.java:109)
 at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
 at 
org.apache.lucene.queries.function.valuesource.QueryValueSource.createWeight(QueryValueSource.java:75)
 at 
org.apache.solr.search.function.ValueSourceRangeFilter.createWeight(ValueSourceRangeFilter.java:105)
 at 
org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.<init>(SolrConstantScoreQuery.java:94)
 at 
org.apache.solr.search.SolrConstantScoreQuery.createWeight(SolrConstantScoreQuery.java:119)
 at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:717)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
 at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
 at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
 at 

[jira] [Updated] (SOLR-13179) NullPointerException in org/apache/lucene/queries/function/FunctionScoreQuery.java [109]

2019-01-29 Thread Marek (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek updated SOLR-13179:
-
Environment: 
h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}


  was:
h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * Start the server using the command "./bin/solr start -f -p 8983 -s /tmp/home"
 * Request the URL above.

h2. Compiling the server

git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]). The 
attached file (home.zip) gives the contents of folder /tmp/home that you will 
obtain by following the steps below.

 

mkdir -p /tmp/home
 echo '' > 
/tmp/home/solr.xml

 

In one terminal start a Solr instance in foreground:

./bin/solr start -f -p 8983 -s /tmp/home

 

In another terminal, create a collection of movies, with no shards and no 
replication:

bin/solr create -c films

curl -X POST -H 'Content-type:application/json' --data-binary '\{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
[http://localhost:8983/solr/films/schema]

curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
[http://localhost:8983/solr/films/schema]

./bin/post -c films example/films/films.json


> NullPointerException in 
> org/apache/lucene/queries/function/FunctionScoreQuery.java [109]
> 
>
> Key: SOLR-13179
> URL: https://issues.apache.org/jira/browse/SOLR-13179
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X 

[jira] [Created] (SOLR-13182) NullPointerException due to an invariant violation in org/apache/lucene/search/BooleanClause.java[60]

2019-01-29 Thread Marek (JIRA)
Marek created SOLR-13182:


 Summary: NullPointerException due to an invariant violation in 
org/apache/lucene/search/BooleanClause.java[60]
 Key: SOLR-13182
 URL: https://issues.apache.org/jira/browse/SOLR-13182
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}

Reporter: Marek
 Attachments: home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?q={!child%20q={}
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
ERROR (qtp689401025-14) [ x:films] o.a.s.h.RequestHandlerBase 
java.lang.NullPointerException: Query must not be null
 at java.util.Objects.requireNonNull(Objects.java:228)
 at org.apache.lucene.search.BooleanClause.<init>(BooleanClause.java:60)
 at org.apache.lucene.search.BooleanQuery$Builder.add(BooleanQuery.java:127)
 at 
org.apache.solr.search.join.BlockJoinChildQParser.noClausesQuery(BlockJoinChildQParser.java:50)
 at org.apache.solr.search.join.FiltersQParser.parse(FiltersQParser.java:60)
 at org.apache.solr.search.QParser.getQuery(QParser.java:173)
 at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:158)
 at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
 at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)

[...]
{noformat}

In org/apache/solr/search/join/BlockJoinChildQParser.java[47], the query 
variable 'parents' receives the value null from the call to 
'parseParentFilter()'. The null value is then passed to the 
'org.apache.lucene.search.BooleanQuery.Builder.add' method at line 50. That 
method calls the BooleanClause constructor, where 'Objects.requireNonNull' 
fails and the exception is thrown.

The call to 'parseParentFilter()' evaluates to null because:
 # In org/apache/solr/search/join/BlockJoinParentQParser.java[59] the string 
   'filter' is set to null (because "which" is not in the 'localParams' map).
 # The parser 'parentParser' obtained on the next line has its member 'qstr' 
   set to null, because 'filter' is passed to 'subQuery' and from there as the 
   first argument to 'org.apache.solr.search.QParserPlugin.createParser'.
 # The subsequent call to 'org.apache.solr.search.QParser.getQuery' on 
   'parentParser' at 
   org/apache/solr/search/join/BlockJoinParentQParser.java[61] returns null 
   from 'org.apache.solr.search.LuceneQParser.parse', because the query string 
   'qstr' is empty.
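The failing precondition can be condensed into a small self-contained model, 
and one defensive fix could be to fall back to a match-all parent filter when 
parsing yields null. This is a hypothetical sketch using stand-in types, not 
Lucene's actual classes:

```java
import java.util.Objects;

// Condensed model of the failure: parseParentFilter() yields null, and the
// clause constructor enforces a non-null query via requireNonNull.
// (Hypothetical stand-in code; not Lucene's API.)
public class NullParentFilterDemo {
    static String clause(String query) {
        // Mirrors BooleanClause's precondition.
        return Objects.requireNonNull(query, "Query must not be null");
    }

    static String parentFilterOrMatchAll(String parsed) {
        // One possible defensive fix: fall back to a match-all filter
        // when the parent filter parses to null.
        return parsed != null ? parsed : "*:*";
    }

    public static void main(String[] args) {
        try {
            clause(null); // reproduces the reported NPE message
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // prints "Query must not be null"
        }
        System.out.println(clause(parentFilterOrMatchAll(null))); // prints "*:*"
    }
}
```

Whether falling back to match-all is the right semantics for {!child} with an 
empty parent filter is a design question for the actual fix.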


--
We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].




--
This message was sent by Atlassian JIRA

[jira] [Created] (LUCENE-8666) NPE in o.a.l.codecs.perfield.PerFieldPostingsFormat

2019-01-29 Thread Johannes Kloos (JIRA)
Johannes Kloos created LUCENE-8666:
--

 Summary: NPE in o.a.l.codecs.perfield.PerFieldPostingsFormat 
 Key: LUCENE-8666
 URL: https://issues.apache.org/jira/browse/LUCENE-8666
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/codecs
Affects Versions: 7.5, master (9.0)
 Environment: Running on Unix, using a recent git checkout of master and the 
films example database.
h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * Start the server using the command “./bin/solr start -f -p 8983 -s /tmp/home”
 * Request the URL above.

h2. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}


h2. Building the collection

We followed Exercise 2 from the SOLR quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]). The 
attached file (home.zip) gives the contents of folder /tmp/home that you will 
obtain by following the steps below.
{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:

{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field" : {"source":"*","dest":"_text_"}}' http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}
Reporter: Johannes Kloos
 Attachments: 0001-Fix-NullPointerException.patch, home.zip

Requesting this URL in Solr gives a 500 error with a stack trace pointing to 
Lucene:

{noformat}
http://localhost:8983/solr/films/select?q={!complexphrase}genre:"-om*"
{noformat}

The stack trace is (cut down to the reasonably relevant part):

{noformat}
java.lang.NullPointerException
at java.util.TreeMap.getEntry(TreeMap.java:347)
at java.util.TreeMap.get(TreeMap.java:278)
at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.terms(PerFieldPostingsFormat.java:311)
at org.apache.lucene.index.CodecReader.terms(CodecReader.java:106)
at org.apache.lucene.index.FilterLeafReader.terms(FilterLeafReader.java:351)
at org.apache.lucene.index.ExitableDirectoryReader$ExitableFilterAtomicReader.terms(ExitableDirectoryReader.java:91)
at org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight.getSpans(SpanNearQuery.java:208)
at org.apache.lucene.search.spans.SpanNotQuery$SpanNotWeight.getSpans(SpanNotQuery.java:127)
at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:135)
at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:46)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:177)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:649)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
{noformat}

The error is actually a bit deeper and can be traced back to the 
o.a.l.queryparser.complexPhrase.ComplexPhraseQueryParser class.

Handling this query involves constructing a SpanQuery, which happens in the 
rewrite method of ComplexPhraseQueryParser. In particular, the expression is 
decomposed into a BooleanQuery, which has exactly one clause, namely the 
negative clause -genre:"om*". The rewrite method then further transforms this 
into a SpanQuery; in this case, it goes into the path that handles complex 
queries with both positive and negative clauses. It extracts the subset of 
positive clauses - note that this set of clauses is empty for this query. The 
positive clauses are then combined into a SpanNearQuery (around line 340), 
which is then used to build a SpanNotQuery. Further down the line, the field 
attribute of the SpanNearQuery is accessed and used as an index into a TreeMap. 
But since we had an empty set of positive clauses, the SpanNearQuery does not 
have its field attribute set, so we get a null here - this leads to an 
exception. A possible fix would be to detect the situation where we have an 
empty set of positive clauses and include a single synthetic clause that 
matches either everything or nothing. See attached file 
0001-Fix-NullPointerException.patch.
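The synthetic-clause idea can be illustrated with a small stand-alone model of 
the clause-partitioning step. This is a hypothetical sketch (the class name, 
method name, and the "*:*" stand-in for a match-all query are illustrative 
assumptions, not the attached patch):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the rewrite step: when all clauses are negative,
// inject a synthetic "match all" positive clause so the span construction
// never sees an empty positive set (and thus never an unset field).
// (Hypothetical illustration; names are not Lucene's.)
public class SyntheticClauseDemo {
    static List<String> positiveClauses(List<String> clauses) {
        List<String> positives = new ArrayList<>();
        for (String c : clauses) {
            if (!c.startsWith("-")) {
                positives.add(c);
            }
        }
        if (positives.isEmpty()) {
            // Synthetic clause standing in for a match-all query.
            positives.add("*:*");
        }
        return positives;
    }

    public static void main(String[] args) {
        // Only a negative clause, as in genre:"-om*".
        List<String> result = positiveClauses(List.of("-genre:om*"));
        if (!result.equals(List.of("*:*"))) {
            throw new AssertionError("expected synthetic match-all clause");
        }
        System.out.println("positive clauses: " + result);
    }
}
```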

This bug was found using [Diffblue Microservices 
Testing|http://www.diffblue.com/labs]. Find more information on this [test 

[jira] [Created] (SOLR-13181) NullPointerException in org.apache.solr.request.macro.MacroExpander

2019-01-29 Thread Cesar Rodriguez (JIRA)
Cesar Rodriguez created SOLR-13181:
--

 Summary: NullPointerException in 
org.apache.solr.request.macro.MacroExpander
 Key: SOLR-13181
 URL: https://issues.apache.org/jira/browse/SOLR-13181
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}
Reporter: Cesar Rodriguez
 Attachments: 
0001-Macro-expander-fail-gracefully-on-unsupported-syntax.patch, home.zip

Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?a=${${b}}
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.StringIndexOutOfBoundsException: String index out of range: -4
at java.lang.String.substring(String.java:1967)
at 
org.apache.solr.request.macro.MacroExpander._expand(MacroExpander.java:150)
at 
org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:101)
at 
org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:65)
at 
org.apache.solr.request.macro.MacroExpander.expand(MacroExpander.java:51)
at 
org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:159)
at 
org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)
[...]
{noformat}

Parameter [macro expansion|http://yonik.com/solr-query-parameter-substitution/] 
seems to take place in 
{{org.apache.solr.request.macro.MacroExpander._expand(String val)}}. From 
reading the code of the function it seems that macros are not expanded inside 
curly brackets {{${...}}}, and so the {{${b}}} inside

{noformat}
${${b}}
{noformat}

should not be expanded. But the function fails to detect this specific case, 
and instead of gracefully refusing to expand it, it throws the exception above.

A possible fix could be updating the {{idx}} variable when the {{StrParser}} 
detects that no valid identifier can be found inside the brackets. See attached 
file {{0001-Macro-expander-fail-gracefully-on-unsupported-syntax.patch}}.
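A minimal stand-alone model of such a graceful expander is sketched below; it 
keeps unsupported spans like {{${${b}}}} literal instead of computing a 
negative substring index. This is an illustrative assumption of how the fix 
could behave, not Solr's MacroExpander code:

```java
import java.util.Map;

// Minimal model of graceful macro expansion: when the text between "${" and
// "}" is not a plain identifier, keep the span literally and advance past it
// instead of failing. (Hypothetical sketch; not o.a.s.request.macro code.)
public class MacroExpandDemo {
    static String expand(String val, Map<String, String> params) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < val.length()) {
            int start = val.indexOf("${", i);
            if (start < 0) {
                out.append(val, i, val.length());
                break;
            }
            out.append(val, i, start);
            int end = val.indexOf('}', start + 2);
            if (end < 0) {
                // No closing brace: keep the tail literally.
                out.append(val, start, val.length());
                break;
            }
            String name = val.substring(start + 2, end);
            if (!name.matches("[A-Za-z_][A-Za-z0-9_]*")) {
                // Unsupported syntax such as ${${b}}: keep the span literal.
                out.append(val, start, end + 1);
            } else {
                out.append(params.getOrDefault(name, ""));
            }
            i = end + 1;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = Map.of("b", "x");
        System.out.println(expand("a=${${b}}", p)); // prints "a=${${b}}"
        System.out.println(expand("a=${b}", p));    // prints "a=x"
    }
}
```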

We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8665) Add temporary code in TestBackwardsCompatibility to handle two concurrent releases

2019-01-29 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-8665.
--
Resolution: Won't Fix

Fair enough. I reverted the version bump on branch_8x and will focus on 
releasing 7.7 first

> Add temporary code in TestBackwardsCompatibility to handle two concurrent 
> releases
> --
>
> Key: LUCENE-8665
> URL: https://issues.apache.org/jira/browse/LUCENE-8665
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
>
> Today TestBackwardsCompatibility can handle a single release at a time 
> because TestBackwardsCompatibility#testAllVersionsTested is lenient on the 
> latest version only (the one that is released). However and since we want to 
> release two versions simultaneously (7.7 and 8.0) this test is failing on 
> branch_8x. This means that we need to do one release at a time or add more 
> leniency in the test to handle this special case. We could for instance add 
> something like:
> {noformat}
> // NORELEASE: we have two releases in progress (7.7.0 and 8.0.0) so we 
> could be missing
> // 2 files, 1 for 7.7.0 and one for 8.0.0. This should be removed when 
> 7.7.0 is released.
> if (extraFiles.isEmpty() && missingFiles.size() == 2 && 
> missingFiles.contains("7.7.0-cfs") && missingFiles.contains("8.0.0-cfs")) {
>   // success
>   return;
> }
> {noformat}
> and remove the code when 7.7.0 is released ?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8665) Add temporary code in TestBackwardsCompatibility to handle two concurrent releases

2019-01-29 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755148#comment-16755148
 ] 

Adrien Grand commented on LUCENE-8665:
--

Given that we need to release 8.0 after 7.7 anyway to make sure than 8.0 can 
read 7.7 indices, we can't really release them in parallel. I'm leaning towards 
reverting the version bump on 8.0 for now and re-introducing it when 7.7 is out.

> Add temporary code in TestBackwardsCompatibility to handle two concurrent 
> releases
> --
>
> Key: LUCENE-8665
> URL: https://issues.apache.org/jira/browse/LUCENE-8665
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
>
> Today TestBackwardsCompatibility can handle a single release at a time 
> because TestBackwardsCompatibility#testAllVersionsTested is lenient on the 
> latest version only (the one that is released). However and since we want to 
> release two versions simultaneously (7.7 and 8.0) this test is failing on 
> branch_8x. This means that we need to do one release at a time or add more 
> leniency in the test to handle this special case. We could for instance add 
> something like:
> {noformat}
> // NORELEASE: we have two releases in progress (7.7.0 and 8.0.0) so we 
> could be missing
> // 2 files, 1 for 7.7.0 and one for 8.0.0. This should be removed when 
> 7.7.0 is released.
> if (extraFiles.isEmpty() && missingFiles.size() == 2 && 
> missingFiles.contains("7.7.0-cfs") && missingFiles.contains("8.0.0-cfs")) {
>   // success
>   return;
> }
> {noformat}
> and remove the code when 7.7.0 is released ?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13180) ClassCastExceptions in o.a.s.s.facet.FacetModule for valid JSON inputs that are not objects

2019-01-29 Thread Johannes Kloos (JIRA)
Johannes Kloos created SOLR-13180:
-

 Summary: ClassCastExceptions in o.a.s.s.facet.FacetModule for 
valid JSON inputs that are not objects
 Key: SOLR-13180
 URL: https://issues.apache.org/jira/browse/SOLR-13180
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Affects Versions: 7.5, master (9.0)
 Environment: Running on Unix, using a recent git checkout of master 
and the films example database.
h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * Start the server using the command “./bin/solr start -f -p 8983 -s /tmp/home”
 * Request the URL above.

h2. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]). The 
attached file (home.zip) gives the contents of folder /tmp/home that you will 
obtain by following the steps below.


{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:

{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field" : {"source":"*","dest":"_text_"}}' http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}
Reporter: Johannes Kloos
 Attachments: home.zip

Requesting the following URL gives a 500 error due to a ClassCastException in 
o.a.s.s.f.FacetModule: [http://localhost:8983/solr/films/select?json=0]

The error response is caused by an uncaught ClassCastException, with the 
stack trace shown here:

{noformat}
java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.Map
at org.apache.solr.search.facet.FacetModule.prepare(FacetModule.java:78)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:272)
{noformat}

 

The cause of this bug is similar to SOLR-13178: line 78 in FacetModule reads

{{jsonFacet = (Map) json.get("facet")}}

and assumes that the JSON value stored under "facet" is a JSON object, while 
we only guarantee that it is a valid JSON value.

Line 92 seems to contain another situation like this, but I do not have a test 
case handy for that specific case.
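A defensive check before the cast could turn the 500 into a clear bad-request 
error. The sketch below is a hypothetical stand-alone model (the method name 
and the IllegalArgumentException stand in for Solr's actual error handling):

```java
import java.util.Map;

// Sketch of a defensive type check before the cast in FacetModule.prepare:
// reject non-object "facet" values with a clear error instead of letting a
// ClassCastException escape as a 500. (Hypothetical code, not the Solr fix.)
public class FacetParamCheck {
    static Map<?, ?> facetSpec(Map<String, ?> json) {
        Object facet = json.get("facet");
        if (facet == null) {
            return null;
        }
        if (!(facet instanceof Map)) {
            // Fail with a descriptive message rather than a raw cast failure.
            throw new IllegalArgumentException(
                "Expected a JSON object for 'facet', got "
                    + facet.getClass().getSimpleName());
        }
        return (Map<?, ?>) facet;
    }

    public static void main(String[] args) {
        try {
            facetSpec(Map.of("facet", 0L)); // models the json=0 request
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints "Expected a JSON object for 'facet', got Long"
        }
    }
}
```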

This bug was found using [Diffblue Microservices 
Testing|http://www.diffblue.com/labs]. Find more information on this [test 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13179) NullPointerException in org/apache/lucene/queries/function/FunctionScoreQuery.java [109]

2019-01-29 Thread Marek (JIRA)
Marek created SOLR-13179:


 Summary: NullPointerException in 
org/apache/lucene/queries/function/FunctionScoreQuery.java [109]
 Key: SOLR-13179
 URL: https://issues.apache.org/jira/browse/SOLR-13179
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * Start the server using the command "./bin/solr start -f -p 8983 -s /tmp/home"
 * Request the URL above.

h2. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]). The 
attached file (home.zip) gives the contents of folder /tmp/home that you will 
obtain by following the steps below.

 

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

 

In one terminal start a Solr instance in foreground:

{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

 

In another terminal, create a collection of movies, with no shards and no 
replication:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field" : {"source":"*","dest":"_text_"}}' http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}
Reporter: Marek
 Attachments: home.zip

Execution of the URL query:

{noformat}
http://localhost:8983/solr/films/select?q={!frange%20l=10%20u=100}boost({!v=+},3)
{noformat}

leads to a NullPointerException:

{noformat}
2019-01-29 13:42:04.662 ERROR (qtp689401025-21) [ x:films] o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
 at org.apache.lucene.queries.function.FunctionScoreQuery.rewrite(FunctionScoreQuery.java:109)
 at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:667)
 at org.apache.lucene.queries.function.valuesource.QueryValueSource.createWeight(QueryValueSource.java:75)
 at org.apache.solr.search.function.ValueSourceRangeFilter.createWeight(ValueSourceRangeFilter.java:105)
 at org.apache.solr.search.SolrConstantScoreQuery$ConstantWeight.(SolrConstantScoreQuery.java:94)
 at org.apache.solr.search.SolrConstantScoreQuery.createWeight(SolrConstantScoreQuery.java:119)
 at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:717)
 at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443)
 at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:200)
 at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1604)
 at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1420)
 at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567)
 at org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434)
 at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373)
 at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)

[...]
{noformat}

 

More details:

1. In org/apache/solr/search/ValueSourceParser.java[330] the query variable 
'q' is assigned null, which is returned from 
org/apache/solr/search/LuceneQParser.java[39] because the variable 'qstr' is 
the empty string.

2. In org/apache/solr/search/ValueSourceParser.java[332] the null value of 'q' 
is passed to 'FunctionScoreQuery.boostByValue', which in turn initialises the 
member 'in' of org.apache.lucene.queries.function.FunctionScoreQuery to null 
at org/apache/lucene/queries/function/FunctionScoreQuery.java[56].

3. Later, during execution of the query, the member 'in' (still null) is 
dereferenced at org/apache/lucene/queries/function/FunctionScoreQuery.java[109].
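These three steps reduce to a simple pattern: a null sub-query is stored 
without validation and only dereferenced much later. A constructor-time null 
check, as sketched below with stand-in types (this is hypothetical code, not 
Lucene's FunctionScoreQuery), would surface the problem at parse time:

```java
import java.util.Objects;

// Model of the failure mode: the wrapped query 'in' is stored without a
// null check, so a later rewrite() dereferences null. Failing fast in the
// constructor gives a clear error at parse time instead.
// (Hypothetical stand-in class; not Lucene's API.)
public class WrappedQueryDemo {
    private final Object in; // stands in for the wrapped Lucene Query

    WrappedQueryDemo(Object in) {
        // Fail fast with a clear message rather than a later NPE.
        this.in = Objects.requireNonNull(in, "wrapped query must not be null");
    }

    public static void main(String[] args) {
        try {
            new WrappedQueryDemo(null);
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // prints "wrapped query must not be null"
        }
    }
}
```

Whether the better fix is this fail-fast check or rejecting the empty query 
string earlier in the parser is a design choice for the actual patch.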

 

See section 'Environment' to see how Solr and data (films collection) were 
installed and configured.

 

-
This bug was found using [Diffblue Microservices 
Testing|http://www.diffblue.com/labs]. Find more information on this [test 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Updated] (LUCENE-8665) Add temporary code in TestBackwardsCompatibility to handle two concurrent releases

2019-01-29 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-8665:
-
Description: 
Today TestBackwardsCompatibility can handle a single release at a time because 
TestBackwardsCompatibility#testAllVersionsTested is lenient on the latest 
version only (the one that is released). However and since we want to release 
two versions simultaneously (7.7 and 8.0) this test is failing on branch_8x. 
This means that we need to do one release at a time or add more leniency in the 
test to handle this special case. We could for instance add something like:

{noformat}
// NORELEASE: we have two releases in progress (7.7.0 and 8.0.0) so we 
could be missing
// 2 files, 1 for 7.7.0 and one for 8.0.0. This should be removed when 
7.7.0 is released.
if (extraFiles.isEmpty() && missingFiles.size() == 2 && 
missingFiles.contains("7.7.0-cfs") && missingFiles.contains("8.0.0-cfs")) {
  // success
  return;
}
{noformat}

and remove the code when 7.7.0 is released ?

  was:
Today TestBackwardsCompatibility can handle a single release at a time because 
TestBackwardsCompatibility#testAllVersionsTested is lenient on the latest 
version only (the one that is released). However and since we want to release 
two versions simultaneously (7.7 and 8.0) this test is failing on branch_8x. 
This means that we need to do one release at a time or add more leniency in the 
test to handle this special case. We could for instance add something like:

{noformat}
// NORELEASE: we have two releases in progress (7.7.0 and 8.0.0) so we 
could be missing
// 2 files, 1 for 7.7.0 and one for 8.0.0. This should be removed when 
7.7.0 is released.
if (extraFiles.isEmpty() && missingFiles.size() == 2 && 
missingFiles.contains("7.7.0-cfs") && missingFiles.contains("8.0.0-cfs")) {
  // success
  return;
}
{noformat}

and remove the code when 7.6.0 is released ?


> Add temporary code in TestBackwardsCompatibility to handle two concurrent 
> releases
> --
>
> Key: LUCENE-8665
> URL: https://issues.apache.org/jira/browse/LUCENE-8665
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Major
>
> Today TestBackwardsCompatibility can handle a single release at a time 
> because TestBackwardsCompatibility#testAllVersionsTested is lenient on the 
> latest version only (the one that is released). However and since we want to 
> release two versions simultaneously (7.7 and 8.0) this test is failing on 
> branch_8x. This means that we need to do one release at a time or add more 
> leniency in the test to handle this special case. We could for instance add 
> something like:
> {noformat}
> // NORELEASE: we have two releases in progress (7.7.0 and 8.0.0) so we 
> could be missing
> // 2 files, 1 for 7.7.0 and one for 8.0.0. This should be removed when 
> 7.7.0 is released.
> if (extraFiles.isEmpty() && missingFiles.size() == 2 && 
> missingFiles.contains("7.7.0-cfs") && missingFiles.contains("8.0.0-cfs")) {
>   // success
>   return;
> }
> {noformat}
> and remove the code when 7.7.0 is released ?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Opening old indices for reading

2019-01-29 Thread Simon Willnauer
thanks folks,

these are all good points. I created a first cut of what I had in mind
[1] . It's relatively simple and from a java visibility perspective
the only change that a user can take advantage of is this [2] and this
[3] respectively. This would allow opening indices back to Lucene 7.0
given that the codecs and postings formats are available. From a
documentation perspective I added [4]. This is a pure read-only change
and doesn't allow opening these indices for writing. You can't merge
them, nor can you open an index writer on top of them. I
still need to add support to CheckIndex but that's what it is
basically.

lemme know what you think,

simon
[1] 
https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752
[2] 
https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752#diff-e0352098b027d6f41a17c068ad8d7ef0R689
[3] 
https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752#diff-e3ccf9ee90355b10f2dd22ce2da6c73cR306
[4] 
https://github.com/apache/lucene-solr/commit/0c4c885214ef30627a01e320f9c861dc2521b752#diff-1bedf4d0d52ff88ef8a16a6788ad7684R86

On Fri, Jan 25, 2019 at 3:14 PM Michael McCandless
 wrote:
>
> Another example is long ago Lucene allowed pos=-1 to be indexed and it caused 
> all sorts of problems.  We also stopped allowing positions close to 
> Integer.MAX_VALUE (https://issues.apache.org/jira/browse/LUCENE-6382).  Yet 
> another is allowing negative vInts which are possible but horribly 
> inefficient (https://issues.apache.org/jira/browse/LUCENE-3738).
>
> We do need to be free to fix these problems and then know after N+2 releases 
> that no index can have the issue.
>
> I like the idea of providing "expert" / best effort / limited way of carrying 
> forward such ancient indices, but I think the huge challenge for someone 
> using that tool on an important index will be enumerating the list of issues 
> that might "matter" (the 3 Adrien listed + the 3 I listed above is a start 
> for this list) and taking appropriate steps to "correct" the index if so.  
> E.g. on a norms encoding change, somehow these expert tools must decode norms 
> the old way, encode them the new way, and then rewrite the norms files.  Or 
> if the index has pos=-1, changing that to pos=0.  Or if it has negative 
> vInts, ... etc.
>
> Or maybe the "special" DirectoryReader only reads stored fields?  And so you 
> would enumerate your _source and reindex into the latest format ...
>
> > Something like https://issues.apache.org/jira/browse/LUCENE-8277 would
> > help make it harder to introduce corrupt data in an index.
>
> +1
>
> Every time we catch something like "don't allow pos = -1 into the index" we 
> need somehow remember to go and add the check also in addIndices.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Jan 25, 2019 at 3:52 AM Adrien Grand  wrote:
>>
>> Agreed with Michael that setting expectations is going to be
>> important. The thing that I would like to make sure is that we would
>> never refrain from moving Lucene forward because of this feature. In
>> particular, lucene-core should be free to make assumptions that are
>> valid for N and N-1 indices without worrying about the fact that we
>> have this super-expert feature that allows opening older indices. Here
>> are some assumptions that I have in mind which have not always been
>> true:
>>  - norms might be encoded in a different way (this changed in 7)
>>  - all index files have a checksum (only true since Lucene 5)
>>  - offsets are always going forward (only enforced since Lucene 7)
>>
>> This means that carrying indices over by just merging them with the
>> new version to move them to a new codec won't work all the time. For
>> instance if your index has backward offsets and new codecs assume that
>> offsets are going forward, then merging might fail or corrupt offsets
>> - I'd like to make sure that we would not consider this a bug.
>>
>> Erick, I don't think this feature would be suitable for "robust index
>> upgrades". To me it is really a best effort and shouldn't be trusted
>> too much.
>>
>> I think some users will be tempted to wrap old readers to make them
>> look good and then add them back to an index using addIndexes?
>> Something like https://issues.apache.org/jira/browse/LUCENE-8277 would
>> help make it harder to introduce corrupt data in an index.
>>
>> On Wed, Jan 23, 2019 at 3:11 PM Simon Willnauer
>>  wrote:
>> >
>> > Hey folks,
>> >
>> > tl;dr; I want to be able to open an indexreader on an old index if the
>> > SegmentInfo version is supported and all segment codecs are available.
>> > Today that's not possible even if I port old formats to current
>> > versions.
>> >
>> > Our BWC policy for quite a while has been N-1 major versions. That's
>> > good and I think we should keep it that way. Only recently, caused by
>> > changes how we encode/decode norms we also hard-enforce a the
>> > 

[jira] [Created] (LUCENE-8665) Add temporary code in TestBackwardsCompatibility to handle two concurrent releases

2019-01-29 Thread Jim Ferenczi (JIRA)
Jim Ferenczi created LUCENE-8665:


 Summary: Add temporary code in TestBackwardsCompatibility to 
handle two concurrent releases
 Key: LUCENE-8665
 URL: https://issues.apache.org/jira/browse/LUCENE-8665
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jim Ferenczi


Today TestBackwardsCompatibility can handle only a single release at a time because 
TestBackwardsCompatibility#testAllVersionsTested is lenient on the latest 
version only (the one being released). However, since we want to release 
two versions simultaneously (7.7 and 8.0), this test fails on branch_8x. 
This means that we either need to do one release at a time or add more leniency to the 
test to handle this special case. We could, for instance, add something like:

{noformat}
// NORELEASE: we have two releases in progress (7.7.0 and 8.0.0) so we 
could be missing
// 2 files, 1 for 7.7.0 and one for 8.0.0. This should be removed when 
7.7.0 is released.
if (extraFiles.isEmpty() && missingFiles.size() == 2 && 
missingFiles.contains("7.7.0-cfs") && missingFiles.contains("8.0.0-cfs")) {
  // success
  return;
}
{noformat}

and remove the code when 7.7.0 is released?
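The proposed leniency check can be sketched as a standalone snippet. The class and method names below are hypothetical; the real check would live inside TestBackwardsCompatibility#testAllVersionsTested and operate on its computed extra/missing file sets:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical standalone sketch of the proposed special case: succeed when
// nothing is extra and the only missing back-compat index files are the two
// for the in-flight releases (7.7.0 and 8.0.0).
public class LeniencyCheck {
    public static boolean isAcceptable(Set<String> extraFiles, Set<String> missingFiles) {
        return extraFiles.isEmpty()
                && missingFiles.size() == 2
                && missingFiles.contains("7.7.0-cfs")
                && missingFiles.contains("8.0.0-cfs");
    }

    public static void main(String[] args) {
        Set<String> missing = new HashSet<>(List.of("7.7.0-cfs", "8.0.0-cfs"));
        // Exactly the two in-flight releases missing, nothing extra -> acceptable.
        System.out.println(isAcceptable(new HashSet<>(), missing)); // true
        // Any extra file still fails the check.
        System.out.println(isAcceptable(new HashSet<>(List.of("x-cfs")), missing)); // false
    }
}
```

As the NORELEASE comment in the snippet above notes, this leniency would be removed again once 7.7.0 ships.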



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13178) ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs that are not objects

2019-01-29 Thread Johannes Kloos (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johannes Kloos updated SOLR-13178:
--
Description: 
Requesting any of the following URLs gives a 500 error due to a 
ClassCastException in o.a.s.r.j.ObjectUtil.mergeObjects:
 * [http://localhost:8983/solr/films/select?json=0]
 * [http://localhost:8983/solr/films/select?json.facet=1=x]

The error response is caused by uncaught ClassCastExceptions, such as (for the 
first URL):

{{java.lang.ClassCastException: java.lang.Long cannot be cast to 
java.util.Map}}
 {{at 
org.apache.solr.request.json.ObjectUtil.mergeObjects(ObjectUtil.java:108)}}
 {{at org.apache.solr.request.json.RequestUtil.mergeJSON(RequestUtil.java:269)}}
 {{at 
org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:180)}}
 {{at 
org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)}}
 {{at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)}}
 {{[...]}}

{{The culprit seems to be the o.a.s.r.j.RequestUtil.mergeJSON method, in 
particular the following fragment:}}
 {{    Object o = ObjectBuilder.fromJSON(jsonStr);}}
 {{    // zero-length strings or comments can cause this to be null (and a 
zero-length string can result from a json content-type w/o a body)}}
 {{    if (o != null) {}}
 {{  ObjectUtil.mergeObjects(json, path, o, handler);}}
     }

Note that o is an Object representing a JSON _value_, while Solr seems to 
expect that o holds a JSON _object_. In the examples above, however, the JSON 
value is a number (represented by a Long object) - which is, in fact, valid 
JSON.

A possible fix could be to use the getObject method of ObjectUtil instead of 
blindly calling fromJSON.
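A minimal sketch of such a guard, assuming the parser may return any JSON value type. The class and method names here are hypothetical illustrations, not Solr's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: reject non-object top-level JSON values with a clear
// error instead of letting a raw cast throw ClassCastException.
public class MergeGuard {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> asJsonObject(Object parsed) {
        if (parsed instanceof Map) {
            return (Map<String, Object>) parsed; // a real JSON object
        }
        // A number, string, boolean, or array is valid JSON but not an object.
        throw new IllegalArgumentException(
            "Expected a JSON object but got: " + parsed.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        System.out.println(asJsonObject(new HashMap<>()).size()); // 0
        try {
            asJsonObject(0L); // ?json=0 parses to a Long, not a Map
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With a check like this, the request would fail with a 400-style message naming the offending type rather than a 500 from an uncaught cast.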

This bug was found using [Diffblue Microservices 
Testing|http://www.diffblue.com/labs]. Find more information on this [test 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].

  was:
We found this bug using Diffblue Microservice testing

Requesting any of the following URLs gives a 500 error due to a 
ClassCastException in o.a.s.r.j.ObjectUtil.mergeObjects:
 * [http://localhost:8983/solr/films/select?json=0]
 * [http://localhost:8983/solr/films/select?json.facet=1=x]

The error response is caused by uncaught ClassCastExceptions, such as (for the 
first URL):

{{java.lang.ClassCastException: java.lang.Long cannot be cast to 
java.util.Map}}
 {{at 
org.apache.solr.request.json.ObjectUtil.mergeObjects(ObjectUtil.java:108)}}
 {{at org.apache.solr.request.json.RequestUtil.mergeJSON(RequestUtil.java:269)}}
 {{at 
org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:180)}}
 {{at 
org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)}}
 {{at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)}}
 {{[...]}}

{{The culprit seems to be the o.a.s.r.j.RequestUtil.mergeJSON method, in 
particular the following fragment:}}
 {{    Object o = ObjectBuilder.fromJSON(jsonStr);}}
 {{    // zero-length strings or comments can cause this to be null (and a 
zero-length string can result from a json content-type w/o a body)}}
 {{    if (o != null) {}}
 {{  ObjectUtil.mergeObjects(json, path, o, handler);}}
    }

Note that o is an Object representing a JSON _value_, while SOLR seems to 
expect that o holds a JSON _object_. But in the examples above, the JSON value 
is a number (represented by  a Long object) instead - this is, in fact, valid 
JSON.

A possible fix could be to use the getObject method of ObjectUtil instead of 
blindly calling fromJSON.

This bug was found using [Diffblue Microservices 
Testing|http://www.diffblue.com/labs]. Find more information on this [test 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].


> ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs 
> that are not objects
> ---
>
> Key: SOLR-13178
> URL: https://issues.apache.org/jira/browse/SOLR-13178
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.5, master (9.0)
> Environment: Running on Unix, using a git checkout close to master.
> h2. Steps to reproduce
>  * Build commit ea2c8ba of Solr as described in the section below.
>  * Build the films collection as described below.
>  * {{Start the server using the command “./bin/solr start -f -p 8983 -s 
> /tmp/home”}}
>  * Request the URL above.
> h2. Compiling the server
> {{git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> 

[jira] [Created] (SOLR-13178) ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs that are not objects

2019-01-29 Thread Johannes Kloos (JIRA)
Johannes Kloos created SOLR-13178:
-

 Summary: ClassCastExceptions in o.a.s.request.json.ObjectUtil for 
valid JSON inputs that are not objects
 Key: SOLR-13178
 URL: https://issues.apache.org/jira/browse/SOLR-13178
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server
Affects Versions: 7.5, master (9.0)
 Environment: h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * {{Start the server using the command “./bin/solr start -f -p 8983 -s 
/tmp/home”}}
 * Request the URL above.

h2. Compiling the server

{{git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server}}
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - for 
reference, I have attached a copy of the database.


{{mkdir -p /tmp/home
echo '' > 
/tmp/home/solr.xml}}

In one terminal start a Solr instance in foreground:

./bin/solr start -f -p 8983 -s /tmp/home

In another terminal, create a collection of movies, with no shards and no 
replication:

{{bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '\{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
[http://localhost:8983/solr/films/schema]}}
{{curl -X POST -H 'Content-type:application/json' --data-binary 
'\{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
[http://localhost:8983/solr/films/schema]}}
{{./bin/post -c films example/films/films.json}}
Reporter: Johannes Kloos
 Attachments: home.zip

We found this bug using Diffblue Microservice testing

Requesting any of the following URLs gives a 500 error due to a 
ClassCastException in o.a.s.r.j.ObjectUtil.mergeObjects:
 * [http://localhost:8983/solr/films/select?json=0]
 * [http://localhost:8983/solr/films/select?json.facet=1=x]

The error response is caused by uncaught ClassCastExceptions, such as (for the 
first URL):

{{java.lang.ClassCastException: java.lang.Long cannot be cast to 
java.util.Map}}
 {{at 
org.apache.solr.request.json.ObjectUtil.mergeObjects(ObjectUtil.java:108)}}
 {{at org.apache.solr.request.json.RequestUtil.mergeJSON(RequestUtil.java:269)}}
 {{at 
org.apache.solr.request.json.RequestUtil.processParams(RequestUtil.java:180)}}
 {{at 
org.apache.solr.util.SolrPluginUtils.setDefaults(SolrPluginUtils.java:167)}}
 {{at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:196)}}
 {{[...]}}

{{The culprit seems to be the o.a.s.r.j.RequestUtil.mergeJSON method, in 
particular the following fragment:}}
 {{    Object o = ObjectBuilder.fromJSON(jsonStr);}}
 {{    // zero-length strings or comments can cause this to be null (and a 
zero-length string can result from a json content-type w/o a body)}}
 {{    if (o != null) {}}
 {{  ObjectUtil.mergeObjects(json, path, o, handler);}}
    }

Note that o is an Object representing a JSON _value_, while Solr seems to 
expect that o holds a JSON _object_. In the examples above, however, the JSON 
value is a number (represented by a Long object) - which is, in fact, valid 
JSON.

A possible fix could be to use the getObject method of ObjectUtil instead of 
blindly calling fromJSON.

This bug was found using [Diffblue Microservices 
Testing|http://www.diffblue.com/labs]. Find more information on this [test 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].






[jira] [Updated] (SOLR-13178) ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs that are not objects

2019-01-29 Thread Johannes Kloos (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johannes Kloos updated SOLR-13178:
--
Environment: 
Running on Unix, using a git checkout close to master.
h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * {{Start the server using the command “./bin/solr start -f -p 8983 -s 
/tmp/home”}}
 * Request the URL above.

h2. Compiling the server

{{git clone https://github.com/apache/lucene-solr
 cd lucene-solr
 git checkout ea2c8ba
 ant compile
 cd solr
 ant server}}
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - for 
reference, I have attached a copy of the database.

{{mkdir -p /tmp/home
 echo '' > 
/tmp/home/solr.xml}}

In one terminal start a Solr instance in foreground:

./bin/solr start -f -p 8983 -s /tmp/home

In another terminal, create a collection of movies, with no shards and no 
replication:

{{bin/solr create -c films
 curl -X POST -H 'Content-type:application/json' --data-binary '\{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
[http://localhost:8983/solr/films/schema]}}
 {{curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
[http://localhost:8983/solr/films/schema]}}
 {{./bin/post -c films example/films/films.json}}

  was:
h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * {{Start the server using the command “./bin/solr start -f -p 8983 -s 
/tmp/home”}}
 * Request the URL above.

h2. Compiling the server

{{git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server}}
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - for 
reference, I have attached a copy of the database.


{{mkdir -p /tmp/home
echo '' > 
/tmp/home/solr.xml}}

In one terminal start a Solr instance in foreground:

./bin/solr start -f -p 8983 -s /tmp/home

In another terminal, create a collection of movies, with no shards and no 
replication:

{{bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '\{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
[http://localhost:8983/solr/films/schema]}}
{{curl -X POST -H 'Content-type:application/json' --data-binary 
'\{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
[http://localhost:8983/solr/films/schema]}}
{{./bin/post -c films example/films/films.json}}


> ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs 
> that are not objects
> ---
>
> Key: SOLR-13178
> URL: https://issues.apache.org/jira/browse/SOLR-13178
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.5, master (9.0)
> Environment: Running on Unix, using a git checkout close to master.
> h2. Steps to reproduce
>  * Build commit ea2c8ba of Solr as described in the section below.
>  * Build the films collection as described below.
>  * {{Start the server using the command “./bin/solr start -f -p 8983 -s 
> /tmp/home”}}
>  * Request the URL above.
> h2. Compiling the server
> {{git clone https://github.com/apache/lucene-solr
>  cd lucene-solr
>  git checkout ea2c8ba
>  ant compile
>  cd solr
>  ant server}}
> h2. Building the collection
> We followed Exercise 2 from the quick start tutorial 
> ([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - 
> for reference, I have attached a copy of the database.
> {{mkdir -p /tmp/home
>  echo '' > 
> /tmp/home/solr.xml}}
> In one terminal start a Solr instance in foreground:
> ./bin/solr start -f -p 8983 -s /tmp/home
> In another terminal, create a collection of movies, with no shards and no 
> replication:
> {{bin/solr create -c films
>  curl -X POST -H 'Content-type:application/json' --data-binary 
> '\{"add-field": {"name":"name", "type":"text_general", "multiValued":false, 
> "stored":true}}' [http://localhost:8983/solr/films/schema]}}
>  {{curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> [http://localhost:8983/solr/films/schema]}}
>  {{./bin/post -c films example/films/films.json}}
>Reporter: Johannes Kloos
>Priority: Minor
> Attachments: home.zip
>
>
> We found this bug using Diffblue Microservice testing
> 

[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-458565430
 
 
   Would appreciate a review - @markrmiller, @uschindler, @sigram, @ctargett 
and whoever else is interested.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-29 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755060#comment-16755060
 ] 

Kevin Risden commented on SOLR-9515:


Updated patch to master with HDFS tests passing. 

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251859763
 
 

 ##
 File path: lucene/tools/src/groovy/check-source-patterns.groovy
 ##
 @@ -149,6 +149,7 @@ ant.fileScanner{
 exclude(name: 'lucene/benchmark/temp/**')
 exclude(name: '**/CheckLoggingConfiguration.java')
 exclude(name: 'lucene/tools/src/groovy/check-source-patterns.groovy') // 
ourselves :-)
+exclude(name: 'solr/core/src/test/org/apache/hadoop/**')
 
 Review comment:
   Needed to skip checking the copied HttpServer2 code from Apache Hadoop.





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251859799
 
 

 ##
 File path: solr/core/build.xml
 ##
 @@ -25,6 +25,7 @@
 
   

[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251859319
 
 

 ##
 File path: 
solr/test-framework/src/java/org/apache/solr/util/BadHdfsThreadsFilter.java
 ##
 @@ -29,16 +29,18 @@ public boolean reject(Thread t) {
   return true;
 } else if (name.startsWith("org.apache.hadoop.hdfs.PeerCache")) { // 
SOLR-7288
   return true;
+} else if (name.endsWith("StatisticsDataReferenceCleaner")) {
+  return true;
 } else if (name.startsWith("LeaseRenewer")) { // SOLR-7287
   return true;
 } else if (name.startsWith("org.apache.hadoop.fs.FileSystem$Statistics")) 
{ // SOLR-11261
   return true;
 } else if (name.startsWith("ForkJoinPool.")) { // JVM built in pool
   return true;
+} else if (name.startsWith("ForkJoinPool-")) { // JVM built in pool
 
 Review comment:
   I have not tracked down why this is necessary yet.





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251859267
 
 

 ##
 File path: 
solr/test-framework/src/java/org/apache/solr/util/BadHdfsThreadsFilter.java
 ##
 @@ -29,16 +29,18 @@ public boolean reject(Thread t) {
   return true;
 } else if (name.startsWith("org.apache.hadoop.hdfs.PeerCache")) { // 
SOLR-7288
   return true;
+} else if (name.endsWith("StatisticsDataReferenceCleaner")) {
 
 Review comment:
   I have not tracked down why this is necessary yet.





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251858965
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/security/JWTAuthPluginTest.java
 ##
 @@ -44,7 +44,6 @@
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
-import org.mortbay.util.ajax.JSON;
 
 Review comment:
   Avoid using old mortbay utilities for converting JSON. Uses existing Solr 
Utils to convert from JSON string.





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251858743
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/response/TestCustomDocTransformer.java
 ##
 @@ -26,7 +26,6 @@
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.response.transform.DocTransformer;
 import org.apache.solr.response.transform.TransformerFactory;
-import org.bouncycastle.util.Strings;
 
 Review comment:
The test used an old Bouncy Castle dependency that is no longer needed with 
Hadoop. Switched to the built-in Java split.





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251858482
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/handler/export/TestExportWriter.java
 ##
 @@ -28,14 +28,14 @@
 import java.util.Map;
 import java.util.Set;
 
+import com.fasterxml.jackson.databind.ObjectMapper;
 import org.apache.lucene.util.TestUtil;
 import org.apache.solr.SolrTestCaseJ4;
 import org.apache.solr.common.SolrInputDocument;
 import org.apache.solr.common.util.SuppressForbidden;
 import org.apache.solr.common.util.Utils;
 import org.apache.solr.request.SolrQueryRequest;
 import org.apache.solr.schema.SchemaField;
-import org.codehaus.jackson.map.ObjectMapper;
 
 Review comment:
   Test used to use old Jackson class. Replaced with newer one and removed old 
Jackson pulled in by Hadoop.





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251858165
 
 

 ##
 File path: solr/core/src/test/org/apache/hadoop/http/HttpServer2.java
 ##
 @@ -0,0 +1,1685 @@
+/*
 
 Review comment:
   @uschindler I know you have a lot of experience with JDK 9+ and packaging. 
Is this even allowed to have a separate package `org.apache.hadoop...` inside 
the Solr test directory? Will it cause issues with other JDK versions?





[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r251857862
 
 

 ##
 File path: solr/core/src/test/org/apache/hadoop/http/HttpServer2.java
 ##
 @@ -0,0 +1,1685 @@
+/*
 
 Review comment:
   This file is copied directly from Apache Hadoop to allow integration tests 
to run under Jetty 9.4.
   
   I couldn't find a way to get the shaded Hadoop dependencies (new in Hadoop 
3) to work since they shaded too much (like javax.servlet). This made our 
integration with hadoop-auth break.
   
   I have no idea if copying HttpServer2 in here will break other things. I 
tested with JDK 8 and so far this looks good.





[GitHub] risdenk opened a new pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-29 Thread GitBox
risdenk opened a new pull request #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553
 
 
   





[jira] [Commented] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-01-29 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755052#comment-16755052
 ] 

Markus Jelsma commented on SOLR-12743:
--

Hello [~bjoernhaeuser], thanks for confirming.

Sadly, I confirm the problem persists with Solr 7.6.0. We still cannot 
reproduce it locally, not even with the index from production.

> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6ffd47ea8 - 70.087.272 
> (1,35%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x79ea9c040 - 65.678.264 
> (1,27%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6855ad680 - 63.050.600 
> (1,22%) bytes. 
> Problem Suspect 2
> 223 instances of "org.apache.solr.util.ConcurrentLRUCache", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.373.110.208 (26,52%) bytes. 
> {noformat}
> More details in the email threads.
> [1] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201804.mbox/%3Czarafa.5ae201c6.2f85.218a781d795b07b1%40mail1.ams.nl.openindex.io%3E]
>  [2] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201806.mbox/%3Czarafa.5b351537.7b8c.647ddc93059f68eb%40mail1.ams.nl.openindex.io%3E]
>  [3] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3c7b5e78c6-8cf6-42ee-8d28-872230ded...@gmail.com%3E]






[jira] [Commented] (SOLR-11127) Add a Collections API command to migrate the .system collection schema from Trie-based (pre-7.0) to Points-based (7.0+)

2019-01-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755022#comment-16755022
 ] 

Jan Høydahl commented on SOLR-11127:


How to handle the two time gaps when .system will return 0 hits during copying?

Let's say we add a config option to configure {{BlobHandler}} and 
{{UpdateRequestHandler}} into R/O mode (readOnly=true) where update requests 
return HTTP 503 Service Unavailable. Then we could start by setting .system in 
R/O and then safely copy back and forth and move alias only when copy is 
complete, then at the end set .system back to readOnly=false and RELOAD .system 
collection to get back to normal operation. Don't know how much work that would 
be, sounds doable.
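The proposed read-only window could be sketched roughly like this. The class and method names are hypothetical; the real flag would live in Solr's BlobHandler/UpdateRequestHandler and be toggled via configuration:

```java
// Hypothetical sketch of a readOnly gate for update handlers: while the
// .system collection is being copied, update requests get HTTP 503 and
// clients are expected to retry; reads are unaffected.
public class ReadOnlyGate {
    private volatile boolean readOnly = false;

    public void setReadOnly(boolean readOnly) {
        this.readOnly = readOnly;
    }

    // Returns the HTTP status an update request would receive.
    public int handleUpdate() {
        if (readOnly) {
            return 503; // Service Unavailable during the copy window
        }
        return 200; // normal operation
    }

    public static void main(String[] args) {
        ReadOnlyGate gate = new ReadOnlyGate();
        gate.setReadOnly(true);                   // start of the copy window
        System.out.println(gate.handleUpdate());  // 503
        gate.setReadOnly(false);                  // copy done, collection reloaded
        System.out.println(gate.handleUpdate());  // 200
    }
}
```

This mirrors the sequence described above: set .system read-only, copy data and move the alias, then clear the flag and RELOAD the collection.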

> Add a Collections API command to migrate the .system collection schema from 
> Trie-based (pre-7.0) to Points-based (7.0+)
> ---
>
> Key: SOLR-11127
> URL: https://issues.apache.org/jira/browse/SOLR-11127
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 8.0
>
>
> SOLR-9 will switch the Trie fieldtypes in the .system collection's schema 
> to Points.
> Users with pre-7.0 .system collections will no longer be able to use them 
> once Trie fields have been removed (8.0).
> Solr should provide a Collections API command MIGRATESYSTEMCOLLECTION to 
> automatically convert a Trie-based .system collection to a Points-based one.






[jira] [Comment Edited] (SOLR-11127) Add a Collections API command to migrate the .system collection schema from Trie-based (pre-7.0) to Points-based (7.0+)

2019-01-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755022#comment-16755022
 ] 

Jan Høydahl edited comment on SOLR-11127 at 1/29/19 1:46 PM:
-

How do we handle the two time gaps during copying, when .system will return 0 hits?

Let's say we add a config option to put {{BlobHandler}} and 
{{UpdateRequestHandler}} into R/O mode (readOnly=true), where update requests 
return HTTP 503 Service Unavailable. Then we could start by setting .system to 
R/O, safely copy back and forth, and move the alias only when the copy is 
complete; at the end, set .system back to readOnly=false and RELOAD the 
.system collection to return to normal operation. I don't know how much work 
that would be, but it sounds doable.

The assumption is of course that any client should gracefully handle a 
temporary error like a 503 :)


was (Author: janhoy):
How to handle the two time gaps when .system will return 0 hits during copying?

Let's say we add a config option to configure {{BlobHandler}} and 
{{UpdateRequestHandler}} into R/O mode (readOnly=true) where update requests 
return HTTP 503 Service Unavailable. Then we could start by setting .system in 
R/O and then safely copy back and forth and move alias only when copy is 
complete, then at the end set .system back to readOnly=false and RELOAD .system 
collection to get back to normal operation. Don't know how much work that would 
be, sounds doable.

> Add a Collections API command to migrate the .system collection schema from 
> Trie-based (pre-7.0) to Points-based (7.0+)
> ---
>
> Key: SOLR-11127
> URL: https://issues.apache.org/jira/browse/SOLR-11127
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 8.0
>
>
> SOLR-9 will switch the Trie fieldtypes in the .system collection's schema 
> to Points.
> Users with pre-7.0 .system collections will no longer be able to use them 
> once Trie fields have been removed (8.0).
> Solr should provide a Collections API command MIGRATESYSTEMCOLLECTION to 
> automatically convert a Trie-based .system collection to a Points-based one.






[jira] [Commented] (LUCENE-8664) Add equals/hashcode to TotalHits

2019-01-29 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755016#comment-16755016
 ] 

Luca Cavanna commented on LUCENE-8664:
--

I am not using TotalHits in a map. I would benefit from the equals method for 
comparisons in tests. For instance, in Elasticsearch we return the Lucene 
TotalHits to users as part of bigger objects that have their own equals 
method. We end up wrapping TotalHits in another internal class that has its 
own equals/hashCode (among other things). Having equals/hashCode built into 
Lucene would remove the need for a wrapper class and make equality comparisons 
a one-liner, especially when comparing multiple instances of objects holding 
TotalHits. This is a minor thing obviously, but I would not consider it a bug 
to treat two different TotalHits instances with the same value and relation as 
equal. I was chatting with [~jim.ferenczi] about this and we thought we should 
propose adding it to Lucene. Happy to close this if you think it should not be 
done.
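For concreteness, the proposed equals/hashCode could look like the sketch below. This is a simplified stand-in for org.apache.lucene.search.TotalHits written to be self-contained; the field and enum names mirror the Lucene class, but this is an illustration, not the actual patch in PR #552:

```java
import java.util.Objects;

// Simplified stand-in for org.apache.lucene.search.TotalHits, showing what
// the proposed equals/hashCode could look like. Sketch only, not the patch.
final class TotalHitsSketch {
    enum Relation { EQUAL_TO, GREATER_THAN_OR_EQUAL_TO }

    final long value;
    final Relation relation;

    TotalHitsSketch(long value, Relation relation) {
        this.value = value;
        this.relation = relation;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof TotalHitsSketch)) return false;
        TotalHitsSketch other = (TotalHitsSketch) o;
        // Two instances are equal iff both the hit count and the relation match.
        return value == other.value && relation == other.relation;
    }

    @Override
    public int hashCode() {
        return Objects.hash(value, relation);
    }
}
```

With this in place, the test-side comparison becomes a plain assertEquals on the containing objects instead of a hand-rolled field-by-field check.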

> Add equals/hashcode to TotalHits
> 
>
> Key: LUCENE-8664
> URL: https://issues.apache.org/jira/browse/LUCENE-8664
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
>
> I think it would be convenient to add equals/hashcode methods to the 
> TotalHits class. I opened a PR here: 
> [https://github.com/apache/lucene-solr/pull/552] .






[JENKINS] Lucene-Solr-repro - Build # 2750 - Still Unstable

2019-01-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2750/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1761/consoleText

[repro] Revision: d7dc53ff7c3a16110aac4120e5bfdf7721b21bcd

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestOfflineSorter 
-Dtests.method=testThreadSafety -Dtests.seed=972EEC94E272C842 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ms -Dtests.timezone=Pacific/Marquesas -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testEventQueue -Dtests.seed=BBE49E48E44CDBA7 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sk -Dtests.timezone=Asia/Choibalsan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsUnloadDistributedZkTest 
-Dtests.method=test -Dtests.seed=BBE49E48E44CDBA7 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ro-RO -Dtests.timezone=Asia/Tehran -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
1cfbd3e1c84d35e741cfc068a8e88f0eff4ea9e1
[repro] git fetch
[repro] git checkout d7dc53ff7c3a16110aac4120e5bfdf7721b21bcd

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsUnloadDistributedZkTest
[repro]   TestSimTriggerIntegration
[repro]lucene/core
[repro]   TestOfflineSorter
[repro] ant compile-test

[...truncated 3562 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.HdfsUnloadDistributedZkTest|*.TestSimTriggerIntegration" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=BBE49E48E44CDBA7 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ro-RO -Dtests.timezone=Asia/Tehran -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 80071 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 142 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestOfflineSorter" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=972EEC94E272C842 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ms -Dtests.timezone=Pacific/Marquesas -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 328 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro]   3/5 failed: org.apache.lucene.util.TestOfflineSorter
[repro]   5/5 failed: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsUnloadDistributedZkTest
[repro] ant compile-test

[...truncated 3592 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.HdfsUnloadDistributedZkTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=BBE49E48E44CDBA7 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ro-RO -Dtests.timezone=Asia/Tehran -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 22895 lines...]
[repro] Setting last failure code 

[jira] [Comment Edited] (SOLR-11127) Add a Collections API command to migrate the .system collection schema from Trie-based (pre-7.0) to Points-based (7.0+)

2019-01-29 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755007#comment-16755007
 ] 

Andrzej Bialecki  edited comment on SOLR-11127 at 1/29/19 1:24 PM:
---

My plan of attack is to implement a collection command that orchestrates the 
following steps:
 * create a temporary collection with a unique name, e.g. {{tmpCollection_123}}, 
using the updated {{.system}} schema
 * define an alias that points {{.system -> tmpCollection_123}}. This should 
redirect all updates and queries to the temp collection.
 * copy the documents from {{.system}} to the temp collection, avoiding 
overwriting updated docs (incremental updates won't work during this process, 
but AFAIK no Solr component uses incremental updates when indexing to 
{{.system}})
 * delete the original {{.system}} and create it again using the updated schema.
 * remove the alias
 * copy the documents from the temporary collection back to {{.system}}, again 
avoiding overwrites.

The collection command will take care of async processing, resuming the 
operation on Overseer restarts, etc.

I considered doing this as a sort of rolling in-place update, but that wouldn't 
be any less expensive, and I think it would be impossible to get right: the 
updated schema uses Points instead of Trie fields for the same field names.

Comments and feedback are welcome (thanks [~janhoy] for the useful suggestions).

(Also, given that the 8.0 release is imminent, I'm not sure I can fix this in 
time for 8.0.)


was (Author: ab):
My plan of attack is to implement a collection command that orchestrates the 
following steps:
 * create a temporary collection with a unique name, eg. {{tmpCollection_123}}, 
using the updated {{.system}} schema
 * define an alias that points {{.system -> tmpCollection_123}}. This should 
redirect all updates and queries to the temp collection.
 * copy the documents from {{.system}} to the temp collection, avoiding 
overwriting updated docs (incremental updates won't work during this process, 
but AFAIK no Solr component uses incremental updates when indexing to 
{{.system}})
 * delete the original {{.system}} and create it again using the updated schema.
 * remove the alias
 * copy over the documents from temporary collection to {{.system}}, again 
avoiding overwrites.

The collection command will take care of async processing, resuming the 
operation on Overseer restarts, etc.

Comments and feedback are welcome.

(Also, given that the 8.0 release is imminent I'm not sure I can fix this in 
time for the 8.0 release.)
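The "avoiding overwrites" part of the copy steps above is the subtle bit: documents updated in the destination while the alias pointed at it must win over stale copies from the source. Here is a toy, maps-based model of that rule; it is for illustration only, since the real command would stream documents between SolrCloud collections rather than work on in-memory maps:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the "copy avoiding overwrites" step: entries already present
// in the destination (i.e. docs updated there during the migration) are kept,
// and only missing ids are copied over from the source.
public class CopyAvoidingOverwrites {
    /** Copies entries from source into dest, skipping ids dest already has. */
    public static void copy(Map<String, String> source, Map<String, String> dest) {
        for (Map.Entry<String, String> e : source.entrySet()) {
            dest.putIfAbsent(e.getKey(), e.getValue()); // never clobber newer docs
        }
    }
}
```

The same rule applies in both directions: old .system into the temp collection, and temp collection back into the recreated .system.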

> Add a Collections API command to migrate the .system collection schema from 
> Trie-based (pre-7.0) to Points-based (7.0+)
> ---
>
> Key: SOLR-11127
> URL: https://issues.apache.org/jira/browse/SOLR-11127
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 8.0
>
>
> SOLR-9 will switch the Trie fieldtypes in the .system collection's schema 
> to Points.
> Users with pre-7.0 .system collections will no longer be able to use them 
> once Trie fields have been removed (8.0).
> Solr should provide a Collections API command MIGRATESYSTEMCOLLECTION to 
> automatically convert a Trie-based .system collection to a Points-based one.





