Erick Erickson created SOLR-9895:
Summary: Replace existing ref guide references to zkcli with
bin/solr zk options where possible
Key: SOLR-9895
URL: https://issues.apache.org/jira/browse/SOLR-9895
Project: Solr
Issue Type: Improvement
Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor
I was looking through the CWiki for SOLR-9891 and noticed a fair number of
references to zkcli. I'd like to replace as many of those as possible and use
the bin/solr zk way of interacting with Zookeeper on the principle that
fewer tools == less confusion.
Any help welcome!
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[
https://issues.apache.org/jira/browse/SOLR-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Erick Erickson updated SOLR-9891:
-
Attachment: SOLR-9891.patch
Not tested on Windows. I've copy/pasted/edited what I think are the necessary
bits into bin/solr.cmd, but it needs someone to try it out before I can check
it in.
If some kind person with a windows setup could give this patch a spin on
Windows I would be grateful.
NOTE: I've used 'mkroot' as the command. I'm not particularly wedded to that
name. What opinions do people have? Two possibilities that spring to mind are
'mkpath' and 'mkdir'...
I slightly prefer 'mkroot' even though it's really a generic 'mkpath' command.
The intent is to create something for chroot, but it's really more general than
that. Not sure that generality needs to be advertised though...
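For what it's worth, the "mkroot is really a generic mkpath" semantics can be sketched in miniature. This is an in-memory stand-in with 'mkdir -p' behavior, not the patch's actual ZooKeeper code; the real command would issue a create() per path segment:

```python
# Hypothetical sketch: mkpath semantics on a znode tree, modeled as nested
# dicts of child-name -> subtree. Creates every missing intermediate node.
def mkpath(tree, path):
    """Create all missing nodes along path, like 'mkdir -p'."""
    node = tree
    for segment in path.strip("/").split("/"):
        node = node.setdefault(segment, {})  # create znode if absent, descend
    return tree

zk = {}
mkpath(zk, "/solr")             # the empty chroot node SOLR-9891 wants
mkpath(zk, "/solr/configs/c1")  # intermediate nodes created as needed
```

Whether the command is spelled mkroot, mkpath, or mkdir, the behavior above is the same; only the advertised intent differs.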
> bin/solr cannot create an empty Znode which is useful for chroot
>
>
> Key: SOLR-9891
> URL: https://issues.apache.org/jira/browse/SOLR-9891
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-9891.patch
>
>
> This came to my attention just now. To use a different root in Solr, we say
> this in the ref guide:
> IMPORTANT: If your ZooKeeper connection string uses a chroot, such as
> localhost:2181/solr, then you need to bootstrap the /solr znode before
> launching SolrCloud using the bin/solr script. To do this, you need to use
> the zkcli.sh script shipped with Solr, such as:
> server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181/solr -cmd
> bootstrap -solrhome server/solr
> I think all this really does is create an empty /solr ZNode. We're trying to
> move the common usages of the zkcli scripts to bin/solr so I tried making
> this work.
> It's clumsy. If I try to copy up an empty directory to /solr nothing happens.
> I got it to work by copying file:README.txt to zk:/solr/nonsense and then
> deleting zk:/solr/nonsense. Ugly.
> I don't want to get into reproducing the whole Unix shell file manipulation
> commands with mkdir, touch, etc.
> I guess we already have special 'upconfig' and 'downconfig' commands, so
> maybe a specific command for this like 'mkroot' would be OK. Do people have
> opinions about this as opposed to 'mkdir'? I'm tending to mkdir.
> Or have the cp command handle empty directories, but mkroot/mkdir seems more
> intuitive if not as generic.
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6314/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseG1GC
1 tests failed.
FAILED: org.apache.solr.metrics.JvmMetricsTest.testOperatingSystemMetricsSet
Error Message:
Stack Trace:
java.lang.AssertionError
at
__randomizedtesting.SeedInfo.seed([E0D730E989E0FC75:F828549C70E6A720]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at
org.apache.solr.metrics.JvmMetricsTest.testOperatingSystemMetricsSet(JvmMetricsTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Build Log:
[...truncated 10825 lines...]
[junit4] Suite: org.apache.solr.metrics.JvmMetricsTest
[junit4] 2> Creating dataDir:
[
https://issues.apache.org/jira/browse/SOLR-9894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
王海涛 updated SOLR-9894:
--
Description:
My schema.xml has a fieldType as follows:
Attention:
the index tokenizer's useSmart is false
the query tokenizer's useSmart is true
But when I send a query request with the parameter q,
the query tokenizer sometimes has useSmart equal to true
and sometimes equal to false.
That is so terrible!
I guess the problem may be caused by the tokenizer cache.
When I query, the tokenizer should use true as the useSmart value,
but it had cached the wrong tokenizer result, which was created by the
IndexWriter using false as the useSmart value.
was:
My schema.xml has a fieldType as follows:
Attention:
the index tokenizer's useSmart is false
the query tokenizer's useSmart is true
But when I send a query request with the parameter q,
the query tokenizer sometimes has useSmart equal to true
and sometimes equal to false.
That is so terrible!
I guess the problem may be caused by the tokenizer cache.
When I query, the tokenizer should use true as the useSmart value,
but it had cached the wrong tokenizer result, which was created by the
IndexWriter using false as the useSmart value.
> Tokenizer work randomly
> ---
>
> Key: SOLR-9894
> URL: https://issues.apache.org/jira/browse/SOLR-9894
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: query parsers
>Affects Versions: 6.2.1
> Environment: SolrCloud 6.2.1 (3 Solr nodes)
> OS: Linux
> RAM: 8G
>Reporter: 王海涛
>Priority: Critical
> Labels: patch
>
> My schema.xml has a fieldType as follows:
> {code}
> <fieldType ...>
>   <analyzer type="index">
>     <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="false"/>
>     <filter class="org.wltea.pinyin.solr5.PinyinTokenFilterFactory" pinyinAll="true"
>             minTermLength="2"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="org.wltea.analyzer.lucene.IKTokenizerFactory" useSmart="true"/>
>   </analyzer>
> </fieldType>
> {code}
> Attention:
> the index tokenizer's useSmart is false
> the query tokenizer's useSmart is true
> But when I send a query request with the parameter q,
> the query tokenizer sometimes has useSmart equal to true
> and sometimes equal to false.
> That is so terrible!
> I guess the problem may be caused by the tokenizer cache.
> When I query, the tokenizer should use true as the useSmart value,
> but it had cached the wrong tokenizer result, which was created by the
> IndexWriter using false as the useSmart value.
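The suspected caching bug can be sketched abstractly. All names here are hypothetical, not Solr's or the IK analyzer's actual code; the point is a cache key that omits a config attribute:

```python
# Hypothetical sketch of the reporter's theory: a tokenizer cache keyed only
# by factory name, ignoring useSmart, so whichever side (index or query)
# registers first wins and the other silently reuses the wrong tokenizer.
class TokenizerFactory:
    def __init__(self, use_smart):
        self.use_smart = use_smart

_cache = {}

def get_tokenizer_buggy(name, use_smart):
    # BUG: the cache key omits use_smart
    if name not in _cache:
        _cache[name] = TokenizerFactory(use_smart)
    return _cache[name]

def get_tokenizer_fixed(name, use_smart):
    key = (name, use_smart)  # include every config attribute in the key
    if key not in _cache:
        _cache[key] = TokenizerFactory(use_smart)
    return _cache[key]

# Index side registers first with use_smart=False ...
idx = get_tokenizer_buggy("ik", False)
# ... then the query side gets the index-time tokenizer back.
qry = get_tokenizer_buggy("ik", True)
```

This would explain the apparent randomness: the observed behavior depends on which analyzer populated the cache first.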
王海涛 created SOLR-9894:
-
Summary: Tokenizer work randomly
Key: SOLR-9894
URL: https://issues.apache.org/jira/browse/SOLR-9894
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: query parsers
Affects Versions: 6.2.1
Environment: SolrCloud 6.2.1 (3 Solr nodes)
OS: Linux
RAM: 8G
Reporter: 王海涛
Priority: Critical
My schema.xml has a fieldType as follows:
Attention:
the index tokenizer's useSmart is false
the query tokenizer's useSmart is true
But when I send a query request with the parameter q,
the query tokenizer sometimes has useSmart equal to true
and sometimes equal to false.
That is so terrible!
I guess the problem may be caused by the tokenizer cache.
When I query, the tokenizer should use true as the useSmart value,
but it had cached the wrong tokenizer result, which was created by the
IndexWriter using false as the useSmart value.
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/629/
1 tests failed.
FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest
Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor120.newInstance(Unknown Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704) at
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766) at
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005) at
org.apache.solr.core.SolrCore.<init>(SolrCore.java:870) at
org.apache.solr.core.SolrCore.<init>(SolrCore.java:774) at
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at
java.util.concurrent.FutureTask.run(FutureTask.java:266) at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor120.newInstance(Unknown
Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([5A418C9884580FAA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
[
https://issues.apache.org/jira/browse/LUCENE-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779360#comment-15779360
]
Uwe Schindler edited comment on LUCENE-7595 at 12/27/16 1:49 AM:
-
Here is my patch that makes the tests work across the whole of Lucene:
- On Java 9 it disables the static leak detector
- RamUsageTester was fixed to have some "shortcuts" which are used if Java 9+
is detected: String/StringBuffer/StringBuilder and some other types are
calculated using their length/capacity. It also estimates memory usage of Maps
and Iterables by just iterating over their items (not respecting the
Hash/LinkedHash impl details, just plain stupid summing up). Because of this I
had to disable one test for the LRU cache, but otherwise the estimation is
almost correct. All other uses of RamUsageTester pass :-)
[~dweiss]: What do you think?
was (Author: thetaphi):
Here is my patch that makes the tests work across the whole of Lucene:
- On Java 9 it disables the static leak detector
- RamUsageTester was fixed to have some "shortcuts" which are used if Java 9+
is detected: String/StringBuffer/StringBuilder and some other types are
calculated using their length/capacity. It also estimates memory usage of Maps
and Iterables by just iterating over their items (not respecting the
Hash/LinkedHash impl details, just plain stupid summing up). Because of this I
had to disable one test for the LRU cache, but otherwise the estimation is
almost correct. All other uses of RamUsageTester pass :-)
[~dweiss], [~dawid.weiss], [~dawidweiss]: What do you think?
> RAMUsageTester in test-framework and static field checker no longer works
> with Java 9
> -
>
> Key: LUCENE-7595
> URL: https://issues.apache.org/jira/browse/LUCENE-7595
> Project: Lucene - Core
> Issue Type: Bug
> Components: general/test
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Labels: Java9
> Attachments: LUCENE-7595.patch
>
>
> Lucene/Solr tests have a special rule that records memory usage in static
> fields before and after test, so we can detect memory leaks. This check dives
> into JDK classes (like java.lang.String to detect their size). As Java 9
> build 148 completely forbids setAccessible on any runtime class, we have to
> change or disable this check:
> - As a first step I will only add the rule to LTC if we are on Java 8
> - As a second step we might investigate how to improve this
> [~rcmuir] had some ideas for the 2nd point:
> - Don't dive into classes from JDK modules and instead "estimate" the size
> for some special cases (like Strings)
> - Disallow any static field in tests that is not final (constant) and points
> to an Object except: Strings and native (wrapper) types.
> In addition we also have RAMUsageTester, that has similar problems and is
> used to compare estimations of Lucene's calculations of
> Codec/IndexWriter/IndexReader memory usage with reality. We should simply
> disable those tests.
[
https://issues.apache.org/jira/browse/LUCENE-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uwe Schindler updated LUCENE-7595:
--
Attachment: LUCENE-7595.patch
Here is my patch that makes the tests work across the whole of Lucene:
- On Java 9 it disables the static leak detector
- RamUsageTester was fixed to have some "shortcuts" which are used if Java 9+
is detected: String/StringBuffer/StringBuilder and some other types are
calculated using their length/capacity. It also estimates memory usage of Maps
and Iterables by just iterating over their items (not respecting the
Hash/LinkedHash impl details, just plain stupid summing up). Because of this I
had to disable one test for the LRU cache, but otherwise the estimation is
almost correct. All other uses of RamUsageTester pass :-)
[~dweiss], [~dawid.weiss], [~dawidweiss]: What do you think?
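The "shortcut" estimation idea can be sketched language-neutrally. The byte constants below are illustrative, not RamUsageTester's real ones:

```python
# Rough sketch: when reflective access to JDK internals is forbidden (Java 9),
# well-known types are sized from their length/capacity, and maps/iterables by
# plainly summing their items, ignoring Hash/LinkedHash implementation details.
def estimate_ram(obj):
    if isinstance(obj, str):
        return 40 + 2 * len(obj)   # assumed header + 2 bytes per UTF-16 char
    if isinstance(obj, dict):
        # just sum the entries, no per-bucket accounting
        return 48 + sum(estimate_ram(k) + estimate_ram(v)
                        for k, v in obj.items())
    if isinstance(obj, (list, tuple, set)):
        return 48 + sum(estimate_ram(x) for x in obj)
    return 16                      # flat fallback for anything else
```

As the comment above notes, this "plain stupid summing up" is why a test that depends on exact hash-table overhead (the LRU cache test) has to be disabled, while the rest stay close enough.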
> RAMUsageTester in test-framework and static field checker no longer works
> with Java 9
> -
>
> Key: LUCENE-7595
> URL: https://issues.apache.org/jira/browse/LUCENE-7595
> Project: Lucene - Core
> Issue Type: Bug
> Components: general/test
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Labels: Java9
> Attachments: LUCENE-7595.patch
>
>
> Lucene/Solr tests have a special rule that records memory usage in static
> fields before and after test, so we can detect memory leaks. This check dives
> into JDK classes (like java.lang.String to detect their size). As Java 9
> build 148 completely forbids setAccessible on any runtime class, we have to
> change or disable this check:
> - As a first step I will only add the rule to LTC if we are on Java 8
> - As a second step we might investigate how to improve this
> [~rcmuir] had some ideas for the 2nd point:
> - Don't dive into classes from JDK modules and instead "estimate" the size
> for some special cases (like Strings)
> - Disallow any static field in tests that is not final (constant) and points
> to an Object except: Strings and native (wrapper) types.
> In addition we also have RAMUsageTester, that has similar problems and is
> used to compare estimations of Lucene's calculations of
> Codec/IndexWriter/IndexReader memory usage with reality. We should simply
> disable those tests.
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1569/
1 tests failed.
FAILED:
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI
Error Message:
expected:<3> but was:<2>
Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at
__randomizedtesting.SeedInfo.seed([45D2616FF6355345:DA715DBF0067CD0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:517)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Build Log:
[...truncated 11730 lines...]
[junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
[junit4] 2> Creating dataDir:
[
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779150#comment-15779150
]
Michael McCandless commented on LUCENE-7603:
Whoa, thanks [~mattweber], I'll have a look, but likely not until I'm back from
vacation next year!
> Support Graph Token Streams in QueryBuilder
> ---
>
> Key: LUCENE-7603
> URL: https://issues.apache.org/jira/browse/LUCENE-7603
> Project: Lucene - Core
> Issue Type: Improvement
> Components: core/queryparser, core/search
>Reporter: Matt Weber
>
> With [LUCENE-6664|https://issues.apache.org/jira/browse/LUCENE-6664] we can
> use multi-term synonyms at query time. A "graph token stream" will be created,
> which is nothing more than using the position length attribute on
> stacked tokens to indicate how many positions a token should span. Currently
> the position length attribute on tokens is ignored during query parsing.
> This issue will add support for handling these graph token streams inside the
> QueryBuilder utility class used by query parsers.
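The position-length idea can be sketched abstractly, with tokens as plain tuples rather than Lucene's actual attribute API:

```python
# Sketch of a graph token stream: each token is (term, position, posLength).
# A multi-word synonym such as "ny" -> "new york" is a stacked token whose
# posLength spans both underlying positions, so a query builder can recover
# every reading of the graph.
def paths(tokens, start, end):
    """Enumerate all term sequences from position start to position end."""
    if start == end:
        return [[]]
    out = []
    for term, pos, plen in tokens:
        if pos == start:
            out += [[term] + rest for rest in paths(tokens, start + plen, end)]
    return out

graph = [("ny", 0, 2), ("new", 0, 1), ("york", 1, 1)]
# paths(graph, 0, 2) yields both readings: ["ny"] and ["new", "york"]
```

A query builder that respects posLength can turn each enumerated path into its own phrase and take the union, instead of treating the stacked tokens as one flat position.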
[
https://issues.apache.org/jira/browse/LUCENE-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779143#comment-15779143
]
Michael McCandless commented on LUCENE-6664:
Thanks [~steve_rowe], I'll look...
> Replace SynonymFilter with SynonymGraphFilter
> -
>
> Key: LUCENE-6664
> URL: https://issues.apache.org/jira/browse/LUCENE-6664
> Project: Lucene - Core
> Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-6664.patch, LUCENE-6664.patch, LUCENE-6664.patch,
> LUCENE-6664.patch, LUCENE-6664.patch, usa.png, usa_flat.png
>
>
> Spinoff from LUCENE-6582.
> I created a new SynonymGraphFilter (to replace the current buggy
> SynonymFilter), that produces correct graphs (does no "graph
> flattening" itself). I think this makes it simpler.
> This means you must add the FlattenGraphFilter yourself, if you are
> applying synonyms during indexing.
> Index-time syn expansion is a necessarily "lossy" graph transformation
> when multi-token (input or output) synonyms are applied, because the
> index does not store {{posLength}}, so there will always be phrase
> queries that should match but do not, and then phrase queries that
> should not match but do.
> http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html
> goes into detail about this.
> However, with this new SynonymGraphFilter, if instead you do synonym
> expansion at query time (and don't do the flattening), and you use
> TermAutomatonQuery (future: somehow integrated into a query parser),
> or maybe just "enumerate all paths and make union of PhraseQuery", you
> should get 100% correct matches (not sure about "proper" scoring
> though...).
> This new syn filter still cannot consume an arbitrary graph.
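Why index-time expansion is necessarily lossy can be sketched with tokens as tuples (illustrative, not Lucene's API): the index stores positions but not posLength, which amounts to flattening every token to length 1.

```python
# Sketch of the loss: dropping posLength, as the index effectively does.
def flatten(tokens):
    """Replace every position length with 1; tokens are (term, pos, posLength)."""
    return [(term, pos, 1) for term, pos, plen in tokens]

# "ny" is a synonym spanning "new york" (positions 0-1), followed by "pizza".
graph = [("ny", 0, 2), ("new", 0, 1), ("york", 1, 1), ("pizza", 2, 1)]
flat = flatten(graph)
# After flattening, "ny" occupies only position 0 while "pizza" stays at 2,
# so the phrase "ny pizza" no longer sees them as adjacent and fails to match.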
[
https://issues.apache.org/jira/browse/SOLR-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mikhail Khludnev updated SOLR-9668:
---
Attachment: SOLR-9668.patch
what about [^SOLR-9668.patch]?
> Support cursor paging in SolrEntityProcessor
>
>
> Key: SOLR-9668
> URL: https://issues.apache.org/jira/browse/SOLR-9668
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: contrib - DataImportHandler
>Reporter: Yegor Kozlov
>Assignee: Mikhail Khludnev
>Priority: Minor
> Labels: dataimportHandler
> Fix For: master (7.0)
>
> Attachments: SOLR-9668.patch
>
>
> SolrEntityProcessor paginates using the start and rows parameters, which can
> be very inefficient at large offsets. In fact, the current implementation
> makes it impractical to import large amounts of data (10M+ documents) because
> the import rate degrades from 1000 docs/second to 10 docs/second and the
> import gets stuck.
> This patch introduces support for cursor paging which offers more or less
> predictable performance. In my tests the time to fetch the 1st and 1000th
> pages was about the same and the data import rate was stable throughout the
> entire import.
> To enable cursor paging a user needs to add a "sort" attribute in the entity
> configuration:
> {code}
> <dataConfig>
>   <document>
>     <entity processor="SolrEntityProcessor"
>             query="*:*"
>             rows="1000"
>             sort="id asc"
>             url="http://localhost:8983/solr/collection1"/>
>   </document>
> </dataConfig>
> {code}
> If the "sort" attribute is missing then the default start/rows pagination is
> used.
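The difference between the two paging modes can be sketched abstractly (an in-memory stand-in, not SolrEntityProcessor's actual code): offset paging re-skips all earlier rows on every page, while a cursor resumes from the last sort key, so each page costs roughly the same.

```python
# start/rows: the server must walk past `start` rows before returning a page.
def page_by_offset(docs, start, rows):
    return docs[start:start + rows]

# cursor: remember the last sort key; the next page starts just after it.
# (A real index seeks to the key directly instead of scanning like this filter.)
def page_by_cursor(docs, cursor, rows):
    """docs sorted by id; cursor is the last id seen ('' for the first page)."""
    page = [d for d in docs if d > cursor][:rows]
    next_cursor = page[-1] if page else cursor
    return page, next_cursor

ids = [f"id{i:04d}" for i in range(10)]
out, cur = [], ""
while True:
    page, cur = page_by_cursor(ids, cur, 3)
    if not page:
        break
    out.extend(page)
```

This mirrors why a "sort" attribute is required to enable the cursor: the cursor is only meaningful over a total order on the documents.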
[
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779059#comment-15779059
]
Paul Elschot edited comment on LUCENE-7602 at 12/26/16 10:06 PM:
-
bq. can't we just use Map
[
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779059#comment-15779059
]
Paul Elschot commented on LUCENE-7602:
--
bq. can't we just use Map ?
ContextMap implements that interface.
Since this is widely used, I prefer not to use a Lucene class (ContextMap)
over an interface that is defined in the Java language (Map),
because it allows a change in a single place.
We could still separate the implementation from the interface, but that would
be more than fixing the compiler warnings here.
> Fix compiler warnings for ant clean compile
> ---
>
> Key: LUCENE-7602
> URL: https://issues.apache.org/jira/browse/LUCENE-7602
> Project: Lucene - Core
> Issue Type: Improvement
>Reporter: Paul Elschot
>Priority: Minor
> Labels: build
> Fix For: trunk
>
> Attachments: LUCENE-7602-ContextMap-lucene.patch,
> LUCENE-7602-ContextMap-solr.patch, LUCENE-7602.patch, LUCENE-7602.patch
>
>
[
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779059#comment-15779059
]
Paul Elschot edited comment on LUCENE-7602 at 12/26/16 10:02 PM:
-
bq. can't we just use Map ?
ContextMap implements that interface.
Since this is widely used, I prefer to use a Lucene class (ContextMap) over
an interface that is defined in the Java language (Map), because
it allows a change in a single place.
We could still separate the implementation from the interface, but that would
be more than fixing the compiler warnings here.
was (Author: paul.elsc...@xs4all.nl):
bq. can't we just use Map ?
ContextMap implements that interface.
Since this is widely used, I prefer not to use a Lucene class (ContextMap)
over an interface that is defined in the Java language (Map),
because it allows a change in a single place.
We could still separate the implementation from the interface, but that would
be more than fixing the compiler warnings here.
[
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Paul Elschot updated LUCENE-7602:
-
Attachment: LUCENE-7602.patch
Patch of 26 Dec 2016.
Mostly as discussed above.
ContextMap extends HashMap. I tried implementing AbstractMap, but that ends up
in a detour to a HashMap anyway, so I left it at direct extension.
Is there a way to quickly check for unused imports at top level?
I used ant precommit for that, but it is quite slow because it stops after the
first module with an error, and quite a few modules are involved here.
[
https://issues.apache.org/jira/browse/LUCENE-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779026#comment-15779026
]
Uwe Schindler commented on LUCENE-7596:
---
As a temporary workaround we might add the snapshot builds to our ivy-settings
file; what do others think? We should just not release with such hacks included.
Maybe make the snapshot version temporary and enable it only on Jenkins?
> Update Groovy to 2.4.8 in build system
> --
>
> Key: LUCENE-7596
> URL: https://issues.apache.org/jira/browse/LUCENE-7596
> Project: Lucene - Core
> Issue Type: Bug
> Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Labels: Java9
>
> The current version of Groovy used by several Ant components is incompatible
> with Java 9 build 148+. We need to update to 2.4.8 once it is released:
> http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-December/010474.html
[
https://issues.apache.org/jira/browse/LUCENE-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779022#comment-15779022
]
Uwe Schindler commented on LUCENE-7596:
---
I tested the build system with the Groovy snapshot builds and everything works
fine. So we just have to wait for 2.4.8, which hopefully gets released soon!
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779017#comment-15779017
]
Uwe Schindler commented on SOLR-9893:
-
I opened: https://github.com/cglib/cglib/issues/93
> EasyMock/Mockito no longer works with Java 9 b148+
> --
>
> Key: SOLR-9893
> URL: https://issues.apache.org/jira/browse/SOLR-9893
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Components: Tests
>Affects Versions: 6.x, master (7.0)
>Reporter: Uwe Schindler
>Priority: Blocker
>
> EasyMock does not work anymore with latest Java 9, because it uses cglib
> behind the scenes, which tries to access a protected method inside the
> runtime using setAccessible. This is no longer allowed by Java 9.
> Actually this is really stupid. Instead of forcefully making the protected
> defineClass method available to the outside, it is much more correct to just
> subclass ClassLoader (like the Lucene expressions module does).
> I tried updating to easymock/mockito, but all that does not work, approx 25
> tests fail. The only way is to disable all Mocking tests in Java 9. The
> underlying issue in cglib is still not solved, master's code is here:
> https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
> As we use an old stone-aged version of mockito (1.x), a fix is not expected
> to happen, although cglib might fix this!
> What should we do? This stupid issue prevents us from testing Java 9 with
> Solr completely!
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779005#comment-15779005
]
Uwe Schindler edited comment on SOLR-9893 at 12/26/16 9:26 PM:
---
Hi Mark,
I fully agree with you. I will keep this issue open as a blocker. I will first
fix the remaining issues in Lucene and then check out all usages of mocking
libraries. Unfortunately, as you said, we have multiple mock libs, but they all
share the same underlying problem: CGLIB. The root cause is the static
initializer of CGLIB's ReflectUtils. I will open a bug report on their GitHub
account later.
Java 9 will (hopefully) be released this summer, so we should really work on
solving the remaining Java 9 issues. From my participation in OpenJDK mailing
lists I know that it is unlikely they will fix the setAccessible on runtime
classes (public APIs) - they only have special cases for sun.misc.Unsafe and
sun.misc.ReflectUtils.
One "quick'n'dirty" solution would be to add a command line option to the test
runners in Solr only that opens "java.lang" for reflection (which is still
possible). As this only affects tests and not production code, we may be able
to live with this. I will also investigate that.
was (Author: thetaphi):
Hi Mark,
I fully agree with you. I will keep this issue open as a blocker. I will first
fix the remaining issues in Lucene and then check out all usages of mocking
libraries. Unfortunately, as you said, we have multiple mock libs. But all have
the same problem behind: CGLIB. The underlying issue is the static initializer
of CGLIB's ReflectUtils. I will open a bug report on their Github account later.
Java 9 will (hopefully) be released this summer, so we should really work on
solving the remaining Java 9 issues. From my participation in OpenJDK mailing
lists.
One "quick'n'dirty" solution would be to add a command line option to the test
runners in Solr only that opens "java.lang" for reflection (which is still
possible). As this only affects tests and not production code, we may be able
to live with this. I will also investigate that.
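The "opens java.lang for reflection" option mentioned above corresponds, in the module-system syntax that eventually shipped with JDK 9, to a flag along these lines. The exact spelling was still in flux around build 148, and the jar name below is purely illustrative:

```shell
# Open java.lang to reflective access from the unnamed module (test code).
# Intended for test runners only, not for production defaults.
java --add-opens java.base/java.lang=ALL-UNNAMED -jar test-runner.jar
```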
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779005#comment-15779005
]
Uwe Schindler commented on SOLR-9893:
-
Hi Make,
I fully agree with you. I will keep this issue open as a blocker. I will first
fix the remaining issues in Lucene and then check out all usages of mocking
libraries. Unfortunately, as you said, we have multiple mock libs. But all have
the same problem behind: CGLIB. The underlying issue is the static initializer
of CGLIB's ReflectUtils. I will open a bug report on their Github account later.
Java 9 will (hopefully) be released this summer, so we should really work on
solving the remaining Java 9 issues. From my participation in OpenJDK mailing
lists.
One "quick'n'dirty" solution would be to add a command line option to the test
runners in Solr only that opens "java.lang" for reflection (which is still
possible). As this only affects tests and not production code, we may be able
to live with this. I will also investigate that.
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779005#comment-15779005
]
Uwe Schindler edited comment on SOLR-9893 at 12/26/16 9:24 PM:
---
Hi Mark,
I fully agree with you. I will keep this issue open as a blocker. I will first
fix the remaining issues in Lucene and then check out all usages of mocking
libraries. Unfortunately, as you said, we have multiple mock libs. But all have
the same problem behind: CGLIB. The underlying issue is the static initializer
of CGLIB's ReflectUtils. I will open a bug report on their Github account later.
Java 9 will (hopefully) be released this summer, so we should really work on
solving the remaining Java 9 issues. From my participation in OpenJDK mailing
lists.
One "quick'n'dirty" solution would be to add a command line option to the test
runners in Solr only that opens "java.lang" for reflection (which is still
possible). As this only affects tests and not production code, we may be able
to live with this. I will also investigate that.
was (Author: thetaphi):
Hi Make,
I fully agree with you. I will keep this issue open as a blocker. I will first
fix the remaining issues in Lucene and then check out all usages of mocking
libraries. Unfortunately, as you said, we have multiple mock libs. But all have
the same problem behind: CGLIB. The underlying issue is the static initializer
of CGLIB's ReflectUtils. I will open a bug report on their Github account later.
Java 9 will (hopefully) be released this summer, so we should really work on
solving the remaining Java 9 issues. From my participation in OpenJDK mailing
lists.
One "quick'n'dirty" solution would be to add a command line option to the test
runners in Solr only that opens "java.lang" for reflection (which is still
possible). As this only affects tests and not production code, we may be able
to live with this. I will also investigate that.
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uwe Schindler updated SOLR-9893:
Priority: Blocker (was: Critical)
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778972#comment-15778972
]
Mark Miller commented on SOLR-9893:
---
I am a proponent of only using one of these mock libs. It's too much to ask
devs to deal with two.
I hate even having to deal with one, but I do love how it forces devs to
understand more than they want to in order to make some changes. Changing some
of these tests can be such a painful process, though.
I use simple object mocks wherever I can instead.
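The "simple object mock" approach Mark mentions can be sketched like this (names are illustrative, not from the Solr codebase): implement the collaborator's interface by hand and record calls, which needs no bytecode generation and therefore no cglib.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a hand-rolled mock: instead of EasyMock/Mockito generating a
// proxy at runtime, a plain class implements the interface and records
// what was called. Works on any JVM, including Java 9.
public class SimpleMockSketch {
    interface Notifier {
        void send(String message);
    }

    // A hand-written fake that records every message it receives.
    static class RecordingNotifier implements Notifier {
        final List<String> sent = new ArrayList<>();
        @Override public void send(String message) { sent.add(message); }
    }

    // Code under test depends only on the interface.
    static void alertIfEmpty(List<?> items, Notifier notifier) {
        if (items.isEmpty()) notifier.send("empty result");
    }

    public static void main(String[] args) {
        RecordingNotifier notifier = new RecordingNotifier();
        alertIfEmpty(new ArrayList<String>(), notifier);
        System.out.println(notifier.sent);
    }
}
```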
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778968#comment-15778968
]
Mark Miller commented on SOLR-9893:
---
I imagine this will get addressed at some point as problems bubble up. I would
just leave a blocker issue open for whatever version we expect to ship on 9 and
ignore those tests for java 9.
We should probably open issues against these libs if they don't already exist.
[
https://issues.apache.org/jira/browse/SOLR-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shalin Shekhar Mangar updated SOLR-9877:
Attachment: SOLR-9877.patch
Patch that adds instrumentation for HttpShardHandlerFactory. I'm going to add
metrics to UpdateShardHandler along similar lines. The metrics-httpclient
library is added to Solr in the patch, but I am going to remove it since it is
not flexible enough for our API. Instead, I've added Solr-specific subclasses
of PoolingHttpClientConnectionManager and HttpRequestExecutor which implement
the SolrMetricProducer interface.
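The wrapping idea is roughly the following; the interface below is a simplified stand-in, not the real HttpRequestExecutor or SolrMetricProducer API. A subclass or decorator intercepts each request and updates a counter that a metrics registry could then expose.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of an instrumented client component: a decorator counts every
// request it forwards. A real SolrMetricProducer would publish the
// counter to the metrics registry instead of a bare field.
public class InstrumentedClientSketch {
    interface RequestExecutor {
        String execute(String request);
    }

    static class CountingExecutor implements RequestExecutor {
        private final RequestExecutor delegate;
        final AtomicLong requests = new AtomicLong();

        CountingExecutor(RequestExecutor delegate) { this.delegate = delegate; }

        @Override public String execute(String request) {
            requests.incrementAndGet(); // the "metric" this sketch exposes
            return delegate.execute(request);
        }
    }

    public static void main(String[] args) {
        CountingExecutor exec = new CountingExecutor(r -> "ok:" + r);
        exec.execute("/select");
        exec.execute("/update");
        System.out.println(exec.requests.get());
    }
}
```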
> Use instrumented http client
>
>
> Key: SOLR-9877
> URL: https://issues.apache.org/jira/browse/SOLR-9877
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: SOLR-9877.patch
>
>
> Use instrumented equivalents of PooledHttpClientConnectionManager and others
> from metrics-httpclient library.
[
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uwe Schindler updated SOLR-9893:
Description:
EasyMock does not work anymore with latest Java 9, because it uses cglib behind
the scenes, which tries to access a protected method inside the runtime using
setAccessible. This is no longer allowed by Java 9.
Actually this is really stupid. Instead of forcefully making the protected
defineClass method available to the outside, it is much more correct to just
subclass ClassLoader (like the Lucene expressions module does).
I tried updating to easymock/mockito, but all that does not work, approx 25
tests fail. The only way is to disable all Mocking tests in Java 9. The
underlying issue in cglib is still not solved, master's code is here:
https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
As we use an old stone-aged version of mockito (1.x), a fix is not expected to
happen, although cglib might fix this!
What should we do? This stupid issue prevents us from testing Java 9 with Solr
completely!
was:
EasyMock does not work anymore with latest Java 9, because it uses cglib behind
the scenes, which tries to access a protected method inside the runtime using
setAccessible. This is no longer allowed by Java 9.
Actually this is really stupid. Instead of forcefully making the protected
defineClass method available to the outside, it is much more correct to just
subclass ClassLoader (like the Lucene expressions module does).
I tried updating to easymock/mockito, but all that does not work, approx 25
tests fail. The only way is to disable all Mocking tests in Java 9. The
underlying issue in cglib is still not solved, master's code is here:
https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
As we use an old version ock mockito (1.x), a fix is not expected to happen,
although cglib might fix this!
What should we do? This stupid issue prevents us from testing Java 9 with Solr
completely!
Uwe Schindler created SOLR-9893:
---
Summary: EasyMock/Mockito no longer works with Java 9 b148+
Key: SOLR-9893
URL: https://issues.apache.org/jira/browse/SOLR-9893
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: Tests
Affects Versions: 6.x, master (7.0)
Reporter: Uwe Schindler
Priority: Critical
EasyMock does not work anymore with latest Java 9, because it uses cglib behind
the scenes, which tries to access a protected method inside the runtime using
setAccessible. This is no longer allowed by Java 9.
Actually this is really stupid. Instead of forcefully making the protected
defineClass method available to the outside, it is much more correct to just
subclass ClassLoader (like the Lucene expressions module does).
I tried updating to easymock/mockito, but all that does not work, approx 25
tests fail. The only way is to disable all Mocking tests in Java 9. The
underlying issue in cglib is still not solved, master's code is here:
https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
As we use an old version of mockito (1.x), a fix is not expected to happen,
although cglib might fix this!
What should we do? This stupid issue prevents us from testing Java 9 with Solr
completely!
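The ClassLoader alternative suggested above can be sketched as follows (a minimal illustration, not the Lucene expressions module's actual code): defineClass is protected, so a subclass may expose it on its own terms instead of forcing access via setAccessible. Producing the bytecode itself is out of scope here.

```java
// Sketch of the "subclass ClassLoader" approach: a subclass may call its
// own inherited protected defineClass, so no setAccessible hack is needed
// and nothing breaks under the Java 9 module system.
public class DefiningClassLoaderSketch extends ClassLoader {
    public DefiningClassLoaderSketch(ClassLoader parent) {
        super(parent);
    }

    // Legal without reflection: a subclass calling its own protected method.
    public Class<?> defineFromBytes(String name, byte[] bytecode) {
        return defineClass(name, bytecode, 0, bytecode.length);
    }

    public static void main(String[] args) {
        DefiningClassLoaderSketch cl =
            new DefiningClassLoaderSketch(ClassLoader.getSystemClassLoader());
        System.out.println(cl.getParent() == ClassLoader.getSystemClassLoader());
    }
}
```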
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6313/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseSerialGC
1 tests failed.
FAILED: org.apache.solr.metrics.JvmMetricsTest.testOperatingSystemMetricsSet
Error Message:
Stack Trace:
java.lang.AssertionError
at
__randomizedtesting.SeedInfo.seed([A665BDDF79CA3CD1:BE9AD9AA80CC6784]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at
org.apache.solr.metrics.JvmMetricsTest.testOperatingSystemMetricsSet(JvmMetricsTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Build Log:
[...truncated 12508 lines...]
[junit4] Suite: org.apache.solr.metrics.JvmMetricsTest
[junit4] 2> Creating dataDir:
[
https://issues.apache.org/jira/browse/SOLR-9887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778858#comment-15778858
]
Alexandre Rafalovitch commented on SOLR-9887:
-
I believe the current direction with Solr is to focus on managed resources that
are REST-managed in a push fashion, as opposed to JDBC pull. This way, the
common issues related to the API, SolrCloud distribution, etc. can be solved for
all of them at once.
Is there any chance you could use that approach? It would be a very valuable
contribution.
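For context, the push-style approach refers to Solr's Managed Resources REST API. As a rough sketch (the collection name "techproducts" and resource name "english" are placeholders, and nothing is actually sent), this only composes the path and JSON body such an update would use:

```java
// Sketch only: builds the REST path and JSON body for a push-style update to
// a managed stopwords resource, following the pattern in the Solr Reference
// Guide. Collection and resource names below are placeholder assumptions.
class ManagedStopwordsRequest {

    // Path of a named managed-stopwords resource under the Schema API.
    static String endpoint(String collection, String resource) {
        return "/solr/" + collection + "/schema/analysis/stopwords/" + resource;
    }

    // The PUT body that adds stopwords is a plain JSON array of strings.
    static String putBody(String... words) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < words.length; i++) {
            if (i > 0) sb.append(",");
            sb.append('"').append(words[i]).append('"');
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        System.out.println("PUT " + endpoint("techproducts", "english"));
        System.out.println(putBody("a", "an", "the")); // ["a","an","the"]
    }
}
```

Because updates are pushed through the collection's API, SolrCloud distribution of the list comes for free, which is the advantage over a per-node JDBC pull.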
> Add KeepWordFilter, StemmerOverrideFilter, StopFilterFactory, SynonymFilter
> that reads data from a JDBC source
> --
>
> Key: SOLR-9887
> URL: https://issues.apache.org/jira/browse/SOLR-9887
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
>Reporter: Tobias Kässmann
>Priority: Minor
>
> We've created some new {{FilterFactories}} that read their stopwords or
> synonyms from a database (via a JDBC source). That enables easy management
> of large lists and also adds the possibility to do this in other tools.
> JDBC data sources are retrieved via JNDI.
> For easy reloading of these lists we've added a {{SeacherAwareReloader}}
> abstraction that reloads the lists on every new-searcher event.
> If this is a feature that is interesting for Solr, we will create a pull
> request. All the sources are currently available here:
> https://github.com/shopping24/solr-jdbc
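The reload-on-new-searcher idea described above can be sketched without any JDBC/JNDI plumbing; the `Supplier` below stands in for the database query, and all names here are illustrative rather than the actual classes in the linked repository:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.function.Supplier;

// Sketch of the reload-on-new-searcher pattern. The JDBC/JNDI lookup is
// replaced by a plain Supplier so the example is self-contained; class and
// method names are illustrative, not taken from shopping24/solr-jdbc.
class ReloadSketch {

    static class SearcherAwareReloader {
        private final Supplier<Set<String>> source; // stands in for a JDBC query
        private volatile Set<String> words = Collections.emptySet();

        SearcherAwareReloader(Supplier<Set<String>> source) {
            this.source = source;
        }

        // Invoked on every new-searcher event: re-pull the word list so
        // filters pick up database changes without a core reload.
        void onNewSearcher() {
            words = new HashSet<>(source.get());
        }

        Set<String> current() {
            return words;
        }
    }

    public static void main(String[] args) {
        SearcherAwareReloader reloader = new SearcherAwareReloader(
                () -> new HashSet<>(Set.of("und", "oder")));
        reloader.onNewSearcher();
        System.out.println(reloader.current().size()); // 2
    }
}
```

The `volatile` swap means in-flight requests keep the old list while new searchers see the refreshed one, which is the same visibility trade-off a real searcher-aware component has to make.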
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[
https://issues.apache.org/jira/browse/LUCENE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uwe Schindler updated LUCENE-7604:
--
Affects Version/s: master (7.0)
6.x
> TestLRUQueryCache.testDetectMutatedQueries does not work on Java 9 b150
> ---
>
> Key: LUCENE-7604
> URL: https://issues.apache.org/jira/browse/LUCENE-7604
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/search
>Affects Versions: 6.x, master (7.0)
>Reporter: Uwe Schindler
> Labels: Java9
>
> For some strange reason, the test testDetectMutatedQueries of
> TestLRUQueryCache suite does not trigger the ConcurrentModificationException
> on changing the hashCode (see BadQuery class).
> I have no idea why this happens, so I will disable this test on Java 9 for
> now.
> The other test also fails with Java 9 because of RamUsageTester
> (LUCENE-7595), but this is unrelated, so I opened a separate issue.
[
https://issues.apache.org/jira/browse/LUCENE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778840#comment-15778840
]
ASF subversion and git services commented on LUCENE-7604:
-
Commit f217e3c43bdf10391fd66d555c478e1318e02299 in lucene-solr's branch
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f217e3c ]
LUCENE-7604: Disable test on Java 9
> TestLRUQueryCache.testDetectMutatedQueries does not work on Java 9 b150
> ---
>
> Key: LUCENE-7604
> URL: https://issues.apache.org/jira/browse/LUCENE-7604
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/search
>Reporter: Uwe Schindler
> Labels: Java9
>
> For some strange reason, the test testDetectMutatedQueries of
> TestLRUQueryCache suite does not trigger the ConcurrentModificationException
> on changing the hashCode (see BadQuery class).
> I have no idea why this happens, so I will disable this test on Java 9 for
> now.
> The other test also fails with Java 9 because of RamUsageTester
> (LUCENE-7595), but this is unrelated, so I opened a separate issue.
[
https://issues.apache.org/jira/browse/LUCENE-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778832#comment-15778832
]
ASF subversion and git services commented on LUCENE-7604:
-
Commit 1d3fb3e9a9ea0e3d566632c0b827dad0295ce425 in lucene-solr's branch
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d3fb3e ]
LUCENE-7604: Disable test on Java 9
> TestLRUQueryCache.testDetectMutatedQueries does not work on Java 9 b150
> ---
>
> Key: LUCENE-7604
> URL: https://issues.apache.org/jira/browse/LUCENE-7604
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/search
>Reporter: Uwe Schindler
> Labels: Java9
>
> For some strange reason, the test testDetectMutatedQueries of
> TestLRUQueryCache suite does not trigger the ConcurrentModificationException
> on changing the hashCode (see BadQuery class).
> I have no idea why this happens, so I will disable this test on Java 9 for
> now.
> The other test also fails with Java 9 because of RamUsageTester
> (LUCENE-7595), but this is unrelated, so I opened a separate issue.
Uwe Schindler created LUCENE-7604:
-
Summary: TestLRUQueryCache.testDetectMutatedQueries does not work
on Java 9 b150
Key: LUCENE-7604
URL: https://issues.apache.org/jira/browse/LUCENE-7604
Project: Lucene - Core
Issue Type: Bug
Components: core/search
Reporter: Uwe Schindler
For some strange reason, the test testDetectMutatedQueries of the
TestLRUQueryCache suite does not trigger the ConcurrentModificationException on
changing the hashCode (see the BadQuery class).
I have no idea why this happens, so I will disable this test on Java 9 for now.
The other test also fails with Java 9 because of RamUsageTester (LUCENE-7595),
but that is unrelated, so I opened a separate issue.
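The hazard this test guards against can be reproduced with a plain HashMap: mutate a key's hashCode after insertion and the entry becomes unreachable. The sketch below illustrates the failure mode only; it is not the actual BadQuery test class:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the failure mode (not the actual BadQuery class): if a
// key's hashCode changes after insertion, a hash-based cache silently loses
// the entry, which is why LRUQueryCache tries to detect mutated queries.
class MutableKeyDemo {

    static final class MutableKey {
        int id;
        MutableKey(int id) { this.id = id; }
        @Override public int hashCode() { return id; }
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).id == id;
        }
    }

    // Returns true when the entry can no longer be found after mutation.
    static boolean lostAfterMutation() {
        Map<MutableKey, String> cache = new HashMap<>();
        MutableKey key = new MutableKey(1);
        cache.put(key, "cached");
        key.id = 2;                     // mutate the key in place
        return !cache.containsKey(key); // lookup hashes to the wrong bucket
    }

    public static void main(String[] args) {
        System.out.println(lostAfterMutation()); // true
    }
}
```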
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/577/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
1 tests failed.
FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest
Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor160.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor160.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:870)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:774)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([6FFCF686A8FE08F5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
[
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778692#comment-15778692
]
Paul Elschot commented on LUCENE-7602:
--
Meanwhile I tried implementing Solr's QueryContext by extending ContextMap,
with no more wrapping of fcontext.qcontext in a SolrContextMap; see
FuncSlotAcc.setNextReader above.
The Solr tests passed, so I think there is no more need for IdentityHashMap in
both Lucene and Solr.
Shall I post a complete patch against master, or just the changes since
yesterday?
> Fix compiler warnings for ant clean compile
> ---
>
> Key: LUCENE-7602
> URL: https://issues.apache.org/jira/browse/LUCENE-7602
> Project: Lucene - Core
> Issue Type: Improvement
>Reporter: Paul Elschot
>Priority: Minor
> Labels: build
> Fix For: trunk
>
> Attachments: LUCENE-7602-ContextMap-lucene.patch,
> LUCENE-7602-ContextMap-solr.patch, LUCENE-7602.patch
>
>
[
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778688#comment-15778688
]
Erick Erickson commented on SOLR-9185:
--
Happened across SOLR-4381 and SOLR-5379 while searching for this JIRA and
thought we should check how/if they're related.
> Solr's "Lucene"/standard query parser should not split on whitespace before
> sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
> Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: SOLR-9185.patch, SOLR-9185.patch, SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across
> whitespace boundaries:
> n-gram analysis
> shingles
> synonyms (especially multi-word for whitespace-separated languages)
> languages where a 'word' can contain whitespace (e.g. vietnamese)
> Its also rather unexpected, as users think their
> charfilters/tokenizers/tokenfilters will do the same thing at index and
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse
> around only real 'operators'.
[
https://issues.apache.org/jira/browse/LUCENE-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778681#comment-15778681
]
Paul Elschot commented on LUCENE-7602:
--
Do you mean like this:
{code}
public class ContextMap extends AbstractMap
{ ... }
{code}
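For reference, a minimal compilable shape of such a class might look like the following; the generic parameters and the HashMap backing store are assumptions for illustration, not taken from the actual LUCENE-7602 patch:

```java
import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Minimal shape of a class extending AbstractMap. Generic parameters and
// the backing store are illustrative assumptions, not from the patch.
class ContextMap extends AbstractMap<String, Object> {
    private final Map<String, Object> backing = new HashMap<>();

    // AbstractMap derives get/size/containsKey/etc. from this single view.
    @Override
    public Set<Map.Entry<String, Object>> entrySet() {
        return backing.entrySet();
    }

    // put must be overridden explicitly; AbstractMap's default
    // implementation throws UnsupportedOperationException.
    @Override
    public Object put(String key, Object value) {
        return backing.put(key, value);
    }

    public static void main(String[] args) {
        ContextMap ctx = new ContextMap();
        ctx.put("qcontext", 42);
        System.out.println(ctx.get("qcontext")); // 42
    }
}
```

AbstractMap only requires entrySet() for a read-only map, so subclassing keeps the boilerplate small while still giving a full Map implementation.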
> Fix compiler warnings for ant clean compile
> ---
>
> Key: LUCENE-7602
> URL: https://issues.apache.org/jira/browse/LUCENE-7602
> Project: Lucene - Core
> Issue Type: Improvement
>Reporter: Paul Elschot
>Priority: Minor
> Labels: build
> Fix For: trunk
>
> Attachments: LUCENE-7602-ContextMap-lucene.patch,
> LUCENE-7602-ContextMap-solr.patch, LUCENE-7602.patch
>
>
[
https://issues.apache.org/jira/browse/LUCENE-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15778672#comment-15778672
]
ASF GitHub Bot commented on LUCENE-7603:
GitHub user mattweber opened a pull request:
https://github.com/apache/lucene-solr/pull/129
LUCENE-7603: Support Graph Token Streams in QueryBuilder
Adds support for handling graph token streams inside the
QueryBuilder util class used by query parsers.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mattweber/lucene-solr LUCENE-7603
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/lucene-solr/pull/129.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #129
commit 568cb43d6af1aeef96cc7b6cabb7237de9058f36
Author: Matt Weber
Date: 2016-12-26T15:50:58Z
Support Graph Token Streams in QueryBuilder
Adds support for handling graph token streams inside the
QueryBuilder util class used by query parsers.
> Support Graph Token Streams in QueryBuilder
> ---
>
> Key: LUCENE-7603
> URL: https://issues.apache.org/jira/browse/LUCENE-7603
> Project: Lucene - Core
> Issue Type: Improvement
> Components: core/queryparser, core/search
>Reporter: Matt Weber
>
> With [LUCENE-6664|https://issues.apache.org/jira/browse/LUCENE-6664] we can
> use multi-term synonyms at query time. A "graph token stream" will be
> created, which is nothing more than using the position length attribute on
> stacked tokens to indicate how many positions a token should span. Currently
> the position length attribute on tokens is ignored during query parsing.
> This issue will add support for handling these graph token streams inside the
> QueryBuilder utility class used by query parsers.
GitHub user mattweber opened a pull request:
https://github.com/apache/lucene-solr/pull/129
LUCENE-7603: Support Graph Token Streams in QueryBuilder
Adds support for handling graph token streams inside the
QueryBuilder util class used by query parsers.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mattweber/lucene-solr LUCENE-7603
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/lucene-solr/pull/129.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #129
commit 568cb43d6af1aeef96cc7b6cabb7237de9058f36
Author: Matt Weber
Date: 2016-12-26T15:50:58Z
Support Graph Token Streams in QueryBuilder
Adds support for handling graph token streams inside the
QueryBuilder util class used by query parsers.
Matt Weber created LUCENE-7603:
--
Summary: Support Graph Token Streams in QueryBuilder
Key: LUCENE-7603
URL: https://issues.apache.org/jira/browse/LUCENE-7603
Project: Lucene - Core
Issue Type: Improvement
Components: core/queryparser, core/search
Reporter: Matt Weber
With [LUCENE-6664|https://issues.apache.org/jira/browse/LUCENE-6664] we can use
multi-term synonyms at query time. A "graph token stream" will be created,
which is nothing more than using the position length attribute on stacked
tokens to indicate how many positions a token should span. Currently the
position length attribute on tokens is ignored during query parsing. This
issue will add support for handling these graph token streams inside the
QueryBuilder utility class used by query parsers.
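The position-length idea can be modeled with plain data: stacked tokens with different position lengths form a graph, and enumerating the paths through it yields the phrase variants a parser should consider. This sketch mirrors the concept behind PositionLengthAttribute; it is not Lucene's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Self-contained model of a "graph token stream": each token records its
// start position and a position length (how many positions it spans).
// Enumerating paths over the graph yields the phrase variants a query
// parser should build. Conceptual sketch only, not Lucene code.
class TokenGraphDemo {

    static final class Token {
        final String term;
        final int pos;     // start position
        final int posLen;  // number of positions the token spans
        Token(String term, int pos, int posLen) {
            this.term = term; this.pos = pos; this.posLen = posLen;
        }
    }

    // Enumerate every phrase that walks the graph from position pos to end.
    static List<String> paths(List<Token> tokens, int pos, int end) {
        List<String> out = new ArrayList<>();
        if (pos == end) {
            out.add("");
            return out;
        }
        for (Token t : tokens) {
            if (t.pos != pos) continue;
            for (String rest : paths(tokens, pos + t.posLen, end)) {
                out.add(rest.isEmpty() ? t.term : t.term + " " + rest);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "wifi" spans two positions; "wi" and "fi" each span one.
        List<Token> graph = Arrays.asList(
            new Token("wifi", 0, 2),
            new Token("wi", 0, 1),
            new Token("fi", 1, 1));
        System.out.println(paths(graph, 0, 2)); // [wifi, wi fi]
    }
}
```

A parser that ignores position length would flatten this into a single bag of overlapping terms; walking the graph instead yields the two distinct phrase alternatives.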
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1035/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC
2 tests failed.
FAILED: org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail
Error Message:
expected:<200> but was:<404>
Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at __randomizedtesting.SeedInfo.seed([CE66D9CCE57AD453:A6D9ECE635E0C6BF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
[
https://issues.apache.org/jira/browse/SOLR-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Erick Erickson resolved SOLR-9892.
--
Resolution: Invalid
Please raise issues like this on the user's list; many more people will see it
and you'll likely get help much more quickly.
If it's determined that this is a problem with Solr code, _then_ you should
raise a JIRA.
> Core is locked
> --
>
> Key: SOLR-9892
> URL: https://issues.apache.org/jira/browse/SOLR-9892
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Environment: UAT
>Reporter: Naresh Kumar Geepalem
>
> Hi Team,
> We have setup of one master and two slaves. One master and one slave in one
> server (same JVM) and another slave in different server. This setup is
> working fine from past 6 months including production environment.
> Slave replication poll interval is 20 seconds on both slaves.
> Now, we are facing below issue in master solr.
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Index dir
> 'D:\inetpub\wwwroot\CMS-Solr\Solr\solr-5.4.1\server\solr\sitecore_marketing_asset_index_master\data\index/'
> of core 'sitecore_marketing_asset_index_master' is already locked. The most
> likely cause is another Solr server (or another solr core in this server)
> also configured to use this directory; other possible causes may be specific
> to lockType: nativesitecore_web_index:
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Index dir 'D:\inetpub\wwwroot\CMS-Solr\Solr\
> We have stopped both slaves and restarted master. Still same issue is coming.
> We have deleted write.lock file from all cores under data/index.
> Then "site cant be reached" message is showing when access solr url.
> Can some one please provide information on how to fix this as we are struck
> with this issue during UAT?
> Thanks,
> G. Naresh Kumar
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/238/
2 tests failed.
FAILED: org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.testSpecificConfigsets
Error Message:
KeeperErrorCode = NoNode for /collections/withconfigset2
Stack Trace:
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
for /collections/withconfigset2
at __randomizedtesting.SeedInfo.seed([D95ACC6B67E1C0E6:F424833190C57AEA]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:356)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:353)
at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testSpecificConfigsets(CollectionsAPIDistributedZkTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[
https://issues.apache.org/jira/browse/SOLR-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Naresh Kumar Geepalem updated SOLR-9892:
Description:
Hi Team,
We have a setup of one master and two slaves: one master and one slave on one
server (same JVM) and another slave on a different server. This setup has been
working fine for the past 6 months, including in the production environment.
The slave replication poll interval is 20 seconds on both slaves.
Now we are facing the below issue on the master Solr.
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Index dir
'D:\inetpub\wwwroot\CMS-Solr\Solr\solr-5.4.1\server\solr\sitecore_marketing_asset_index_master\data\index/'
of core 'sitecore_marketing_asset_index_master' is already locked. The most
likely cause is another Solr server (or another solr core in this server) also
configured to use this directory; other possible causes may be specific to
lockType: nativesitecore_web_index:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Index dir 'D:\inetpub\wwwroot\CMS-Solr\Solr\
We have stopped both slaves and restarted the master, but the same issue still
occurs. We have deleted the write.lock file from all cores under data/index.
Now a "site can't be reached" message is shown when accessing the Solr URL.
Can someone please provide information on how to fix this, as we are stuck
with this issue during UAT?
Thanks,
G. Naresh Kumar
was:
Hi Team,
We have setup of one master and two slaves. One master and one slave one server
(same JVM) and another slave in different server. This setup is working fine
from past 6 months including production environment.
Now, we are facing below issue in master solr.
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Index dir
'D:\inetpub\wwwroot\CMS-Solr\Solr\solr-5.4.1\server\solr\sitecore_marketing_asset_index_master\data\index/'
of core 'sitecore_marketing_asset_index_master' is already locked. The most
likely cause is another Solr server (or another solr core in this server) also
configured to use this directory; other possible causes may be specific to
lockType: nativesitecore_web_index:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Index dir 'D:\inetpub\wwwroot\CMS-Solr\Solr\
We have stopped both slaves and restarted master. Still same issue is coming.
We have deleted write.lock file from all cores under data/index.
Then "site cant be reached" message is showing when access solr url.
Can some one please provide information on how to fix this as we are struck
with this issue during UAT?
Thanks,
G. Naresh Kumar
> Core is locked
> --
>
> Key: SOLR-9892
> URL: https://issues.apache.org/jira/browse/SOLR-9892
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Environment: UAT
>Reporter: Naresh Kumar Geepalem
>
> Hi Team,
> We have a setup of one master and two slaves: the master and one slave run on
> one server (in the same JVM), and the other slave runs on a different server.
> This setup has been working fine for the past six months, including in
> production.
> The slave replication poll interval is 20 seconds on both slaves.
> Now we are seeing the following issue on the master Solr:
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Index dir
> 'D:\inetpub\wwwroot\CMS-Solr\Solr\solr-5.4.1\server\solr\sitecore_marketing_asset_index_master\data\index/'
> of core 'sitecore_marketing_asset_index_master' is already locked. The most
> likely cause is another Solr server (or another solr core in this server)
> also configured to use this directory; other possible causes may be specific
> to lockType: native
> sitecore_web_index:
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Index dir 'D:\inetpub\wwwroot\CMS-Solr\Solr\
> We stopped both slaves and restarted the master, but the same issue keeps
> coming back. We then deleted the write.lock file from all cores under
> data/index; after that, a "site can't be reached" message appears when
> accessing the Solr URL.
> Could someone please provide information on how to fix this? We are stuck
> with this issue during UAT.
> Thanks,
> G. Naresh Kumar
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
Naresh Kumar Geepalem created SOLR-9892:
---
Summary: Core is locked
Key: SOLR-9892
URL: https://issues.apache.org/jira/browse/SOLR-9892
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Environment: UAT
Reporter: Naresh Kumar Geepalem
Hi Team,
We have a setup of one master and two slaves: the master and one slave run on
one server (in the same JVM), and the other slave runs on a different server.
This setup has been working fine for the past six months, including in
production.
Now we are seeing the following issue on the master Solr:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Index dir
'D:\inetpub\wwwroot\CMS-Solr\Solr\solr-5.4.1\server\solr\sitecore_marketing_asset_index_master\data\index/'
of core 'sitecore_marketing_asset_index_master' is already locked. The most
likely cause is another Solr server (or another solr core in this server) also
configured to use this directory; other possible causes may be specific to
lockType: native
sitecore_web_index:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Index dir 'D:\inetpub\wwwroot\CMS-Solr\Solr\
We stopped both slaves and restarted the master, but the same issue keeps
coming back. We then deleted the write.lock file from all cores under
data/index; after that, a "site can't be reached" message appears when
accessing the Solr URL.
Could someone please provide information on how to fix this? We are stuck
with this issue during UAT.
Thanks,
G. Naresh Kumar
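[Editor's note] For context on what the "already locked" error means: each Lucene index directory has a single write lock, and a second writer that tries to take it fails. The sketch below (Python, illustrating plain lock-file semantics similar to Solr's "simple" lockType; it is not Solr's actual code) shows why two cores pointed at the same index directory collide. Note that with lockType "native" the lock is an OS-level file lock held by the running process, so deleting write.lock while a Solr process is still running does not release anything; the usual fix is to ensure only one Solr instance or core uses each index directory.

```python
import os
import tempfile

class IndexLockError(Exception):
    """Raised when the index lock is already held, like Solr's 'already locked' error."""

def acquire_lock(index_dir):
    # Create the lock file atomically; O_EXCL makes open() fail if the file
    # already exists, which is how a second writer detects the held lock.
    path = os.path.join(index_dir, "write.lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        raise IndexLockError(f"Index dir '{index_dir}' is already locked")
    os.close(fd)
    return path

def release_lock(lock_path):
    # Only the writer that holds the lock should remove it.
    os.remove(lock_path)

index_dir = tempfile.mkdtemp()
lock = acquire_lock(index_dir)      # first writer succeeds
try:
    acquire_lock(index_dir)         # second writer on the same dir fails
except IndexLockError as e:
    print("second writer:", e)
release_lock(lock)
```

This is why the error message points at "another Solr server (or another solr core in this server) also configured to use this directory" as the most likely cause.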
[
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778369#comment-15778369
]
Cao Manh Dat edited comment on SOLR-9835 at 12/26/16 1:50 PM:
--
[~yo...@apache.org][~ysee...@gmail.com] : Here is the scenario for the problem
I encountered today:
- a replica (call it rep1) is in recovering mode -> its ulog is in buffering
state.
- rep1 receives an update (containing doc1); rep1 writes the update to its
tlog without updating ulog.map for real-time get.
- rep1 replays the buffered updates: it writes doc1 to its index and updates
ulog.map for real-time get (but in this case ulog.map points doc1 ->
position = -1, because we don't write the updateCommand with the REPLAY flag
to the tlog).
- a client calls real-time get for doc1.
- rep1 will always open a real-time searcher in this case, because ulog.map
returns position = -1 for doc1.
I wonder why we do it this way. Why don't we just write the update to the
tlog and ulog.map, so we don't have to open a new real-time searcher in this
case?
was (Author: caomanhdat):
[~yo...@apache.org][~ysee...@gmail.com] : Here are scenario for the problem
that I encountered today
- an replica ( let's call it rep1 ) is on recovering mode -> its ulog will be
on buffering state.
- rep1 receives an update ( contain doc1 ), rep1 will write the update to its
tlog without updating ulog.map for real-time-get
- rep1 replay buffered updates, rep1 will write doc1 to its index, and update
ulog.map for real-time-get ( but in this case, ulog.map will point doc1 ->
position = -1 because we don't write updateCommand with REPLAY flag to tlog )
- client call real-time-get for doc1
- rep1 will always open a real-time-searcher for this case
I just wonder why we do that currently? Why don't we just write the update to
tlog and ulog.map so we don't have to open a new real-time-searcher for this
case?
> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas
> start in the same initial state, and each input is distributed across the
> replicas so that all of them end up in the same next state. But this type of
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Recovery is slow: if a replica misses more than N updates while it is
> down, it has to download the entire index from its leader.
> So we propose another replication mode for SolrCloud, called state transfer,
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the
> leader applies the update to its IndexWriter; the other replicas just store
> the update in their UpdateLog (as in replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
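[Editor's note] The proposed state-transfer mode can be sketched as a toy model (hypothetical names, not Solr's actual classes): only the leader applies updates to its index and pays the commit cost, while followers append updates to a log and periodically copy the leader's committed segments.

```python
class Leader:
    def __init__(self):
        self.index = {}     # stands in for the IndexWriter: only the leader applies updates
        self.segments = {}  # committed segments, published to followers

    def update(self, doc_id, doc, followers):
        self.index[doc_id] = doc
        for f in followers:
            f.update_log.append((doc_id, doc))  # followers only log the update

    def commit(self):
        # Only the leader pays the (costly) commit.
        self.segments = dict(self.index)

class Follower:
    def __init__(self):
        self.update_log = []
        self.segments = {}

    def poll(self, leader):
        # Followers periodically copy the leader's committed segments,
        # like master/slave replication; recovery is just "copy what's missing".
        self.segments = dict(leader.segments)

leader = Leader()
followers = [Follower(), Follower()]
leader.update("doc1", {"v": 1}, followers)
leader.commit()
for f in followers:
    f.poll(leader)
print(followers[0].segments)
```

In this shape, a follower that was down for a while does not replay missed updates one by one; it simply polls the current segments, which is the "very fast recovery" claimed above.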
[
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778369#comment-15778369
]
Cao Manh Dat commented on SOLR-9835:
[~yo...@apache.org][~ysee...@gmail.com] : Here is the scenario for the problem
I encountered today:
- a replica (call it rep1) is in recovering mode -> its ulog is in buffering
state.
- rep1 receives an update (containing doc1); rep1 writes the update to its
tlog without updating ulog.map for real-time get.
- rep1 replays the buffered updates: it writes doc1 to its index and updates
ulog.map for real-time get (but in this case ulog.map points doc1 ->
position = -1, because we don't write the updateCommand with the REPLAY flag
to the tlog).
- a client calls real-time get for doc1.
- rep1 will always open a real-time searcher in this case.
I wonder why we do it this way. Why don't we just write the update to the
tlog and ulog.map, so we don't have to open a new real-time searcher in this
case?
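[Editor's note] The scenario above can be reduced to a toy model (hypothetical names; the real logic lives in Solr's UpdateLog): during replay, buffered updates are applied to the index but no REPLAY command is appended to the tlog, so the real-time-get map can only record position -1, which forces the expensive searcher-reopen path.

```python
class ToyUpdateLog:
    def __init__(self):
        self.tlog = []   # appended update commands
        self.map = {}    # doc id -> tlog position, used by real-time get

    def buffer_update(self, doc_id, doc):
        # Buffering state: write to the tlog, but do NOT update the map.
        self.tlog.append((doc_id, doc))

    def replay(self, index):
        # Replay applies buffered docs to the index; since no REPLAY command
        # is written to the tlog, the map can only record position = -1.
        for doc_id, doc in self.tlog:
            index[doc_id] = doc
            self.map[doc_id] = -1

def real_time_get(ulog, index, doc_id):
    pos = ulog.map.get(doc_id, -1)
    if pos >= 0:
        return ("from-tlog", ulog.tlog[pos][1])
    # position -1 forces the expensive path: reopen a real-time searcher.
    return ("reopened-searcher", index.get(doc_id))

index = {}
ulog = ToyUpdateLog()
ulog.buffer_update("doc1", {"x": 1})  # rep1 buffers the update
ulog.replay(index)                    # replay writes doc1 to the index
print(real_time_get(ulog, index, "doc1"))
```

The question in the comment is whether replay could also record a real tlog position (the "from-tlog" branch) so the searcher reopen is unnecessary.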
> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas
> start in the same initial state, and each input is distributed across the
> replicas so that all of them end up in the same next state. But this type of
> replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Recovery is slow: if a replica misses more than N updates while it is
> down, it has to download the entire index from its leader.
> So we propose another replication mode for SolrCloud, called state transfer,
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the
> leader applies the update to its IndexWriter; the other replicas just store
> the update in their UpdateLog (as in replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/647/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
1 tests failed.
FAILED: org.apache.solr.core.TestDynamicLoading.testDynamicLoading
Error Message:
Could not get expected value 'X val' for path 'x' full output: {
"responseHeader":{ "status":0, "QTime":0}, "params":{"wt":"json"},
"context":{ "webapp":"/h_", "path":"/test1", "httpMethod":"GET"},
"class":"org.apache.solr.core.BlobStoreTestRequestHandler", "x":null}, from
server: null
Stack Trace:
java.lang.AssertionError: Could not get expected value 'X val' for path 'x'
full output: {
"responseHeader":{
"status":0,
"QTime":0},
"params":{"wt":"json"},
"context":{
"webapp":"/h_",
"path":"/test1",
"httpMethod":"GET"},
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",
"x":null}, from server: null
at
__randomizedtesting.SeedInfo.seed([715063A5C1032B3:DF582B6DABCD9713]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[
https://issues.apache.org/jira/browse/LUCENE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778113#comment-15778113
]
Jim Ferenczi commented on LUCENE-7055:
--
{quote}
I think this problem was solved with the two-phase iteration API: if you put a
DocValuesNumbersQuery in a conjunction, ConjunctionScorer will make sure to use
the two-phase iteration API on the DocValuesNumbersQuery, so it will never make
it search for the next matching doc.
{quote}
Thanks for the explanation, I did not notice that RandomAccessWeight was meant
to do that.
{quote}
I am fine either way. I started with your idea but later switched to a boolean
since I thought it would be easier to test and would open this API to a couple
more use-cases in addition to conjunctions, in particular facets on filters
(since filters are consumed in a random-access fashion in that case) and
disjunctions (MUST_NOT clauses).
{quote}
I agree. I was not sure about using DocValuesNumbersQuery when its cost is
large and the conjunction with another clause is sparse, but as you mentioned,
the two-phase iteration API should handle this case efficiently. So +1 to
keeping the boolean if it simplifies the logic.
> Better execution path for costly queries
>
>
> Key: LUCENE-7055
> URL: https://issues.apache.org/jira/browse/LUCENE-7055
> Project: Lucene - Core
> Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Attachments: LUCENE-7055.patch
>
>
> In Lucene 5.0, we improved the execution path for queries that run costly
> operations on a per-document basis, like phrase queries or doc values
> queries. But we have another class of costly queries, that return fine
> iterators, but these iterators are very expensive to build. This is typically
> the case for queries that leverage DocIdSetBuilder, like TermsQuery,
> multi-term queries or the new point queries. Intersecting such queries with a
> selective query is very inefficient since these queries build a doc id set of
> matching documents for the entire index.
> Is there something we could do to improve the execution path for these
> queries?
> One idea that comes to mind is that most of these queries could also run on
> doc values, so maybe we could come up with something that would help decide
> how to run a query based on other parts of the query? (Just thinking out
> loud, other ideas are very welcome)
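[Editor's note] The approach discussed in this thread (the LazyScorer / IndexOrDocValuesQuery idea) exposes two execution strategies and picks between them based on the cost of the leading clause: build the full doc-id set up front when this query leads, or fall back to random-access doc-values checks when a much more selective clause leads. A toy version of that decision (hypothetical names, not Lucene's API):

```python
def choose_strategy(query_cost, lead_cost):
    # If this clause leads the conjunction (or the lead is not cheaper),
    # pay the up-front cost of building the full doc-id set once.
    # If another clause is far more selective, verify its few candidates
    # with random-access doc values instead of building the whole set.
    return "build-doc-id-set" if lead_cost >= query_cost else "doc-values-check"

# A selective term query leading 1M point-query matches: check doc values.
print(choose_strategy(query_cost=1_000_000, lead_cost=50))
# The point query itself leads: building the set once is worth it.
print(choose_strategy(query_cost=1_000_000, lead_cost=2_000_000))
```

The debate in the comments is exactly about where this decision should live: in the wrapping query (driven by a min-cost argument) or in the parent conjunction (driven by a boolean flag).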
[
https://issues.apache.org/jira/browse/LUCENE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778043#comment-15778043
]
Adrien Grand commented on LUCENE-7055:
--
bq. maybe instead of a boolean LazyScorer#get should take the min cost as an
argument. With a simple boolean it's the parent query that leads the decision
based on the min cost
I am fine either way. I started with your idea but later switched to a boolean
since I thought it would be easier to test and would open this API to a couple
more use-cases in addition to conjunctions, in particular facets on filters
(since filters are consumed in a random-access fashion in that case) and
disjunctions (MUST_NOT clauses).
bq. I also wonder if it's possible to completely disable the search for the
next doc ids in the DocValuesNumbersQuery. Isn't it possible to transform this
type of query in a simple filter that accepts or rejects docids ? This would
eliminate the need to switch to a point query when the min cost is smaller than
the point query cost but big enough to make the docvalues query costly since it
will need to find the next docids that matches the range every time the leading
iteration finds a match.
I think this problem was solved with the two-phase iteration API: if you put a
DocValuesNumbersQuery in a conjunction, ConjunctionScorer will make sure to use
the two-phase iteration API on the DocValuesNumbersQuery, so it will never make
it search for the next matching doc.
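[Editor's note] Lucene's two-phase iteration splits a scorer into a cheap approximation plus a per-document matches() check; in a conjunction the selective clause leads and the doc-values clause only confirms candidates. A minimal Python sketch of the idea (simplified; not Lucene's actual TwoPhaseIterator API):

```python
class TwoPhase:
    """A cheap approximation (candidate doc ids) plus a per-document
    confirmation check, loosely modeling Lucene's TwoPhaseIterator."""
    def __init__(self, approximation, matches):
        self.approximation = approximation  # iterable of candidate doc ids
        self.matches = matches              # doc id -> bool, the costly check

def conjunction(lead_docs, two_phase):
    # The selective clause leads; the costly clause never advances on its
    # own, it only confirms each candidate with a random-access check.
    approx = set(two_phase.approximation)
    for doc in lead_docs:
        if doc in approx and two_phase.matches(doc):
            yield doc

# Per-document values (random access), standing in for DocValuesNumbersQuery.
doc_values = {0: 5, 3: 42, 7: 42, 9: 13}
dv_query = TwoPhase(
    approximation=doc_values.keys(),        # any doc with a value is a candidate
    matches=lambda doc: doc_values[doc] == 42,
)
selective_term_query = [3, 7, 9]            # the cheap, selective lead clause
print(list(conjunction(selective_term_query, dv_query)))
```

This is why, as explained above, the doc-values query never has to "search for the next matching doc" on its own: it only answers yes/no for docs the lead clause proposes.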
> Better execution path for costly queries
>
>
> Key: LUCENE-7055
> URL: https://issues.apache.org/jira/browse/LUCENE-7055
> Project: Lucene - Core
> Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Attachments: LUCENE-7055.patch
>
>
> In Lucene 5.0, we improved the execution path for queries that run costly
> operations on a per-document basis, like phrase queries or doc values
> queries. But we have another class of costly queries, that return fine
> iterators, but these iterators are very expensive to build. This is typically
> the case for queries that leverage DocIdSetBuilder, like TermsQuery,
> multi-term queries or the new point queries. Intersecting such queries with a
> selective query is very inefficient since these queries build a doc id set of
> matching documents for the entire index.
> Is there something we could do to improve the execution path for these
> queries?
> One idea that comes to mind is that most of these queries could also run on
> doc values, so maybe we could come up with something that would help decide
> how to run a query based on other parts of the query? (Just thinking out
> loud, other ideas are very welcome)
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/627/
1 tests failed.
FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest
Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at
org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:137) at
org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:94) at
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor187.newInstance(Unknown Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704) at
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766) at
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005) at
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:870) at
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:774) at
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at
java.util.concurrent.FutureTask.run(FutureTask.java:266) at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:94)
at
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor187.newInstance(Unknown
Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:870)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:774)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
at
org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([69CB894DA9B28985]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at
[
https://issues.apache.org/jira/browse/LUCENE-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Adrien Grand resolved LUCENE-7401.
--
Resolution: Fixed
Fix Version/s: 6.4
master (7.0)
Thanks Mike for having a look.
> BKDWriter should ensure all dimensions are indexed
> --
>
> Key: LUCENE-7401
> URL: https://issues.apache.org/jira/browse/LUCENE-7401
> Project: Lucene - Core
> Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7401.patch
>
>
> The current heuristic is to use the dimension that has the largest span, so
> if dimensions have a different distribution of values, we could theoretically
> (but maybe in practice too?) end up with one dimension that is not indexed at
> all and queries that are mostly selective on this dimension would need to
> scan lots of blocks.
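[Editor's note] The failure mode described above — always splitting on the widest dimension can starve a narrow dimension entirely — can be sketched with a toy split-dimension chooser. The guard below is purely illustrative (it is not BKDWriter's actual rule); it just shows one simple way to guarantee every dimension gets split at least once.

```python
def choose_split_dim(spans, split_counts):
    # Naive heuristic from the issue description: pick the widest dimension.
    widest = max(range(len(spans)), key=lambda d: spans[d])
    # Illustrative guard: if some dimension has never been split, pick it
    # instead, so no dimension ends up completely un-indexed.
    never_split = [d for d in range(len(spans)) if split_counts[d] == 0]
    if never_split and widest not in never_split:
        return never_split[0]
    return widest

spans = [100.0, 1.0]   # dim 0 always has the largest span
counts = [0, 0]
order = []
for _ in range(4):
    d = choose_split_dim(spans, counts)
    counts[d] += 1
    order.append(d)
print(order)
```

Without the guard, this example would split on dimension 0 every time, and queries selective on dimension 1 would have to scan lots of blocks.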
[
https://issues.apache.org/jira/browse/LUCENE-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778024#comment-15778024
]
ASF subversion and git services commented on LUCENE-7401:
-
Commit ba47f530d1165d4518569422472bc9e4f1c04b26 in lucene-solr's branch
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba47f53 ]
LUCENE-7401: Make sure BKD trees index all dimensions.
> BKDWriter should ensure all dimensions are indexed
> --
>
> Key: LUCENE-7401
> URL: https://issues.apache.org/jira/browse/LUCENE-7401
> Project: Lucene - Core
> Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.4
>
> Attachments: LUCENE-7401.patch
>
>
> The current heuristic is to use the dimension that has the largest span, so
> if dimensions have a different distribution of values, we could theoretically
> (but maybe in practice too?) end up with one dimension that is not indexed at
> all and queries that are mostly selective on this dimension would need to
> scan lots of blocks.
[
https://issues.apache.org/jira/browse/LUCENE-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778007#comment-15778007
]
ASF subversion and git services commented on LUCENE-7401:
-
Commit 0c1cab71920a54080701f7198ca402e16740 in lucene-solr's branch
refs/heads/branch_6x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c1cab7 ]
LUCENE-7401: Make sure BKD trees index all dimensions.
> BKDWriter should ensure all dimensions are indexed
> --
>
> Key: LUCENE-7401
> URL: https://issues.apache.org/jira/browse/LUCENE-7401
> Project: Lucene - Core
> Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7401.patch
>
>
> The current heuristic is to use the dimension that has the largest span, so
> if dimensions have a different distribution of values, we could theoretically
> (but maybe in practice too?) end up with one dimension that is not indexed at
> all and queries that are mostly selective on this dimension would need to
> scan lots of blocks.
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6312/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseConcMarkSweepGC
1 tests failed.
FAILED: org.apache.solr.metrics.JvmMetricsTest.testOperatingSystemMetricsSet
Error Message:
Stack Trace:
java.lang.AssertionError
at
__randomizedtesting.SeedInfo.seed([230C59855969EF63:3BF33DF0A06FB436]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at
org.apache.solr.metrics.JvmMetricsTest.testOperatingSystemMetricsSet(JvmMetricsTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Build Log:
[...truncated 12509 lines...]
[junit4] Suite: org.apache.solr.metrics.JvmMetricsTest
[junit4] 2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.metrics.JvmMetricsTest_230C59855969EF63-001\init-core-data-001
[ https://issues.apache.org/jira/browse/LUCENE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15777952#comment-15777952 ]
Jim Ferenczi commented on LUCENE-7055:
--
I like the new cost estimation and the lazy scorer, but maybe instead of a
boolean, LazyScorer#get should take the min cost as an argument. With a simple
boolean it's the parent query that drives the decision based on the min cost.
The min cost could be big while the intersection with the point query is
sparse, so I think it would be more flexible if the IndexOrDocValuesQuery made
the choice itself. I also wonder whether it's possible to completely disable
the search for the next doc ids in the DocValuesNumbersQuery. Couldn't this
type of query be transformed into a simple filter that accepts or rejects doc
ids? That would eliminate the need to switch to a point query when the min
cost is smaller than the point query's cost but still big enough to make the
doc values query costly, since the doc values query would otherwise need to
find the next doc ids that match the range every time the leading iterator
finds a match.
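The lazy-scorer pattern discussed above can be sketched roughly as follows. This is a hypothetical illustration, not Lucene's actual API: all class and method names (LazyScorerSketch, dvCostPerMatch, indexBuildCost) are made up. It shows the shape of the suggestion, where a cheap cost() estimate is available up front and get(leadCost) receives the conjunction's min cost so the query itself, rather than the parent, picks the cheaper execution path when it finally builds the expensive iterator.

```java
// Hypothetical sketch of the pattern discussed above, NOT Lucene's real API.
public class LazyScorerSketch {

    private final long dvCostPerMatch;  // assumed cost to verify one candidate via doc values
    private final long indexBuildCost;  // assumed up-front cost to build the point-query iterator
    private boolean built = false;      // tracks whether the expensive build has happened

    public LazyScorerSketch(long dvCostPerMatch, long indexBuildCost) {
        this.dvCostPerMatch = dvCostPerMatch;
        this.indexBuildCost = indexBuildCost;
    }

    /** Cheap estimate, available before any iterator is built. */
    public long cost() {
        return indexBuildCost;
    }

    /**
     * Expensive step, deferred until the conjunction knows the min cost of
     * the other clauses. Per the suggestion above, the min cost is passed in
     * so this query, not the parent, chooses the cheaper strategy.
     */
    public String get(long leadCost) {
        built = true;
        // Verify leadCost candidates via doc values if that is cheaper than
        // building the full point-query iterator; otherwise use the points.
        return leadCost * dvCostPerMatch < indexBuildCost ? "doc-values" : "points";
    }

    public boolean wasBuilt() {
        return built;
    }
}
```

Nothing expensive happens until get() is called, which is the point of the design: a sparse lead (few candidates) steers execution to per-document doc-values checks, while a dense lead pays the one-time index-iterator build instead.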
> Better execution path for costly queries
>
>
> Key: LUCENE-7055
> URL: https://issues.apache.org/jira/browse/LUCENE-7055
> Project: Lucene - Core
> Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
> Attachments: LUCENE-7055.patch
>
>
> In Lucene 5.0, we improved the execution path for queries that run costly
> operations on a per-document basis, like phrase queries or doc values
> queries. But we have another class of costly queries, that return fine
> iterators, but these iterators are very expensive to build. This is typically
> the case for queries that leverage DocIdSetBuilder, like TermsQuery,
> multi-term queries or the new point queries. Intersecting such queries with a
> selective query is very inefficient since these queries build a doc id set of
> matching documents for the entire index.
> Is there something we could do to improve the execution path for these
> queries?
> One idea that comes to mind is that most of these queries could also run on
> doc values, so maybe we could come up with something that would help decide
> how to run a query based on other parts of the query? (Just thinking out
> loud, other ideas are very welcome)
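The inefficiency described above can be made concrete with a back-of-the-envelope sketch. The names and numbers here are purely illustrative assumptions, not anything from Lucene: they just contrast eagerly building a doc id set over every match in the index with verifying only the candidates that a selective lead clause actually produces.

```java
// Illustrative cost comparison for the scenario described above; all names
// and figures are hypothetical.
public class CostlyIteratorSketch {

    /** Eager path: build a doc id set covering every match in the index. */
    public static long eagerCost(long matchesInIndex) {
        return matchesInIndex;
    }

    /**
     * Per-document alternative: check only the candidates produced by the
     * selective lead clause, at some assumed cost per check (e.g. a
     * doc-values lookup).
     */
    public static long perDocCost(long leadCandidates, long costPerCheck) {
        return leadCandidates * costPerCheck;
    }

    public static void main(String[] args) {
        long matches = 10_000_000L;   // e.g. a broad terms or range query
        long leadCandidates = 1_000L; // a selective conjunction partner
        // Eager: pay for all 10M matches even though only 1K docs are checked.
        System.out.println(eagerCost(matches));
        // Per-doc: pay only for the 1K candidates, at 2 units per check.
        System.out.println(perDocCost(leadCandidates, 2));
    }
}
```

Even with a generous per-check cost, the per-document path is orders of magnitude cheaper here, which is why running such queries on doc values (when the rest of the conjunction is selective) looks attractive.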
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1193/
7 tests failed.
FAILED: org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings
Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, startOffset=24,endOffset=22
Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and endOffset must be >= startOffset, startOffset=24,endOffset=22
at __randomizedtesting.SeedInfo.seed([740BB1C4895371B0:1E500ED5D01D5143]:0)
at org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:107)
at org.apache.lucene.analysis.synonym.FlattenGraphFilter.releaseBufferedToken(FlattenGraphFilter.java:237)
at org.apache.lucene.analysis.synonym.FlattenGraphFilter.incrementToken(FlattenGraphFilter.java:264)
at org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:724)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:635)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:533)
at org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at