[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 40 - Failure

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/40/

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at __randomizedtesting.SeedInfo.seed([7B8FBCAAC3B8A4E:6473CD4835F4F963]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.request.TestUnInvertedFieldException

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrIndexSearcher, 

[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11-ea+5) - Build # 24 - Still Unstable!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/24/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader

Error Message:
Doc with id=4 not found in http://127.0.0.1:33659/solr/outOfSyncReplicasCannotBecomeLeader-false due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=4 not found in http://127.0.0.1:33659/solr/outOfSyncReplicasCannotBecomeLeader-false due to: Path not found: /id; rsp={doc=null}
at __randomizedtesting.SeedInfo.seed([F3539682567A7275:8DB8B692951D7D4F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestCloudConsistency.assertDocExists(TestCloudConsistency.java:254)
at org.apache.solr.cloud.TestCloudConsistency.assertDocsExistInAllReplicas(TestCloudConsistency.java:238)
at org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:131)
at org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:93)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Jeff Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441909#comment-16441909
 ] 

Jeff Miller commented on SOLR-12232:


"Please don't shoot the messenger "

Sometimes it's about how you deliver it, not the message itself.  Your point is 
understood and appreciated.

> NativeFSLockFactory loses the channel when a thread is interrupted and the 
> SolrCore becomes unusable after
> --
>
> Key: SOLR-12232
> URL: https://issues.apache.org/jira/browse/SOLR-12232
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: NativeFSLockFactory, locking
>   Original Estimate: 24h
>  Time Spent: 10m
>  Remaining Estimate: 23h 50m
>
> The condition is rare for us and seems to be basically a race.  If a running 
> thread happens to have the FileChannel open for NativeFSLockFactory and is 
> interrupted, the channel is closed, since it extends 
> [AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html].
> Unfortunately this means the SolrCore has to be unloaded and reopened to 
> make the core usable again, as the ensureValid check forever throws an 
> exception afterward.
> org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an 
> external force: 
> NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
>  exclusive invalid],creationTime=2018-04-06T21:45:11Z) at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178)
>  at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:113)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
>  at 
> org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
>  
> The proposed solution is to use AsynchronousFileChannel instead, since this 
> code only operates on the lock and the .size() method.
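The failure mode quoted above is easy to reproduce outside Solr. The standalone sketch below (illustrative only; the file name and printed message are not from the issue) shows that calling an interruptible NIO operation such as FileChannel.size() on a thread whose interrupt flag is already set closes the channel and throws ClosedByInterruptException, which is exactly how the lock's FileChannel gets invalidated:

```java
import java.io.IOException;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptClosesChannel {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("write", ".lock");   // stand-in for write.lock
        FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ, StandardOpenOption.WRITE);
        try {
            Thread.currentThread().interrupt();  // the "external force" interrupting us
            ch.size();                           // any interruptible NIO call will do
            System.out.println("no exception");
        } catch (ClosedByInterruptException e) {
            // The channel is now closed for good -- ensureValid() would fail forever.
            System.out.println("channel closed by interrupt: " + !ch.isOpen());
        } finally {
            Thread.interrupted();                // clear the flag before cleanup
            ch.close();
            Files.deleteIfExists(tmp);
        }
    }
}
```

Once this happens, every later operation through the lock sees a dead channel, matching the AlreadyClosedException in the quoted trace.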



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441898#comment-16441898
 ] 

Robert Muir commented on SOLR-12232:


{quote}
Understood. But we don't have to use NIO.
{quote}

Yes, use another lock factory or some alternative if you want. But this is the 
NIO lock factory, and it uses NIO. Its behavior is correct: it's wrong to 
interrupt NIO operations. It is definitely OK to dictate that interrupting NIO 
is wrong; we document it that way for a reason, because it's dangerous.

Lock validation and the other checks here are important because they prevent 
crazy, corruption-looking cases from showing up. Please don't shoot the 
messenger; fix the actual bugs instead (the perpetrator calling interrupt on 
Lucene threads).





[jira] [Comment Edited] (SOLR-12233) QParserPlugin maintains a list of classes recreated every time a Solrcore object is created

2018-04-17 Thread Jeff Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441896#comment-16441896
 ] 

Jeff Miller edited comment on SOLR-12233 at 4/18/18 4:54 AM:
-

In our use case we're loading some 60,000 SolrCore objects over roughly four 
hours, and this was one of the high items in the profiling. The other was 
creating the handlers, which is a whole other problem (the count has doubled 
since the Solr 4 days); luckily we aren't using half of them and can just 
disable them.

*Edit - I should probably mention these are transient cores: the SolrCore 
objects are destroyed oldest-first once we hit a threshold, so in some 
scenarios we recreate the core objects a lot.



> QParserPlugin maintains a list of classes recreated every time a Solrcore 
> object is created
> ---
>
> Key: SOLR-12233
> URL: https://issues.apache.org/jira/browse/SOLR-12233
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: Performance, qparserplugin
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> QParserPlugin maintains a static map of class names to Class objects, and 
> every time we create a SolrCore object this causes a lot of overhead doing 
> classloader lookups.  Our system runs a lot of cores, and the classloader 
> gets bogged down when many threads are creating SolrCore objects.
> There's no need to create these objects every time; similar classes such as 
> TransformerFactory store the object once and reference it over and over 
> again.
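As a rough illustration of the fix the issue suggests (cache instances the way TransformerFactory does, instead of re-resolving classes on every SolrCore load), a memoizing registry might look like the sketch below. The names here are hypothetical, not Solr's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: resolve/instantiate a plugin once per name and hand
// the same instance to every subsequent SolrCore, instead of repeating
// classloader lookups and construction on each core load.
public class PluginCache {
    private static final Map<String, Object> INSTANCES = new ConcurrentHashMap<>();

    static Object lookup(String name, Supplier<Object> expensiveFactory) {
        // computeIfAbsent runs the factory at most once per name
        return INSTANCES.computeIfAbsent(name, n -> expensiveFactory.get());
    }

    public static void main(String[] args) {
        Object first  = lookup("lucene", Object::new);   // factory runs here
        Object second = lookup("lucene", Object::new);   // cache hit, no new object
        System.out.println("shared instance: " + (first == second));
    }
}
```

This only pays the classloader cost once per plugin name, which is the behavior the comment attributes to TransformerFactory.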









[jira] [Commented] (SOLR-12233) QParserPlugin maintains a list of classes recreated every time a Solrcore object is created

2018-04-17 Thread Jeff Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441894#comment-16441894
 ] 

Jeff Miller commented on SOLR-12233:


We've done some extensive testing ourselves; the classloader lookups are the 
expensive part, but only under high load with a lot of SolrCore objects being 
created on different threads.  I looked at every class, and they all seemed to 
just return new objects; I didn't see a point in recreating them all.




[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Jeff Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441890#comment-16441890
 ] 

Jeff Miller commented on SOLR-12232:


The thread interrupting is purposeful in our solution and won't be changing 
anytime soon due to external requirements.  It worked just fine for quite a 
few years until the call to ensureValid was added.  Since I saw no specific 
requirement for this class to close its file channel on an interrupt, this 
seemed a decent solution in case anyone else out there uses interrupts in any 
manner, without removing the ensureValid call just for us.  If no one sees 
value in this for Solr, then so be it.

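The proposal in the issue rests on AsynchronousFileChannel not extending AbstractInterruptibleChannel, so an interrupted thread can still query it without the channel being closed underneath the lock. The sketch below illustrates that assumption; it is not the actual patch:

```java
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AsyncChannelSurvivesInterrupt {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("write", ".lock");   // stand-in for write.lock
        AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                tmp, StandardOpenOption.READ, StandardOpenOption.WRITE);
        try {
            Thread.currentThread().interrupt();      // same "external force" as before
            long size = ch.size();                   // not an interruptible NIO op
            System.out.println("open after interrupt: " + ch.isOpen() + ", size=" + size);
        } finally {
            Thread.interrupted();                    // clear the flag before cleanup
            ch.close();
            Files.deleteIfExists(tmp);
        }
    }
}
```

Unlike the FileChannel case, size() succeeds and the channel stays open, so an ensureValid-style check would keep passing after a stray interrupt.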



[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441889#comment-16441889
 ] 

David Smiley commented on SOLR-12232:
-

By trade-off, I mean an app loses raw search speed (and you explained this 
well) but the app gains the ability to interrupt (cancel) a search task that is 
taking too long.  However wise we may be, I don't think we ought to dictate to 
all users/apps that doing this is fundamentally wrong (what you call a bug).

bq. If you interrupt lucene threads using nio ...

Understood.  But we don't have to use NIO.




[jira] [Commented] (SOLR-12233) QParserPlugin maintains a list of classes recreated every time a Solrcore object is created

2018-04-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441887#comment-16441887
 ] 

Shawn Heisey commented on SOLR-12233:
-

Are you *absolutely certain* that all of the QParserPlugin implementations are 
safe to have one instance shared between all SolrCore objects?  How much 
testing has been done with this patch?

At first glance (looking at only a few of them) they do look a little bit like 
factory classes -- each one has an implementation of the abstract method 
createParser, and it looks like a parser is probably created for every request.
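The property Shawn describes, a stateless factory whose createParser builds a fresh parser per request, is exactly what would make sharing one instance across cores safe. A toy illustration with hypothetical names (not Solr's real classes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: the shared object holds no per-request state; every
// call to createParser returns a fresh parser, so concurrent cores/requests
// never share mutable state even though the factory itself is a singleton.
public class SharedFactoryDemo {
    static class Parser { final String q; Parser(String q) { this.q = q; } }

    static class ParserPlugin {                  // stands in for a QParserPlugin
        Parser createParser(String q) { return new Parser(q); }
    }

    static final ParserPlugin SHARED = new ParserPlugin();  // one instance for all cores

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<Parser> a = pool.submit(() -> SHARED.createParser("a"));
        Future<Parser> b = pool.submit(() -> SHARED.createParser("b"));
        System.out.println("distinct parsers: " + (a.get() != b.get()));
        pool.shutdown();
    }
}
```

If any implementation kept mutable fields on the plugin itself, this sharing would break, which is the risk Shawn's question is probing.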




[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441878#comment-16441878
 ] 

Robert Muir commented on SOLR-12232:


There isn't a real tradeoff; I'm not even sure it counts as a "workaround". 
RandomAccessFile must synchronize all reads, so it's basically like having 
only one thread: searches will pile up.

It has nothing to do with what I like or don't like. If you interrupt Lucene 
threads that use NIO, it's going to look nasty, probably like index 
corruption. The whole point of the lock factory is to detect bugs in the code, 
and it found one here in Solr (or some plugin or something). That's what 
needs to be fixed.
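The contrast behind this comment can be sketched directly: FileChannel offers absolute positional reads that never move a shared file pointer, so multiple threads can read concurrently, while RandomAccessFile has a single seek position and callers must serialize each seek()+read() pair. Illustrative only:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PositionalReads {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("demo", ".bin");
        Files.write(p, "hello world".getBytes(StandardCharsets.US_ASCII));
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            ByteBuffer b1 = ByteBuffer.allocate(5);
            ByteBuffer b2 = ByteBuffer.allocate(5);
            ch.read(b1, 0);   // absolute offset: no shared file pointer is moved
            ch.read(b2, 6);   // safe to issue concurrently from another thread
            System.out.println(new String(b1.array(), StandardCharsets.US_ASCII)
                    + "/" + new String(b2.array(), StandardCharsets.US_ASCII));
        } finally {
            Files.deleteIfExists(p);
        }
    }
}
```

With RandomAccessFile the two reads would contend on the same file pointer, which is why falling back to it serializes searches.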

> NativeFSLockFactory loses the channel when a thread is interrupted and the 
> SolrCore becomes unusable after
> --
>
> Key: SOLR-12232
> URL: https://issues.apache.org/jira/browse/SOLR-12232
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: NativeFSLockFactory, locking
>   Original Estimate: 24h
>  Time Spent: 10m
>  Remaining Estimate: 23h 50m
>
> The condition is rare for us and seems basically a race.  If a thread that is 
> running just happens to have the FileChannel open for NativeFSLockFactory and 
> is interrupted, the channel is closed since it extends 
> [AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html]
> Unfortunately this means the Solr Core has to be unloaded and reopened to 
> make the core usable again as the ensureValid check forever throws an 
> exception after.
> org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an 
> external force: 
> NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
>  exclusive invalid],creationTime=2018-04-06T21:45:11Z) at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178)
>  at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.(CompressingStoredFieldsWriter.java:113)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
>  at 
> org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
>  
> The proposed solution is to use AsynchronousFileChannel instead, since this 
> code only operates on the lock and the .size() method
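
The channel-closing behavior described above is easy to reproduce in isolation. 
The following standalone sketch (not Solr code; the class name is made up for 
illustration) shows that a thread whose interrupt status is set when it enters a 
blocking FileChannel call gets ClosedByInterruptException, and that the channel 
is then closed for every other user of it -- which is exactly why ensureValid 
fails forever afterwards:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptDemo {
    /** Returns true iff the shared channel is still open after a thread
     *  entered a blocking read on it with its interrupt status set. */
    static boolean channelSurvivesInterrupt() throws Exception {
        Path tmp = Files.createTempFile("interrupt-demo", ".dat");
        Files.write(tmp, new byte[1024]);
        FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ);
        Thread reader = new Thread(() -> {
            // FileChannel extends AbstractInterruptibleChannel: entering a
            // blocking call on an interrupted thread closes the channel
            // and throws ClosedByInterruptException.
            Thread.currentThread().interrupt();
            try {
                ch.read(ByteBuffer.allocate(16));
            } catch (ClosedByInterruptException expected) {
                // The channel is now dead for everyone, not just this thread.
            } catch (IOException other) {
                throw new RuntimeException(other);
            }
        });
        reader.start();
        reader.join();
        boolean open = ch.isOpen();
        Files.deleteIfExists(tmp);
        return open;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("channel open after interrupt? " + channelSurvivesInterrupt());
    }
}
```

Run against a scratch file, this prints `channel open after interrupt? false`, 
mirroring how one interrupted request thread invalidates the write.lock channel 
for the whole SolrCore.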



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr IRCChannels wiki page

2018-04-17 Thread David Smiley
Hmmm.  Maybe the link could have some javascript that forces a pop-up to
remind you that answers can take a while?  Just an idea; do what you like.

On Tue, Apr 17, 2018 at 11:53 PM Shawn Heisey  wrote:

> Some time ago I created this wiki page:
>
> https://wiki.apache.org/solr/IRCChannels
>
> The IRC link in the Solr admin UI and a link in the IRC section of the
> Solr community page both point to this wiki page, with the idea that
> users who want to join the IRC channel are presented with some advice
> about how to have a good experience on the IRC channel.  One of the big
> pieces of advice there is that answers may take quite a while, because
> nobody may be looking at the channel right at any given moment.
>
> Unfortunately, it seems that we still have a LOT of people who pop in,
> ask a question, and disappear a few minutes later.  This happened
> tonight.  I happened to look at the channel two minutes after they left,
> but usually I wander in several hours later.
>
> I was thinking that maybe I could change the freenode web-based IRC
> client links to just URLs that people can't click on.  If somebody who
> wants to join the IRC channel must copy the URL, maybe they'll be more
> likely to see what's written near the URL.  I'm hesitant to actually do
> this, because it's a definite roadblock for novice users.
>
> Thoughts?
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441875#comment-16441875
 ] 

David Smiley commented on SOLR-12232:
-

Disclaimer: I haven't studied the approach in the patch.

Lucene has {{RAFDirectory}} (in misc) for apps that want to trade off raw 
performance for interruptibility.  It uses RandomAccessFile and not NIO.  
Wouldn't it be appropriate to have a LockFactory impl that supports (safe) 
interruptibility too, so they can be used together?  I'm not sure I'm getting 
your point Rob... are you saying, indirectly, that interrupt-*safe* IO is 
impossible?  Or maybe you don't like interruption at all, so, in your opinion, 
anyone using it has made an error in judgment by using it?




[jira] [Commented] (SOLR-12231) /etc/init.d/solr problem

2018-04-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441872#comment-16441872
 ] 

Shawn Heisey commented on SOLR-12231:
-

Other examples of the quick assignment can be found with this command on a 
Linux machine, and probably on other UNIX flavors that have an init.d directory:

{noformat}
grep "[A-Z_][A-Z_]*=[A-Za-z0-9][A-Za-z0-9]* " /etc/init.d/*
{noformat}

This also shows hits where that trick is NOT being used that happen to match 
the regex.
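
The "quick assignment" being grepped for is the POSIX one-command environment 
assignment: `VAR=value cmd` exports VAR only into cmd's environment. A minimal 
sketch (variable names invented for illustration) of why the semicolon proposed 
in the report would change behavior rather than fix it:

```shell
#!/bin/sh
# Prefix assignment: the variable is exported only to this one command,
# which is the trick the init.d script relies on for SOLR_INCLUDE.
GREETING=hello sh -c 'echo "child sees: ${GREETING:-nothing}"'
# prints: child sees: hello

# With a semicolon it becomes a plain (unexported) assignment followed by
# a separate command, so the child process no longer inherits the value.
GREETING2=hello; sh -c 'echo "child sees: ${GREETING2:-nothing}"'
# prints: child sees: nothing
```

So hits from the grep above are (usually) intentional uses of the prefix form, 
not missing semicolons.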

> /etc/init.d/solr problem
> 
>
> Key: SOLR-12231
> URL: https://issues.apache.org/jira/browse/SOLR-12231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.3
> Environment: Centos 7.4 
> java-1.8.0-openjdk
>Reporter: Lihua Wang
>Assignee: Steve Rowe
>Priority: Minor
>
> I noticed that there are a couple of minor issues with the init.d script in 
> pretty much every version. 
> Basically, a semicolon (or an escaped semicolon) is missing in the lines 
> below:
>  
> if [ -n "$RUNAS" ]; then
>  su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
>  else
>  SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
>  fi
>  
> With the added semicolons (escaped where necessary), the code would look 
> like:
> if [ -n "$RUNAS" ]; then
>  su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
>  else
>  SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
>  fi
>  






[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk1.8.0_162) - Build # 24 - Failure!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/24/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 61965 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1026895660
 [ecj-lint] Compiling 896 source files to /tmp/ecj1026895660
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java
 (at line 31)
 [ecj-lint] import java.util.function.Supplier;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.function.Supplier is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 11 problems (10 errors, 1 warning)

BUILD FAILED

[JENKINS] Lucene-Solr-repro - Build # 518 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/518/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/567/consoleText

[repro] Revision: 94adf9d2ff42cc4133354f7ab09ed32c496250b9

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=A2BDA1E5C8E2819F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ja-JP 
-Dtests.timezone=Indian/Mahe -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
1d2441441be5f5d87103ceeec6d852f8f2f6ba85
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 94adf9d2ff42cc4133354f7ab09ed32c496250b9

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=A2BDA1E5C8E2819F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ja-JP -Dtests.timezone=Indian/Mahe -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 5365 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 1d2441441be5f5d87103ceeec6d852f8f2f6ba85

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-Tests-7.x - Build # 568 - Failure

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/568/

1 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyPropertiesV2

Error Message:
Error from server at https://127.0.0.1:39751/solr: Collection : collection2meta 
is part of alias testModifyPropertiesV2 remove or modify the alias before 
removing this collection.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:39751/solr: Collection : collection2meta is 
part of alias testModifyPropertiesV2 remove or modify the alias before removing 
this collection.
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:451)
at 
org.apache.solr.cloud.AliasIntegrationTest.tearDown(AliasIntegrationTest.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441844#comment-16441844
 ] 

Shawn Heisey commented on SOLR-12232:
-

[~rcmuir] likely has a much better understanding of the gory details than I do.

I've only written a few multi-threaded apps ... but I have *never* used 
Thread.interrupt.  What little reading I've done on the subject tells me that 
doing so is likely to cause problems.

Even if I were to research it and learn how to use interrupting properly, I 
would never use it on a thread that I didn't create -- especially not those in 
a comprehensive system like Solr or Lucene.





[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441825#comment-16441825
 ] 

Robert Muir commented on SOLR-12232:


IMO it does not solve the problem. The correct fix is not to Thread.interrupt 
Lucene threads using NIO APIs. It is not safe to use Thread.interrupt with 
NIO-based stuff in Lucene: we document that. It is good that locking detected 
the error in the code (use of Thread.interrupt), because it can have much more 
dangerous impacts (e.g. loss of a reader). Asynchronous channels are too slow 
and won't help there. In the future, maybe it's fixed in the JDK: 
http://mail.openjdk.java.net/pipermail/nio-dev/2018-March/004761.html

I don't think Lucene should mask the problem here, because it will not solve 
anything, for these reasons. Please fix the Thread.interrupt.
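
The usual alternative being pointed at here is cooperative cancellation: instead 
of Thread.interrupt(), the worker polls a flag at safe points, so no NIO channel 
is ever torn down mid-operation. A hedged, self-contained sketch (none of this 
is Lucene or Solr code; names are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CooperativeCancelDemo {
    static final AtomicBoolean stopRequested = new AtomicBoolean(false);

    /** Simulated worker loop that checks the cancellation flag at each
     *  safe point instead of relying on Thread.interrupt(). Returns the
     *  number of work steps actually performed. */
    static int work(int maxSteps) {
        int steps = 0;
        while (steps < maxSteps && !stopRequested.get()) {
            steps++; // stand-in for one unit of work between safe points
        }
        return steps;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(
            () -> System.out.println("did " + work(Integer.MAX_VALUE) + " steps"));
        worker.start();
        Thread.sleep(10);          // let it run briefly
        stopRequested.set(true);   // request cancellation; no channels harmed
        worker.join();
    }
}
```

The trade-off is latency: the worker only stops at its next flag check, whereas 
interrupt can abort a blocking call immediately -- at the cost, with 
interruptible channels, of killing the channel for everyone.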




[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441816#comment-16441816
 ] 

Erick Erickson commented on SOLR-12232:
---

[~rcmuir] [~mikemccand] Is this perhaps more properly a Lucene issue?




[GitHub] lucene-solr pull request #356: SOLR-12232

2018-04-17 Thread millerjeff0
GitHub user millerjeff0 opened a pull request:

https://github.com/apache/lucene-solr/pull/356

SOLR-12232

Update to use an uninterruptible file channel, since interrupted threads 
close the channel and leave it unusable until the SolrCore is recreated

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/millerjeff0/lucene-solr SOLR-12232

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/356.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #356


commit 8153b2a7bcedea3f9f84862cbdf174cd9ebbff86
Author: Jeff 
Date:   2018-04-18T02:46:46Z

SOLR-12232




---




[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_162) - Build # 1742 - Still Failing!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1742/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 60508 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj716612560
 [ecj-lint] Compiling 895 source files to /tmp/ecj716612560
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build.xml:690: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2095: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2128: 
Compile 

[jira] [Commented] (SOLR-12203) Error in response for field containing date. Unexpected state.

2018-04-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441775#comment-16441775
 ] 

Erick Erickson commented on SOLR-12203:
---

Jeroen: No problem, we all learn about the JIRA list sometime.

I'll make a stab at another place to look, but then let's move the rest of the 
conversation to the user's list (haven't checked it yet today).

Native numeric types changed to use point-derived types between 6 and 7. My 
_guess_ is that you have some dynamicField definition like ds_* or 
*_lastModified and "somehow" your indexing process is throwing documents with 
fields like that at Solr.

This is kind of hand-waving; I don't really know the exact mechanism that would 
drive it, but it seems to be in the right general area.

Luke won't show you fields in your index that were never realized, so if every 
doc containing this field fails to index and it's a dynamic field, it won't 
show up because it's not there.

If you're using "schemaless" mode, that introduces yet another issue. I strongly 
advise using explicit schema definitions if you possibly can, unless you can 
absolutely guarantee that every doc represents each field in exactly the same 
way. Even something as trivial as the first doc having a field with a value of 
"1" and the next doc "1.0" can do Bad Things.
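To make that concrete, here is a minimal sketch (plain Java, not Solr code; the class and method names are made up for illustration) of how first-value type guessing breaks on the second doc:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, NOT Solr code: schemaless mode guesses a field's
// type from the first value it sees, and every later value must conform.
public class SchemalessGuess {
    private final Map<String, Class<?>> guessed = new HashMap<>();

    // The first value fixes the type (integer-looking -> Long, else Double);
    // later values are parsed against that fixed type.
    public Object addFieldValue(String field, String raw) {
        Class<?> type = guessed.computeIfAbsent(field,
                f -> raw.matches("-?\\d+") ? Long.class : Double.class);
        return type == Long.class
                ? (Object) Long.parseLong(raw)
                : (Object) Double.parseDouble(raw);
    }

    public static void main(String[] args) {
        SchemalessGuess schema = new SchemalessGuess();
        System.out.println(schema.addFieldValue("price", "1"));   // guessed as Long
        try {
            schema.addFieldValue("price", "1.0");                 // no longer fits
        } catch (NumberFormatException e) {
            System.out.println("second doc rejected");
        }
    }
}
```

The real guessing logic in Solr is richer (dates, booleans, multivalued fields), but the failure mode is the same: the schema is fixed by whichever doc arrives first.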

And if you find an answer and don't think it's really a bug (or suggestion for 
improvement) please close this JIRA.

> Error in response for field containing date. Unexpected state.
> --
>
> Key: SOLR-12203
> URL: https://issues.apache.org/jira/browse/SOLR-12203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: 7.2.1, 7.3
>Reporter: Jeroen Steggink
>Priority: Minor
>
> I get the following error:
> {noformat}
> java.lang.AssertionError: Unexpected state. Field: 
> stored,indexed,tokenized,omitNorms,indexOptions=DOCSds_lastModified:2013-10-04T22:25:11Z
> at org.apache.solr.schema.DatePointField.toObject(DatePointField.java:154)
> at org.apache.solr.schema.PointField.write(PointField.java:198)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:141)
> at 
> org.apache.solr.response.JSONWriter.writeSolrDocument(JSONResponseWriter.java:374)
> at 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1007 - Still Failing

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1007/

No tests ran.

Build Log:
[...truncated 24176 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2198 links (1753 relative) to 3011 anchors in 244 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

[...truncated repeated resolve / ivy-availability-check / ivy-configure output...]

[jira] [Assigned] (SOLR-12233) QParserPlugin maintains a list of classes recreated every time a Solrcore object is created

2018-04-17 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-12233:
-

Assignee: Erick Erickson

> QParserPlugin maintains a list of classes recreated every time a Solrcore 
> object is created
> ---
>
> Key: SOLR-12233
> URL: https://issues.apache.org/jira/browse/SOLR-12233
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: Performance, qparserplugin
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> QParserPlugin maintains a static map of class names to Class objects, and 
> every time we create a SolrCore object this causes a lot of overhead doing 
> classloader lookups.  Our system runs a lot of cores, and the classloader gets 
> bogged down when many threads are creating SolrCore objects.  
> There's no need to create these objects every time; similar classes such as 
> TransformerFactory store the object one time and reference it over and over 
> again.
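A hedged sketch of the kind of fix being suggested (illustrative names only, not the actual QParserPlugin API): build each plugin instance once and hand out the cached instance, instead of re-instantiating from a Class object per SolrCore:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Hypothetical sketch, not the real QParserPlugin code: cache plugin
// *instances* keyed by name, so the (classloader-heavy) construction
// runs at most once per name no matter how many SolrCores are created.
public class PluginInstanceCache {
    private static final Map<String, Object> INSTANCES = new ConcurrentHashMap<>();

    public static Object get(String name, Function<String, Object> factory) {
        // computeIfAbsent guarantees the factory runs at most once per key
        return INSTANCES.computeIfAbsent(name, factory);
    }

    public static void main(String[] args) {
        AtomicInteger built = new AtomicInteger();
        Function<String, Object> factory = n -> {
            built.incrementAndGet();        // stands in for the expensive classloader work
            return new Object();
        };
        Object a = get("lucene", factory);
        Object b = get("lucene", factory);  // cache hit: same instance
        System.out.println(a == b);         // true
        System.out.println(built.get());    // 1
    }
}
```

This mirrors what the description says TransformerFactory already does: store the object one time and reference it over and over again.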



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-12232:
-

Assignee: Erick Erickson

> NativeFSLockFactory loses the channel when a thread is interrupted and the 
> SolrCore becomes unusable after
> --
>
> Key: SOLR-12232
> URL: https://issues.apache.org/jira/browse/SOLR-12232
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1.1
>Reporter: Jeff Miller
>Assignee: Erick Erickson
>Priority: Minor
>  Labels: NativeFSLockFactory, locking
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The condition is rare for us and seems to be basically a race.  If a running 
> thread just happens to have the FileChannel open for NativeFSLockFactory and 
> is interrupted, the channel is closed, since FileChannel extends 
> [AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html].
> Unfortunately this means the SolrCore has to be unloaded and reopened to 
> make the core usable again, as the ensureValid check throws an exception 
> forever after:
> org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an 
> external force: 
> NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
>  exclusive invalid],creationTime=2018-04-06T21:45:11Z) at 
> org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178)
>  at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.(CompressingStoredFieldsWriter.java:113)
>  at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
>  at 
> org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)
>  
> The proposed solution is to use AsynchronousFileChannel instead, since this 
> code only operates on the lock and the .size() method.
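The mechanism can be demonstrated outside Solr in a few lines (a minimal sketch; the class name is made up): once the owning thread is interrupted, any I/O on the FileChannel closes it for good:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal demonstration of the mechanism: FileChannel extends
// AbstractInterruptibleChannel, so channel I/O on an interrupted thread
// throws ClosedByInterruptException and closes the channel permanently.
public class InterruptClosesChannel {

    // Returns whether the channel is still open after its thread was interrupted.
    public static boolean openAfterInterrupt() {
        try {
            Path tmp = Files.createTempFile("lock-demo", ".bin");
            try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
                Thread.currentThread().interrupt();  // simulate the interrupt landing here
                try {
                    ch.read(ByteBuffer.allocate(1)); // any channel I/O now fails...
                } catch (ClosedByInterruptException expected) {
                    // ...and the whole channel is closed, not just this read
                } finally {
                    Thread.interrupted();            // clear the flag so cleanup can proceed
                }
                return ch.isOpen();
            } finally {
                Files.deleteIfExists(tmp);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("open after interrupt: " + openAfterInterrupt()); // false
    }
}
```

AsynchronousFileChannel does not extend AbstractInterruptibleChannel, which is why swapping it in would sidestep this failure mode.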



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441750#comment-16441750
 ] 

ASF subversion and git services commented on SOLR-12187:


Commit 864b8d1f85f64cb8b5057a7838d45c3d693aa757 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=864b8d1 ]

SOLR-12187: ZkStateReader.Notification thread should only catch Exception


> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase 
> in the number of issues related to races between deleting a replica and 
> other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and is able to become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica, because the node is not live)
>  * The node starts and the replica gets loaded
>  * The DELETECOREOP has not been processed yet, hence the replica is still 
> present in clusterstate --> passes checkStateInZk
>  * The DELETECOREOP is executed and DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself 
> as ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and it does not join the leader election



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441748#comment-16441748
 ] 

ASF subversion and git services commented on SOLR-12187:


Commit 1d2441441be5f5d87103ceeec6d852f8f2f6ba85 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d24414 ]

SOLR-12187: ZkStateReader.Notification thread should only catch Exception


> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase 
> in the number of issues related to races between deleting a replica and 
> other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and is able to become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica, because the node is not live)
>  * The node starts and the replica gets loaded
>  * The DELETECOREOP has not been processed yet, hence the replica is still 
> present in clusterstate --> passes checkStateInZk
>  * The DELETECOREOP is executed and DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself 
> as ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and it does not join the leader election



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441744#comment-16441744
 ] 

Cao Manh Dat commented on SOLR-12187:
-

Thanks [~tomasflobbe]

> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase 
> in the number of issues related to races between deleting a replica and 
> other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and is able to become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica, because the node is not live)
>  * The node starts and the replica gets loaded
>  * The DELETECOREOP has not been processed yet, hence the replica is still 
> present in clusterstate --> passes checkStateInZk
>  * The DELETECOREOP is executed and DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself 
> as ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and it does not join the leader election



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 517 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/517/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1533/consoleText

[repro] Revision: 4ee92c22a4b731d3ec2f93409f3fe57ae348cea1

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=44BEF462524B4CCB -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CR -Dtests.timezone=US/Samoa -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfos -Dtests.seed=44BEF462524B4CCB 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sl-SI -Dtests.timezone=Mexico/BajaSur -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosVersion -Dtests.seed=44BEF462524B4CCB 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sl-SI -Dtests.timezone=Mexico/BajaSur -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosData -Dtests.seed=44BEF462524B4CCB 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sl-SI -Dtests.timezone=Mexico/BajaSur -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
8c60be448921f3bb59a1d6de1b3655a1dc1d75f0
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 4ee92c22a4b731d3ec2f93409f3fe57ae348cea1

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SegmentsInfoRequestHandlerTest|*.IndexSizeTriggerTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=44BEF462524B4CCB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sl-SI -Dtests.timezone=Mexico/BajaSur -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 10894 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SegmentsInfoRequestHandlerTest|*.IndexSizeTriggerTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=44BEF462524B4CCB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=sl-SI -Dtests.timezone=Mexico/BajaSur -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 7885 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of master without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 42 - Failure

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/42/

All tests passed

Build Log:
[...truncated 62081 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1050644116
 [ecj-lint] Compiling 895 source files to /tmp/ecj1050644116
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/build.xml:642:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/build.xml:101:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-7.x/solr/build.xml:690:
 The following error occurred while 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1741 - Still Failing!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1741/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 62275 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1263501163
 [ecj-lint] Compiling 895 source files to /tmp/ecj1263501163
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build.xml:690: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2095: The 
following error occurred while executing this line:

[jira] [Resolved] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-04-17 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove resolved SOLR-11924.

   Resolution: Fixed
 Assignee: Dennis Gove
Fix Version/s: (was: master (8.0))

> Add the ability to watch collection set changes in ZkStateReader
> 
>
> Key: SOLR-11924
> URL: https://issues.apache.org/jira/browse/SOLR-11924
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 7.4
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Allow users to watch when the set of collections for a cluster is changed. 
> This is useful if a user is trying to discover collections within a cloud.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441643#comment-16441643
 ] 

ASF subversion and git services commented on SOLR-11924:


Commit d82a4704b4c10392d3b288917e4e6057b4a172f5 in lucene-solr's branch 
refs/heads/branch_7x from [~houstonputman]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d82a470 ]

SOLR-11924: Added CloudCollectionsListener to watch the list of collections in 
a cloud. This closes #313
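The commit above adds CloudCollectionsListener so clients can react when the cluster's collection set changes. As a rough, Solr-free sketch of the watch pattern this enables (every class and method name below is a hypothetical illustration, not the actual SolrJ API):

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;

/** Hypothetical stand-in for a cluster-state reader that notifies
 *  registered listeners whenever the observed set of collection
 *  names changes. */
class CollectionSetWatcher {
    interface Listener {
        void onChange(Set<String> oldCollections, Set<String> newCollections);
    }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();
    private volatile Set<String> current = Set.of();

    void register(Listener listener) {
        listeners.add(listener);
    }

    /** Called whenever a fresh collection list is observed (e.g. from ZooKeeper). */
    void observe(Set<String> observed) {
        Set<String> old = current;
        if (!old.equals(observed)) {        // notify only on an actual change
            current = observed;
            for (Listener l : listeners) {
                l.onChange(old, observed);
            }
        }
    }
}
```

A registered listener fires once per change, receiving both the previous and the new set of names, which is what makes collection discovery within a cloud cheap for clients.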


> Add the ability to watch collection set changes in ZkStateReader
> 
>
> Key: SOLR-11924
> URL: https://issues.apache.org/jira/browse/SOLR-11924
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Allow users to watch when the set of collections for a cluster is changed. 
> This is useful if a user is trying to discover collections within a cloud.






[jira] [Commented] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441644#comment-16441644
 ] 

ASF subversion and git services commented on SOLR-11924:


Commit 367e6d85b4b9e01b68147cb21119bb90a1d1d7a7 in lucene-solr's branch 
refs/heads/branch_7x from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=367e6d8 ]

SOLR-11924: Updates solr/CHANGES.txt for v7.4


> Add the ability to watch collection set changes in ZkStateReader
> 
>
> Key: SOLR-11924
> URL: https://issues.apache.org/jira/browse/SOLR-11924
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Allow users to watch when the set of collections for a cluster is changed. 
> This is useful if a user is trying to discover collections within a cloud.






[GitHub] lucene-solr pull request #313: SOLR-11924: Added a way to create collection ...

2018-04-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/313


---




[jira] [Commented] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441637#comment-16441637
 ] 

ASF subversion and git services commented on SOLR-11924:


Commit 8c60be448921f3bb59a1d6de1b3655a1dc1d75f0 in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c60be4 ]

SOLR-11924: Updates solr/CHANGES.txt for v7.4


> Add the ability to watch collection set changes in ZkStateReader
> 
>
> Key: SOLR-11924
> URL: https://issues.apache.org/jira/browse/SOLR-11924
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Allow users to watch when the set of collections for a cluster is changed. 
> This is useful if a user is trying to discover collections within a cloud.






[jira] [Commented] (SOLR-11924) Add the ability to watch collection set changes in ZkStateReader

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441636#comment-16441636
 ] 

ASF subversion and git services commented on SOLR-11924:


Commit ae0190b696396bc2fc4d239a22d568c8438b8c4f in lucene-solr's branch 
refs/heads/master from [~houstonputman]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae0190b ]

SOLR-11924: Added CloudCollectionsListener to watch the list of collections in 
a cloud. This closes #313


> Add the ability to watch collection set changes in ZkStateReader
> 
>
> Key: SOLR-11924
> URL: https://issues.apache.org/jira/browse/SOLR-11924
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4, master (8.0)
>Reporter: Houston Putman
>Priority: Minor
> Fix For: 7.4, master (8.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Allow users to watch when the set of collections for a cluster is changed. 
> This is useful if a user is trying to discover collections within a cloud.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21849 - Failure!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21849/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 60402 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1547532468
 [ecj-lint] Compiling 896 source files to /tmp/ecj1547532468
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build.xml:690: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2095: 
The following error occurred while executing this line:

[GitHub] lucene-solr pull request #355: SOLR-12233

2018-04-17 Thread millerjeff0
GitHub user millerjeff0 opened a pull request:

https://github.com/apache/lucene-solr/pull/355

SOLR-12233

Instead of recreating the classes every time we reload a Solr core, just create 
them once. The same is done in other classes, such as TransformerFactory.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/millerjeff0/lucene-solr SOLR-12233

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/355.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #355


commit 81d35d4a5500e6f8db683cfeba194837787437ee
Author: Jeff 
Date:   2018-04-17T22:35:20Z

SOLR-12233




---




[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 7 - Still Failing

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/7/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.RestartWhileUpdatingTest

Error Message:
7 threads leaked from SUITE scope at 
org.apache.solr.cloud.RestartWhileUpdatingTest: 1) Thread[id=2273, 
name=searcherExecutor-506-thread-1, state=WAITING, 
group=TGRP-RestartWhileUpdatingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=2312, 
name=searcherExecutor-517-thread-1, state=WAITING, 
group=TGRP-RestartWhileUpdatingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=2533, 
name=searcherExecutor-578-thread-1, state=WAITING, 
group=TGRP-RestartWhileUpdatingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=2500, 
name=searcherExecutor-567-thread-1, state=WAITING, 
group=TGRP-RestartWhileUpdatingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)5) Thread[id=2347, 
name=searcherExecutor-528-thread-1, state=WAITING, 
group=TGRP-RestartWhileUpdatingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)6) Thread[id=2467, 
name=searcherExecutor-556-thread-1, state=WAITING, 
group=TGRP-RestartWhileUpdatingTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)7) Thread[id=2390, 

[jira] [Created] (SOLR-12233) QParserPlugin maintains a list of classes recreated every time a Solrcore object is created

2018-04-17 Thread Jeff Miller (JIRA)
Jeff Miller created SOLR-12233:
--

 Summary: QParserPlugin maintains a list of classes recreated every 
time a Solrcore object is created
 Key: SOLR-12233
 URL: https://issues.apache.org/jira/browse/SOLR-12233
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.1.1
Reporter: Jeff Miller


QParserPlugin maintains a static map of class names to Class objects, and every 
time we create a SolrCore object this causes a lot of classloader-lookup 
overhead. Our system runs a lot of cores, and the classloader gets bogged down 
when many threads are creating SolrCore objects.

There is no need to create these objects every time; similar classes, such as 
TransformerFactory, store the object once and reference it over and over 
again.
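A minimal sketch of the fix this issue proposes: build each named plugin once in a shared map and reuse it, instead of re-instantiating it on every core load. The class and method names here are illustrative only, not the actual QParserPlugin code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/** Illustrative registry: each named plugin is constructed exactly once
 *  and shared, rather than re-created for every new core object. */
class PluginRegistry<T> {
    private final Map<String, T> instances = new ConcurrentHashMap<>();

    /** Returns the cached instance for the given name, building it only on first use. */
    T get(String name, Supplier<T> factory) {
        return instances.computeIfAbsent(name, key -> factory.get());
    }
}
```

With this shape, repeated core reloads hit the map instead of the classloader; the trade-off (as with the TransformerFactory pattern mentioned above) is that the cached instances must be stateless or thread-safe.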






[jira] [Created] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after

2018-04-17 Thread Jeff Miller (JIRA)
Jeff Miller created SOLR-12232:
--

 Summary: NativeFSLockFactory loses the channel when a thread is 
interrupted and the SolrCore becomes unusable after
 Key: SOLR-12232
 URL: https://issues.apache.org/jira/browse/SOLR-12232
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.1.1
Reporter: Jeff Miller


The condition is rare for us and is essentially a race. If a thread that 
happens to have the FileChannel open for NativeFSLockFactory is interrupted, 
the channel is closed, since FileChannel extends 
[AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html].

Unfortunately, this means the SolrCore has to be unloaded and reopened to make 
the core usable again, as the ensureValid check throws an exception on every 
call afterwards.

org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an 
external force: 
NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807
 exclusive invalid],creationTime=2018-04-06T21:45:11Z) at 
org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178)
 at 
org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43)
 at 
org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43)
 at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.(CompressingStoredFieldsWriter.java:113)
 at 
org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128)
 at 
org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183)

 

The proposed solution is to use AsynchronousFileChannel instead, since this 
code only operates on the lock and the .size() method.
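The interrupt-closes-the-channel behavior described above can be reproduced in isolation. The sketch below uses only the JDK (no Solr classes) to show that once a thread with its interrupt flag set performs an interruptible FileChannel operation, the channel is closed for good, which is what invalidates the native lock:

```java
import java.io.IOException;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class InterruptClosesChannel {
    /** Returns true if an interrupt during a FileChannel call left the channel closed. */
    static boolean channelClosedByInterrupt() throws IOException {
        Path tmp = Files.createTempFile("lock-demo", ".tmp");
        FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ);
        try {
            Thread.currentThread().interrupt();  // simulate an interrupt arriving
            try {
                ch.size();                       // interruptible I/O: closes the channel
            } catch (ClosedByInterruptException expected) {
                return !ch.isOpen();             // the channel is now permanently closed
            }
            return false;
        } finally {
            Thread.interrupted();                // clear the interrupt flag for cleanup
            ch.close();
            Files.deleteIfExists(tmp);
        }
    }
}
```

AsynchronousFileChannel, as the issue suggests, does not extend AbstractInterruptibleChannel, so an interrupt does not invalidate it in the same way.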






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1740 - Still Failing!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1740/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62299 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1687743590
 [ecj-lint] Compiling 895 source files to /tmp/ecj1687743590
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build.xml:690: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2095: The 
following error occurred while executing this line:

[jira] [Updated] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-17 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12155:

Attachment: SOLR-12155.master.patch

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: 21832-consoleText.txt.zip, SOLR-12155.master.patch, 
> SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> SOLR-12155.patch, SOLR-12155.patch, stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.






[jira] [Updated] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-17 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12155:

Attachment: SOLR-12155.master.patch

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: 21832-consoleText.txt.zip, SOLR-12155.patch, 
> SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> SOLR-12155.patch, stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.






[jira] [Updated] (SOLR-12155) Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField()

2018-04-17 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12155:

Attachment: (was: SOLR-12155.master.patch)

> Solr 7.2.1 deadlock in UnInvertedField.getUnInvertedField() 
> 
>
> Key: SOLR-12155
> URL: https://issues.apache.org/jira/browse/SOLR-12155
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Kishor gandham
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: 21832-consoleText.txt.zip, SOLR-12155.patch, 
> SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, SOLR-12155.patch, 
> SOLR-12155.patch, stack.txt
>
>
> I am attaching a stack trace from our production Solr (7.2.1). Occasionally, 
> we are seeing SOLR becoming unresponsive. We are then forced to kill the JVM 
> and start solr again.
> We have a lot of facet queries and our index has approximately 15 million 
> documents. We have recently started using json.facet queries and some of the 
> facet fields use DocValues.






[jira] [Commented] (SOLR-12200) ZkControllerTest failure. Leaking Overseer

2018-04-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441479#comment-16441479
 ] 

Mikhail Khludnev commented on SOLR-12200:
-

It turns out to be more depressing: [^SOLR-12200.patch] fixed 
{{OverseerTest.testOverseerStatsReset}}, but it seems to make 
{{LeaderElectionIntegrationTest.testSimpleSliceLeaderElection}} hang. 


> ZkControllerTest failure. Leaking Overseer
> --
>
> Key: SOLR-12200
> URL: https://issues.apache.org/jira/browse/SOLR-12200
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12200.patch, SOLR-12200.patch, SOLR-12200.patch, 
> patch-unit-solr_core.zip, tests-failures.txt, tests-failures.txt.gz, 
> zk.fail.txt.gz
>
>
> Failure seems suspiciously the same. 
>[junit4]   2> 499919 INFO  
> (TEST-ZkControllerTest.testReadConfigName-seed#[BC856CC565039E77]) 
> [n:127.0.0.1:8983_solr] o.a.s.c.Overseer Overseer 
> (id=73578760132362243-127.0.0.1:8983_solr-n_00) closing
>[junit4]   2> 499920 INFO  
> (OverseerStateUpdate-73578760132362243-127.0.0.1:8983_solr-n_00) [
> ] o.a.s.c.Overseer Overseer Loop exiting : 127.0.0.1:8983_solr
>[junit4]   2> 499920 ERROR 
> (OverseerCollectionConfigSetProcessor-73578760132362243-127.0.0.1:8983_solr-n_00)
>  [] o.a.s.c.OverseerTaskProcessor Unable to prioritize overseer
>[junit4]   2> java.lang.InterruptedException: null
>[junit4]   2>at java.lang.Object.wait(Native Method) ~[?:1.8.0_152]
>[junit4]   2>at java.lang.Object.wait(Object.java:502) 
> ~[?:1.8.0_152]
>[junit4]   2>at 
> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1409) 
> ~[zookeeper-3.4.11.jar:3.4
> then it spins on SessionExpiredException; all tests pass, but the suite fails 
> due to a leaking Overseer. 






[jira] [Updated] (SOLR-12200) ZkControllerTest failure. Leaking Overseer

2018-04-17 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12200:

Attachment: SOLR-12200.patch

> ZkControllerTest failure. Leaking Overseer
> --
>
> Key: SOLR-12200
> URL: https://issues.apache.org/jira/browse/SOLR-12200
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12200.patch, SOLR-12200.patch, SOLR-12200.patch, 
> patch-unit-solr_core.zip, tests-failures.txt, tests-failures.txt.gz, 
> zk.fail.txt.gz
>
>
> Failure seems suspiciously the same. 
>[junit4]   2> 499919 INFO  
> (TEST-ZkControllerTest.testReadConfigName-seed#[BC856CC565039E77]) 
> [n:127.0.0.1:8983_solr] o.a.s.c.Overseer Overseer 
> (id=73578760132362243-127.0.0.1:8983_solr-n_00) closing
>[junit4]   2> 499920 INFO  
> (OverseerStateUpdate-73578760132362243-127.0.0.1:8983_solr-n_00) [
> ] o.a.s.c.Overseer Overseer Loop exiting : 127.0.0.1:8983_solr
>[junit4]   2> 499920 ERROR 
> (OverseerCollectionConfigSetProcessor-73578760132362243-127.0.0.1:8983_solr-n_00)
>  [] o.a.s.c.OverseerTaskProcessor Unable to prioritize overseer
>[junit4]   2> java.lang.InterruptedException: null
>[junit4]   2>at java.lang.Object.wait(Native Method) ~[?:1.8.0_152]
>[junit4]   2>at java.lang.Object.wait(Object.java:502) 
> ~[?:1.8.0_152]
>[junit4]   2>at 
> org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1409) 
> ~[zookeeper-3.4.11.jar:3.4
> then it spins on SessionExpiredException; all tests pass, but the suite fails 
> due to a leaking Overseer. 






[jira] [Resolved] (SOLR-12231) /etc/init.d/solr problem

2018-04-17 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-12231.
---
Resolution: Not A Problem
  Assignee: Steve Rowe

This is not a syntax error.  This syntax assigns a value to an environment 
variable in the process created for the following command.

I couldn't find info on this capability in the Bash manual, but here is info 
from [https://help.ubuntu.com/community/EnvironmentVariables]:

{quote}
h2. Bash's quick assignment and inheritance trick

The bash shell has a trick to allow us to set one or more environment variables 
and run a child process with single command. For example, in order to set the 
"LANGUAGE" and "FOO" environment variables and then run "gedit", we would use 
the following command:

{noformat}
LANGUAGE=he FOO=bar gedit
{noformat}

Note: When using this command, the new values are only assigned to the 
environment variables of the child process (in this case gedit). The variables 
of the shell retain their original values. For instance, in the example above, 
the value of "LANGUAGE" will not change from its original value, as far as 
subsequent commands to the shell are concerned.
{quote}
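The behavior quoted above is easy to verify; a minimal demonstration (the 
variable name FOO and the commands are arbitrary examples, not from the script):

```shell
# The prefix assignment sets FOO only in the child process's environment.
child=$(FOO=bar sh -c 'echo "$FOO"')
echo "child saw: $child"            # child saw: bar
echo "parent sees: ${FOO:-unset}"   # parent sees: unset
```

This is why the init.d lines in question are valid as written: the 
SOLR_INCLUDE="$SOLR_ENV" assignment scopes the variable to the bin/solr 
invocation that follows it on the same line.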

> /etc/init.d/solr problem
> 
>
> Key: SOLR-12231
> URL: https://issues.apache.org/jira/browse/SOLR-12231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.3
> Environment: Centos 7.4 
> java-1.8.0-openjdk
>Reporter: Lihua Wang
>Assignee: Steve Rowe
>Priority: Minor
>
> I noticed that there are a couple of minor issues with the init.d script in 
> pretty much every version. 
> Basically, a semicolon (or an escaped semicolon) is missing in the highlighted 
> lines below:
>  
> if [ -n "$RUNAS" ]; then
>  su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
>  else
>  SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
>  fi
>  
> With the added semicolons (escaped where necessary), the code would look like:
> if [ -n "$RUNAS" ]; then
>  su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
>  else
>  SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
>  fi
>  
>  






[jira] [Commented] (SOLR-12224) there is no API to read collection properties

2018-04-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441384#comment-16441384
 ] 

Shawn Heisey commented on SOLR-12224:
-

I should probably check to see what is in CLUSTERSTATUS.

(final jeopardy music)

I see your point now.  It already has detailed information about every 
collection, so adding cluster properties is not unreasonable.

Somebody with a really large cloud is already going to be expecting 
CLUSTERSTATUS to take a while.  The amount of extra time needed to gather 
property info would not be horrible.

This isn't what I would have expected from a status action on a whole cluster, 
but then I remember that this is basically everything that clusterstate.json 
used to hold in 4.x (plus a little more added in later versions), so from a 
developer perspective, the direct conversion of clusterstate.json to 
CLUSTERSTATUS actually does make sense.

I haven't closely examined the v2 API yet, so I'm not going to comment there.


> there is no API to read collection properties
> -
>
> Key: SOLR-12224
> URL: https://issues.apache.org/jira/browse/SOLR-12224
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3
>Reporter: Hendrik Haddorp
>Priority: Major
>
> Solr 7.3 added the COLLECTIONPROP API call 
> (https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop)
>  that allows setting arbitrary properties on a collection. There is, however, 
> no API call that returns the data. The only option is to manually read the 
> collectionprops.json file in ZK under the collection.
> Options could be: give the COLLECTIONPROP command an option to retrieve 
> properties, add a special command to list the properties, and/or list the 
> properties in the clusterstatus output for a collection.
> It would be great if SolrJ supported this as well.






[jira] [Resolved] (SOLR-12218) solr.cmd will skip part of help text due to missing special character quote

2018-04-17 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-12218.
--
Resolution: Duplicate

Definitely looks like a duplicate, and the relevant line seems already fixed 
(escaped with ^).

> solr.cmd will skip part of help text due to missing special character quote
> ---
>
> Key: SOLR-12218
> URL: https://issues.apache.org/jira/browse/SOLR-12218
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
> Environment: Windows
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> SOLR-11084 introduced some help text that was not properly escaped in the 
> Windows batch file (solr.cmd), causing an easy-to-miss error message and 
> truncated help information for the _bin\solr start -help_ command (anything 
> after the -t option).
> The fix is either to quote the whole line (as done in another part of the 
> file) or to escape the specific less-than and greater-than characters, which 
> for the echo command is done with the ^ character, just as is done a couple 
> of lines lower in the same file.






[JENKINS] Lucene-Solr-repro - Build # 516 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/516/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/202/consoleText

[repro] Revision: 0ccd991fa3b878cb23774216b661614d5426c26d

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosVersion -Dtests.seed=F9DAB0F23873EC1C 
-Dtests.multiplier=2 -Dtests.locale=fr -Dtests.timezone=America/Bogota 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfos -Dtests.seed=F9DAB0F23873EC1C 
-Dtests.multiplier=2 -Dtests.locale=fr -Dtests.timezone=America/Bogota 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosData -Dtests.seed=F9DAB0F23873EC1C 
-Dtests.multiplier=2 -Dtests.locale=fr -Dtests.timezone=America/Bogota 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testEventQueue -Dtests.seed=F9DAB0F23873EC1C 
-Dtests.multiplier=2 -Dtests.locale=el-GR -Dtests.timezone=AGT 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=F9DAB0F23873EC1C 
-Dtests.multiplier=2 -Dtests.locale=es-PR 
-Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=F9DAB0F23873EC1C 
-Dtests.multiplier=2 -Dtests.locale=es-PR 
-Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=F9DAB0F23873EC1C -Dtests.multiplier=2 
-Dtests.locale=es-PR -Dtests.timezone=Atlantic/South_Georgia 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SystemLogListenerTest 
-Dtests.method=test -Dtests.seed=F9DAB0F23873EC1C -Dtests.multiplier=2 
-Dtests.locale=lt-LT -Dtests.timezone=Africa/Juba -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d904112428184ce9c1726313add5d184f4014a72
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 0ccd991fa3b878cb23774216b661614d5426c26d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   SystemLogListenerTest
[repro]   SegmentsInfoRequestHandlerTest
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.IndexSizeTriggerTest|*.SystemLogListenerTest|*.SegmentsInfoRequestHandlerTest|*.TestTriggerIntegration"
 -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=F9DAB0F23873EC1C -Dtests.multiplier=2 -Dtests.locale=es-PR 
-Dtests.timezone=Atlantic/South_Georgia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 8401 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.SystemLogListenerTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=F9DAB0F23873EC1C -Dtests.multiplier=2 -Dtests.locale=fr 
-Dtests.timezone=America/Bogota -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 773 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] 

[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2018-04-17 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441298#comment-16441298
 ] 

Dawid Weiss commented on SOLR-11200:


I'd like to commit this patch in -- this seems important. I have just talked to 
a company where they have a large index (not sharded, monolithic) and do 
nightly merges. The time to perform those merges is important and IO-throttling 
essentially caps the bandwidth of an otherwise very fast drive to 5MB/s, which 
means the whole merge ends with a trailing single merge thread that lasts over 
an hour longer than anything else out there (searches are performed on 
different machines).


> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nawab Zada Asad iqbal
>Priority: Minor
> Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch
>
>
> This config can be useful while bulk indexing. Lucene introduced it 
> https://issues.apache.org/jira/browse/LUCENE-6119 . 






[jira] [Comment Edited] (SOLR-12224) there is no API to read collection properties

2018-04-17 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441281#comment-16441281
 ] 

Jason Gerlowski edited comment on SOLR-12224 at 4/17/18 6:09 PM:
-

Small +1 to including the collection props in CLUSTERSTATUS.  I get your point 
Shawn, that CLUSTERSTATUS is supposed to be about the cluster, not the 
collection.  But "holding the line" on that distinction seems like a lost 
battle: we already allow filtering CLUSTERSTATUS by the collection(s)/shard(s) 
you care about, and the API includes many other bits of collection state in the 
response (shard hash ranges, replica/shard state, leadership, etc.).  I'm not 
sure what's more intuitive in general, but speaking only for myself, 
CLUSTERSTATUS is usually the API I think to hit when I want to see the overview 
for a collection (which in my mind includes any properties).  I don't care 
strongly, just my 2 cents.

I'm curious too whether anyone cares how this is exposed in the v2 API.  That's 
the "future" I guess, so worth some discussion.  Would we expose this under 
{{GET v2/c/collection-name}} (which lines up pretty well with 
{{action=CLUSTERSTATUS&collection=collection-name}}), or does it deserve its 
own collection subpath, like {{GET v2/c/collection-name/properties}}?


was (Author: gerlowskija):
Small +1 to including the collection props in CLUSTERSTATUS.  I get your point 
Shawn, that CLUSTERSTATUS is supposed to be about the cluster, not the 
collection.  But "holding the line" on that distinction seems like a lost 
battle: we already allow filtering CLUSTERSTATUS by the collection(s)/shard(s) 
you care about, and the API includes many other bits of collection state in the 
response (shard hash ranges, replica/shard state, leadership, etc.).  I'm not 
sure what's more intuitive in general, but speaking only for myself, 
CLUSTERSTATUS is usually the API I think to hit when I want to see the overview 
for a collection.  I don't care strongly, just my 2 cents.

I'm curious too whether anyone cares how this is exposed in the v2 API.  That's 
the "future" I guess, so worth some discussion.  Would we expose this under 
{{GET v2/c/collection-name}} (which lines up pretty well with 
{{action=CLUSTERSTATUS&collection=collection-name}}), or does it deserve its 
own collection subpath, like {{GET v2/c/collection-name/properties}}?

> there is no API to read collection properties
> -
>
> Key: SOLR-12224
> URL: https://issues.apache.org/jira/browse/SOLR-12224
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3
>Reporter: Hendrik Haddorp
>Priority: Major
>
> Solr 7.3 added the COLLECTIONPROP API call 
> (https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop)
>  that allows setting arbitrary properties on a collection. There is, however, 
> no API call that returns the data. The only option is to manually read the 
> collectionprops.json file in ZK under the collection.
> Options could be: give the COLLECTIONPROP command an option to retrieve 
> properties, add a special command to list the properties, and/or list the 
> properties in the clusterstatus output for a collection.
> It would be great if SolrJ supported this as well.






[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441279#comment-16441279
 ] 

Tomás Fernández Löbbe commented on SOLR-12187:
--

{code:java}
-if (watcher.onStateChanged(liveNodes, collectionState)) {
-  removeCollectionStateWatcher(collection, watcher);
+try {
+  if (watcher.onStateChanged(liveNodes, collectionState)) {
+removeCollectionStateWatcher(collection, watcher);
+  }
+} catch (Throwable throwable) {
+  LOG.warn("Error on calling watcher", throwable);
 }
{code}
Why {{Throwable}} and not {{Exception}}?

{code}
+while (true) {
+  try {
+CollectionAdminRequest.addReplicaToShard(collectionName, "shard1")
+.process(cluster.getSolrClient());
+break;
+  } catch (Exception e) {
+// expected, when the node is not fully started
+Thread.sleep(500);
+  }
+}
{code}
Maybe better to have some number of attempts or a timeout? Otherwise we'll get 
a weird Suite timeout if this command keeps failing.
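Such a bounded retry could look like the following sketch. The class and 
method names ({{RetryDemo}}, {{Action}}, {{retry}}) are illustrative, not from 
the patch; the simulated action stands in for the {{addReplicaToShard}} call.

```java
public class RetryDemo {
    interface Action { void run() throws Exception; }

    // Retry the action up to maxAttempts times, sleeping between attempts.
    // Returns false instead of looping forever, so the caller can fail fast.
    static boolean retry(Action action, int maxAttempts, long sleepMs) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                action.run();
                return true;            // succeeded
            } catch (Exception e) {
                Thread.sleep(sleepMs);  // expected while the node is not fully started
            }
        }
        return false;                   // gave up; let the test fail with a clear message
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulated action that fails twice, then succeeds.
        boolean ok = retry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("node not ready");
        }, 10, 1);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```

A timeout-based variant (comparing System.nanoTime() against a deadline) would 
work equally well; the point is that the loop has a guaranteed exit.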

> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of autoscaling framework, we have seen an increase in 
> the number of issues related to the race condition between delete a replica 
> and other stuff.
> Case 1: DeleteReplicaCmd failed to send UNLOAD request to a replica, 
> therefore, forcefully remove its entry from clusterstate, but the replica 
> still function normally and be able to become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueue a DELETECOREOP (without sending a request to 
> replica because the node is not live)
>  * The node start and the replica get loaded
>  * DELETECOREOP has not processed hence the replica still present in 
> clusterstate --> pass checkStateInZk
>  * DELETECOREOP is executed, DeleteReplicaCmd finished
>  ** result 1: the replica start recovering, finish it and publish itself as 
> ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throw an exception (probably: NPE) 
> --> state of the replica is DOWN, not join leader election






[jira] [Commented] (SOLR-12224) there is no API to read collection properties

2018-04-17 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441281#comment-16441281
 ] 

Jason Gerlowski commented on SOLR-12224:


Small +1 to including the collection props in CLUSTERSTATUS.  I get your point 
Shawn, that CLUSTERSTATUS is supposed to be about the cluster, not the 
collection.  But "holding the line" on that distinction seems like a lost 
battle: we already allow filtering CLUSTERSTATUS by the collection(s)/shard(s) 
you care about, and the API includes many other bits of collection state in the 
response (shard hash ranges, replica/shard state, leadership, etc.).  I'm not 
sure what's more intuitive in general, but speaking only for myself, 
CLUSTERSTATUS is usually the API I think to hit when I want to see the overview 
for a collection.  I don't care strongly, just my 2 cents.

I'm curious too whether anyone cares how this is exposed in the v2 API.  That's 
the "future" I guess, so worth some discussion.  Would we expose this under 
{{GET v2/c/collection-name}} (which lines up pretty well with 
{{action=CLUSTERSTATUS&collection=collection-name}}), or does it deserve its 
own collection subpath, like {{GET v2/c/collection-name/properties}}?

> there is no API to read collection properties
> -
>
> Key: SOLR-12224
> URL: https://issues.apache.org/jira/browse/SOLR-12224
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3
>Reporter: Hendrik Haddorp
>Priority: Major
>
> Solr 7.3 added the COLLECTIONPROP API call 
> (https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop)
>  that allows setting arbitrary properties on a collection. There is, however, 
> no API call that returns the data. The only option is to manually read the 
> collectionprops.json file in ZK under the collection.
> Options could be: give the COLLECTIONPROP command an option to retrieve 
> properties, add a special command to list the properties, and/or list the 
> properties in the clusterstatus output for a collection.
> It would be great if SolrJ supported this as well.






[JENKINS] Lucene-Solr-repro - Build # 515 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/515/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/202/consoleText

[repro] Revision: d9ef3a3d02870af55d7ee7447852fbdf374bf238

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfos -Dtests.seed=816D46DF52CEEB5A 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Pacific/Norfolk -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosData -Dtests.seed=816D46DF52CEEB5A 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Pacific/Norfolk -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosVersion -Dtests.seed=816D46DF52CEEB5A 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Pacific/Norfolk -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d904112428184ce9c1726313add5d184f4014a72
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout d9ef3a3d02870af55d7ee7447852fbdf374bf238

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=816D46DF52CEEB5A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Pacific/Norfolk -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 787 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=816D46DF52CEEB5A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Pacific/Norfolk -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 786 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 

[jira] [Updated] (SOLR-12231) /etc/init.d/solr problem

2018-04-17 Thread Lihua Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lihua Wang updated SOLR-12231:
--
Description: 
I noticed that there are a couple of minor issues with the init.d script in 
pretty much every version. 

Basically, a semicolon (or an escaped semicolon) is missing in the 
highlighted lines below:

if [ -n "$RUNAS" ]; then
  su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
else
  SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
fi

With the added semicolons (escaped where necessary), the code would look like:

if [ -n "$RUNAS" ]; then
  su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
else
  SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
fi
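A minimal POSIX-shell sketch (not part of the original report; MSG is a hypothetical stand-in for SOLR_INCLUDE) of the semantic difference the semicolon makes: `VAR=value cmd` passes VAR only into cmd's environment, while `VAR=value; cmd` sets VAR in the current shell without exporting it to children.

```shell
#!/bin/sh
# Form 1: one-shot environment prefix -- the child process sees MSG,
# but MSG is not set in the parent shell afterwards.
MSG=hello sh -c 'echo "child sees: ${MSG:-unset}"'   # prints: child sees: hello
echo "parent sees: ${MSG:-unset}"                    # prints: parent sees: unset

# Form 2: assignment terminated by a semicolon -- MSG is set in the
# current shell but NOT exported, so a child process does not see it.
MSG=hello; sh -c 'echo "child sees: ${MSG:-unset}"'  # prints: child sees: unset
```

So the two forms are not interchangeable; which one is correct depends on whether the variable is meant to reach the child process.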

 

  was:
I noticed that there are a couple of minor issues with the init.d script in 
pretty much every version. 

Basically, a semicolon (or an escaped semicolon) is missing in the 
highlighted lines below:

if [ -n "$RUNAS" ]; then
  su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
else
  SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
fi

With the added semicolons (escaped where necessary), the code would look like:

if [ -n "$RUNAS" ]; then
  su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
else
  SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
fi

 


> /etc/init.d/solr problem
> 
>
> Key: SOLR-12231
> URL: https://issues.apache.org/jira/browse/SOLR-12231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.3
> Environment: Centos 7.4 
> java-1.8.0-openjdk
>Reporter: Lihua Wang
>Priority: Minor
>
> I noticed that there are a couple of minor issues with the init.d script in 
> pretty much every version. 
> Basically, a semicolon (or an escaped semicolon) is missing in the 
> highlighted lines below:
>
> if [ -n "$RUNAS" ]; then
>   su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
> else
>   SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
> fi
>
> With the added semicolons (escaped where necessary), the code would look like:
>
> if [ -n "$RUNAS" ]; then
>   su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
> else
>   SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
> fi
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12231) /etc/init.d/solr problem

2018-04-17 Thread Lihua Wang (JIRA)
Lihua Wang created SOLR-12231:
-

 Summary: /etc/init.d/solr problem
 Key: SOLR-12231
 URL: https://issues.apache.org/jira/browse/SOLR-12231
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Affects Versions: 7.3
 Environment: Centos 7.4 

java-1.8.0-openjdk
Reporter: Lihua Wang


I noticed that there are a couple of minor issues with the init.d script in 
pretty much every version. 

Basically, a semicolon (or an escaped semicolon) is missing in the 
highlighted lines below:

if [ -n "$RUNAS" ]; then
  su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
else
  SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
fi

With the added semicolons (escaped where necessary), the code would look like:

if [ -n "$RUNAS" ]; then
  su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
else
  SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
fi

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12231) /etc/init.d/solr problem

2018-04-17 Thread Lihua Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lihua Wang updated SOLR-12231:
--
Issue Type: Bug  (was: New Feature)

> /etc/init.d/solr problem
> 
>
> Key: SOLR-12231
> URL: https://issues.apache.org/jira/browse/SOLR-12231
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 7.3
> Environment: Centos 7.4 
> java-1.8.0-openjdk
>Reporter: Lihua Wang
>Priority: Minor
>
> I noticed that there are a couple of minor issues with the init.d script in 
> pretty much every version. 
> Basically, a semicolon (or an escaped semicolon) is missing in the 
> highlighted lines below:
>
> if [ -n "$RUNAS" ]; then
>   su -c "SOLR_INCLUDE=\"$SOLR_ENV\" \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
> else
>   SOLR_INCLUDE="$SOLR_ENV" "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
> fi
>
> With the added semicolons (escaped where necessary), the code would look like:
>
> if [ -n "$RUNAS" ]; then
>   su -c "SOLR_INCLUDE=\"$SOLR_ENV\"\; \"$SOLR_INSTALL_DIR/bin/solr\" $SOLR_CMD" - "$RUNAS"
> else
>   SOLR_INCLUDE="$SOLR_ENV"; "$SOLR_INSTALL_DIR/bin/solr" "$SOLR_CMD"
> fi
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441273#comment-16441273
 ] 

Lucene/Solr QA commented on SOLR-12187:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} SOLR-12187 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919360/SOLR-12187.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/59/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase 
> in the number of issues related to race conditions between deleting a replica 
> and other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and can become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica because the node is not live)
>  * The node starts and the replica gets loaded
>  * The DELETECOREOP has not yet been processed, hence the replica is still 
> present in clusterstate --> passes checkStateInZk
>  * The DELETECOREOP is executed and DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself as 
> ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and it does not join leader election



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_162) - Build # 1739 - Still Failing!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1739/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 60513 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj321881263
 [ecj-lint] Compiling 895 source files to /tmp/ecj321881263
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build.xml:690: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2095: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:2128: 
Compile 

[jira] [Commented] (SOLR-11833) Allow searchRate trigger to delete replicas

2018-04-17 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441251#comment-16441251
 ] 

Andrzej Bialecki  commented on SOLR-11833:
--

New patch with detailed documentation.

This also contains other changes:
* implemented {{DeleteNodeSuggester}} to properly handle (optional) DELETENODE 
requests for idle nodes.
* changed the logic in "cold ops" calculation to always request DELETEREPLICA 
before DELETENODE (otherwise DELETEREPLICA-s could be issued for replicas that 
were just removed by DELETENODE).
* added a unit test to test DELETENODE condition.

> Allow searchRate trigger to delete replicas
> ---
>
> Key: SOLR-11833
> URL: https://issues.apache.org/jira/browse/SOLR-11833
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11833.patch, SOLR-11833.patch
>
>
> Currently {{SearchRateTrigger}} generates events when search rate thresholds 
> are exceeded, and {{ComputePlanAction}} computes ADDREPLICA actions in 
> response - adding replicas should allow the search rate to be reduced across 
> the increased number of replicas.
> However, once the peak load period is over the collection is left with too 
> many replicas, which unnecessarily tie up cluster resources. 
> {{SearchRateTrigger}} should detect situations like this and generate events 
> that should cause some of these replicas to be deleted.
> {{SearchRateTrigger}} should use hysteresis to avoid thrashing when the rate 
> is close to the threshold.
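A hand-rolled sketch of the hysteresis idea (not Solr's actual trigger code; the watermark values, sample rates, and echoed action names are made up for illustration): scale up when the rate crosses a high watermark, but scale back down only after it falls below a separate, lower watermark, so a rate hovering near a single threshold cannot flip the decision back and forth.

```shell
#!/bin/sh
# Hypothetical watermarks; the real trigger would read these from its config.
HIGH=100
LOW=60
state=normal
for rate in 50 90 120 110 95 80 55; do
  if [ "$state" = normal ] && [ "$rate" -gt "$HIGH" ]; then
    state=hot
    echo "rate=$rate -> request ADDREPLICA"
  elif [ "$state" = hot ] && [ "$rate" -lt "$LOW" ]; then
    state=normal
    echo "rate=$rate -> request DELETEREPLICA"
  fi
done
# Only rate=120 and rate=55 trigger an action; 110, 95 and 80 sit in the
# dead band between the watermarks, which is what prevents thrashing.
```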



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11833) Allow searchRate trigger to delete replicas

2018-04-17 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11833:
-
Attachment: SOLR-11833.patch

> Allow searchRate trigger to delete replicas
> ---
>
> Key: SOLR-11833
> URL: https://issues.apache.org/jira/browse/SOLR-11833
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11833.patch, SOLR-11833.patch
>
>
> Currently {{SearchRateTrigger}} generates events when search rate thresholds 
> are exceeded, and {{ComputePlanAction}} computes ADDREPLICA actions in 
> response - adding replicas should allow the search rate to be reduced across 
> the increased number of replicas.
> However, once the peak load period is over the collection is left with too 
> many replicas, which unnecessarily tie up cluster resources. 
> {{SearchRateTrigger}} should detect situations like this and generate events 
> that should cause some of these replicas to be deleted.
> {{SearchRateTrigger}} should use hysteresis to avoid thrashing when the rate 
> is close to the threshold.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12218) solr.cmd will skip part of help text due to missing special character quote

2018-04-17 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441233#comment-16441233
 ] 

Jason Gerlowski edited comment on SOLR-12218 at 4/17/18 5:37 PM:
-

Hi [~arafalov], can you verify whether this is a duplicate of SOLR-11840?  I 
merged a commit last night which addresses several of these help-text issues in 
{{solr.cmd}}, and I strongly suspect that it fixes the behavior you referenced 
above.  (I don't have a Windows machine in front of me to check, but will do so 
shortly.)


was (Author: gerlowskija):
Hi [~arafalov], can you verify whether this is a duplicate of SOLR-11840?  I 
merged a commit last night which addresses several of these help-text issues in 
{{solr.cmd}}.

> solr.cmd will skip part of help text due to missing special character quote
> ---
>
> Key: SOLR-12218
> URL: https://issues.apache.org/jira/browse/SOLR-12218
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
> Environment: Windows
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> SOLR-11084 introduced some help text that was not properly escaped in the 
> Windows batch file (solr.cmd), causing an easy-to-miss error message and 
> truncated help information for the _bin\solr start -help_ command (anything 
> after the -t option).
> The fix is to either quote the whole line (done in other parts of the file) or 
> escape the specific (less-than and greater-than) characters, which for the 
> echo command is done with the ^ character, just as it is done a couple of 
> lines lower in the same file.
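The same class of bug exists in POSIX shell, where cmd.exe's ^ escape corresponds to quoting or backslash-escaping. A minimal illustration (not from the original report; the usage string is a made-up example echoing the -t option discussed above):

```shell
#!/bin/sh
# A bare <dir> after echo would be parsed by sh as input redirection from a
# file literally named "dir" and fail; quoting keeps the brackets literal:
echo 'usage: solr start -t <dir>'
# Escaping just the metacharacters also works (cmd.exe uses ^ where sh
# uses backslash):
echo usage: solr start -t \<dir\>
```

Both lines print `usage: solr start -t <dir>`.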



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12218) solr.cmd will skip part of help text due to missing special character quote

2018-04-17 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441233#comment-16441233
 ] 

Jason Gerlowski commented on SOLR-12218:


Hi [~arafalov], can you verify whether this is a duplicate of SOLR-11840?  I 
merged a commit last night which addresses several of these help-text issues in 
{{solr.cmd}}.

> solr.cmd will skip part of help text due to missing special character quote
> ---
>
> Key: SOLR-12218
> URL: https://issues.apache.org/jira/browse/SOLR-12218
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.1
> Environment: Windows
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> SOLR-11084 introduced some help text that was not properly escaped in the 
> Windows batch file (solr.cmd), causing an easy-to-miss error message and 
> truncated help information for the _bin\solr start -help_ command (anything 
> after the -t option).
> The fix is to either quote the whole line (done in other parts of the file) or 
> escape the specific (less-than and greater-than) characters, which for the 
> echo command is done with the ^ character, just as it is done a couple of 
> lines lower in the same file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11840) Inconsistencies in the Usage Messages of bin/solr.cmd

2018-04-17 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-11840.

   Resolution: Fixed
Fix Version/s: (was: 7.2)
   master (8.0)
   7.4

> Inconsistencies in the Usage Messages of bin/solr.cmd
> -
>
> Key: SOLR-11840
> URL: https://issues.apache.org/jira/browse/SOLR-11840
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Jakob Furrer
>Assignee: Jason Gerlowski
>Priority: Major
>  Labels: documentation, easyfix
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11840.patch, SOLR-11840.patch, SOLR-11840.patch, 
> solr.cmd.txt, solr.txt, solr_start_help_Syntaxfehler.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> I noticed a number of errors/problems/peculiarities in the Usage Messages 
> that are displayed when using *bin/solr.cmd* with the parameter *_-help_*.
> The items are listed in no particular order and may be addressed 
> independently.
> To spot the differences between the Usage Messages of _bin/solr_ and 
> _bin/solr.cmd_ I compiled an extract of the Usage Messages of the two files 
> so that they can be compared using WinMerge or a similar diff tool.
> See the attached files *solr.cmd.txt* and *solr.txt*.
> Note that I work on a German Windows 10, therefore some error messages I 
> quote here are in German.
> # _solr_ _start_ _-help_ results in a syntax error
> The special characters '<' and '>' are not escaped.
> The line 314 must be changed as follows:
> {noformat}
> CURRENT : ... the default server/
> SHALL_BE: ... the default server/^
> {noformat}
> \\
> # _solr auth -help_ ends up empty
> A goto label ':auth_usage' with the appropriate Usage Messages already exists.
> At line 266 an additional if-statement is required.
> Also, a respective if-statement will be required on line 1858.
> {noformat}
> NEW_CODE: IF "%SCRIPT_CMD%"=="auth" goto auth_usage
> {noformat}
> Some additional bugs in the section ':auth_usage' must then also be addressed.
> The special character '|' is not escaped at a number of locations.
> The lines 568, 569, 570, 577, 580 and 585 must be changed, e.g.
> {noformat}
> CURRENT : echo Usage: solr auth enable [-type basicAuth] -credentials 
> user:pass [-blockUnknown ^] [-updateIncludeFileOnly 
> ^] [-V]
> SHALL_BE: echo Usage: solr auth enable [-type basicAuth] -credentials 
> user:pass [-blockUnknown ^] [-updateIncludeFileOnly 
> ^] [-V]
> {noformat}
> The empty 'echo' statement (i.e. 'newline') needs to be written with a dot 
> ('echo.') to avoid "ECHO ist ausgeschaltet (OFF)." statements.
> The lines 571, 573, 576, 577, 579, 584, 587, 589, 591, 594 and 596 must be 
> changed:
> {noformat}
> CURRENT : echo
> SHALL_BE: echo.
> {noformat}
> \\
> # _solr_ _-help_ does not mention the command _status_
> The line 271 must be changed as follows:
> {noformat}
> CURRENT : @echowhere COMMAND is one of: start, stop, restart, 
> healthcheck, create, create_core, create_collection, delete, version, zk, 
> auth, assert
> SHALL_BE: @echowhere COMMAND is one of: start, stop, restart, status, 
> healthcheck, create, create_core, create_collection, delete, version, zk, 
> auth, assert
> {noformat}
> \\
> # In _bin/solr.cmd_ the description of _solr_ _start_ _-p_ _port_ does not 
> mention the STOP_PORT and the RMI_PORT, see line 324 to 326 of _bin/solr_.
> {noformat}
> echo "  -p  Specify the port to start the Solr HTTP listener on; 
> default is 8983"
> echo "  The specified port (SOLR_PORT) will also be used to 
> determine the stop port"
> echo "  STOP_PORT=(\$SOLR_PORT-1000) and JMX RMI listen port 
> RMI_PORT=(\$SOLR_PORT+10000). "
> echo "  For instance, if you set -p 8985, then the 
> STOP_PORT=7985 and RMI_PORT=18985"
> {noformat}
> \\
> # The description of _solr_ _start_ _-s_ _dir_ seems to have been revised in 
> _bin/solr.cmd_ but not in _bin/solr_.
> {noformat}
>   on which example is run. The default value is server/solr. 
> If passed a relative dir
>   validation with the current dir will be done before trying 
> the default server/
> {noformat}
> vs.
> {noformat}  
>   on which example is run. The default value is server/solr. 
> If passed relative dir,
>   validation with current dir will be done, before trying 
> default server/
> {noformat}
> \\
> # The description of _solr_ _start_ _-t_ _dir_ is different
> {noformat}
>   -t dirSets the solr.data.home system property, used as root for 
> 

[jira] [Assigned] (SOLR-11840) Inconsistencies in the Usage Messages of bin/solr.cmd

2018-04-17 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski reassigned SOLR-11840:
--

Assignee: Jason Gerlowski

> Inconsistencies in the Usage Messages of bin/solr.cmd
> -
>
> Key: SOLR-11840
> URL: https://issues.apache.org/jira/browse/SOLR-11840
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Jakob Furrer
>Assignee: Jason Gerlowski
>Priority: Major
>  Labels: documentation, easyfix
> Fix For: 7.2
>
> Attachments: SOLR-11840.patch, SOLR-11840.patch, SOLR-11840.patch, 
> solr.cmd.txt, solr.txt, solr_start_help_Syntaxfehler.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> I noticed a number of errors/problems/peculiarities in the Usage Messages 
> that are displayed when using *bin/solr.cmd* with the parameter *_-help_*.
> The items are listed in no particular order and may be addressed 
> independently.
> To spot the differences between the Usage Messages of _bin/solr_ and 
> _bin/solr.cmd_ I compiled an extract of the Usage Messages of the two files 
> so that they can be compared using WinMerge or a similar diff tool.
> See the attached files *solr.cmd.txt* and *solr.txt*.
> Note that I work on a German Windows 10, therefore some error messages I 
> quote here are in German.
> # _solr_ _start_ _-help_ results in a syntax error
> The special characters '<' and '>' are not escaped.
> The line 314 must be changed as follows:
> {noformat}
> CURRENT : ... the default server/
> SHALL_BE: ... the default server/^
> {noformat}
> \\
> # _solr auth -help_ ends up empty
> A goto label ':auth_usage' with the appropriate Usage Messages already exists.
> At line 266 an additional if-statement is required.
> Also, a respective if-statement will be required on line 1858.
> {noformat}
> NEW_CODE: IF "%SCRIPT_CMD%"=="auth" goto auth_usage
> {noformat}
> Some additional bugs in the section ':auth_usage' must then also be addressed.
> The special character '|' is not escaped at a number of locations.
> The lines 568, 569, 570, 577, 580 and 585 must be changed, e.g.
> {noformat}
> CURRENT : echo Usage: solr auth enable [-type basicAuth] -credentials 
> user:pass [-blockUnknown ^] [-updateIncludeFileOnly 
> ^] [-V]
> SHALL_BE: echo Usage: solr auth enable [-type basicAuth] -credentials 
> user:pass [-blockUnknown ^] [-updateIncludeFileOnly 
> ^] [-V]
> {noformat}
> The empty 'echo' statement (i.e. 'newline') needs to be written with a dot 
> ('echo.') to avoid "ECHO ist ausgeschaltet (OFF)." statements.
> The lines 571, 573, 576, 577, 579, 584, 587, 589, 591, 594 and 596 must be 
> changed:
> {noformat}
> CURRENT : echo
> SHALL_BE: echo.
> {noformat}
> \\
> # _solr_ _-help_ does not mention the command _status_
> The line 271 must be changed as follows:
> {noformat}
> CURRENT : @echowhere COMMAND is one of: start, stop, restart, 
> healthcheck, create, create_core, create_collection, delete, version, zk, 
> auth, assert
> SHALL_BE: @echowhere COMMAND is one of: start, stop, restart, status, 
> healthcheck, create, create_core, create_collection, delete, version, zk, 
> auth, assert
> {noformat}
> \\
> # In _bin/solr.cmd_ the description of _solr_ _start_ _-p_ _port_ does not 
> mention the STOP_PORT and the RMI_PORT, see line 324 to 326 of _bin/solr_.
> {noformat}
> echo "  -p  Specify the port to start the Solr HTTP listener on; 
> default is 8983"
> echo "  The specified port (SOLR_PORT) will also be used to 
> determine the stop port"
> echo "  STOP_PORT=(\$SOLR_PORT-1000) and JMX RMI listen port 
> RMI_PORT=(\$SOLR_PORT+10000). "
> echo "  For instance, if you set -p 8985, then the 
> STOP_PORT=7985 and RMI_PORT=18985"
> {noformat}
> \\
> # The description of _solr_ _start_ _-s_ _dir_ seems to have been revised in 
> _bin/solr.cmd_ but not in _bin/solr_.
> {noformat}
>   on which example is run. The default value is server/solr. 
> If passed a relative dir
>   validation with the current dir will be done before trying 
> the default server/
> {noformat}
> vs.
> {noformat}  
>   on which example is run. The default value is server/solr. 
> If passed relative dir,
>   validation with current dir will be done, before trying 
> default server/
> {noformat}
> \\
> # The description of _solr_ _start_ _-t_ _dir_ is different
> {noformat}
>   -t dirSets the solr.data.home system property, used as root for 
> ^/data directories.
>   If not set, Solr uses solr.solr.home for both 

[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2018-04-17 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441191#comment-16441191
 ] 

Amrit Sarkar commented on SOLR-9272:


Thank you [~janhoy].

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already 
> running. We can optionally accept the {{-p}} parameter instead, and with that 
> use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's 
> easier to remember the Solr port than the ZK string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}
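As a sketch of the lookup this proposes (the helper name is invented, and the JSON shape is assumed from typical `bin/solr status` output, where the connect string lives under `cloud`/`ZooKeeper` as the description says):

```python
import json

def resolve_zk_host(status_json: str):
    """Hypothetical helper: pull the cloud/ZooKeeper connect string
    out of StatusTool-style JSON, as the issue proposes."""
    info = json.loads(status_json)
    return info.get("cloud", {}).get("ZooKeeper")

# Example shaped like `bin/solr status` output for a cloud-mode node:
sample = '{"solr_home": "/var/solr", "cloud": {"ZooKeeper": "localhost:9983"}}'
print(resolve_zk_host(sample))  # -> localhost:9983
```

A standalone node would simply report no `cloud` section, in which case the tool would still need `-z` to be passed explicitly.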



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 514 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/514/

[...truncated 35 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2491/consoleText

[repro] Revision: 449ecb601cac8644700c053df145a92c989e0e15

[repro] Repro line:  ant test  -Dtestcase=LeaderElectionIntegrationTest 
-Dtests.method=testSimpleSliceLeaderElection -Dtests.seed=3F8DB391CC3015F5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=en-ZA 
-Dtests.timezone=Asia/Seoul -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=3F8DB391CC3015F5 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=lt-LT -Dtests.timezone=Asia/Vladivostok 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=3F8DB391CC3015F5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=lt-LT 
-Dtests.timezone=Asia/Vladivostok -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=LeaderVoteWaitTimeoutTest 
-Dtests.method=testMostInSyncReplicasCanWinElection 
-Dtests.seed=3F8DB391CC3015F5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=th-TH -Dtests.timezone=Europe/San_Marino -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=LeaderVoteWaitTimeoutTest 
-Dtests.method=basicTest -Dtests.seed=3F8DB391CC3015F5 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=th-TH -Dtests.timezone=Europe/San_Marino 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosData -Dtests.seed=3F8DB391CC3015F5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=tr-TR 
-Dtests.timezone=America/St_Barthelemy -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosVersion -Dtests.seed=3F8DB391CC3015F5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=tr-TR 
-Dtests.timezone=America/St_Barthelemy -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfos -Dtests.seed=3F8DB391CC3015F5 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=tr-TR 
-Dtests.timezone=America/St_Barthelemy -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d904112428184ce9c1726313add5d184f4014a72
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 449ecb601cac8644700c053df145a92c989e0e15

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   LeaderVoteWaitTimeoutTest
[repro]   SegmentsInfoRequestHandlerTest
[repro]   LeaderElectionIntegrationTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.LeaderVoteWaitTimeoutTest|*.SegmentsInfoRequestHandlerTest|*.LeaderElectionIntegrationTest|*.IndexSizeTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=3F8DB391CC3015F5 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=th-TH -Dtests.timezone=Europe/San_Marino 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 2100 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.LeaderElectionIntegrationTest
[repro]   0/5 failed: org.apache.solr.cloud.LeaderVoteWaitTimeoutTest
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror  
-Dtests.seed=3F8DB391CC3015F5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=tr-TR -Dtests.timezone=America/St_Barthelemy 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 768 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of master without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] 

[JENKINS] Lucene-Solr-Tests-master - Build # 2492 - Failure

2018-04-17 Thread Apache Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 76, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21847 - Failure!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21847/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 60411 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj1237960276
 [ecj-lint] Compiling 896 source files to /tmp/ecj1237960276
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/analysis/TokenizerChainTest.java
 (at line 37)
 [ecj-lint] TokenizerChain tokenizerChain = new TokenizerChain(
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'tokenizerChain' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 25)
 [ecj-lint] import java.util.concurrent.TimeUnit;
 [ecj-lint]^
 [ecj-lint] The import java.util.concurrent.TimeUnit is never used
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 26)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 32)
 [ecj-lint] import org.apache.solr.cloud.overseer.OverseerAction;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.cloud.overseer.OverseerAction is never 
used
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 38)
 [ecj-lint] import org.apache.solr.common.cloud.ZkNodeProps;
 [ecj-lint]
 [ecj-lint] The import org.apache.solr.common.cloud.ZkNodeProps is never used
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 39)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReader;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReader is never used
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/ForceLeaderTest.java
 (at line 41)
 [ecj-lint] import org.apache.solr.common.util.Utils;
 [ecj-lint]^
 [ecj-lint] The import org.apache.solr.common.util.Utils is never used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 25)
 [ecj-lint] import java.util.HashSet;
 [ecj-lint]^
 [ecj-lint] The import java.util.HashSet is never used
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 42)
 [ecj-lint] import org.apache.solr.common.cloud.CollectionStateWatcher;
 [ecj-lint]^^^
 [ecj-lint] The import org.apache.solr.common.cloud.CollectionStateWatcher is 
never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java
 (at line 46)
 [ecj-lint] import org.apache.solr.common.cloud.ZkStateReaderAccessor;
 [ecj-lint]^^
 [ecj-lint] The import org.apache.solr.common.cloud.ZkStateReaderAccessor is 
never used
 [ecj-lint] --
 [ecj-lint] 10 problems (9 errors, 1 warning)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build.xml:690: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2095: 
The following error occurred while executing this line:

Re: BadApple report

2018-04-17 Thread Erick Erickson
Great! I won't BadApple TestDocTermOrds on Thursday. Thanks!

On Tue, Apr 17, 2018 at 1:20 AM, Alan Woodward  wrote:
> TestDocTermOrds should be fixed now, as should TestIndexSorting (I 
> un-badappled the latter yesterday)
>
>> On 16 Apr 2018, at 21:59, Erick Erickson  wrote:
>>
>> We have a much smaller list of _consistently_ failing tests this week, i.e.
>> tests that are in Hoss' rollups from two weeks ago and also failed this
>> past week.
>>
>> In order to reduce some of the make-work, I collect failed tests Fri->Mon
>> so the BadApple'd tests on Thursday don't clutter things up.
>>
>>
>> ***Tests I'll BadApple on Thursday.
>>
>> These are tests that failed in the last week and _also_ are failures
>> in Hoss' report from two weeks ago, so nobody has addressed them in
>> that time-frame.
>>
>> PLEASE LET ME KNOW BEFORE THURSDAY WHICH OF THESE SHOULD NOT BE BADAPPLEd
>>   
>> org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testListenerAcceptance
>>   org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue
>>   org.apache.solr.uninverting.TestDocTermOrds.testNumericEncoded64
>>
>>
>> ***All collected test failures:
>> *Timeout (or time related)/session expired/thread
>> leak/zombie threads/Object tracker
>> junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest
>> junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest
>> junit.framework.TestSuite.org.apache.solr.ltr.feature.TestExternalFeatures
>> junit.framework.TestSuite.org.apache.solr.request.TestUnInvertedFieldException
>> junit.framework.TestSuite.org.apache.solr.schema.TestCloudSchemaless
>> org.apache.solr.cloud.cdcr.CdcrBootstrapTest.testBootstrapWithSourceCluster
>> org.apache.solr.cloud.TestPullReplicaErrorHandling.testCantConnectToLeader
>> org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure
>> unit.framework.TestSuite.org.apache.solr.ltr.feature.TestExternalFeatures
>>
>>
>> ***OutOfMemory/GC overhead exceeded.
>> junit.framework.TestSuite.org.apache.solr.uninverting.TestDocTermOrds
>> org.apache.solr.uninverting.TestDocTermOrds.testActuallySingleValued
>> org.apache.solr.uninverting.TestDocTermOrds.testEmptyIndex
>> org.apache.solr.uninverting.TestDocTermOrds.testNumericEncoded64
>> org.apache.solr.uninverting.TestDocTermOrds.testRandom
>> org.apache.solr.uninverting.TestDocTermOrds.testSortedTermsEnum
>> org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
>>
>> *Other (typically asserts.)
>> org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale
>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
>> org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
>> org.apache.solr.cloud.autoscaling.MetricTriggerIntegrationTest.testMetricTrigger
>> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
>> org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testListenerAcceptance
>> org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue
>> org.apache.solr.cloud.ForceLeaderTest.testZombieLeader
>> org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure
>> org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfos
>> org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosData
>> org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion
>> org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin
>> org.apache.solr.schema.TestCloudSchemaless.test
>> org.apache.solr.uninverting.TestDocTermOrds.testNumericEncoded64
>> org.apache.solr.update.processor.TemplateUpdateProcessorTest.testSimple
>>
>> Annotated tests.
>>
>>
>> *AwaitsFix Annotations:
>>
>> Lucene AwaitsFix
>> RandomGeoPolygonTest.java
>>   testComparePolygons()
>>   //@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8245")
>>
>> TestControlledRealTimeReopenThread.java
>>   testCRTReopen()
>>   @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5737")
>>
>> TestICUNormalizer2CharFilter.java
>>   testRandomStrings()
>>   @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5595")
>>
>> TestICUTokenizerCJK.java
>>   TestICUTokenizerCJK suite
>>   @AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8222")
>>
>> TestMoreLikeThis.java
>>   testMultiFieldShouldReturnPerFieldBooleanQuery()
>>   @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-7161")
>>
>> UIMABaseAnalyzerTest.java
>>   testRandomStrings()
>>   @Test @AwaitsFix(bugUrl =
>> "https://issues.apache.org/jira/browse/LUCENE-3869;)
>>
>> UIMABaseAnalyzerTest.java
>>   

[jira] [Comment Edited] (SOLR-12203) Error in response for field containing date. Unexpected state.

2018-04-17 Thread Jeroen Steggink (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440912#comment-16440912
 ] 

Jeroen Steggink edited comment on SOLR-12203 at 4/17/18 3:50 PM:
-

Erick, I apologize for not raising this on the user's list first.

I indeed changed the schema, but only the luceneMatchVersion. It was still on 
version 6.5.1 and I changed it to 7.1.0. I didn't change this fieldtype.


was (Author: jeroens):
Erick, I apologize for not raising this on the user's list first.

I indeed changed the schema, but only the luceneMatchVersion. It was still on 
version 6.5.2 and I changed it to 7.1.0. I didn't change this fieldtype.

> Error in response for field containing date. Unexpected state.
> --
>
> Key: SOLR-12203
> URL: https://issues.apache.org/jira/browse/SOLR-12203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: 7.2.1, 7.3
>Reporter: Jeroen Steggink
>Priority: Minor
>
> I get the following error:
> {noformat}
> java.lang.AssertionError: Unexpected state. Field: 
> stored,indexed,tokenized,omitNorms,indexOptions=DOCS<ds_lastModified:2013-10-04T22:25:11Z>
> at org.apache.solr.schema.DatePointField.toObject(DatePointField.java:154)
> at org.apache.solr.schema.PointField.write(PointField.java:198)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:141)
> at 
> org.apache.solr.response.JSONWriter.writeSolrDocument(JSONResponseWriter.java:374)
> at 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> at 
> 


[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-04-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16441001#comment-16441001
 ] 

Erick Erickson commented on LUCENE-7976:


Hold the presses. I really, really, really _hate_ it when the thought occurs at 
6:30 AM "Maybe if I approached the problem slightly differently it would be 
vastly simpler". I suppose sometimes I have to work through all the gory 
details before understanding the process enough to think of a simpler way...

Anyway, as I said above [~mikemccand]: if findForcedDeletesMerges and 
findForcedMerges both need to go through the work of findMerges to respect 
segment size, would it be possible to refactor out the meat of findMerges and 
feed it the eligible lists from findForcedDeletesMerges and findForcedMerges? 
Then a couple of parameters would need to be passed into the extracted method, 
things like max segment size and segs per tier etc. But the current code then 
stays largely intact.

Taking a hack at this now, if it pans out at all you should completely ignore 
the previous patch.


> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
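The selection rule proposed in the quoted description can be sketched as follows (the threshold name and helper are invented for illustration; sizes are in GB, and `100.0` reproduces today's behavior):

```python
MAX_SEGMENT_GB = 5.0     # current TMP default max merged segment size
MAX_PCT_DELETED = 20.0   # the proposed knob from the description

def action_for_segment(total_docs: int, deleted_docs: int, live_gb: float) -> str:
    """Hypothetical per-segment eligibility rule from the proposal."""
    pct_deleted = 100.0 * deleted_docs / total_docs
    if pct_deleted <= MAX_PCT_DELETED:
        return "leave alone"                   # under the threshold: unchanged
    if live_gb < MAX_SEGMENT_GB:
        return "merge with smaller segments"   # pack back up toward the 5G cap
    return "singleton rewrite"                 # oversized: rewrite alone, dropping deletes

# The 100G forceMerged segment from the description, at 25% deleted:
print(action_for_segment(1_000_000, 250_000, 80.0))  # -> singleton rewrite
```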



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440956#comment-16440956
 ] 

ASF subversion and git services commented on LUCENE-8253:
-

Commit 330fd18f200dae0892b3aa0882668435730c4319 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=330fd18 ]

LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for all incoming SegmentCommitInfo even if that info wasn't private
to the IndexWriter. This is an illegal use of this API but since it's
transitively public via MergePolicy#findMerges we have to be conservative
with registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means for soft-deletes we need to react
earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge
we now check if segments are fully deleted on flush which is very unlikely and
can be done in a lazy fashion, i.e. it's only paying the extra cost of opening a
reader and checking all soft-deletes if soft deletes are used and present
in the flushed segment.

This has the side-effect that flushed segments that are 100% hard deleted are 
also
cleaned up right after they are flushed; previously these segments were sticking
around for a while until they got picked for a merge or received another delete.

This also closes LUCENE-8256


> ForceMergeDeletes does not merge soft-deleted segments
> --
>
> Key: LUCENE-8253
> URL: https://issues.apache.org/jira/browse/LUCENE-8253
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8253.patch, test-merge.patch
>
>
> IndexWriter#forceMergeDeletes should merge segments having soft-deleted 
> documents as hard-deleted documents if we configured "softDeletesField" in an 
> IndexWriterConfig.
> Attached is a failed test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440957#comment-16440957
 ] 

ASF subversion and git services commented on LUCENE-8253:
-

Commit 330fd18f200dae0892b3aa0882668435730c4319 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=330fd18 ]

LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for every incoming SegmentCommitInfo, even if that info wasn't private
to the IndexWriter. This is an illegal use of the API, but since it's
transitively public via MergePolicy#findMerges we have to be conservative
about registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means that for soft deletes we need to
react earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge,
we now check on flush whether segments are fully deleted, which is very
unlikely, and the check can be done lazily, i.e. the extra cost of opening a
reader and checking all soft deletes is only paid if soft deletes are used and
present in the flushed segment.

This has the side effect that flushed segments that are 100% hard deleted are
also cleaned up right after they are flushed; previously these segments stuck
around for a while until they were picked for a merge or received another
delete.

This also closes LUCENE-8256.








[jira] [Commented] (LUCENE-8256) MP does not drop fully soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440958#comment-16440958
 ] 

ASF subversion and git services commented on LUCENE-8256:
-

Commit 330fd18f200dae0892b3aa0882668435730c4319 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=330fd18 ]

LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for every incoming SegmentCommitInfo, even if that info wasn't private
to the IndexWriter. This is an illegal use of the API, but since it's
transitively public via MergePolicy#findMerges we have to be conservative
about registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means that for soft deletes we need to
react earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge,
we now check on flush whether segments are fully deleted, which is very
unlikely, and the check can be done lazily, i.e. the extra cost of opening a
reader and checking all soft deletes is only paid if soft deletes are used and
present in the flushed segment.

This has the side effect that flushed segments that are 100% hard deleted are
also cleaned up right after they are flushed; previously these segments stuck
around for a while until they were picked for a merge or received another
delete.

This also closes LUCENE-8256.


> MP does not drop fully soft-deleted segments
> 
>
> Key: LUCENE-8256
> URL: https://issues.apache.org/jira/browse/LUCENE-8256
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: test-drop-segment.patch
>
>
> Fully soft-deleted segments should be dropped as fully hard-deleted segments 
> if softDeletesField is provided and MP is configured not to retain fully 
> deleted segments.
> A failed test is attached.
> /cc [~simonw]






[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440954#comment-16440954
 ] 

Steve Rowe commented on LUCENE-8253:


Thanks [~simonw]!







[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440951#comment-16440951
 ] 

Simon Willnauer commented on LUCENE-8253:
-

[~steve_rowe] I fixed the issue and re-enabled the test. Sorry for the noise.







[jira] [Resolved] (LUCENE-8256) MP does not drop fully soft-deleted segments

2018-04-17 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8256.
-
Resolution: Fixed

This is fixed by a follow-up commit on LUCENE-8253.







[JENKINS] Lucene-Solr-repro - Build # 513 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/513/

[...truncated 34 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/566/consoleText

[repro] Revision: d9ef3a3d02870af55d7ee7447852fbdf374bf238

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfos -Dtests.seed=AB2D4B80F735CF17 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro-RO 
-Dtests.timezone=Asia/Atyrau -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosVersion -Dtests.seed=AB2D4B80F735CF17 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro-RO 
-Dtests.timezone=Asia/Atyrau -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosData -Dtests.seed=AB2D4B80F735CF17 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro-RO 
-Dtests.timezone=Asia/Atyrau -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestHdfsCloudBackupRestore 
-Dtests.seed=AB2D4B80F735CF17 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=en-AU -Dtests.timezone=Africa/Addis_Ababa -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
09db13f4f459a391896db2a90b2830f9b1fd898d
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout d9ef3a3d02870af55d7ee7447852fbdf374bf238

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro]   TestHdfsCloudBackupRestore
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SegmentsInfoRequestHandlerTest|*.TestHdfsCloudBackupRestore" 
-Dtests.showOutput=onerror  -Dtests.seed=AB2D4B80F735CF17 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ro-RO -Dtests.timezone=Asia/Atyrau 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 835 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror  
-Dtests.seed=AB2D4B80F735CF17 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ro-RO -Dtests.timezone=Asia/Atyrau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 771 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SegmentsInfoRequestHandlerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SegmentsInfoRequestHandlerTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro-RO 
-Dtests.timezone=Asia/Atyrau -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 772 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest
[repro] git checkout 09db13f4f459a391896db2a90b2830f9b1fd898d

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440943#comment-16440943
 ] 

ASF subversion and git services commented on LUCENE-8253:
-

Commit d904112428184ce9c1726313add5d184f4014a72 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d904112 ]

LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for every incoming SegmentCommitInfo, even if that info wasn't private
to the IndexWriter. This is an illegal use of the API, but since it's
transitively public via MergePolicy#findMerges we have to be conservative
about registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means that for soft deletes we need to
react earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge,
we now check on flush whether segments are fully deleted, which is very
unlikely, and the check can be done lazily, i.e. the extra cost of opening a
reader and checking all soft deletes is only paid if soft deletes are used and
present in the flushed segment.

This has the side effect that flushed segments that are 100% hard deleted are
also cleaned up right after they are flushed; previously these segments stuck
around for a while until they were picked for a merge or received another
delete.

This also closes LUCENE-8256.








[jira] [Commented] (LUCENE-8256) MP does not drop fully soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440945#comment-16440945
 ] 

ASF subversion and git services commented on LUCENE-8256:
-

Commit d904112428184ce9c1726313add5d184f4014a72 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d904112 ]

LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for every incoming SegmentCommitInfo, even if that info wasn't private
to the IndexWriter. This is an illegal use of the API, but since it's
transitively public via MergePolicy#findMerges we have to be conservative
about registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means that for soft deletes we need to
react earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge,
we now check on flush whether segments are fully deleted, which is very
unlikely, and the check can be done lazily, i.e. the extra cost of opening a
reader and checking all soft deletes is only paid if soft deletes are used and
present in the flushed segment.

This has the side effect that flushed segments that are 100% hard deleted are
also cleaned up right after they are flushed; previously these segments stuck
around for a while until they were picked for a merge or received another
delete.

This also closes LUCENE-8256.








[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440944#comment-16440944
 ] 

ASF subversion and git services commented on LUCENE-8253:
-

Commit d904112428184ce9c1726313add5d184f4014a72 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d904112 ]

LUCENE-8253: Don't create ReadersAndUpdates for foreign segments

IndexWriter#numDeletesToMerge was creating a ReadersAndUpdates
for every incoming SegmentCommitInfo, even if that info wasn't private
to the IndexWriter. This is an illegal use of the API, but since it's
transitively public via MergePolicy#findMerges we have to be conservative
about registering ReadersAndUpdates. In IndexWriter#numDeletesToMerge we
can only use existing ones. This means that for soft deletes we need to
react earlier in order to produce accurate numbers.

This change partially rolls back the changes in LUCENE-8253. Instead of
registering the readers once they are pulled via IndexWriter#numDeletesToMerge,
we now check on flush whether segments are fully deleted, which is very
unlikely, and the check can be done lazily, i.e. the extra cost of opening a
reader and checking all soft deletes is only paid if soft deletes are used and
present in the flushed segment.

This has the side effect that flushed segments that are 100% hard deleted are
also cleaned up right after they are flushed; previously these segments stuck
around for a while until they were picked for a merge or received another
delete.

This also closes LUCENE-8256.








[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 1738 - Failure!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1738/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 15294 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp/junit4-J0-20180417_135533_4464255050105340774419.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps/java_pid29360.hprof ...
   [junit4] Heap dump file created [343853191 bytes in 0.544 secs]
   [junit4] <<< JVM J0: EOF 

[...truncated 9412 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:633: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:585: Some of the tests 
produced a heap dump, but did not fail. Maybe a suppressed OutOfMemoryError? 
Dumps created:
* java_pid29360.hprof

Total time: 70 minutes 22 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2


[JENKINS] Lucene-Solr-Tests-7.x - Build # 567 - Still Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/567/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([A2BDA1E5C8E2819F:9B3318A5E71D4861]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:109)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:299)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12546 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> 141564 INFO  
(SUITE-IndexSizeTriggerTest-seed#[A2BDA1E5C8E2819F]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-12203) Error in response for field containing date. Unexpected state.

2018-04-17 Thread Jeroen Steggink (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440912#comment-16440912
 ] 

Jeroen Steggink commented on SOLR-12203:


Erick, I apologize for not raising this on the user's list first.

I indeed changed the schema, but only the luceneMatchVersion: it was still on 
6.5.2 and I changed it to 7.1.0. I didn't change this field type.

> Error in response for field containing date. Unexpected state.
> --
>
> Key: SOLR-12203
> URL: https://issues.apache.org/jira/browse/SOLR-12203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: 7.2.1, 7.3
>Reporter: Jeroen Steggink
>Priority: Minor
>
> I get the following error:
> {noformat}
> java.lang.AssertionError: Unexpected state. Field: 
> stored,indexed,tokenized,omitNorms,indexOptions=DOCS 
> ds_lastModified:2013-10-04T22:25:11Z
> at org.apache.solr.schema.DatePointField.toObject(DatePointField.java:154)
> at org.apache.solr.schema.PointField.write(PointField.java:198)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:141)
> at 
> org.apache.solr.response.JSONWriter.writeSolrDocument(JSONResponseWriter.java:374)
> at 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:748){noformat}
> I can't find out why this occurs. The weird thing is, I can't seem to find 
> this field (ds_lastModified) in the schema. I tried looking it up 

[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440837#comment-16440837
 ] 

ASF subversion and git services commented on SOLR-12187:


Commit 174c11f2c49314160ba7e48dc5d796c3ceff8256 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=174c11f ]

SOLR-12187: Replica should watch clusterstate and unload itself if its entry is 
removed


> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase 
> in the number of issues related to race conditions between deleting a replica 
> and other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and can become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica because the node is not live)
>  * The node starts and the replica gets loaded
>  * DELETECOREOP has not been processed yet, hence the replica is still 
> present in clusterstate --> passes checkStateInZk
>  * DELETECOREOP is executed, DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself 
> as ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and it does not join leader election
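The watch-and-unload behavior described above can be illustrated with a minimal, self-contained sketch. This is plain Java, not Solr's actual implementation; the class and method names are hypothetical, and clusterstate is reduced to a set of replica names:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// When an observed clusterstate snapshot no longer contains this replica's
// entry, the watcher unloads the core, so a forcefully-removed replica
// cannot keep serving requests or become a leader.
class ReplicaWatcher {
    private final String replicaName;
    private boolean loaded = true;

    ReplicaWatcher(String replicaName) {
        this.replicaName = replicaName;
    }

    // Invoked for every observed clusterstate change (in real Solr this
    // would be driven by a ZooKeeper watch).
    void onClusterStateChanged(Set<String> replicasInState) {
        if (loaded && !replicasInState.contains(replicaName)) {
            loaded = false; // stand-in for unloading the local core
        }
    }

    boolean isLoaded() {
        return loaded;
    }
}

public class ReplicaWatcherDemo {
    public static void main(String[] args) {
        ReplicaWatcher watcher = new ReplicaWatcher("core_node2");
        watcher.onClusterStateChanged(
            new HashSet<>(Arrays.asList("core_node1", "core_node2")));
        System.out.println(watcher.isLoaded()); // true: entry still present

        // DeleteReplicaCmd removes the entry; the watcher unloads itself.
        watcher.onClusterStateChanged(
            new HashSet<>(Arrays.asList("core_node1")));
        System.out.println(watcher.isLoaded()); // false
    }
}
```

Driving the unload from the replica's own watch closes both race windows above: no matter whether the UNLOAD request was lost or the core loaded before DELETECOREOP ran, the replica reacts once its entry disappears.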



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440835#comment-16440835
 ] 

ASF subversion and git services commented on SOLR-12187:


Commit 09db13f4f459a391896db2a90b2830f9b1fd898d in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=09db13f ]

SOLR-12187: Replica should watch clusterstate and unload itself if its entry is 
removed


> Replica should watch clusterstate and unload itself if its entry is removed
> ---
>
> Key: SOLR-12187
> URL: https://issues.apache.org/jira/browse/SOLR-12187
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, 
> SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch
>
>
> With the introduction of the autoscaling framework, we have seen an increase 
> in the number of issues related to race conditions between deleting a replica 
> and other operations.
> Case 1: DeleteReplicaCmd fails to send an UNLOAD request to a replica and 
> therefore forcefully removes its entry from clusterstate, but the replica 
> still functions normally and can become a leader -> SOLR-12176
> Case 2:
>  * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the 
> replica because the node is not live)
>  * The node starts and the replica gets loaded
>  * DELETECOREOP has not been processed yet, hence the replica is still 
> present in clusterstate --> passes checkStateInZk
>  * DELETECOREOP is executed, DeleteReplicaCmd finishes
>  ** result 1: the replica starts recovering, finishes, and publishes itself 
> as ACTIVE --> state of the replica is ACTIVE
>  ** result 2: the replica throws an exception (probably an NPE) 
> --> state of the replica is DOWN and it does not join leader election






[GitHub] lucene-solr pull request #354: Change the method identifier from "getShardNa...

2018-04-17 Thread BruceKuiLiu
GitHub user BruceKuiLiu opened a pull request:

https://github.com/apache/lucene-solr/pull/354

Change the method identifier from "getShardNames" to "addShardNames".

The method appends "sliceName" entries to "shardNames", so the name 
"addShardNames" fits better than "getShardNames", since a "get" prefix implies 
returning something.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/BruceKuiLiu/lucene-solr getShardNames

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/354.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #354


commit 5f1da7b29f6171a6f83133443c461b396ff84803
Author: Kui LIU 
Date:   2018-04-17T13:02:10Z

Change the method identifier from "getShardNames" to "addShardNames".

The method appends "sliceName" entries to "shardNames", so the name 
"addShardNames" fits better than "getShardNames", since a "get" prefix implies 
returning something.
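For context, the method in question roughly follows this shape (a simplified reconstruction, not the exact Solr source): it fills a caller-supplied output list rather than returning anything, which is why an "add" prefix reads more accurately than "get".

```java
import java.util.ArrayList;
import java.util.List;

public class ShardNamesDemo {
    // Appends generated slice names ("shard1", "shard2", ...) to the
    // caller-supplied list. Nothing is returned, so a name starting
    // with "get" misdescribes what the method does.
    static void addShardNames(int numShards, List<String> shardNames) {
        for (int i = 1; i <= numShards; i++) {
            shardNames.add("shard" + i);
        }
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        addShardNames(3, names);
        System.out.println(names); // [shard1, shard2, shard3]
    }
}
```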




---




[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1533 - Unstable

2018-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1533/

4 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
at 
__randomizedtesting.SeedInfo.seed([44BEF462524B4CCB:2775C2E0CB843FE6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfos

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([44BEF462524B4CCB:8E963DD9847949B]:0)
at 

[jira] [Assigned] (LUCENE-8256) MP does not drop fully soft-deleted segments

2018-04-17 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-8256:
---

Assignee: Simon Willnauer

> MP does not drop fully soft-deleted segments
> 
>
> Key: LUCENE-8256
> URL: https://issues.apache.org/jira/browse/LUCENE-8256
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: test-drop-segment.patch
>
>
> Fully soft-deleted segments should be dropped, just like fully hard-deleted 
> segments, if softDeletesField is provided and the merge policy is configured 
> not to retain fully deleted segments.
> A failing test is attached.
> /cc [~simonw]
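The condition under test reduces to a simple count check. As a plain-Java illustration (not Lucene's MergePolicy code; the names are made up): a segment whose live-doc count reaches zero once soft deletes are counted alongside hard deletes should be dropped whenever the policy is configured not to keep fully deleted segments.

```java
public class SoftDeleteDropCheck {
    // A segment is droppable when no live documents remain after counting
    // both hard- and soft-deleted docs, and the merge policy was told not
    // to retain fully deleted segments.
    static boolean shouldDrop(int maxDoc, int hardDeletes, int softDeletes,
                              boolean keepFullyDeletedSegments) {
        int liveDocs = maxDoc - hardDeletes - softDeletes;
        return liveDocs == 0 && !keepFullyDeletedSegments;
    }

    public static void main(String[] args) {
        // 10 docs, 4 hard-deleted, 6 soft-deleted: fully deleted overall.
        System.out.println(shouldDrop(10, 4, 6, false)); // true
        // Hard deletes alone do not empty the segment here.
        System.out.println(shouldDrop(10, 4, 0, false)); // false
    }
}
```

The bug report says the merge policy applies this check for hard deletes only; counting soft deletes into the same total is what makes the attached test pass.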






[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440781#comment-16440781
 ] 

Jan Høydahl commented on SOLR-9272:
---

So sorry for not answering your gentle requests before, thanks for nudging! 
I'll give the latest patch a new look soon!

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} when Solr is already 
> running. We can optionally accept the {{-p}} parameter instead and use 
> StatusTool to fetch the {{cloud/ZooKeeper}} property from the running node. 
> It's easier to remember the Solr port than the ZooKeeper connection string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}
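The lookup this issue proposes amounts to: ask the running Solr for its status, then read the cloud/ZooKeeper property out of the response so {{-z}} can be inferred from {{-p}}. A minimal sketch of the parsing half (a hypothetical helper, not the actual patch; the real code goes through StatusTool, and the JSON key name is assumed here):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ZkHostFromStatus {
    // Pulls the "ZooKeeper" value out of a status payload such as
    // {"cloud":{"ZooKeeper":"localhost:2181/solr"}, ...}. A real
    // implementation would use a JSON parser; a regex keeps this
    // sketch dependency-free.
    static String extractZkHost(String statusJson) {
        Matcher m = Pattern.compile("\"ZooKeeper\"\\s*:\\s*\"([^\"]+)\"")
                           .matcher(statusJson);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String status = "{\"cloud\":{\"ZooKeeper\":\"localhost:2181/solr\"}}";
        System.out.println(extractZkHost(status)); // localhost:2181/solr
    }
}
```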






[jira] [Updated] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node using policy

2018-04-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11990:
-
Description: 
It is necessary to co-locate replicas of different collections on the same 
node when cross-collection joins are performed. The policy rules framework 
should support this use-case.

Example: Co-locate exactly 1 replica of collection A in each node where a 
replica of collection B is present.
{code}
{"replica":">0", "collection":"A", "shard":"#EACH", "withCollection":"B"}
{code}

This requires changing create collection, create shard and add replica APIs as 
well because we want a replica of collection A to be created first before a 
replica of collection B is created so that join queries etc are always possible.

  was:
It is necessary to co-locate replicas of different collection together in a 
node when cross-collection joins are performed. The policy rules framework 
should support this use-case.

Example: Co-locate exactly 1 replica of collection A in each node where a 
replica of collection B is present.
{code}
{"replica":">1", "collection":"A", "shard":"#EACH", "withCollection":"B"}
{code}

This requires changing create collection, create shard and add replica APIs as 
well because we want a replica of collection A to be created first before a 
replica of collection B is created so that join queries etc are always possible.


> Make it possible to co-locate replicas of multiple collections together in a 
> node using policy
> --
>
> Key: SOLR-11990
> URL: https://issues.apache.org/jira/browse/SOLR-11990
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11990.patch, SOLR-11990.patch
>
>
> It is necessary to co-locate replicas of different collections on the same 
> node when cross-collection joins are performed. The policy rules framework 
> should support this use-case.
> Example: Co-locate exactly 1 replica of collection A in each node where a 
> replica of collection B is present.
> {code}
> {"replica":">0", "collection":"A", "shard":"#EACH", "withCollection":"B"}
> {code}
> This requires changing create collection, create shard and add replica APIs 
> as well because we want a replica of collection A to be created first before 
> a replica of collection B is created so that join queries etc are always 
> possible.
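The rule above can be read as an invariant over the cluster layout. A plain-Java sketch of the ">0 replicas" reading (hypothetical names, not the policy engine itself): every node hosting a replica of collection B must also host at least one replica of collection A.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class ColocationCheck {
    // Checks the invariant encoded by
    // {"replica":">0", "collection":"A", "shard":"#EACH", "withCollection":"B"}:
    // on every node holding a replica of B, at least one replica of A
    // must also be present. Each map value lists the collections with a
    // replica on that node.
    static boolean satisfied(Map<String, List<String>> collectionsPerNode,
                             String a, String b) {
        for (List<String> colls : collectionsPerNode.values()) {
            if (colls.contains(b) && !colls.contains(a)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, List<String>> layout = Map.of(
            "node1", Arrays.asList("A", "B"),
            "node2", Arrays.asList("B"));
        System.out.println(satisfied(layout, "A", "B")); // false: node2 has B without A
    }
}
```

This is also why the issue touches the create collection / create shard / add replica APIs: placing the A replica first keeps the invariant true at every step, so join queries never hit a node with B but no A.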






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 1737 - Still Unstable!

2018-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1737/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfos

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E01F94A4D778E442:AC48031B1D743C12]:0)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.noDups(IndexWriter.java:867)
at 
org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:857)
at 
org.apache.lucene.index.IndexWriter.numDeletesToMerge(IndexWriter.java:5235)
at 
org.apache.lucene.index.LogMergePolicy.sizeDocs(LogMergePolicy.java:153)
at 
org.apache.lucene.index.LogDocMergePolicy.size(LogDocMergePolicy.java:44)
at 
org.apache.lucene.index.LogMergePolicy.findMerges(LogMergePolicy.java:469)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandler.getMergeCandidatesNames(SegmentsInfoRequestHandler.java:100)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandler.getSegmentsInfo(SegmentsInfoRequestHandler.java:59)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandler.handleRequestBody(SegmentsInfoRequestHandler.java:48)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2508)
at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:890)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfos(SegmentsInfoRequestHandlerTest.java:61)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440745#comment-16440745
 ] 

ASF subversion and git services commented on LUCENE-8253:
-

Commit 94adf9d2ff42cc4133354f7ab09ed32c496250b9 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=94adf9d ]

LUCENE-8253: Mute test while a fix is worked on


> ForceMergeDeletes does not merge soft-deleted segments
> --
>
> Key: LUCENE-8253
> URL: https://issues.apache.org/jira/browse/LUCENE-8253
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8253.patch, test-merge.patch
>
>
> IndexWriter#forceMergeDeletes should merge segments containing soft-deleted 
> documents the same way it merges hard-deleted documents when 
> "softDeletesField" is configured in the IndexWriterConfig.
> Attached is a failing test.






[jira] [Commented] (LUCENE-8253) ForceMergeDeletes does not merge soft-deleted segments

2018-04-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440746#comment-16440746
 ] 

ASF subversion and git services commented on LUCENE-8253:
-

Commit f7f12a51f313bf406f0fa3d48e74864268338c6d in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f7f12a5 ]

LUCENE-8253: Mute test while a fix is worked on


> ForceMergeDeletes does not merge soft-deleted segments
> --
>
> Key: LUCENE-8253
> URL: https://issues.apache.org/jira/browse/LUCENE-8253
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8253.patch, test-merge.patch
>
>
> IndexWriter#forceMergeDeletes should merge segments containing soft-deleted 
> documents the same way it merges hard-deleted documents when 
> "softDeletesField" is configured in the IndexWriterConfig.
> Attached is a failing test.





