[jira] [Updated] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint

2019-08-07 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13674:
-
Summary: NodeAddedTrigger does not support configuration of replica type 
hint  (was: NodeAddedTrigger does not support configuration of relica type hint)

> NodeAddedTrigger does not support configuration of replica type hint
> 
>
> Key: SOLR-13674
> URL: https://issues.apache.org/jira/browse/SOLR-13674
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Reporter: Irena Shaigorodsky
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The current code in 
> org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester 
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s 
> that are recycled periodically. An attempt to add those brings the nodes into 
> the cluster as NRT replicas.
> The root cause is 
> org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
>  which expects to find the REPLICATYPE hint and defaults to NRT.
>  
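A minimal sketch (helper class and method names invented; the SolrJ hint and replica-type names are taken from the description above) of passing a replica-type hint alongside COLL_SHARD so the suggester no longer defaults to NRT:

{code}
import org.apache.solr.client.solrj.cloud.autoscaling.Suggester;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.util.Pair;

// Sketch only, not the actual patch: add REPLICATYPE next to the existing COLL_SHARD hint.
final class ReplicaTypeHintSketch {
  static Suggester withHints(Suggester suggester, String collection, String shard) {
    return suggester
        .hint(Suggester.Hint.COLL_SHARD, new Pair<>(collection, shard))
        // without this hint, AddReplicaSuggester#tryEachNode falls back to NRT
        .hint(Suggester.Hint.REPLICATYPE, Replica.Type.PULL);
  }
}
{code}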



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2019-08-07 Thread Anindita Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902694#comment-16902694
 ] 

Anindita Gupta commented on SOLR-12801:
---

Hello [~markrmil...@gmail.com]

In FacetStream.java, the open() method uses a socket timeout of 30 seconds and a 
connection timeout of 15 seconds; these were added in version 7.7 onward, are 
hard-coded, and are not configurable. 
These values are very low when dealing with a large number of documents in a 
complex streaming facet query: if we try to retrieve a large number of records, 
or the starting offset value is high, a timeout exception occurs while waiting 
for the response from the server.

How can we deal with this issue?
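
For illustration only (this is not how FacetStream currently builds its client, and the URL is a placeholder): SolrJ's HttpSolrClient.Builder shows what configurable values for these two timeouts would look like.

{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;

// Sketch: SolrJ exposes both timeouts on the client builder; FacetStream.open()
// currently hard-codes the equivalent values internally.
public class TimeoutSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/gettingstarted")
        .withConnectionTimeout(15 * 1000)  // ms; the hard-coded connection timeout
        .withSocketTimeout(30 * 1000)      // ms; the hard-coded socket timeout
        .build();
    client.close();  // no query issued here; the point is only the configuration
  }
}
{code}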

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 174 - Still unstable

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/174/

1 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<257> but was:<316>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 
expected:<257> but was:<316>
at 
__randomizedtesting.SeedInfo.seed([AE04B5C9BA6E9A4:82B47486355A845C]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:1002)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:794)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[GitHub] [lucene-solr] chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness

2019-08-07 Thread GitBox
chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519363352
 
 
   I find that Lucene has a tool called IndexUpgrader. Maybe upgrading the index is 
a better solution for backward compatibility?
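
   A rough sketch of invoking that tool programmatically (the index path comes from the command line; illustrative only, not a recommendation for this PR):

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.FSDirectory;

// Rewrites every segment of the index at the given path into the current Lucene format.
public class UpgradeIndex {
  public static void main(String[] args) throws Exception {
    new IndexUpgrader(FSDirectory.open(Paths.get(args[0]))).upgrade();
  }
}
{code}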


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1923 - Still Failing

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1923/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the 
server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at 
org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

[JENKINS] Lucene-Solr-8.2-Windows (64bit/jdk-11.0.3) - Build # 165 - Unstable!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Windows/165/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseParallelGC

18 tests failed.
FAILED:  
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLException: Software caused connection abort: recv failed
at 
__randomizedtesting.SeedInfo.seed([57A4849E871E618E:7505E0E3227207CB]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259)
at 
java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1314)
at 
java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:839)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence(TestModelManagerPersistence.java:168)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 

[GitHub] [lucene-solr] chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness

2019-08-07 Thread GitBox
chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519345358
 
 
   @dsmiley for the backward-compatibility problem, is there something like 
"migration" in Lucene?  When I develop web projects, I find that the migration 
idea benefits me a lot when the schema changes. 
   
   Does the indexed data contain the Lucene version that created the index? 
If it does, we can use the old code to search the old indexed data.
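
   A sketch answering the version question above: segment metadata does record the Lucene version that wrote it. (Method names here are per recent Lucene releases and worth double-checking against the targeted version.)

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Prints the Lucene versions recorded in the latest commit of an index.
public class IndexVersionPeek {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
      System.out.println("commit written by: " + infos.getCommitLuceneVersion());
      System.out.println("oldest segment written by: " + infos.getMinSegmentLuceneVersion());
    }
  }
}
{code}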


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness

2019-08-07 Thread GitBox
chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519342718
 
 
   yes, I ran the spatial-extras tests.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13682) command line option to export data to a file

2019-08-07 Thread Noble Paul (JIRA)
Noble Paul created SOLR-13682:
-

 Summary: command line option to export data to a file
 Key: SOLR-13682
 URL: https://issues.apache.org/jira/browse/SOLR-13682
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


example 
{code}
bin/solr export --url http://localhost:8983/solr/gettingstarted
{code}
This will export all the docs in a collection called {{gettingstarted}} into a 
file called {{gettingstarted.javabin}}.

Additional options are:
* format: jsonl or javabin 
* file: export file name (if this starts with "http://" the output will be 
piped to that URL; this can be used to pipe docs to another cluster)
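
For example, a hypothetical invocation combining the options above (flag names are assumed to match the option names; the final syntax may differ):
{code}
bin/solr export --url http://localhost:8983/solr/gettingstarted --format jsonl --file /tmp/gettingstarted.jsonl
{code}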



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-07 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902561#comment-16902561
 ] 

Hoss Man commented on SOLR-9658:


* I don't see anything that updates {{oldestEntryNs}} except 
{{markAndSweepByIdleTime}} ?
 ** this means that {{markAndSweep()}} may unnecessarily call 
{{markAndSweepByIdleTime()}} (looping over every entry) even if everything 
older than the maxIdleTime has already been purged by earlier method calls like 
{{markAndSweepByCacheSize()}} or {{markAndSweepByRamSize()}}
 ** off the top of my head, I can't think of an efficient way to "update" 
{{oldestEntryNs}} in some place like {{postRemoveEntry()}} w/o scanning every 
cache entry again, but...
 ** why not move {{markAndSweepByIdleTime()}} _before_ 
{{markAndSweepByCacheSize()}} and {{markAndSweepByRamSize()}} ?
 *** since the {{postRemoveEntry()}} calls made as a result of any eviction due 
to idle time *can* (and already do) efficiently update the results of 
{{size()}} and {{ramBytesUsed()}}, that could potentially save the need for 
those additional scans of the cache in many situations.

 * rather than complicating the patch by changing the constructor of the 
{{CleanupThread}} class(es) to take in the maxIdle values directly, why not 
read that info from a (new) method on the ConcurrentXXXCache objects already 
passed to the constructors?
 ** with some small tweaks to the while loop, the {{wait()}} call could actually 
read this value dynamically from the cache element, eliminating the need to 
call {{setRunCleanupThread()}} from inside {{setMaxIdleTime()}} in the event 
that the value is changed dynamically.
 *** which is currently broken anyway since {{setRunCleanupThread()}} is 
currently a no-op if {{this.runCleanupThread}} is true and {{cleanupThread}} is 
already non-null.
 ** assuming {{CleanupThread}} is changed to dynamically read the maxIdleTime 
directly from the cache, {{setMaxIdleTime()}} could just call {{wakeThread()}} 
if the new maxIdleTime is less than the old maxIdleTime
 *** or leave the call to {{setRunCleanupThread()}} as is, but change the {{if 
(cleanupThread == null)}} condition of {{setRunCleanupThread()}} to have an 
"else" code path that calls {{wakeThread()}} so it will call {{markAndSweep()}} 
(with the updated settings) and then re-wait (with the new maxIdleTime)

 * although not likely to be problematic in practice, you've broken backcompat 
on the public "ConcurrentXXXCache" class(es) by adding an arg to the 
constructor.
 ** I would suggest adding a new constructor instead, and making the old one 
call the new one with "-1" – if for no other reason than to simplify the touch 
points / discussion in the patch...
 ** ie: in order to make this change, you had to modify both 
{{TestJavaBinCodec}} and {{TemplateUpdateProcessorFactory}} – but you wound up 
not using a backcompat-equivalent value in {{TemplateUpdateProcessorFactory}}, 
so your changes actually modify the behavior of that (end-user-facing class) in 
an undocumented way (that users can't override, and may actually have some 
noticeable performance impact on "put" since that existing usage doesn't 
involve the cleanup thread) which should be discussed before committing (but 
is largely unrelated to the goals in this jira)

 * under no circumstances should we be committing new test code that makes 
arbitrary {{Thread.sleep(5000)}} calls
 ** I am willing to say categorically that this approach: DOES. NOT. WORK. – 
and it has represented an overwhelming percentage of the root causes of our 
tests being unreliable

 *** there is no guarantee the JVM will sleep as long as you ask it to 
(particularly on virtual hardware)
 *** there is no guarantee that "background threads/logic" will be 
scheduled/finished during the "sleep"
 ** it is far better to add whatever {{@lucene.internal}} methods we need to 
"hook into" the core code from test code and have white-box / grey-box tests 
that ensure methods get called when we expect, ex:
 *** if we want to test that the user level configuration results in the 
appropriate values being set on the underlying objects, we should add public 
getter methods for those values to those classes, and have the test reach into 
the SolrCore to get those objects and assert the expected results on those 
methods (NOT just "wait" to see the code run and have the expected side effects)
 *** if we want to test that {{ConcurrentXXXCache.markAndSweep()}} gets called 
by the {{CleanupThread}} _eventually_ when maxIdle time is configured even if 
nothing calls {{wakeThread()}} then we should use a mock/subclass of the 
ConcurrentXXXCache that overrides {{markAndSweep()}} to set a latch that we can 
{{await(...)}} on from the test code.
 *** if we want to test that calls to {{ConcurrentXXXCache.markAndSweep()}} 
result in items being removed if their {{createTime}} is "too old" then we 
should add a special internal only version of 

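A self-contained sketch of the latch-based approach suggested above; the cache class here is a stand-in, not the real ConcurrentXXXCache API.

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Stand-in cache: only the markAndSweep() hook matters for this illustration.
class FakeCache {
  void markAndSweep() { /* eviction work would happen here */ }
}

public class LatchInsteadOfSleep {
  public static void main(String[] args) throws InterruptedException {
    CountDownLatch swept = new CountDownLatch(1);
    FakeCache cache = new FakeCache() {
      @Override
      void markAndSweep() {
        super.markAndSweep();
        swept.countDown();  // signal the waiting "test" instead of hoping a sleep was long enough
      }
    };
    new Thread(cache::markAndSweep).start();  // stand-in for the CleanupThread
    // Wait deterministically, with a generous upper bound, instead of Thread.sleep(5000).
    System.out.println("markAndSweep observed: " + swept.await(30, TimeUnit.SECONDS));
  }
}
{code}
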
[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-13-ea+26) - Build # 8074 - Failure!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8074/
Java: 64bit/jdk-13-ea+26 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 7049 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\temp\junit4-J1-20190808_002851_19515244219493498460892.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  EXCEPTION_ACCESS_VIOLATION (0xc005) at 
pc=0x7fffb2576f32, pid=35188, tid=30440
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (13.0+26) (build 
13-ea+26)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (13-ea+26, mixed mode, tiered, 
g1 gc, windows-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [jvm.dll+0x546f32]
   [junit4] #
   [junit4] # No core dump will be written. Minidumps are not enabled by 
default on client versions of Windows
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J1\hs_err_pid35188.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J1\replay_pid35188.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 83 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
C:\Users\jenkins\tools\java\64bit\jdk-13-ea+26\bin\java.exe 
-XX:-UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps
 -ea -esa --illegal-access=deny -Dtests.prefix=tests 
-Dtests.seed=DD5DC5105A92F24D -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene 
-Dclover.db.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\clover\db
 
-Djava.security.policy=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\tests.policy
 -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J1
 
-Djunit4.tempDir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\temp
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dfile.encoding=ISO-8859-1 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 254 - Unstable!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/254/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testListenerAcceptance

Error Message:
Did not expect the processor to fire on first run! event={   
"id":"140678cb2Tasgcy3vn4xxuy70q2pc9qzcxr",   
"source":"node_added_trigger",   "eventTime":352311012986034,   
"eventType":"NODEADDED",   "properties":{ "eventTimes":[352311012986034],   
  "preferredOperation":"movereplica", "nodeNames":["127.0.0.1:45685_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! 
event={
  "id":"140678cb2Tasgcy3vn4xxuy70q2pc9qzcxr",
  "source":"node_added_trigger",
  "eventTime":352311012986034,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[352311012986034],
"preferredOperation":"movereplica",
"nodeNames":["127.0.0.1:45685_solr"]}}
at 
__randomizedtesting.SeedInfo.seed([52364E39BDCD0AC3:438214DE04441E35]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:50)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:186)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testListenerAcceptance(NodeAddedTriggerTest.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Issue Comment Deleted] (LUCENE-8755) QuadPrefixTree robustness: can throw exception while indexing a point at high precision

2019-08-07 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8755:
-
Comment: was deleted

(was: As I write this, there is strangely no automated linking here to the PR, 
so I will specify it: https://github.com/apache/lucene-solr/pull/824)

> QuadPrefixTree robustness: can throw exception while indexing a point at high 
> precision
> ---
>
> Key: LUCENE-8755
> URL: https://issues.apache.org/jira/browse/LUCENE-8755
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: senthil nathan
>Priority: Critical
> Attachments: LUCENE-8755.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When trying to index the document below with Apache Solr 7.5.0 I am getting a 
> java.lang.IndexOutOfBoundsException; this data causes the whole full 
> import to fail. I have also included my schema definition for reference. 
>  
> Data:
> [
> { "street_description":"SAMPLE_TEXT", "pao_start_number":6, 
> "x_coordinate":244502.06, "sao_text":"FIRST FLOOR", "logical_status":"1", 
> "street_record_type":1, "id":"AA60L12-ENG", 
> "street_description_str":"SAMPLE_TEXT", "lpi_logical_status":"1", 
> "administrative_area":"SAMPLE_TEXT & HOVE", "uprn":"8899889", 
> "town_name":"TEST TOWN", "street_description_full":"60 DEMO ", 
> "y_coordinate":639062.07, "postcode_locator":"AB1 1BB", "location":"244502.06 
> 639062.07" }
> ]
>  
> Configuration in managed-schema.xml
>  
> (The schema's XML markup did not survive in this archive. What remains shows a 
> spatial fieldType with geo="false", maxDistErr="0.09", 
> worldBounds="ENVELOPE(0,70,130,0)", distErrPct="0.15", followed by the 
> indexed/stored field declarations for the fields used in the document above.)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8755) QuadPrefixTree robustness: can throw exception while indexing a point at high precision

2019-08-07 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902540#comment-16902540
 ] 

David Smiley commented on LUCENE-8755:
--

As I write this, there is strangely no automated linking here to the PR, so I 
will specify it: https://github.com/apache/lucene-solr/pull/824

> QuadPrefixTree robustness: can throw exception while indexing a point at high 
> precision
> ---
>
> Key: LUCENE-8755
> URL: https://issues.apache.org/jira/browse/LUCENE-8755
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: senthil nathan
>Priority: Critical
> Attachments: LUCENE-8755.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When trying to index the document below with Apache Solr 7.5.0 I am getting a 
> java.lang.IndexOutOfBoundsException; this data causes the whole full 
> import to fail. I have also included my schema definition for reference. 
>  
> Data:
> [
> { "street_description":"SAMPLE_TEXT", "pao_start_number":6, 
> "x_coordinate":244502.06, "sao_text":"FIRST FLOOR", "logical_status":"1", 
> "street_record_type":1, "id":"AA60L12-ENG", 
> "street_description_str":"SAMPLE_TEXT", "lpi_logical_status":"1", 
> "administrative_area":"SAMPLE_TEXT & HOVE", "uprn":"8899889", 
> "town_name":"TEST TOWN", "street_description_full":"60 DEMO ", 
> "y_coordinate":639062.07, "postcode_locator":"AB1 1BB", "location":"244502.06 
> 639062.07" }
> ]
>  
> Configuration in managed-schema.xml
>  
> (The schema's XML markup did not survive in this archive. What remains shows a 
> spatial fieldType with geo="false", maxDistErr="0.09", 
> worldBounds="ENVELOPE(0,70,130,0)", distErrPct="0.15", followed by the 
> indexed/stored field declarations for the fields used in the document above.)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on issue #824: LUCENE-8755: QuadPrefixTree robustness

2019-08-07 Thread GitBox
dsmiley commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519303495
 
 
   Thanks for the PR!  Just curious; did you encounter this problem and thus 
were motivated to fix it?  
   
   I didn't spend much time on it tonight, but it looks pretty good, and it has 
a cleaner appearance that reads better to me.  You did remove a comment or two 
that are still appropriate, like the link to Z-order.  Did you run Lucene 
spatial-extras tests?  "ant precommit"?
   
   I suspect a tokenization change ought to be toggled by Version, similar to 
some other behavior changes in Lucene analyzers.  And that means finding a way 
for the old tokenization and the new one to co-exist, plus a means to 
communicate that choice -- probably via 
`org.apache.lucene.spatial.prefix.tree.SpatialPrefixTreeFactory#init`.  If we 
don't do this, then some use-cases with existing indexes may break; for example, 
an exact point lookup of itself should round-trip.
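
   A rough sketch of the Version-toggle idea (the class, method, and cutoff below are invented for illustration, not the actual factory API):

{code}
import org.apache.lucene.util.Version;

// The factory would choose between the old and new cell tokenization based on the
// configured luceneMatchVersion, so existing indexes keep their old encoding.
final class TokenizationToggle {
  static boolean useLegacyCellEncoding(Version matchVersion) {
    // hypothetical cutoff: anything created before the release carrying the fix
    return !matchVersion.onOrAfter(Version.LATEST);
  }
}
{code}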


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 983 - Unstable!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/983/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:44847/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:44847/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection]
at 
__randomizedtesting.SeedInfo.seed([6B21963273F8174C:196DB33DC298A13F]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.getNumFound(TestCloudJSONFacetSKG.java:669)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.verifySKGResults(TestCloudJSONFacetSKG.java:446)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:392)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:402)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:349)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom(TestCloudJSONFacetSKG.java:274)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-07 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Attachment: (was: SOLR-9658.patch)

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' secs. The 
> cache configuration can have an extra config {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set it to {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?
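
A hypothetical solrconfig.xml snippet of what such a setting could look like on an existing cache definition (attribute placement assumed, not taken from the attached patch):
{code}
<filterCache class="solr.FastLRUCache"
             size="512" initialSize="512" autowarmCount="0"
             maxIdleTime="600"/> <!-- hypothetical: evict entries untouched for 10 minutes -->
{code}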



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-07 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Attachment: SOLR-9658.patch

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' secs. The 
> cache configuration can have an extra config {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set it to {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins

2019-08-07 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-9658:

Attachment: SOLR-9658.patch

> Caches should have an optional way to clean if idle for 'x' mins
> 
>
> Key: SOLR-9658
> URL: https://issues.apache.org/jira/browse/SOLR-9658
> Project: Solr
>  Issue Type: New Feature
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-9658.patch, SOLR-9658.patch, SOLR-9658.patch
>
>
> If a cache is idle for long, it consumes precious memory. It should be 
> configurable to clear the cache if it was not accessed for 'x' secs. The 
> cache configuration can have an extra config {{maxIdleTime}}; if we wish it 
> to be cleaned after 10 mins of inactivity, set it to {{maxIdleTime=600}}. 
> [~dragonsinth] would it be a solution for the memory leak you mentioned?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-07 Thread Shawn Heisey

On 8/6/2019 5:17 PM, Jan Høydahl wrote:

Personally I think the ratio of notifications vs human emails is a bit too 
high. I fear external devs who just want to follow the project may get 
overwhelmed and unsubscribe.
One suggestion is therefore to add a new list where detailed JIRA comments and 
Github comments / reviews go. All committers should of course subscribe!
I saw the Zookeeper project have a notifications@ list for GitHub comments and 
issues@ for JIRA comments (Except the first [Created] email for a JIRA will 
also go to dev@)
The Maven project follows the same scheme and they also send Jenkins mails to 
the notifications@ list. The Cassandra project seems to divert all jira 
comments to the commits@ list.
The HBase project keeps only [Created]/[Resolved] mails on dev@, all other 
mail from Jira/GH on an issues@ list, and Jenkins mails on a separate builds@ list.

Is it time we did something similar? I propose a single new notifications@ list 
for everything JIRA, GitHub and Jenkins but keep [Created|Resolved] mails on 
dev@


If it weren't for server-side filtering on my mail server 
(sieve/dovecot), I'd probably have brought this up a LONG time ago. :) 
The filtering separates all those things out into their own folders so 
my "lucene-dev" folder shows only human-generated traffic for the most part.


I'm also subscribed to commits, and that goes to its own folder too.

+1 to this idea.  Having separate lists would mean my filters will be 
more reliable, and people who have an interest in dev discussions 
without tons of computer-generated junk can join us.


Here's my bikeshed paint:

issues@ for detailed Jira/GH activity.
builds@ for things like Jenkins.
I'm ambivalent about whether dev should get the created/resolved 
activity from Jira/GH.  I can see arguments both ways.


Thanks,
Shawn

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 261 - Failure!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/261/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 14365 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-core/test/temp/junit4-J0-20190807_194449_3106609388390366377581.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error (sharedRuntime.cpp:876), pid=56409, 
tid=0x0003a5cb
   [junit4] #  guarantee(nm != NULL) failed: must have containing nmethod for 
implicit division-by-zero exceptions
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_201-b09) (build 
1.8.0_201-b09)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.201-b09 mixed mode 
bsd-amd64 compressed oops)
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-core/test/J0/hs_err_pid56409.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 1664 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home/jre/bin/java 
-XX:+UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=D0EF4E202893C4B5 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=8.3.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/lucene 
-Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/lucene/build/clover/db
 
-Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=8.3.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-core/test/J0
 
-Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-core/test/temp
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true -Dtests.badapples=false 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

Re: Separate dev mailing list for automated mails?

2019-08-07 Thread Doug Turnbull
+1 - Just two days ago I created a filter to send [JENKINS] emails
elsewhere... I don't want to completely unsubscribe from Lucene development
emails, but the traffic here is a bit overwhelming and it's hard to see the
signal in the noise sometimes (high recall, low precision you might say!)

On Wed, Aug 7, 2019 at 5:27 PM Noble Paul  wrote:

> +1
>
> The mail list is sending so many mails that it has become difficult to
> catch up
>
> On Thu, Aug 8, 2019 at 12:26 AM Michael Sokolov 
> wrote:
> >
> > big +1 -- I'm also curious why the subject lines of many automated
> > emails (from Jira?) start with [CREATED] even though they are
> > generated by comments or other kinds of updates (not creating a new
> > issue). Overall, I think we have way too much comment spam. In
> > particular Github comments are so poorly formatted in email (at least
> > in gmail?) as to be almost unreadable - I think because they always
> > include the complete comment history. I wonder if there is a way to
> > neaten them up (especially the subject lines, so you can scan
> > quickly)?
> >
> > On Tue, Aug 6, 2019 at 7:17 PM Jan Høydahl 
> wrote:
> > >
> > > Hi
> > >
> > > The mail volume on dev@ is fairly high, between 2500-3500/month.
> > > To break down the numbers last month, see
> https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
> > >
> > > Top 10 participants:
> > > -GitBox: 420 emails
> > > -ASF subversion and git services (JIRA): 351 emails
> > > -Apache Jenkins Server: 261 emails
> > > -Policeman Jenkins Server: 234 emails
> > > -Munendra S N (JIRA): 134 emails
> > > -Joel Bernstein (JIRA): 84 emails
> > > -Tomoko Uchida (JIRA): 77 emails
> > > -Jan Høydahl (JIRA): 52 emails
> > > -Andrzej Bialecki (JIRA): 47 emails
> > > -Adrien Grand (JIRA): 46 emails
> > >
> > > I have especially noticed how every single GitHub PR review comment
> triggers its own email instead of one email per review session.
> > > Also, every commit/push triggers an email since a bot adds a comment
> to JIRA for it.
> > >
> > > Personally I think the ratio of notifications vs human emails is a bit
> too high. I fear external devs who just want to follow the project may get
> overwhelmed and unsubscribe.
> > > One suggestion is therefore to add a new list where detailed JIRA
> comments and Github comments / reviews go. All committers should of course
> subscribe!
> > > I saw the Zookeeper project have a notifications@ list for GitHub
> comments and issues@ for JIRA comments (Except the first [Created] email
> for a JIRA will also go to dev@)
> > > The Maven project follows the same scheme and they also send Jenkins
> mails to the notifications@ list. The Cassandra project seems to divert
> all jira comments to the commits@ list.
> > > The HBase project keeps only [Created]/[Resolved] mails on dev@
> and all other from Jira/GH on issues@ list and Jenkins mails on a
> separate builds@ list.
> > >
> > > Is it time we did something similar? I propose a single new
> notifications@ list for everything JIRA, GitHub and Jenkins but keep
> [Created|Resolved] mails on dev@
> > >
> > > --
> > > Jan Høydahl, search solution architect
> > > Cominvent AS - www.cominvent.com
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > > For additional commands, e-mail: dev-h...@lucene.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> --
> -
> Noble Paul
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-- 
*Doug Turnbull **| CTO* | OpenSource Connections
, LLC | 240.476.9983
Author: Relevant Search 
This e-mail and all contents, including attachments, is considered to be
Company Confidential unless explicitly stated otherwise, regardless
of whether attachments are marked as such.


Re: Separate dev mailing list for automated mails?

2019-08-07 Thread Noble Paul
+1

The mail list is sending so many mails that it has become difficult to catch up

On Thu, Aug 8, 2019 at 12:26 AM Michael Sokolov  wrote:
>
> big +1 -- I'm also curious why the subject lines of many automated
> emails (from Jira?) start with [CREATED] even though they are
> generated by comments or other kinds of updates (not creating a new
> issue). Overall, I think we have way too much comment spam. In
> particular Github comments are so poorly formatted in email (at least
> in gmail?) as to be almost unreadable - I think because they always
> include the complete comment history. I wonder if there is a way to
> neaten them up (especially the subject lines, so you can scan
> quickly)?
>
> On Tue, Aug 6, 2019 at 7:17 PM Jan Høydahl  wrote:
> >
> > Hi
> >
> > The mail volume on dev@ is fairly high, between 2500-3500/month.
> > To break down the numbers last month, see 
> > https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
> >
> > Top 10 participants:
> > -GitBox: 420 emails
> > -ASF subversion and git services (JIRA): 351 emails
> > -Apache Jenkins Server: 261 emails
> > -Policeman Jenkins Server: 234 emails
> > -Munendra S N (JIRA): 134 emails
> > -Joel Bernstein (JIRA): 84 emails
> > -Tomoko Uchida (JIRA): 77 emails
> > -Jan Høydahl (JIRA): 52 emails
> > -Andrzej Bialecki (JIRA): 47 emails
> > -Adrien Grand (JIRA): 46 emails
> >
> > I have especially noticed how every single GitHub PR review comment 
> > triggers its own email instead of one email per review session.
> > Also, every commit/push triggers an email since a bot adds a comment to 
> > JIRA for it.
> >
> > Personally I think the ratio of notifications vs human emails is a bit too 
> > high. I fear external devs who just want to follow the project may get 
> > overwhelmed and unsubscribe.
> > One suggestion is therefore to add a new list where detailed JIRA comments 
> > and Github comments / reviews go. All committers should of course subscribe!
> > I saw the Zookeeper project have a notifications@ list for GitHub comments 
> > and issues@ for JIRA comments (Except the first [Created] email for a JIRA 
> > will also go to dev@)
> > The Maven project follows the same scheme and they also send Jenkins mails 
> > to the notifications@ list. The Cassandra project seems to divert all jira 
> > comments to the commits@ list.
> > The HBase project keeps only [Created]/[Resolved] mails on dev@ and all 
> > other from Jira/GH on issues@ list and Jenkins mails on a separate builds@ 
> > list.
> >
> > Is it time we did something similar? I propose a single new notifications@ 
> > list for everything JIRA, GitHub and Jenkins but keep [Created|Resolved] 
> > mails on dev@
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


-- 
-
Noble Paul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3517 - Unstable

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3517/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttp2SolrClient.testTwoServers

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:38242/solr/collection1/select?q=*%3A*=javabin=2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:38242/solr/collection1/select?q=*%3A*=javabin=2
at 
__randomizedtesting.SeedInfo.seed([D63F3092CAA9B65A:76D59E1F11E2987A]:0)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:406)
at 
org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:746)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:605)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:581)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.client.solrj.TestLBHttp2SolrClient.testTwoServers(TestLBHttp2SolrClient.java:186)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

Merge multiple sorted indices

2019-08-07 Thread Aravind S (User Intent)
Hi,

We are currently trying to merge sorted indices in an offline process, and the
merging step is taking a long time. We tried using ConcurrentMergeScheduler
with the tiered merge policy.

We see that the maximum number of merge threads is capped at 4 in
ConcurrentMergeScheduler's setDefaultMaxMergesAndThreads method.

Is there a way to reduce the time spent merging these sorted indices? Is there
any recommendation for scaling the merge as the number of indices to be merged
grows?
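
(For context, here is a minimal sketch of the knob we are looking at:
ConcurrentMergeScheduler.setMaxMergesAndThreads raises the caps explicitly
instead of relying on setDefaultMaxMergesAndThreads. The directory paths and
the 12/8 values below are illustrative only, not what we actually run.)

import java.nio.file.Paths;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class OfflineMergeSketch {
  public static void main(String[] args) throws Exception {
    // Raise the merge concurrency caps (maxMergeCount must be >= maxThreadCount).
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    cms.setMaxMergesAndThreads(/* maxMergeCount */ 12, /* maxThreadCount */ 8);

    IndexWriterConfig iwc = new IndexWriterConfig()
        .setMergeScheduler(cms)
        .setMergePolicy(new TieredMergePolicy());
    // If the source indices were written with an index sort, set the same Sort
    // via iwc.setIndexSort(...) so the merged segments keep that order.

    try (Directory target = FSDirectory.open(Paths.get("merged-index"));
         Directory src1 = FSDirectory.open(Paths.get("index-part-1"));
         Directory src2 = FSDirectory.open(Paths.get("index-part-2"));
         IndexWriter writer = new IndexWriter(target, iwc)) {
      writer.addIndexes(src1, src2); // copies the source segments into the target
      writer.forceMerge(1);          // optional: collapse everything into one segment
    }
  }
}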

Regards,
Aravind S




[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1414 - Failure

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1414/

No tests ran.

Build Log:
[...truncated 24456 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2590 links (2119 relative) to 3409 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[jira] [Commented] (SOLR-13622) Add FileStream Streaming Expression

2019-08-07 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902342#comment-16902342
 ] 

Jason Gerlowski commented on SOLR-13622:


Sorry, that's my mistake.  Dumb mistake.  I'll fix it right away.

Joel, I'll do the rename while I'm at it.

> Add FileStream Streaming Expression
> ---
>
> Key: SOLR-13622
> URL: https://issues.apache.org/jira/browse/SOLR-13622
> Project: Solr
>  Issue Type: New Feature
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13622.patch, SOLR-13622.patch
>
>
> The FileStream will read files from a local filesystem and Stream back each 
> line of the file as a tuple.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-08-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902321#comment-16902321
 ] 

Christine Poerschke commented on SOLR-13240:


bq. ... Sorry for the delay in the reply ...

No worries at all, we all contribute here as and when time permits and 
inevitably that will vary. Thanks for returning to this and continuing!

bq. ... Your interpretation seems correct to me also ...

Thanks for the second opinion.

bq. ... _(without approaching it in a way of, lets just change it to what it's 
complaining about to make the test pass)_ ...

Indeed, that's very important. :-)

bq. ... What doesn't make sense to me ... If you spot something that I can't 
see then let me know. ...

bq. ... SOLR-13240 does not apply to master. Rebase required? Wrong Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
...

It seems that the latest patch is relative to a different branch (7.4?) and 
thus (in this particular case) Lucene/Solr QA cannot apply it. I'm wondering if 
the code and/or the tests subsequently changed in a way that would make the 
equivalent analysis or interpretation (and test adjustments) for the master 
branch easier?
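
(Side note for readers following along: the "Comparison method violates its
general contract!" error quoted below is java.util.TimSort detecting a
Comparator that is not a valid total order. A tiny generic illustration of the
failure mode and the usual fix, not the actual Solr autoscaling comparator
under discussion in this ticket:)

{code}
import java.util.Arrays;
import java.util.Comparator;

public class ComparatorContractDemo {
  public static void main(String[] args) {
    // Broken: never returns 0, so compare(x, y) and compare(y, x) can both be 1
    // for equal elements; TimSort may detect this and throw the
    // "Comparison method violates its general contract!" exception.
    Comparator<Double> broken = (a, b) -> a < b ? -1 : 1;

    // Safe: Double.compare defines a proper total order (equal values, NaN, -0.0).
    Comparator<Double> safe = Double::compare;

    Double[] values = {0.4, 0.1, 0.7, 0.1, 0.4};
    Arrays.sort(values, safe);       // fine
    // Arrays.sort(values, broken);  // may fail depending on input size and order
    System.out.println(Arrays.toString(values));
  }
}
{code}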

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> 

[jira] [Reopened] (SOLR-13622) Add FileStream Streaming Expression

2019-08-07 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-13622:
-

StreamExpressionTest.testFileStreamDirectoryCrawl seems to make filesystem-specific 
assumptions that fail hard on Windows.

{noformat}
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([92C40A8131F8CF7D:362DC46DFDF7A898]:0)
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl(StreamExpressionTest.java:3128)

{noformat}
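
(A plausible direction for the fix, sketched with placeholder directory names
since the real expected/actual values were stripped from the failure message
above: build the expected paths through java.nio.file rather than hard-coding
'/', so the assertion matches whatever separator the platform uses.)

{code}
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathAssumptionSketch {
  public static void main(String[] args) {
    // Hard-coding '/' gives "dir1/sub1", which will not match the
    // "dir1\sub1" a directory crawl reports on Windows.
    String hardCoded = "dir1" + "/" + "sub1";

    // Building the path with java.nio.file (or java.io.File.separator)
    // uses the separator of whatever platform the test runs on.
    Path portable = Paths.get("dir1", "sub1");

    System.out.println(hardCoded);
    System.out.println(portable);
  }
}
{code}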

> Add FileStream Streaming Expression
> ---
>
> Key: SOLR-13622
> URL: https://issues.apache.org/jira/browse/SOLR-13622
> Project: Solr
>  Issue Type: New Feature
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13622.patch, SOLR-13622.patch
>
>
> The FileStream will read files from a local filesystem and Stream back each 
> line of the file as a tuple.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



thetaphi jenkins RSS feed broken? (not showing up in fucit reports since July 24th)

2019-08-07 Thread Chris Hostetter



Uwe: I just realized that my jenkins reports haven't mentioned any 
failures from your jenkins box all week -- and that's because, apparently, 
even though the RSS feed is up to date and lists recent jobs, the 
URLs are all "wrong" and so the script can't get the test results & 
logs.

http://fucit.org/solr-jenkins-reports/


Note the "link" for this single entry currently in your RSS feed...

https://jenkins.thetaphi.de/view/Lucene-Solr/rssAll

 Lucene-Solr-master-Windows #8073 (9 more tests are failing (total 
16))
 https://jenkins.thetaphi.de/"/>
 tag:hudson.dev.java.net,2019:Lucene-Solr-master-Windows:8073
 
 
 Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops 
-XX:+UseConcMarkSweepGC


...every "entry" in your jenkin's feed has that exact same idential link 
href -- just for the base URL of your server, w/o the job specifics path.


Also note, if it helps track down the problem, that the published & 
updated dates are also blank (which I think my RSS crawler could handle 
by assuming 'now' if the 'link' was valid).


The problem seems to have started on July 24th (that's the last time my 
scripts reported seeing a "new" jenkins job in our feed -- with a valid 
'link')



For contrast, compare to what entries from the apache jenkins RSS feed 
look like ...


https://builds.apache.org/view/L/view/Lucene/rssAll

 Lucene-Solr-Tests-master #3516 (broken since build #3515)
 https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-Tests-master/3516/"/>
 tag:hudson.dev.java.net,2019:Lucene-Solr-Tests-master:3516
 2019-08-07T14:12:25Z
 2019-08-07T14:12:25Z




-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (32bit/jdk1.8.0_201) - Build # 386 - Still Unstable!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/386/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([101A97E527BD6DBA:B4F35909EBB20A5F]:0)
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl(StreamExpressionTest.java:3128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testFileStreamDirectoryCrawl

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but 
was:
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 3516 - Still Failing

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3516/

All tests passed

Build Log:
[...truncated 63958 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1645263943
 [ecj-lint] Compiling 1284 source files to /tmp/ecj1645263943
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 219)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 788)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 794)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 19)
 [ecj-lint] import javax.naming.Context;
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 20)
 [ecj-lint] import javax.naming.InitialContext;
 [ecj-lint]^^^
 [ecj-lint] The type javax.naming.InitialContext is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 21)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 22)
 [ecj-lint] import javax.naming.NoInitialContextException;
 [ecj-lint]^^
 [ecj-lint] The type javax.naming.NoInitialContextException is not accessible
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^^
 [ecj-lint] Context cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^
 [ecj-lint] InitialContext cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 779)
 [ecj-lint] } catch (NoInitialContextException e) {
 [ecj-lint]  ^
 [ecj-lint] NoInitialContextException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 781)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (SOLR-12230) Deprecate SortingMergePolicy

2019-08-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902182#comment-16902182
 ] 

Christine Poerschke commented on SOLR-12230:


ticket cross-referencing: there is overlap between SOLR-9108 and SOLR-12230 and 
SOLR-13681 tickets

> Deprecate SortingMergePolicy
> 
>
> Key: SOLR-12230
> URL: https://issues.apache.org/jira/browse/SOLR-12230
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-12230.patch
>
>
> The SortingMergePolicy should be deprecated since first class support is now 
> available (LUCENE-6766). The indexSort configuration can be accepted via the 
> solrconfig's indexConfig section directly, and SMP can throw a deprecation 
> warning through the 7x versions of Solr.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9108) Improve how index time sorting is configured

2019-08-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902181#comment-16902181
 ] 

Christine Poerschke commented on SOLR-9108:
---

ticket cross-referencing: there is overlap between SOLR-9108 and SOLR-12230 and 
SOLR-13681 tickets

> Improve how index time sorting is configured
> 
>
> Key: SOLR-9108
> URL: https://issues.apache.org/jira/browse/SOLR-9108
> Project: Solr
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Priority: Major
>
> Spinoff from LUCENE-6766.
> We used to have a {{SortingMergePolicy}} to configure index time sorting, but 
> with LUCENE-6766 you now set this on {{IndexWriterConfig}}.
> Solr had exposed index time sorting, so to preserve back-compat, I kept 
> {{SortingMergePolicy}} alive, moved to solr's sources, but use it simply as a 
> holder to pull the index sort from and pass to IWC.
> This preserves back compat, but I think it'd be cleaner going forward to just 
> allow index sort to be specified somewhere in {{solrconfig.xml}} wherever 
> other index writer settings are set?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13681) make Lucene's index sorting directly configurable in Solr

2019-08-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902180#comment-16902180
 ] 

Christine Poerschke commented on SOLR-13681:


ticket cross-referencing: there is overlap between SOLR-9108 and SOLR-12230 and 
SOLR-13681 tickets

(I've chosen to create SOLR-13681 as a separate ticket, as a potential way of making 
index sorting configurable _separately_ from any sorting merge policy related 
changes, but I have no immediate plans to work on this much further at this time.)

> make Lucene's index sorting directly configurable in Solr
> -
>
> Key: SOLR-13681
> URL: https://issues.apache.org/jira/browse/SOLR-13681
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13681.patch
>
>
> History/Background:
> * SOLR-5730 made Lucene's SortingMergePolicy and 
> EarlyTerminatingSortingCollector configurable in Solr 6.0 or later.
> * LUCENE-6766 make index sorting a first-class citizen in Lucene 6.2 or later.
> Current status:
> * In Solr 8.2 use of index sorting is only available via configuration of a 
> (top-level) merge policy that is a SortingMergePolicy and that policy's sort 
> is then passed to the index writer config via the 
> {code}
> if (mergePolicy instanceof SortingMergePolicy) {
>   Sort indexSort = ((SortingMergePolicy) mergePolicy).getSort();
>   iwc.setIndexSort(indexSort);
> }
> {code}
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L241-L244
>  code path.
> Proposed change:
> * in-scope for this ticket: To add direct support for index sorting 
> configuration in Solr.
> * out-of-scope for this ticket: deprecation and removal of SortingMergePolicy 
> support



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13681) make Lucene's index sorting directly configurable in Solr

2019-08-07 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902179#comment-16902179
 ] 

Christine Poerschke commented on SOLR-13681:


Attached initial partial patch is for an {{indexSort}} element within the 
{{indexConfig}} element in {{solrconfig.xml}} configuration.

Illustration:
{code}
<indexSort>
  timestamp desc
</indexSort>
{code}
Related Solr Ref Guide documentation:
 * [https://lucene.apache.org/solr/guide/8_1/indexconfig-in-solrconfig.html]
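
(For context: the underlying Lucene API that such an {{indexSort}} element would 
ultimately feed, introduced by LUCENE-6766, is IndexWriterConfig.setIndexSort. A 
minimal plain-Lucene sketch of the equivalent of "timestamp desc"; the field name 
comes from the illustration above, the directory path is illustrative only:)

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.store.FSDirectory;

public class IndexSortSketch {
  public static void main(String[] args) throws Exception {
    // "timestamp desc" as a Lucene Sort; the field must be indexed with
    // matching (here LONG) doc values for index sorting to apply.
    Sort indexSort = new Sort(new SortField("timestamp", SortField.Type.LONG, /* reverse */ true));

    IndexWriterConfig iwc = new IndexWriterConfig().setIndexSort(indexSort);

    try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("sorted-index")), iwc)) {
      // documents added here end up in segments sorted by timestamp, descending
    }
  }
}
{code}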

> make Lucene's index sorting directly configurable in Solr
> -
>
> Key: SOLR-13681
> URL: https://issues.apache.org/jira/browse/SOLR-13681
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13681.patch
>
>
> History/Background:
> * SOLR-5730 made Lucene's SortingMergePolicy and 
> EarlyTerminatingSortingCollector configurable in Solr 6.0 or later.
> * LUCENE-6766 make index sorting a first-class citizen in Lucene 6.2 or later.
> Current status:
> * In Solr 8.2 use of index sorting is only available via configuration of a 
> (top-level) merge policy that is a SortingMergePolicy and that policy's sort 
> is then passed to the index writer config via the 
> {code}
> if (mergePolicy instanceof SortingMergePolicy) {
>   Sort indexSort = ((SortingMergePolicy) mergePolicy).getSort();
>   iwc.setIndexSort(indexSort);
> }
> {code}
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L241-L244
>  code path.
> Proposed change:
> * in-scope for this ticket: To add direct support for index sorting 
> configuration in Solr.
> * out-of-scope for this ticket: deprecation and removal of SortingMergePolicy 
> support



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13681) make Lucene's index sorting directly configurable in Solr

2019-08-07 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13681:
---
Attachment: SOLR-13681.patch

> make Lucene's index sorting directly configurable in Solr
> -
>
> Key: SOLR-13681
> URL: https://issues.apache.org/jira/browse/SOLR-13681
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13681.patch
>
>
> History/Background:
> * SOLR-5730 made Lucene's SortingMergePolicy and 
> EarlyTerminatingSortingCollector configurable in Solr 6.0 or later.
> * LUCENE-6766 make index sorting a first-class citizen in Lucene 6.2 or later.
> Current status:
> * In Solr 8.2 use of index sorting is only available via configuration of a 
> (top-level) merge policy that is a SortingMergePolicy and that policy's sort 
> is then passed to the index writer config via the 
> {code}
> if (mergePolicy instanceof SortingMergePolicy) {
>   Sort indexSort = ((SortingMergePolicy) mergePolicy).getSort();
>   iwc.setIndexSort(indexSort);
> }
> {code}
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L241-L244
>  code path.
> Proposed change:
> * in-scope for this ticket: To add direct support for index sorting 
> configuration in Solr.
> * out-of-scope for this ticket: deprecation and removal of SortingMergePolicy 
> support



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13681) make Lucene's index sorting directly configurable in Solr

2019-08-07 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-13681:
--

 Summary: make Lucene's index sorting directly configurable in Solr
 Key: SOLR-13681
 URL: https://issues.apache.org/jira/browse/SOLR-13681
 Project: Solr
  Issue Type: New Feature
Reporter: Christine Poerschke


History/Background:
* SOLR-5730 made Lucene's SortingMergePolicy and 
EarlyTerminatingSortingCollector configurable in Solr 6.0 or later.
* LUCENE-6766 make index sorting a first-class citizen in Lucene 6.2 or later.

Current status:
* In Solr 8.2 use of index sorting is only available via configuration of a 
(top-level) merge policy that is a SortingMergePolicy and that policy's sort is 
then passed to the index writer config via the 
{code}
if (mergePolicy instanceof SortingMergePolicy) {
  Sort indexSort = ((SortingMergePolicy) mergePolicy).getSort();
  iwc.setIndexSort(indexSort);
}
{code}
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/solr/core/src/java/org/apache/solr/update/SolrIndexConfig.java#L241-L244
 code path.

Proposed change:
* in-scope for this ticket: To add direct support for index sorting 
configuration in Solr.
* out-of-scope for this ticket: deprecation and removal of SortingMergePolicy 
support




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5945) Add retry for zookeeper reconnect failure

2019-08-07 Thread Endika Posadas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Endika Posadas updated SOLR-5945:
-
Attachment: solr_6_6-5945.patch

> Add retry for zookeeper reconnect failure
> -
>
> Key: SOLR-5945
> URL: https://issues.apache.org/jira/browse/SOLR-5945
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
>Reporter: Jessica Cheng Mallet
>Priority: Major
>  Labels: solrcloud, zookeeper
> Attachments: solr_6_6-5945.patch
>
>
> We had some network issue where we temporarily lost connection and DNS. The 
> zookeeper client properly triggered the watcher. However, when trying to 
> reconnect, this following Exception is thrown:
> 2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 
> 121) :java.net.UnknownHostException: : Name or service 
> not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
> at java.net.InetAddress.getAllByName(InetAddress.java:1127)
> at java.net.InetAddress.getAllByName(InetAddress.java:1063)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:60)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at 
> org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> I tried to look at the code and it seems that there'd be no further retries 
> to connect to Zookeeper, and the node is basically left in a bad state and 
> will not recover on its own. (Please correct me if I'm reading this wrong.) 
> Thinking about it, this is probably fair, since normally you wouldn't expect 
> retries to fix an "unknown host" issue (even though in our case it would 
> have) but I'm wondering what we should do to handle this situation if it 
> happens again in the future.
> Any advice is appreciated.
> From Mark Miller:
> We don’t currently retry, but I don’t think it would hurt much if we did - at 
> least briefly.
> If you want to file a JIRA issue, that would be the best way to get it in a 
> future release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5945) Add retry for zookeeper reconnect failure

2019-08-07 Thread Endika Posadas (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902148#comment-16902148
 ] 

Endika Posadas edited comment on SOLR-5945 at 8/7/19 3:16 PM:
--

There are circumstances when retrying makes sense. E.g.: the whole environment 
goes down, including the DNS. If Solr comes back up before the DNS is up and 
running, it will fall into this UnknownHostException and go into an invalid 
state. However, if it keeps retrying, it will eventually come back alive when 
the DNS is back.

I have attached a patch for Solr 6.6 that will retry to connect based on a 
timeout.


was (Author: enpos):
There are circumstances when retrying makes sense. E.g.: the whole environment 
goes down, including the DNS. If Solr comes back up before the DNS is up and 
running, it will fall into this UnknownHostException and go into an invalid 
state. However, if it keeps retrying, it will eventually come back alive when 
the DNS is back.

I have attached a patch that will retry to connect based on a timeout.
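
(Not the attached patch itself, just a rough standalone sketch of the
retry-until-timeout idea described above; the interface name, the 1-second
backoff and the time budget are made up for illustration:)

{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class ReconnectWithRetrySketch {

  /** Keep retrying a connection attempt until it succeeds or the time budget runs out. */
  static void reconnectWithTimeout(ConnectAttempt attempt, long timeoutMs) throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (true) {
      try {
        attempt.connect();
        return; // connected
      } catch (IOException e) { // e.g. UnknownHostException while DNS is still down
        if (System.nanoTime() >= deadline) {
          throw e; // budget exhausted, give up as before
        }
        Thread.sleep(1000); // back off briefly, then try again
      }
    }
  }

  interface ConnectAttempt {
    void connect() throws IOException;
  }
}
{code}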

> Add retry for zookeeper reconnect failure
> -
>
> Key: SOLR-5945
> URL: https://issues.apache.org/jira/browse/SOLR-5945
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
>Reporter: Jessica Cheng Mallet
>Priority: Major
>  Labels: solrcloud, zookeeper
> Attachments: solr_6_6-5945.patch
>
>
> We had some network issue where we temporarily lost connection and DNS. The 
> zookeeper client properly triggered the watcher. However, when trying to 
> reconnect, this following Exception is thrown:
> 2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 
> 121) :java.net.UnknownHostException: : Name or service 
> not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
> at java.net.InetAddress.getAllByName(InetAddress.java:1127)
> at java.net.InetAddress.getAllByName(InetAddress.java:1063)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:60)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at 
> org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> I tried to look at the code and it seems that there'd be no further retries 
> to connect to Zookeeper, and the node is basically left in a bad state and 
> will not recover on its own. (Please correct me if I'm reading this wrong.) 
> Thinking about it, this is probably fair, since normally you wouldn't expect 
> retries to fix an "unknown host" issue (even though in our case it would 
> have) but I'm wondering what we should do to handle this situation if it 
> happens again in the future.
> Any advice is appreciated.
> From Mark Miller:
> We don’t currently retry, but I don’t think it would hurt much if we did - at 
> least briefly.
> If you want to file a JIRA issue, that would be the best way to get it in a 
> future release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5945) Add retry for zookeeper reconnect failure

2019-08-07 Thread Endika Posadas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Endika Posadas updated SOLR-5945:
-
Attachment: (was: retryConnectingToZookeeper.patch)

> Add retry for zookeeper reconnect failure
> -
>
> Key: SOLR-5945
> URL: https://issues.apache.org/jira/browse/SOLR-5945
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
>Reporter: Jessica Cheng Mallet
>Priority: Major
>  Labels: solrcloud, zookeeper
>
> We had some network issue where we temporarily lost connection and DNS. The 
> zookeeper client properly triggered the watcher. However, when trying to 
> reconnect, this following Exception is thrown:
> 2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 
> 121) :java.net.UnknownHostException: : Name or service 
> not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
> at java.net.InetAddress.getAllByName(InetAddress.java:1127)
> at java.net.InetAddress.getAllByName(InetAddress.java:1063)
> at 
> org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:60)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
> at 
> org.apache.solr.common.cloud.SolrZooKeeper.(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> I tried to look at the code and it seems that there'd be no further retries 
> to connect to Zookeeper, and the node is basically left in a bad state and 
> will not recover on its own. (Please correct me if I'm reading this wrong.) 
> Thinking about it, this is probably fair, since normally you wouldn't expect 
> retries to fix an "unknown host" issue (even though in our case it would 
> have) but I'm wondering what we should do to handle this situation if it 
> happens again in the future.
> Any advice is appreciated.
> From Mark Miller:
> We don’t currently retry, but I don’t think it would hurt much if we did - at 
> least briefly.
> If you want to file a JIRA issue, that would be the best way to get it in a 
> future release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5945) Add retry for zookeeper reconnect failure

2019-08-07 Thread Endika Posadas (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902148#comment-16902148
 ] 

Endika Posadas commented on SOLR-5945:
--

There are circumstances where retrying makes sense, e.g. when the whole 
environment goes down, including the DNS. If Solr comes back up before the DNS 
is up and running, it will hit this UnknownHostException and be left in an 
invalid state. However, if it keeps retrying, it will eventually come back 
alive once the DNS is back.

I have attached a patch that retries the connection, bounded by a timeout.
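
To make the intent concrete, here is a minimal sketch of the retry-until-timeout 
idea. It is not the attached patch; the Connector interface is a placeholder for 
whatever actually creates the ZooKeeper client (e.g. constructing a new 
SolrZooKeeper).

{code:java}
import java.net.UnknownHostException;
import java.util.concurrent.TimeUnit;

final class ReconnectWithTimeout {

  interface Connector {
    void connect() throws UnknownHostException, InterruptedException;
  }

  /** Retries connect() until it succeeds or timeoutMs elapses. */
  static boolean reconnect(Connector connector, long timeoutMs, long retryDelayMs)
      throws InterruptedException {
    final long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (true) {
      try {
        connector.connect();
        return true;                 // DNS (and ZooKeeper) are reachable again
      } catch (UnknownHostException e) {
        if (System.nanoTime() >= deadline) {
          return false;              // give up once the timeout has elapsed
        }
        Thread.sleep(retryDelayMs);  // wait for DNS to come back, then try again
      }
    }
  }
}
{code}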

> Add retry for zookeeper reconnect failure
> -
>
> Key: SOLR-5945
> URL: https://issues.apache.org/jira/browse/SOLR-5945
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
>Reporter: Jessica Cheng Mallet
>Priority: Major
>  Labels: solrcloud, zookeeper
> Attachments: retryConnectingToZookeeper.patch
>
>
> We had some network issue where we temporarily lost connection and DNS. The 
> zookeeper client properly triggered the watcher. However, when trying to 
> reconnect, the following Exception is thrown:
> 2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 
> 121) :java.net.UnknownHostException: : Name or service 
> not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
> at java.net.InetAddress.getAllByName(InetAddress.java:1127)
> at java.net.InetAddress.getAllByName(InetAddress.java:1063)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:60)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at 
> org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> I tried to look at the code and it seems that there'd be no further retries 
> to connect to Zookeeper, and the node is basically left in a bad state and 
> will not recover on its own. (Please correct me if I'm reading this wrong.) 
> Thinking about it, this is probably fair, since normally you wouldn't expect 
> retries to fix an "unknown host" issue (even though in our case it would 
> have) but I'm wondering what we should do to handle this situation if it 
> happens again in the future.
> Any advice is appreciated.
> From Mark Miller:
> We don’t currently retry, but I don’t think it would hurt much if we did - at 
> least briefly.
> If you want to file a JIRA issue, that would be the best way to get it in a 
> future release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5945) Add retry for zookeeper reconnect failure

2019-08-07 Thread Endika Posadas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Endika Posadas updated SOLR-5945:
-
Attachment: retryConnectingToZookeeper.patch

> Add retry for zookeeper reconnect failure
> -
>
> Key: SOLR-5945
> URL: https://issues.apache.org/jira/browse/SOLR-5945
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.7
>Reporter: Jessica Cheng Mallet
>Priority: Major
>  Labels: solrcloud, zookeeper
> Attachments: retryConnectingToZookeeper.patch
>
>
> We had some network issue where we temporarily lost connection and DNS. The 
> zookeeper client properly triggered the watcher. However, when trying to 
> reconnect, the following Exception is thrown:
> 2014-03-27 17:24:46,882 ERROR [main-EventThread] SolrException.java (line 
> 121) :java.net.UnknownHostException: : Name or service 
> not known
> at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
> at 
> java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
> at java.net.InetAddress.getAllByName0(InetAddress.java:1211)
> at java.net.InetAddress.getAllByName(InetAddress.java:1127)
> at java.net.InetAddress.getAllByName(InetAddress.java:1063)
> at 
> org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:60)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
> at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
> at 
> org.apache.solr.common.cloud.SolrZooKeeper.<init>(SolrZooKeeper.java:41)
> at 
> org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:53)
> at 
> org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:147)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
> I tried to look at the code and it seems that there'd be no further retries 
> to connect to Zookeeper, and the node is basically left in a bad state and 
> will not recover on its own. (Please correct me if I'm reading this wrong.) 
> Thinking about it, this is probably fair, since normally you wouldn't expect 
> retries to fix an "unknown host" issue (even though in our case it would 
> have) but I'm wondering what we should do to handle this situation if it 
> happens again in the future.
> Any advice is appreciated.
> From Mark Miller:
> We don’t currently retry, but I don’t think it would hurt much if we did - at 
> least briefly.
> If you want to file a JIRA issue, that would be the best way to get it in a 
> future release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on issue #815: LUCENE-8213: Introduce Asynchronous Caching in LRUQueryCache

2019-08-07 Thread GitBox
msokolov commented on issue #815: LUCENE-8213: Introduce Asynchronous Caching 
in LRUQueryCache
URL: https://github.com/apache/lucene-solr/pull/815#issuecomment-519137707
 
 
   It should be enough to report the stats after the last iteration - they are 
cumulative, so the previous ones just add noise? I agree QPS looks pretty 
noisy, probably no real change. Could you post the latency stats in a more 
readable table here? It looks as if you have markdown there; I think GitHub 
will accept that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8369) Remove the spatial module as it is obsolete

2019-08-07 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902133#comment-16902133
 ] 

Simon Willnauer commented on LUCENE-8369:
-

I don't think we should sacrifice having lat/lon point search in core for the 
sake of code visibility. I think we should keep it in core, open up visibility 
to enable code reuse in the modules, and use _@lucene.internal_ to mark classes 
as internal and prevent users from complaining when the API changes. It's not 
ideal, but it's progress. Can we separate the discussion of getting rid of the 
spatial module from graduating the various shapes from sandbox to wherever? I 
think keeping a module for 2 classes doesn't make sense. We can move those two 
classes to core too, or even get rid of them altogether; I don't think it 
should influence the discussion of whether something else should be graduated.

One other option would be to move all non-core spatial classes from sandbox to 
the spatial module, as long as they don't add any additional dependency. That 
would be an intermediate step; we can still graduate them from there.
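
For reference, the convention mentioned above looks roughly like this (the 
class name is made up; the point is the javadoc tag):

{code:java}
/**
 * Helper methods shared by the spatial queries. Public so that other modules
 * can reuse it, but not part of the supported API.
 *
 * @lucene.internal
 */
public final class SpatialSharedUtil {
  private SpatialSharedUtil() {} // static helpers only, no instances
}
{code}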

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11616) Backup failing on a constantly changing index with NoSuchFileException

2019-08-07 Thread Andrian Jardan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901938#comment-16901938
 ] 

Andrian Jardan edited comment on SOLR-11616 at 8/7/19 2:54 PM:
---

 It seems like this issue is back in 7.7.2. We are using the official 7.7.2 
container, and we see this during backups sometimes:

Is this a regression, or is there something new?

{noformat}
"level\":\"ERROR\", \"collection\":\"\", \"shard\":\"\", \"replica\":\"\", 
\"core\":\"\", \"location\":\"org.apache.solr.handler.SnapShooter\", 
\"message\":\"Exception while creating snapshot\" ,\"stacktrace\":\" 
java.nio.file.NoSuchFileException:
 
/store/data/indexname_shard6_0_replica_n53/data/index.20190729153842861/_vngz.fdt\
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)\
java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:178)\
java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)\
java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)\
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)\
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:181)\
org.apache.lucene.store.Directory.copyFrom(Directory.java:182)\

org.apache.solr.core.backup.repository.LocalFileSystemRepository.copyFileFrom(LocalFileSystemRepository.java:145)\
org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:238)\
org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$2(SnapShooter.java:205)\
java.base/java.lang.Thread.run(Thread.java:834)\
\"}
{noformat}

It is also worth mentioning that this mainly happens when Solr is under load, 
mostly with READ requests. With no load everything works fine.


was (Author: macros):
 It seems like this issue is back in 7.7.2. We are using the official 7.7.2 
container, and we see this during backups sometimes:

Is this a regression, or is there something new?

{noformat}
"level\":\"ERROR\", \"collection\":\"\", \"shard\":\"\", \"replica\":\"\", 
\"core\":\"\", \"location\":\"org.apache.solr.handler.SnapShooter\", 
\"message\":\"Exception while creating snapshot\" ,\"stacktrace\":\" 
java.nio.file.NoSuchFileException:
 
/store/data/indexname_shard6_0_replica_n53/data/index.20190729153842861/_vngz.fdt\
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)\
java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:178)\
java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)\
java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)\
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)\
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:181)\
org.apache.lucene.store.Directory.copyFrom(Directory.java:182)\

org.apache.solr.core.backup.repository.LocalFileSystemRepository.copyFileFrom(LocalFileSystemRepository.java:145)\
org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:238)\
org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$2(SnapShooter.java:205)\
java.base/java.lang.Thread.run(Thread.java:834)\
\"}
{noformat}

> Backup failing on a constantly changing index with NoSuchFileException
> --
>
> Key: SOLR-11616
> URL: https://issues.apache.org/jira/browse/SOLR-11616
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.2, 8.0
>
> Attachments: SOLR-11616.patch, SOLR-11616.patch, solr-6.3.log, 
> solr-7.1.log
>
>
> As reported by several users on SOLR-9120 , Solr backups fail with 
> NoSuchFileException on a constantly changing index. 
> Users linked SOLR-9120 to the root cause as the stack trace is the same, but 
> the fix proposed there won't stop backups from failing.
> We need to implement a similar fix in {{SnapShooter#createSnapshot}} to fix 
> the problem



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-07 Thread Michael Sokolov
big +1 -- I'm also curious why the subject lines of many automated
emails (from Jira?) start with [CREATED] even though they are
generated by comments or other kinds of updates (not creating a new
issue). Overall, I think we have way too much comment spam. In
particular Github comments are so poorly formatted in email (at least
in gmail?) as to be almost unreadable - I think because they always
include the complete comment history. I wonder if there is a way to
neaten them up (especially the subject lines, so you can scan
quickly)?

On Tue, Aug 6, 2019 at 7:17 PM Jan Høydahl  wrote:
>
> Hi
>
> The mail volume on dev@ is fairly high, between 2500-3500/month.
> To break down the numbers last month, see 
> https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
>
> Top 10 participants:
> -GitBox: 420 emails
> -ASF subversion and git services (JIRA): 351 emails
> -Apache Jenkins Server: 261 emails
> -Policeman Jenkins Server: 234 emails
> -Munendra S N (JIRA): 134 emails
> -Joel Bernstein (JIRA): 84 emails
> -Tomoko Uchida (JIRA): 77 emails
> -Jan Høydahl (JIRA): 52 emails
> -Andrzej Bialecki (JIRA): 47 emails
> -Adrien Grand (JIRA): 46 emails
>
> I have especially noticed how every single GitHub PR review comment triggers 
> its own email instead of one email per review session.
> Also, every commit/push triggers an email since a bot adds a comment to JIRA 
> for it.
>
> Personally I think the ratio of notifications vs human emails is a bit too 
> high. I fear external devs who just want to follow the project may get 
> overwhelmed and unsubscribe.
> One suggestion is therefore to add a new list where detailed JIRA comments 
> and Github comments / reviews go. All committers should of course subscribe!
> I saw the Zookeeper project have a notifications@ list for GitHub comments 
> and issues@ for JIRA comments (Except the first [Created] email for a JIRA 
> will also go to dev@)
> The Maven project follows the same scheme and they also send Jenkins mails to 
> the notifications@ list. The Cassandra project seems to divert all jira 
> comments to the commits@ list.
> The HBase project keeps only [Created]/[Resolved] mails on dev@ and all 
> other from Jira/GH on issues@ list and Jenkins mails on a separate builds@ 
> list.
>
> Is it time we did something similar? I propose a single new notifications@ 
> list for everything JIRA, GitHub and Jenkins but keep [Created|Resolved] 
> mails on dev@
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13680) Close Resources Properly

2019-08-07 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902100#comment-16902100
 ] 

Lucene/Solr QA commented on SOLR-13680:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 15s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.rest.schema.analysis.TestManagedStopFilterFactory |
|   | solr.security.AuditLoggerIntegrationTest |
|   | solr.core.TestCoreContainer |
|   | solr.schema.TestSchemalessBufferedUpdates |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13680 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976936/SOLR-13680.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 21842999fe |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/523/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/523/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/523/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Close Resources Properly
> 
>
> Key: SOLR-13680
> URL: https://issues.apache.org/jira/browse/SOLR-13680
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.2
>Reporter: Furkan KAMACI
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13680.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Files, streams or connections which implement the Closeable or AutoCloseable 
> interface should be closed after use.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3515 - Failure

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3515/

All tests passed

Build Log:
[...truncated 63953 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1651358264
 [ecj-lint] Compiling 1284 source files to /tmp/ecj1651358264
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 219)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 788)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 794)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 19)
 [ecj-lint] import javax.naming.Context;
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 20)
 [ecj-lint] import javax.naming.InitialContext;
 [ecj-lint]^^^
 [ecj-lint] The type javax.naming.InitialContext is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 21)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 22)
 [ecj-lint] import javax.naming.NoInitialContextException;
 [ecj-lint]^^
 [ecj-lint] The type javax.naming.NoInitialContextException is not accessible
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^^
 [ecj-lint] Context cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^
 [ecj-lint] InitialContext cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 779)
 [ecj-lint] } catch (NoInitialContextException e) {
 [ecj-lint]  ^
 [ecj-lint] NoInitialContextException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 781)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (SOLR-13680) Close Resources Properly

2019-08-07 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902052#comment-16902052
 ] 

Munendra S N commented on SOLR-13680:
-

 [^SOLR-13680.patch] 
Attaching the patch generated from the GitHub PR. Not sure why the preCommit 
test build didn't trigger.

> Close Resources Properly
> 
>
> Key: SOLR-13680
> URL: https://issues.apache.org/jira/browse/SOLR-13680
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.2
>Reporter: Furkan KAMACI
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13680.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Files, streams or connections which implement the Closeable or AutoCloseable 
> interface should be closed after use.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13680) Close Resources Properly

2019-08-07 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-13680:

Attachment: SOLR-13680.patch

> Close Resources Properly
> 
>
> Key: SOLR-13680
> URL: https://issues.apache.org/jira/browse/SOLR-13680
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.2
>Reporter: Furkan KAMACI
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13680.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Files, streams or connections which implement the Closeable or AutoCloseable 
> interface should be closed after use.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902047#comment-16902047
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 1645075b5f64aad9a26f67dd65adf7167ee04366 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1645075 ]

SOLR-13105: Add text to loading page 8


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-07 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902043#comment-16902043
 ] 

Noble Paul commented on SOLR-13677:
---

[~ab] please take a look at the new PR

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers is managed by the core (mostly). So, if 
> the lifecycle of the object is different from that of the core itself, these 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks.
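
A minimal sketch of the proposed contract, using the underlying Dropwizard 
registry directly (the class and metric names are illustrative, not the actual 
patch): the object that registers a gauge also removes it when its own 
lifecycle ends, instead of relying on core close.

{code:java}
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

final class QueueStats implements AutoCloseable {
  private final MetricRegistry registry;
  private final String name;

  QueueStats(MetricRegistry registry, java.util.Queue<?> queue) {
    this.registry = registry;
    this.name = MetricRegistry.name(QueueStats.class, "size");
    registry.register(name, (Gauge<Integer>) queue::size); // registered on construction
  }

  @Override
  public void close() {
    registry.remove(name); // unregistered when this object goes away, not when the core does
  }
}
{code}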



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul opened a new pull request #825: SOLR-13677 All Metrics Gauges should be unregistered by the objects that registered them

2019-08-07 Thread GitBox
noblepaul opened a new pull request #825: SOLR-13677 All Metrics Gauges should 
be unregistered by the objects that registered them
URL: https://github.com/apache/lucene-solr/pull/825
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902040#comment-16902040
 ] 

ASF subversion and git services commented on SOLR-13677:


Commit 9b0003a7037206d937b9f4aa48e5dc4cf80fdd0f in lucene-solr's branch 
refs/heads/jira/SOLR-13677_1 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9b0003a ]

SOLR-13677: Take 2


> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers is managed by the core (mostly). So, if 
> the lifecycle of the object is different from that of the core itself, these 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul closed pull request #820: SOLR-13677: All Metrics Gauges should be unregistered by the objects that registered them

2019-08-07 Thread GitBox
noblepaul closed pull request #820: SOLR-13677: All Metrics Gauges should be 
unregistered by the objects that registered them
URL: https://github.com/apache/lucene-solr/pull/820
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Separate dev mailing list for automated mails?

2019-08-07 Thread David Smiley
It's a problem.  I am mentoring a colleague who is stressed with the
prospect of keeping up with our community because of the volume of email,
and so it's a serious barrier to community involvement.  I too have email
filters to help me, and it took some time to work out a system. Could we
share our filter descriptions for this workflow? I'm sure I could
learn from you all on your approaches, and new collaborators would
appreciate this advice.

I think automated builds (Jenkins/CI) could warrant their own list. Separate
lists would make setting up email filters easier in general.

I like the idea of a list, like dev, but which does not include JIRA
comments or GH code review comments, and does not include Jenkins/CI. This
would be a good way for potential contributors to have a light-weight way
of getting involved.  If they are involved or interested in specific
issues, they can "watch" / "subscribe" to JIRA/GH issues and consequently
they will get direct notifications from those systems.  Then people who
choose to get more involved, like us, can subscribe to the other list(s).

We do have instances where "ASF subversion and git services" can be
excessive due to feature branches that ought not to generate JIRA posts to
unrelated issues, and I think we should work to prevent that.

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Wed, Aug 7, 2019 at 7:01 AM Tomoko Uchida 
wrote:

> Hi
>
> +1 for separated list(s) for JIRA/Github updates and Jenkins jobs.
> While I myself am not in trouble with sorting the mails thanks to
> Gmail filters, I know a user (external dev) who unsubscribed from this
> list. One reason is the volume of the mail flow :)
>
> Tomoko
>
> 2019年8月7日(水) 8:17 Jan Høydahl :
> >
> > Hi
> >
> > The mail volume on dev@ is fairly high, between 2500-3500/month.
> > To break down the numbers last month, see
> https://lists.apache.org/trends.html?dev@lucene.apache.org:lte=1M:
> >
> > Top 10 participants:
> > -GitBox: 420 emails
> > -ASF subversion and git services (JIRA): 351 emails
> > -Apache Jenkins Server: 261 emails
> > -Policeman Jenkins Server: 234 emails
> > -Munendra S N (JIRA): 134 emails
> > -Joel Bernstein (JIRA): 84 emails
> > -Tomoko Uchida (JIRA): 77 emails
> > -Jan Høydahl (JIRA): 52 emails
> > -Andrzej Bialecki (JIRA): 47 emails
> > -Adrien Grand (JIRA): 46 emails
> >
> > I have especially noticed how every single GitHub PR review comment
> triggers its own email instead of one email per review session.
> > Also, every commit/push triggers an email since a bot adds a comment to
> JIRA for it.
> >
> > Personally I think the ratio of notifications vs human emails is a bit
> too high. I fear external devs who just want to follow the project may get
> overwhelmed and unsubscribe.
> > One suggestion is therefore to add a new list where detailed JIRA
> comments and Github comments / reviews go. All committers should of course
> subscribe!
> > I saw the Zookeeper project have a notifications@ list for GitHub
> comments and issues@ for JIRA comments (Except the first [Created] email
> for a JIRA will also go to dev@)
> > The Maven project follows the same scheme and they also send Jenkins
> mails to the notifications@ list. The Cassandra project seems to divert
> all jira comments to the commits@ list.
> > The HBase project keeps only [Created]/[Resolved] mails on dev@ and
> all other from Jira/GH on issues@ list and Jenkins mails on a separate
> builds@ list.
> >
> > Is it time we did something similar? I propose a single new
> notifications@ list for everything JIRA, GitHub and Jenkins but keep
> [Created|Resolved] mails on dev@
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8073 - Still Unstable!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8073/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

16 tests failed.
FAILED:  
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLException: Software caused connection abort: recv failed
at 
__randomizedtesting.SeedInfo.seed([B8489C25120AA1DC:9AE9F858B766C799]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259)
at 
java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1314)
at 
java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:839)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence(TestModelManagerPersistence.java:168)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 

[jira] [Resolved] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-07 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8747.
---
   Resolution: Fixed
Fix Version/s: 8.3

> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.3
>
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch, LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.
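
As an illustration of the improvement, here is a sketch that walks the match 
structure per document, assuming the accessor added here is named 
getSubMatches() and returns the child Matches objects:

{code:java}
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Matches;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.Weight;

public class SubMatchWalker {

  /** Prints every matching field, descending into sub-matches recursively. */
  public static void walk(IndexSearcher searcher, Query query, LeafReaderContext ctx, int doc)
      throws IOException {
    Weight weight =
        searcher.createWeight(searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1f);
    Matches top = weight.matches(ctx, doc);
    if (top == null) {
      return; // the query does not match this document
    }
    Deque<Matches> stack = new ArrayDeque<>();
    stack.push(top);
    while (!stack.isEmpty()) {
      Matches m = stack.pop();
      for (String field : m) {                // Matches iterates over matching field names
        System.out.println("matched field: " + field);
      }
      m.getSubMatches().forEach(stack::push); // descend into matching sub-queries
    }
  }
}
{code}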



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8941) Build wildcard matches more lazily

2019-08-07 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8941.
---
   Resolution: Fixed
Fix Version/s: 8.3

> Build wildcard matches more lazily
> --
>
> Key: LUCENE-8941
> URL: https://issues.apache.org/jira/browse/LUCENE-8941
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.3
>
> Attachments: LUCENE-8941.patch, LUCENE-8941.patch
>
>
> When retrieving a Matches object from a multi-term query, such as an 
> AutomatonQuery or TermInSetQuery, we currently find all matching term 
> iterators up-front, to return a disjunction over all of them.  This can be 
> inefficient if we're only interested in finding out if anything matched, and 
> are iterating over a different field to retrieve offsets.
> We can improve this by returning immediately when the first matching term is 
> found, and only collecting other matching terms when we start iterating.
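
The general shape of that change, sketched outside Lucene's actual classes 
(the names here are hypothetical): find one matching term eagerly to answer 
"did anything match", and defer enumerating the rest until iteration is 
actually requested.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

final class LazyUnion<T> {
  private final T first;                 // found up front; proves something matched
  private final Supplier<List<T>> rest;  // evaluated only if the caller iterates
  private List<T> all;

  LazyUnion(T first, Supplier<List<T>> rest) {
    this.first = first;
    this.rest = rest;
  }

  boolean anyMatch() {
    return first != null;                // cheap check, no full enumeration
  }

  List<T> all() {
    if (all == null) {                   // enumerate the remaining matches on demand
      all = new ArrayList<>();
      all.add(first);
      all.addAll(rest.get());
    }
    return all;
  }
}
{code}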



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901976#comment-16901976
 ] 

ASF subversion and git services commented on LUCENE-8747:
-

Commit 21842999fe559bcbb4aebf7504aee6e8db45b38e in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2184299 ]

LUCENE-8747: Allow access to submatches from Matches


> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch, LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8747) Allow access to submatches from Matches instances

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901974#comment-16901974
 ] 

ASF subversion and git services commented on LUCENE-8747:
-

Commit 8dd116a615821c7d9b539316b051f466009b5130 in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8dd116a ]

LUCENE-8747: Allow access to submatches from Matches


> Allow access to submatches from Matches instances
> -
>
> Key: LUCENE-8747
> URL: https://issues.apache.org/jira/browse/LUCENE-8747
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8747.patch, LUCENE-8747.patch, LUCENE-8747.patch, 
> LUCENE-8747.patch, LUCENE-8747.patch
>
>
> A Matches object currently allows access to all matching terms from a query, 
> but the structure of the matching query is flattened out, so if you want to 
> find which subqueries have matched you need to iterate over all matches, 
> collecting queries as you go.  It should be easier to get this information 
> from the parent Matches object.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8941) Build wildcard matches more lazily

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901975#comment-16901975
 ] 

ASF subversion and git services commented on LUCENE-8941:
-

Commit fa72da1c7112a2e5c259f1f0181e6b27766ed4ad in lucene-solr's branch 
refs/heads/master from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fa72da1 ]

LUCENE-8941: Build wildcard matches lazily


> Build wildcard matches more lazily
> --
>
> Key: LUCENE-8941
> URL: https://issues.apache.org/jira/browse/LUCENE-8941
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8941.patch, LUCENE-8941.patch
>
>
> When retrieving a Matches object from a multi-term query, such as an 
> AutomatonQuery or TermInSetQuery, we currently find all matching term 
> iterators up-front, to return a disjunction over all of them.  This can be 
> inefficient if we're only interested in finding out if anything matched, and 
> are iterating over a different field to retrieve offsets.
> We can improve this by returning immediately when the first matching term is 
> found, and only collecting other matching terms when we start iterating.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8941) Build wildcard matches more lazily

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901973#comment-16901973
 ] 

ASF subversion and git services commented on LUCENE-8941:
-

Commit b5b78e0adeb9db9345b69abedabd8c5cd684df7b in lucene-solr's branch 
refs/heads/branch_8x from Alan Woodward
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b5b78e0 ]

LUCENE-8941: Build wildcard matches lazily


> Build wildcard matches more lazily
> --
>
> Key: LUCENE-8941
> URL: https://issues.apache.org/jira/browse/LUCENE-8941
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8941.patch, LUCENE-8941.patch
>
>
> When retrieving a Matches object from a multi-term query, such as an 
> AutomatonQuery or TermInSetQuery, we currently find all matching term 
> iterators up-front, to return a disjunction over all of them.  This can be 
> inefficient if we're only interested in finding out if anything matched, and 
> are iterating over a different field to retrieve offsets.
> We can improve this by returning immediately when the first matching term is 
> found, and only collecting other matching terms when we start iterating.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11616) Backup failing on a constantly changing index with NoSuchFileException

2019-08-07 Thread Andrian Jardan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901938#comment-16901938
 ] 

Andrian Jardan edited comment on SOLR-11616 at 8/7/19 9:53 AM:
---

 It seems like this issue is back in 7.7.2. We are using the official 7.7.2 
container, and we see this during backups sometimes:

Is this a regression, or is there something new?

{noformat}
"level\":\"ERROR\", \"collection\":\"\", \"shard\":\"\", \"replica\":\"\", 
\"core\":\"\", \"location\":\"org.apache.solr.handler.SnapShooter\", 
\"message\":\"Exception while creating snapshot\" ,\"stacktrace\":\" 
java.nio.file.NoSuchFileException:
 
/store/data/indexname_shard6_0_replica_n53/data/index.20190729153842861/_vngz.fdt\
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)\
java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:178)\
java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)\
java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)\
org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)\
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:181)\
org.apache.lucene.store.Directory.copyFrom(Directory.java:182)\

org.apache.solr.core.backup.repository.LocalFileSystemRepository.copyFileFrom(LocalFileSystemRepository.java:145)\
org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:238)\
org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$2(SnapShooter.java:205)\
java.base/java.lang.Thread.run(Thread.java:834)\
\"}
{noformat}


was (Author: macros):
 It seems like this issue is back in 7.7.2. We are using the official 7.7.2 
container, and we see this during backups sometimes:

Is this a regression, or is there something new?

{noformat}
"level\":\"ERROR\", \"collection\":\"\", \"shard\":\"\", \"replica\":\"\", 
\"core\":\"\", \"location\":\"org.apache.solr.handler.SnapShooter\", 
\"message\":\"Exception while creating snapshot\" ,\"stacktrace\":\" 
java.nio.file.NoSuchFileException: 
/store/data/indexname_shard6_0_replica_n53/data/index.20190729153842861/_vngz.fdt\
\\tat 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\
\\tat 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\
\\tat 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)\
\\tat 
java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:178)\
\\tat java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)\
\\tat java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)\
\\tat org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)\
\\tat 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:181)\
\\tat org.apache.lucene.store.Directory.copyFrom(Directory.java:182)\
\\tat 
org.apache.solr.core.backup.repository.LocalFileSystemRepository.copyFileFrom(LocalFileSystemRepository.java:145)\
\\tat org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:238)\
\\tat 
org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$2(SnapShooter.java:205)\
\\tat java.base/java.lang.Thread.run(Thread.java:834)\
\"}
{noformat}

> Backup failing on a constantly changing index with NoSuchFileException
> --
>
> Key: SOLR-11616
> URL: https://issues.apache.org/jira/browse/SOLR-11616
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.2, 8.0
>
> Attachments: SOLR-11616.patch, SOLR-11616.patch, solr-6.3.log, 
> solr-7.1.log
>
>
> As reported by several users on SOLR-9120 , Solr backups fail with 
> NoSuchFileException on a constantly changing index. 
> Users linked SOLR-9120 to the root cause as the stack trace is the same, but 
> the fix proposed there won't stop backups from failing.
> We need to implement a similar fix in {{SnapShooter#createSnapshot}} to fix 
> the problem



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11616) Backup failing on a constantly changing index with NoSuchFileException

2019-08-07 Thread Andrian Jardan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901938#comment-16901938
 ] 

Andrian Jardan commented on SOLR-11616:
---

 It seems like this issue is back in 7.7.2. We are using the official 7.7.2 
container, and we see this during backups sometimes:

Is this a regression, or is there something new?

{noformat}
"level\":\"ERROR\", \"collection\":\"\", \"shard\":\"\", \"replica\":\"\", 
\"core\":\"\", \"location\":\"org.apache.solr.handler.SnapShooter\", 
\"message\":\"Exception while creating snapshot\" ,\"stacktrace\":\" 
java.nio.file.NoSuchFileException: 
/store/data/indexname_shard6_0_replica_n53/data/index.20190729153842861/_vngz.fdt\
\\tat 
java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\
\\tat 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\
\\tat 
java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)\
\\tat 
java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:178)\
\\tat java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)\
\\tat java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)\
\\tat org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238)\
\\tat 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:181)\
\\tat org.apache.lucene.store.Directory.copyFrom(Directory.java:182)\
\\tat 
org.apache.solr.core.backup.repository.LocalFileSystemRepository.copyFileFrom(LocalFileSystemRepository.java:145)\
\\tat org.apache.solr.handler.SnapShooter.createSnapshot(SnapShooter.java:238)\
\\tat 
org.apache.solr.handler.SnapShooter.lambda$createSnapAsync$2(SnapShooter.java:205)\
\\tat java.base/java.lang.Thread.run(Thread.java:834)\
\"}
{noformat}

> Backup failing on a constantly changing index with NoSuchFileException
> --
>
> Key: SOLR-11616
> URL: https://issues.apache.org/jira/browse/SOLR-11616
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.2, 8.0
>
> Attachments: SOLR-11616.patch, SOLR-11616.patch, solr-6.3.log, 
> solr-7.1.log
>
>
> As reported by several users on SOLR-9120, Solr backups fail with a 
> NoSuchFileException on a constantly changing index.
> Users linked SOLR-9120 to the root cause because the stack trace is the same, 
> but the fix proposed there will not stop backups from failing.
> We need to implement a similar fix in {{SnapShooter#createSnapshot}} to fix 
> the problem.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-13672.

Resolution: Fixed

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901852#comment-16901852
 ] 

ASF subversion and git services commented on SOLR-13672:


Commit f853198f72f802611c3e0ee8882cdd6a80a818aa in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f853198 ]

SOLR-13672: Cloud -> Zk Status page now parses response from Zookeeper 3.5.5 
correctly

(Back ported from 8 commits on master branch)


> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8755) QuadPrefixTree robustness: can throw exception while indexing a point at high precision

2019-08-07 Thread Chongchen Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901896#comment-16901896
 ] 

Chongchen Chen commented on LUCENE-8755:


Hi [~dsmiley], I have submitted a pull request for this bug. Could you please review it?

> QuadPrefixTree robustness: can throw exception while indexing a point at high 
> precision
> ---
>
> Key: LUCENE-8755
> URL: https://issues.apache.org/jira/browse/LUCENE-8755
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: senthil nathan
>Priority: Critical
> Attachments: LUCENE-8755.patch
>
>
> When trying to index the document below with Apache Solr 7.5.0 I am getting a 
> java.lang.IndexOutOfBoundsException, and this data causes the whole full 
> import to fail. I have also included my schema definition for reference. 
>  
> Data:
> [
> { "street_description":"SAMPLE_TEXT", "pao_start_number":6, 
> "x_coordinate":244502.06, "sao_text":"FIRST FLOOR", "logical_status":"1", 
> "street_record_type":1, "id":"AA60L12-ENG", 
> "street_description_str":"SAMPLE_TEXT", "lpi_logical_status":"1", 
> "administrative_area":"SAMPLE_TEXT & HOVE", "uprn":"8899889", 
> "town_name":"TEST TOWN", "street_description_full":"60 DEMO ", 
> "y_coordinate":639062.07, "postcode_locator":"AB1 1BB", "location":"244502.06 
> 639062.07" }
> ]
>  
> Configuration in managed-schema.xml
>  
>  geo="false" maxDistErr="0.09" worldBounds="ENVELOPE(0,70,130,0)" 
> distErrPct="0.15"/>
>  stored="false"/>
>   stored="false"/>
>  
>   indexed="true" stored="true"/>
>   stored="true"/>
>   required="true" stored="true"/>
>   stored="true"/>
>   stored="true"/>
>   indexed="false" stored="true"/>
>   indexed="false" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   multiValued="false" indexed="true" stored="true"/>
>   multiValued="false" indexed="true" stored="true"/> 
>   indexed="false" stored="true"/>
>   stored="true"/>
>   stored="true"/>
>   stored="true"/>
>   stored="true"/>
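
For anyone trying to reproduce: the attribute values quoted above correspond 
to a spatial RPT field type roughly like the sketch below. The exact field and 
type names from the reporter's schema are not legible here, so the names below 
are illustrative only, and prefixTree="quad" is spelled out explicitly because 
the report concerns QuadPrefixTree. Note that the indexed point in the sample 
data (244502.06 639062.07) lies well outside the configured worldBounds, which 
may be related to the failure.

{code:xml}
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           prefixTree="quad" geo="false" maxDistErr="0.09"
           worldBounds="ENVELOPE(0,70,130,0)" distErrPct="0.15"/>
<field name="location" type="location_rpt" indexed="true" stored="false"/>
{code}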



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13593) Allow to specify analyzer components by their SPI names in schema definition

2019-08-07 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901888#comment-16901888
 ] 

Uwe Schindler commented on SOLR-13593:
--

Looks good to me! +1

> Allow to specify analyzer components by their SPI names in schema definition
> 
>
> Key: SOLR-13593
> URL: https://issues.apache.org/jira/browse/SOLR-13593
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Tomoko Uchida
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Now each analysis factory has explicitely documented SPI name which is stored 
> in the static "NAME" field (LUCENE-8778).
>  Solr uses factories' simple class name in schema definition (like 
> class="solr.WhitespaceTokenizerFactory"), but we should be able to also use 
> more concise SPI names (like name="whitespace").
> e.g.:
> {code:xml}
> 
>   
> 
>  />
> 
>   
> 
> {code}
> would be
> {code:xml}
> 
>   
> 
> 
> 
>   
> 
> {code}
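
In other words, using the whitespace example from the description (these 
snippets are illustrative and are not the exact ones from the issue text):

{code:xml}
<!-- current style: factories referenced by simple class name -->
<fieldType name="text_example" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<!-- proposed style: the same chain referenced by SPI names -->
<fieldType name="text_example" class="solr.TextField">
  <analyzer>
    <tokenizer name="whitespace"/>
    <filter name="lowercase"/>
  </analyzer>
</fieldType>
{code}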



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.2-Linux (32bit/jdk1.8.0_201) - Build # 530 - Unstable!

2019-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/530/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ReindexCollectionTest.testSameTargetReindexing

Error Message:
num docs expected:<200> but was:<168>

Stack Trace:
java.lang.AssertionError: num docs expected:<200> but was:<168>
at 
__randomizedtesting.SeedInfo.seed([E5DAE126546C3538:50A30AF1530AAD7C]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.ReindexCollectionTest.indexDocs(ReindexCollectionTest.java:411)
at 
org.apache.solr.cloud.ReindexCollectionTest.doTestSameTargetReindexing(ReindexCollectionTest.java:166)
at 
org.apache.solr.cloud.ReindexCollectionTest.testSameTargetReindexing(ReindexCollectionTest.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)



[JENKINS] Lucene-Solr-Tests-master - Build # 3513 - Unstable

2019-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3513/

1 tests failed.
FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:39613/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:39613/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection]
at 
__randomizedtesting.SeedInfo.seed([4B6C7437C6F5256C:392051387795931F]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.getNumFound(TestCloudJSONFacetSKG.java:669)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.verifySKGResults(TestCloudJSONFacetSKG.java:446)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:392)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:402)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.assertFacetSKGsAreCorrect(TestCloudJSONFacetSKG.java:349)
at 
org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom(TestCloudJSONFacetSKG.java:274)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Assigned] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-13672:
--

Assignee: Jan Høydahl

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13672:
---
Fix Version/s: 8.3

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Fix For: 8.3
>
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901804#comment-16901804
 ] 

Jan Høydahl commented on SOLR-13672:


Merged to master. I used the GitHub UI and intended to squash merge, but it 
appears that all individual commits were merged; sorry for the noise.

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy merged pull request #818: SOLR-13672: Zk Status page now parses response from Zookeeper 3.5.5 correctly

2019-08-07 Thread GitBox
janhoy merged pull request #818: SOLR-13672: Zk Status page now parses response 
from Zookeeper 3.5.5 correctly
URL: https://github.com/apache/lucene-solr/pull/818
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901800#comment-16901800
 ] 

ASF subversion and git services commented on SOLR-13672:


Commit 64884be0444e3ed7ae2a0adce2689b03da934188 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=64884be ]

SOLR-13672: Zk Status page now parses response from Zookeeper 3.5.5 correctly 
(#818)

* SOLR-13672: Cloud -> Zk Status page now parses response from Zookeeper 3.5.5 
correctly

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901799#comment-16901799
 ] 

ASF subversion and git services commented on SOLR-13672:


Commit 64884be0444e3ed7ae2a0adce2689b03da934188 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=64884be ]

SOLR-13672: Zk Status page now parses response from Zookeeper 3.5.5 correctly 
(#818)

* SOLR-13672: Cloud -> Zk Status page now parses response from Zookeeper 3.5.5 
correctly

> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error

2019-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901798#comment-16901798
 ] 

ASF subversion and git services commented on SOLR-13672:


Commit 1123afae94f36027bcb7b2dc40b089653ed4d1c8 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1123afa ]

SOLR-13672: Cloud -> Zk Status page now parses response from Zookeeper 3.5.5 
correctly


> Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
> --
>
> Key: SOLR-13672
> URL: https://issues.apache.org/jira/browse/SOLR-13672
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.2
>Reporter: Jörn Franke
>Priority: Major
> Attachments: SOLR-13672.patch, zk-status.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and ZooKeeper 3.5.5 one sees the following error 
> in the Admin UI / Cloud / ZkStatus: 
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper 
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normally.
> This issue only occurs with ZooKeeper ensembles. It does not appear if a 
> single standalone ZooKeeper instance is used.
> We tried 4lw.commands.whitelist with the wildcard * and with 
> "mntr,conf,ruok" (with and without spaces).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org