Re: New feature idea - Backwards (FST) dictionary for approximate string search

2019-07-08 Thread Juan Caicedo
Hi Michael,

I guess that I should have added more details :-)

The main benefit from the technique is to do approximate search in
large dictionaries. That is: find all the entries that are within 1 or
2 edit steps from the query. It's more useful for longer (starting
from ~7 characters, iirc), but in general the technique can be applied
to queries of any length. The main requirement is that we build the
dictionary as an FST, using the backwards (reversed) keys of the
original dictionary.

I initially used the technique to implement a stand-alone
spellchecker, but I think that it can also be used to optimize the
Lucene fuzzy queries (e.g. for the spelling/suggest module). However,
I'll need to look how can it be integrated with the part that creates
the dictionary.

I'll take a look at the code this week and I'll try to publish it in a
public repository so that we can discuss about this with more concrete
details.

I skimmed the paper trying to understand possible applications of the
technique. It sounds like efficient approximate (ie with some edits)
substring search is the main idea? I don't believe such a query exists
today in Lucene (nor any Suggester as far as I know). It sounds as if
this would be useful for searching within large strings, eg DNA
sequences or something like that, and maybe less applicable to typical
"full text" (ie tokenized) search where the strings being searched are
relatively shorter - does that sound right?

On Sat, Jul 6, 2019 at 2:01 PM Michael Sokolov  wrote:
>
> Juan, that sounds intriguing.
>
> I skimmed the paper trying to understand possible applications of the
> technique. It sounds like efficient approximate (ie with some edits)
> substring search is the main idea? I don't believe such a query exists
> today in Lucene (nor any Suggester as far as I know). It sounds as if
> this would be useful for searching within large strings, eg DNA
> sequences or something like that, and maybe less applicable to typical
> "full text" (ie tokenized) search where the strings being searched are
> relatively shorter - does that sound right?
>
> On Sat, Jul 6, 2019 at 12:35 PM Juan Caicedo  
> wrote:
> >
> > Hello,
> >
> > I've been working on a project for extending LevenshteinAutomata and
> > I'd like to know if it would be useful to add it to Lucene.
> >
> > I've implemented the 'backwards dictionary' technique (see [1],
> > section 6) for speeding up approximate search. This technique allows
> > us to narrow down the search and, therefore, reduce the running time
> > (at the expense of using more memory).
> >
> > I implemented it quite some time ago using an older version of Lucene,
> > so I need to revisit the code. However, the implementation was
> > relatively simple and it didn't require major changes to the core
> > classes. I can share the code in a public repository and iterate on
> > it, while I make it compatible for new Lucene APIs, add benchmarks,
> > and more unit tests.
> >
> > Ideally, I'd like to contribute to Lucene, either as part of core,
> > suggest or a different module.
> >
> > What do you think?
> >
> > [1] 
> > https://www.cis.uni-muenchen.de/download/publikationen/fastapproxsearch.pdf
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-11.0.3) - Build # 357 - Still Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/357/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testWrapperModelPersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLException: Software caused connection abort: recv failed
at 
__randomizedtesting.SeedInfo.seed([D169CA668FEBF160:A403B9B830178825]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259)
at 
java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1314)
at 
java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:839)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.doWrapperModelPersistenceChecks(TestModelManagerPersistence.java:202)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testWrapperModelPersistence(TestModelManagerPersistence.java:255)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Commented] (SOLR-13375) Dimensional Routed Aliases

2019-07-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880918#comment-16880918
 ] 

David Smiley commented on SOLR-13375:
-

Fascinating bug to track down; congrats on that!  I hope it might help some 
other tests to be less flakey.

> Dimensional Routed Aliases
> --
>
> Key: SOLR-13375
> URL: https://issues.apache.org/jira/browse/SOLR-13375
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13375.patch, SOLR-13375.patch, SOLR-13375.patch
>
>
> Current available routed aliases are restricted to a single field. This 
> feature will allow Solr to provide data driven collection access, creation 
> and management based on multiple fields in a document. The collections will 
> be queried and updated in a unified manner via an alias. Current routing is 
> restricted to the values of a single field. The particularly useful 
> combination at this time will be Category X Time routing but Category X 
> Category may also be useful. More importantly, if additional routing schemes 
> are created in the future (either as contributions or as custom code by 
> users) combination among these should be supported. 
> It is expected that not all combinations will be useful, and that 
> determination of usefulness I expect to leave up to the user. Some Routing 
> schemes may need to be limited to be the leaf/last routing scheme for 
> technical reasons, though I'm not entirely convinced of that yet. If so, a 
> flag will be added to the RoutedAlias interface.
> Initial desire is to support two levels, though if arbitrary levels can be 
> supported easily that will be done.
> This could also have been called CompositeRoutedAlias, but that creates a TLA 
> clash with CategoryRoutedAlias.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24372 - Still Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24372/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionSplitShard

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:43275/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:43275/solr
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionSplitShard(TestPolicyCloud.java:246)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-8.1 - Build # 56 - Unstable

2019-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.1/56/

1 tests failed.
FAILED:  org.apache.solr.cloud.RollingRestartTest.test

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:34101/_/c

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:34101/_/c
at 
__randomizedtesting.SeedInfo.seed([6E3AAFE55714AFDE:E66E903FF9E8C226]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:660)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.RollingRestartTest.restartWithRolesTest(RollingRestartTest.java:74)
at 
org.apache.solr.cloud.RollingRestartTest.test(RollingRestartTest.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
 

[GitHub] [lucene-solr] atris commented on issue #769: LUCENE-8905: Better Error Handling For Illegal Arguments

2019-07-08 Thread GitBox
atris commented on issue #769: LUCENE-8905: Better Error Handling For Illegal 
Arguments
URL: https://github.com/apache/lucene-solr/pull/769#issuecomment-509478042
 
 
   Ok, it breaks quite a lot of tests. I investigated around 30 odd failures, 
and looks like all of those tests are failing on the new exception. Does that 
mean that our tests are silently passing in malformed arguments, and relying on 
the inability of TopDocsCollector to complain?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Propose CHANGES.txt releases begin with the categories (empty)

2019-07-08 Thread David Smiley
https://issues.apache.org/jira/browse/LUCENE-8883   and now with a simple
patch

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Tue, Jun 25, 2019 at 3:45 AM Jan Høydahl  wrote:

> +1
>
> PS: Check out the template in scripts/addVersion.py which now just adds
> "(no changes)"
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 25. jun. 2019 kl. 09:02 skrev Adrien Grand :
>
> +1, it's otherwise tempting to reuse an existing category even if it
> doesn't fit as well as a category that is not listed yet.
>
> On Tue, Jun 25, 2019 at 6:40 AM David Smiley 
> wrote:
>
>
> Looking at Solr's CHANGES.txt for 8.2 I see we have some sections:
> "Upgrade Notes", "New Features", "Bug Fixes", and "Other Changes".  There
> is no "Improvements" so no surprise here, the New Features category has
> issues that ought to be listed as such.  I think the order vary as well.  I
> propose that on new releases, the initial state of the next release in
> CHANGES.txt have these sections.  They can easily be removed at the
> upcoming release if there are no such sections, or they could stay as
> empty.  It seems addVersion.py is the code that sets this up.  Any opinions?
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
>
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
>
>
>


[jira] [Updated] (LUCENE-8883) CHANGES.txt: Auto add issue categories on new releases

2019-07-08 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8883:
-
Attachment: LUCENE-8883.patch
Status: Open  (was: Open)

Here's the patch.  Note I have never written Python code before so it'd be 
helpful if someone who has might eyeball these changes.  I think the changes 
were simple enough and there was enough existing Python code here for me to 
learn from that I did it right.  I did run the changes and saw it work as I 
intended.

All the patch does is add the names of the headers with a blank line 
in-between.  I did not add a "--" line below each; I see Lucene hasn't been 
doing this but Solr has, and I like Lucene's approach better just barely.  Also 
I didn't add "(No changes)"; seems needless / self-evident.  I could have added 
an "Upgrade Notes" section but opted not to... I think this won't be as much of 
an issue but I could easily go either way.

Alexandre:  Are you proposing additional python scripts to basically do all 
CHANGES.txt manipulation?  I'm not sure what to think of that... I'm lukewarm I 
guess.

> CHANGES.txt: Auto add issue categories on new releases
> --
>
> Key: LUCENE-8883
> URL: https://issues.apache.org/jira/browse/LUCENE-8883
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE-8883.patch
>
>
> As I write this, looking at Solr's CHANGES.txt for 8.2 I see we have some 
> sections: "Upgrade Notes", "New Features", "Bug Fixes", and "Other Changes".  
> There is no "Improvements" so no surprise here, the New Features category 
> has issues that ought to be listed as such.  I think the order vary as well.  
> I propose that on new releases, the initial state of the next release in 
> CHANGES.txt have these sections.  They can easily be removed at the upcoming 
> release if there are no such sections, or they could stay as empty.  It seems 
> addVersion.py is the code that sets this up and it could be enhanced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SolrCloud - "[not a shard request]" is returned when search request is short circuited

2019-07-08 Thread gopikannan
Thanks David, Submitted https://issues.apache.org/jira/browse/SOLR-13595.


On Wed, Jul 3, 2019 at 3:11 PM David Smiley 
wrote:

> Sounds like a bug to me; please do file an issue.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Fri, Jun 28, 2019 at 7:24 PM gopikannan  wrote:
>
>> Hello,
>>If the collection has only one shard/replica or in case when _route_
>> param points to the hosted core,  [shard] field in response is set to
>> "[not a shard request]".
>>
>> When short-circuiting in below code "shard.url" is not populated in
>> request param.
>> Please let me know if I  submit a JIRA.
>>
>>
>> https://github.com/apache/lucene-solr/blob/301ea0e4624c2bd693fc034a801c4abb91cba299/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java#L405
>>
>> http://localhost:8983/solr/collection1/selct?q=*:*=[shard]
>>
>> Thanks
>> Gopi
>>
>


[jira] [Commented] (SOLR-6672) function results' names should not include trailing whitespace

2019-07-08 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880856#comment-16880856
 ] 

Munendra S N commented on SOLR-6672:


Yes, I have tested it manually. I will add tests once approach is finalized

> function results' names should not include trailing whitespace
> --
>
> Key: SOLR-6672
> URL: https://issues.apache.org/jira/browse/SOLR-6672
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Reporter: Mike Sokolov
>Priority: Minor
> Attachments: SOLR-6672.patch
>
>
> If you include a function as a result field in a list of multiple fields 
> separated by white space, the corresponding key in the result markup includes 
> trailing whitespace; Example:
> {code}
> fl="id field(units_used) archive_id"
> {code}
> ends up returning results like this:
> {code}
>   {
> "id": "nest.epubarchive.1",
> "archive_id": "urn:isbn:97849D42C5A01",
> "field(units_used) ": 123
>   ^
>   }
> {code}
> A workaround is to use comma separators instead of whitespace
> {code} 
> fl="id,field(units_used),archive_id"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on issue #769: LUCENE-8905: Better Error Handling For Illegal Arguments

2019-07-08 Thread GitBox
msokolov commented on issue #769: LUCENE-8905: Better Error Handling For 
Illegal Arguments
URL: https://github.com/apache/lucene-solr/pull/769#issuecomment-509459805
 
 
   This seems better to me, but it could break people that rely on the 
leniency, so it should only go to master, and requires a changes entry to warn 
people


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6672) function results' names should not include trailing whitespace

2019-07-08 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880853#comment-16880853
 ] 

Mike Sokolov commented on SOLR-6672:


Thanks! I had forgotten about this. Did you at least test interactively?

> function results' names should not include trailing whitespace
> --
>
> Key: SOLR-6672
> URL: https://issues.apache.org/jira/browse/SOLR-6672
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Reporter: Mike Sokolov
>Priority: Minor
> Attachments: SOLR-6672.patch
>
>
> If you include a function as a result field in a list of multiple fields 
> separated by white space, the corresponding key in the result markup includes 
> trailing whitespace; Example:
> {code}
> fl="id field(units_used) archive_id"
> {code}
> ends up returning results like this:
> {code}
>   {
> "id": "nest.epubarchive.1",
> "archive_id": "urn:isbn:97849D42C5A01",
> "field(units_used) ": 123
>   ^
>   }
> {code}
> A workaround is to use comma separators instead of whitespace
> {code} 
> fl="id,field(units_used),archive_id"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4312) Index format to store position length per position

2019-07-08 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880828#comment-16880828
 ] 

Robert Muir commented on LUCENE-4312:
-

I don't think chicken and the egg description works well as an argument for 
something to add to the index. We should have a high bar in order to do that, 
because once something gets added its basically impossible to remove.

My earlier suggestion (payloads) was based on the fact that we are talking 
about corner-cases as far as search improvements, at a heavy complexity cost.

Maybe we could first address the search side with payload-based queries (maybe 
in sandbox, similar to what you already developed?) to try to address 
[~jpountz] concerns about scalability before actually optimizing it further by 
encoding in the index?

This way it wouldn't have to be all solved at once.

> Index format to store position length per position
> --
>
> Key: LUCENE-4312
> URL: https://issues.apache.org/jira/browse/LUCENE-4312
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 6.0
>Reporter: Gang Luo
>Priority: Minor
>  Labels: Suggestion
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Mike Mccandless said:TokenStreams are actually graphs.
> Indexer ignores PositionLengthAttribute.Need change the index format (and 
> Codec APIs) to store an additional int position length per position.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1893 - Still unstable

2019-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1893/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=4630, 
name=testExecutor-1196-thread-7, state=RUNNABLE, 
group=TGRP-HdfsUnloadDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=4630, name=testExecutor-1196-thread-7, 
state=RUNNABLE, group=TGRP-HdfsUnloadDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: http://127.0.0.1:34920/qlev/x
at __randomizedtesting.SeedInfo.seed([2B7759AB7508681F]:0)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCollectionInOneInstance$2(BasicDistributedZkTest.java:762)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occurred 
while waiting response from server at: http://127.0.0.1:34920/qlev/x
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.BasicDistributedZkTest.lambda$createCollectionInOneInstance$2(BasicDistributedZkTest.java:760)
... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.base/java.net.SocketInputStream.socketRead0(Native Method)
at 
java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:555)
... 9 more




Build Log:
[...truncated 14193 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsUnloadDistributedZkTest_2B7759AB7508681F-001/init-core-data-001
   [junit4]   2> 844055 INFO  
(SUITE-HdfsUnloadDistributedZkTest-seed#[2B7759AB7508681F]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 844055 INFO  
(SUITE-HdfsUnloadDistributedZkTest-seed#[2B7759AB7508681F]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 

[GitHub] [lucene-solr] noblepaul commented on issue #768: SOLR-13472: Defer authorization to be done on forwarded nodes

2019-07-08 Thread GitBox
noblepaul commented on issue #768: SOLR-13472: Defer authorization to be done 
on forwarded nodes
URL: https://github.com/apache/lucene-solr/pull/768#issuecomment-509445051
 
 
   LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+26) - Build # 24371 - Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24371/
Java: 64bit/jdk-13-ea+26 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
max version bucket seed not updated after recovery!

Stack Trace:
java.lang.AssertionError: max version bucket seed not updated after recovery!
at 
__randomizedtesting.SeedInfo.seed([A8D43C15A81A70D0:208003CF06E61D28]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:301)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:135)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-12.0.1) - Build # 845 - Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/845/
Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:38829/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:38829/solr
at 
__randomizedtesting.SeedInfo.seed([F69575F81CFD96F0:76B510D60DBE7E56]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.after(TestPolicyCloud.java:87)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 231 - Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/231/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.schema.TestUseDocValuesAsStored.testDuplicateMultiValued

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([B941632FB6121D09:579C770F78A3EBB5]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:947)
at 
org.apache.solr.schema.TestUseDocValuesAsStored.doTest(TestUseDocValuesAsStored.java:367)
at 
org.apache.solr.schema.TestUseDocValuesAsStored.testDuplicateMultiValued(TestUseDocValuesAsStored.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//arr[@name='test_is_dvo']/int[.='42']
xml response was: 


[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 143 - Still Failing

2019-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/143/

No tests ran.

Build Log:
[...truncated 24989 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2587 links (2117 relative) to 3396 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.2.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings 

[jira] [Resolved] (LUCENE-8632) XYShape: Adapt LatLonShape tessellator, field type, and queries to non-geo shapes

2019-07-08 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize resolved LUCENE-8632.

Resolution: Implemented

> XYShape: Adapt LatLonShape tessellator, field type, and queries to non-geo 
> shapes
> -
>
> Key: LUCENE-8632
> URL: https://issues.apache.org/jira/browse/LUCENE-8632
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Currently the tessellator is tightly coupled with latitude and longitude 
> (WGS84) geospatial coordinates. This issue will explore generalizing the 
> tessellator, {{LatLonShape}} field and {{LatLonShapeQuery}} to non geospatial 
> (cartesian) coordinate systems so lucene can provide the index & search 
> capability for general geometry / non GIS type use cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] nknize commented on issue #726: LUCENE-8632: New XYShape Field and Queries for indexing and searching general cartesian geometries

2019-07-08 Thread GitBox
nknize commented on issue #726: LUCENE-8632: New XYShape Field and Queries for 
indexing and searching general cartesian geometries
URL: https://github.com/apache/lucene-solr/pull/726#issuecomment-509379561
 
 
   Closing: Merged in commit 0c09481374cab029d57b0f9b45994822c0dcd39b


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] nknize closed pull request #726: LUCENE-8632: New XYShape Field and Queries for indexing and searching general cartesian geometries

2019-07-08 Thread GitBox
nknize closed pull request #726: LUCENE-8632: New XYShape Field and Queries for 
indexing and searching general cartesian geometries
URL: https://github.com/apache/lucene-solr/pull/726
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8632) XYShape: Adapt LatLonShape tessellator, field type, and queries to non-geo shapes

2019-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880686#comment-16880686
 ] 

ASF subversion and git services commented on LUCENE-8632:
-

Commit 81c88e2df30428f61fad6129525d839c57e08504 in lucene-solr's branch 
refs/heads/branch_8x from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=81c88e2 ]

LUCENE-8632: New XYShape Field and Queries for indexing and searching general 
cartesian geometries

The LatLonShape field and LatLonShape query classes added the ability to index 
and search geospatial
geometries in the WGS-84 latitude, longitude coordinate reference system. The 
foundation for this
capability is provided by the Tessellator that converts an array of vertices 
describing a Point Line
or Polygon into a stream of 3 vertex triangles that are encoded as a seven 
dimension point and
indexed using the BKD POINT structure. A nice property of the Tessellator is 
that lat, lon
restrictions are artificial and really only bound by the API.

This commit builds on top of / abstracts the Tessellator LatLonShape and 
LatLonShapeQuery classes to
provide the ability to index & search general cartesian (non WGS84 lat,lon 
restricted) geometry.
It does so by introducing two new base classes: ShapeField and ShapeQuery that 
provide the indexing
and search foundation for LatLonShape and the LatLonShape derived query classes
(LatLonShapeBoundingBoxQuery, LatLonShapeLineQuery, LatLonShapePolygonQuery) 
and introducing a new
XYShape factory class along with XYShape derived query classes 
(XYShapeBoundingBoxQuery,
XYShapeLineQuery, XYShapePolygonQuery). The heart of the cartesian indexing is 
achieved through
XYShapeEncodingUtils that converts the double precision vertices into an 
integer encoded seven
dimension point (similar to LatLonShape).

The test framework is also further abstracted and extended to provide a full 
test suite for the
new XYShape capability that works the same way as the LatLonShape test suite 
(but applied to non
GIS geometries).


> XYShape: Adapt LatLonShape tessellator, field type, and queries to non-geo 
> shapes
> -
>
> Key: LUCENE-8632
> URL: https://issues.apache.org/jira/browse/LUCENE-8632
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Currently the tessellator is tightly coupled with latitude and longitude 
> (WGS84) geospatial coordinates. This issue will explore generalizing the 
> tessellator, {{LatLonShape}} field and {{LatLonShapeQuery}} to non geospatial 
> (cartesian) coordinate systems so lucene can provide the index & search 
> capability for general geometry / non GIS type use cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8632) XYShape: Adapt LatLonShape tessellator, field type, and queries to non-geo shapes

2019-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880676#comment-16880676
 ] 

ASF subversion and git services commented on LUCENE-8632:
-

Commit 0c09481374cab029d57b0f9b45994822c0dcd39b in lucene-solr's branch 
refs/heads/master from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0c09481 ]

LUCENE-8632: New XYShape Field and Queries for indexing and searching general 
cartesian geometries

The LatLonShape field and LatLonShape query classes added the ability to index 
and search geospatial
geometries in the WGS-84 latitude, longitude coordinate reference system. The 
foundation for this
capability is provided by the Tessellator that converts an array of vertices 
describing a Point Line
or Polygon into a stream of 3 vertex triangles that are encoded as a seven 
dimension point and
indexed using the BKD POINT structure. A nice property of the Tessellator is 
that lat, lon
restrictions are artificial and really only bound by the API.

This commit builds on top of / abstracts the Tessellator LatLonShape and 
LatLonShapeQuery classes to
provide the ability to index & search general cartesian (non WGS84 lat,lon 
restricted) geometry.
It does so by introducing two new base classes: ShapeField and ShapeQuery that 
provide the indexing
and search foundation for LatLonShape and the LatLonShape derived query classes
(LatLonShapeBoundingBoxQuery, LatLonShapeLineQuery, LatLonShapePolygonQuery) 
and introducing a new
XYShape factory class along with XYShape derived query classes 
(XYShapeBoundingBoxQuery,
XYShapeLineQuery, XYShapePolygonQuery). The heart of the cartesian indexing is 
achieved through
XYShapeEncodingUtils that converts the double precision vertices into an 
integer encoded seven
dimension point (similar to LatLonShape).

The test framework is also further abstracted and extended to provide a full 
test suite for the
new XYShape capability that works the same way as the LatLonShape test suite 
(but applied to non
GIS geometries).


> XYShape: Adapt LatLonShape tessellator, field type, and queries to non-geo 
> shapes
> -
>
> Key: LUCENE-8632
> URL: https://issues.apache.org/jira/browse/LUCENE-8632
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Currently the tessellator is tightly coupled with latitude and longitude 
> (WGS84) geospatial coordinates. This issue will explore generalizing the 
> tessellator, {{LatLonShape}} field and {{LatLonShapeQuery}} to non geospatial 
> (cartesian) coordinate systems so lucene can provide the index & search 
> capability for general geometry / non GIS type use cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] nknize commented on issue #762: LUCENE-8903: Add LatLonShape point query

2019-07-08 Thread GitBox
nknize commented on issue #762: LUCENE-8903: Add LatLonShape point query
URL: https://github.com/apache/lucene-solr/pull/762#issuecomment-509361378
 
 
   I think this is duplicate of 
[LUCENE-8670](https://issues.apache.org/jira/projects/LUCENE/issues/LUCENE-8670)
 which I opened and posted a patch back at the end of January. If I remember 
right, the only reason we were holding off on that patch was because querying 
by `MULTIPOINT` (array of points) were done brute force and we had discussed 
ways of speeding it up using a simple in memory R tree. It looks like this is a 
slimmed down version that only accepts a single point. Perhaps we iterate on 
LUCENE-8670 and improve this query for multiple points?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24369 - Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24369/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

12 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:35217/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:35217/solr
at 
__randomizedtesting.SeedInfo.seed([AC459512C22F1EE9:C653F4C2AADD5423]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:384)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:256)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

Re: Concerned about Solr's V2 API synchronized with V1

2019-07-08 Thread Cassandra Targett
The v2 examples that are in the Ref Guide already use the widget you mention. 
See something like 
https://lucene.apache.org/solr/guide/8_1/adding-custom-plugins-in-solrcloud-mode.html#config-api-commands-to-use-jars-as-runtime-libraries
 for an example.

SOLR-11646 tracks the effort to add v2 examples and lists which pages have been 
updated and which remain to be done. I left the Collections APIs (the hardest) 
for last, but haven’t had time to get back to it recently. Whenever I make 
examples I make real examples I’ve actually tested, so I need to have enough 
time to actually run through each one of them for the alternate syntaxes.

Cassandra
On Jul 8, 2019, 1:09 PM -0500, Gus Heck , wrote:
> We have places where there are curl/solrj alternatives in the examples. Maybe 
> a similar widget could be used for V1/V2 examples? or even better v1/v2/solrj 
> examples for collections api :)
>
> > On Mon, Jul 8, 2019 at 2:02 PM Gus Heck  wrote:
> > > Also the Collections API docs are almost devoid of v2 examples. Just 
> > > fixing this would provide a really good reminder to those implementing 
> > > features to check that it works in v2. (unless they add features without 
> > > documenting them... which usually doesn't happen)
> > >
> > > > On Sun, Jul 7, 2019 at 9:51 PM Noble Paul  wrote:
> > > > > This is a problem. V2 APIs need a lot more metadata and nobody is 
> > > > > doing it. This leads to a lot of technical debt
> > > > >
> > > > > > On Fri, May 17, 2019, 3:42 AM David Smiley 
> > > > > >  wrote:
> > > > > > > I'm concerned about Solr's V2 API and the maintenance burden of 
> > > > > > > attempting to maintain consistency with V1.  For example upon 
> > > > > > > looking through the release notes and seeing a new exciting 
> > > > > > > REINDEXCOLLECTION command (a V1 reference), I see no 
> > > > > > > corresponding adjustments in V2 -- 
> > > > > > > lucene-solr/solr/solrj/src/resources/apispec/*   It's so easy for 
> > > > > > > this to fall out of sync.  When working on a feature affecting 
> > > > > > > admin API stuff I need to somehow just remember/know and then ask 
> > > > > > > myself if I want to test a new feature with just one API or both. 
> > > > > > > Ugh.  Additionally, the vast majority of our documentation is in 
> > > > > > > V1, and our help in solr-user and elsewhere often uses a 
> > > > > > > one-liner URL to the V1 API as well.
> > > > > > >
> > > > > > > As if Solr needed more maintenance challenges than it has already 
> > > > > > > (e.g. tests).   :-(
> > > > > > >
> > > > > > > I mainly want to point out this problem right now to see if 
> > > > > > > others also see the problem and if anyone else has thought about 
> > > > > > > it.  While working on Time Routed Aliases, I saw it but didn't 
> > > > > > > call it out.  I thought maybe somehow our implementation of the 
> > > > > > > admin functionality could be done differently so as to nearly 
> > > > > > > require a V2 adjustment, and thus we don't forget.  For example 
> > > > > > > if the V2 API was basically primary, and if it had metadata that 
> > > > > > > described how a virtual V1 API could work based off metadata in 
> > > > > > > the V2 apispec there that does mapping.  In this way, everything 
> > > > > > > would work in V2 and V1 by default, or at least the majority of 
> > > > > > > the time.  V2 requires more information than V1, so if we 
> > > > > > > continue to have V1 primary (i.e. do nothing), V2 will always be 
> > > > > > > falling behind.
> > > > > > >
> > > > > > > ~ David Smiley
> > > > > > > Apache Lucene/Solr Search Developer
> > > > > > > http://www.linkedin.com/in/davidwsmiley
> > >
> > >
> > > --
> > > http://www.needhamsoftware.com (work)
> > > http://www.the111shift.com (play)
>
>
> --
> http://www.needhamsoftware.com (work)
> http://www.the111shift.com (play)


[jira] [Commented] (LUCENE-4312) Index format to store position length per position

2019-07-08 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880635#comment-16880635
 ] 

Michael Gibney commented on LUCENE-4312:


True, both good points. But it's kind of a chicken-or-egg situation ... there 
would have been no point to address these implied challenges, so long as 
position length has not been recorded in the index (and is thus not available 
at query time). That doesn't mean there _aren't_ ways to address the challenges.

Regarding the "A B C" example, I addressed this in the LUCENE-7398 work by 
indexing next start position as a lookahead. As a proof of concept this was 
done with Payloads, but in principle I could see slight modifications 
(somewhere at the intersection of codecs and postings API) that would natively 
read next start position "early" and expose it as a lookahead. This would avoid 
the type of problematic call to {{PostingsEnum.nextPosition()}} that would (as 
you correctly point out) result in the need to buffer all information 
associated with _every_ position. I've described this approach in more detail 
[here|https://michaelgibney.net/2018/09/lucene-graph-queries-2/#index-lookahead-don-t-buffer-positions-if-you-don-t-have-to].
{quote}we can't advance positions on terms in the order we want anymore.
{quote}
Yes, I'd argue that's the toughest challenge. I addressed it indirectly by 
constructing CommonGrams-style shingles used specifically for pre-filtering 
conjunctions in the "approximation" phase of two-phase iteration (ensuring that 
common terms at subclause index 0 don't kill performance). This is described in 
more detail 
[here|https://michaelgibney.net/2018/09/lucene-graph-queries-2/#shingle-based-pre-filtering-of-conjunctionspans].

I'm not intending this to be about these particular solutions, and you might 
take issue with the solutions themselves. The more general point I guess is 
that indexed position length is fundamental, and is a prerequisite for the 
development of ways to address these challenges.

> Index format to store position length per position
> --
>
> Key: LUCENE-4312
> URL: https://issues.apache.org/jira/browse/LUCENE-4312
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 6.0
>Reporter: Gang Luo
>Priority: Minor
>  Labels: Suggestion
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Mike Mccandless said:TokenStreams are actually graphs.
> Indexer ignores PositionLengthAttribute.Need change the index format (and 
> Codec APIs) to store an additional int position length per position.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4312) Index format to store position length per position

2019-07-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880606#comment-16880606
 ] 

Adrien Grand commented on LUCENE-4312:
--

bq. the complexity of query execution would be driven by what's actually in the 
index

I don't think this is true.

For instance an exact phrase query trying to match "A B C" that is currently 
positioned on A (position=3, length=1), B (position=4, length=1), C 
(position=6, length=1) would need to advance B to the next position in case 
there is another match on position 4 that has a length of 2. And then we should 
advance C first because maybe because it also has another match on position 4 
of a different length.

Also we can't advance positions on terms in the order we want anymore. Today we 
use the rarer term to lead the iteration of positions. If we had position 
lengths in the index we would need to advance positions in the order in which 
terms occur in the phrase query since the start position that B must have 
depends on the length of A on the current position: position starts are 
guaranteed to come in order in the index but position ends are not (at least we 
don't enforce it in token streams today).

> Index format to store position length per position
> --
>
> Key: LUCENE-4312
> URL: https://issues.apache.org/jira/browse/LUCENE-4312
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 6.0
>Reporter: Gang Luo
>Priority: Minor
>  Labels: Suggestion
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Mike Mccandless said:TokenStreams are actually graphs.
> Indexer ignores PositionLengthAttribute.Need change the index format (and 
> Codec APIs) to store an additional int position length per position.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Concerned about Solr's V2 API synchronized with V1

2019-07-08 Thread Gus Heck
We have places where there are curl/solrj alternatives in the examples.
Maybe a similar widget could be used for V1/V2 examples? or even better
v1/v2/solrj examples for collections api :)

On Mon, Jul 8, 2019 at 2:02 PM Gus Heck  wrote:

> Also the Collections API docs are almost devoid of v2 examples. Just
> fixing this would provide a really good reminder to those implementing
> features to check that it works in v2. (unless they add features without
> documenting them... which usually doesn't happen)
>
> On Sun, Jul 7, 2019 at 9:51 PM Noble Paul  wrote:
>
>> This is a problem. V2 APIs need a lot more metadata and nobody is doing
>> it. This leads to a lot of technical debt
>>
>> On Fri, May 17, 2019, 3:42 AM David Smiley 
>> wrote:
>>
>>> I'm concerned about Solr's V2 API and the maintenance burden of
>>> attempting to maintain consistency with V1.  For example upon looking
>>> through the release notes and seeing a new exciting REINDEXCOLLECTION
>>> command (a V1 reference), I see no corresponding adjustments in V2 --
>>> lucene-solr/solr/solrj/src/resources/apispec/*   It's so easy for this to
>>> fall out of sync.  When working on a feature affecting admin API stuff I
>>> need to somehow just remember/know and then ask myself if I want to test a
>>> new feature with just one API or both. Ugh.  Additionally, the vast
>>> majority of our documentation is in V1, and our help in solr-user and
>>> elsewhere often uses a one-liner URL to the V1 API as well.
>>>
>>> As if Solr needed more maintenance challenges than it has already (e.g.
>>> tests).   :-(
>>>
>>> I mainly want to point out this problem right now to see if others also
>>> see the problem and if anyone else has thought about it.  While working on
>>> Time Routed Aliases, I saw it but didn't call it out.  I thought maybe
>>> somehow our implementation of the admin functionality could be done
>>> differently so as to nearly require a V2 adjustment, and thus we don't
>>> forget.  For example if the V2 API was basically primary, and if it had
>>> metadata that described how a virtual V1 API could work based off metadata
>>> in the V2 apispec there that does mapping.  In this way, everything would
>>> work in V2 and V1 by default, or at least the majority of the time.  V2
>>> requires more information than V1, so if we continue to have V1 primary
>>> (i.e. do nothing), V2 will always be falling behind.
>>>
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>>
>>
>
> --
> http://www.needhamsoftware.com (work)
> http://www.the111shift.com (play)
>


-- 
http://www.needhamsoftware.com (work)
http://www.the111shift.com (play)


[GitHub] [lucene-solr] danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori 
DictionaryBuilder tool
URL: https://github.com/apache/lucene-solr/pull/767#discussion_r301229020
 
 

 ##
 File path: 
lucene/analysis/nori/src/tools/java/org/apache/lucene/analysis/ko/util/BinaryDictionaryWriter.java
 ##
 @@ -137,14 +139,17 @@ public int put(String[] entry) {
   flags |= BinaryDictionary.HAS_READING;
 }
 
-assert leftId < 8192; // there are still unused bits
-assert posType.ordinal() < 4;
+if (leftId >= ID_LIMIT) {
+  throw new IllegalArgumentException("leftId >= " + ID_LIMIT + ": " + 
leftId);
+}
+if (posType.ordinal() >= 4) {
+  throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.ordinal());
 
 Review comment:
   Thanks :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori 
DictionaryBuilder tool
URL: https://github.com/apache/lucene-solr/pull/767#discussion_r301228809
 
 

 ##
 File path: 
lucene/analysis/nori/src/tools/java/org/apache/lucene/analysis/ko/util/BinaryDictionaryWriter.java
 ##
 @@ -137,14 +139,17 @@ public int put(String[] entry) {
   flags |= BinaryDictionary.HAS_READING;
 }
 
-assert leftId < 8192; // there are still unused bits
-assert posType.ordinal() < 4;
+if (leftId >= ID_LIMIT) {
+  throw new IllegalArgumentException("leftId >= " + ID_LIMIT + ": " + 
leftId);
+}
+if (posType.ordinal() >= 4) {
+  throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.ordinal());
+}
 buffer.putShort((short)(leftId << 2 | posType.ordinal()));
 buffer.putShort((short) (rightId << 2 | flags));
 buffer.putShort(wordCost);
 
 if (posType == POS.Type.MORPHEME) {
-  assert leftPOS == rightPOS;
 
 Review comment:
   OK :)
   I understand what you mean and I'll keep it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Concerned about Solr's V2 API synchronized with V1

2019-07-08 Thread Gus Heck
Also the Collections API docs are almost devoid of v2 examples. Just fixing
this would provide a really good reminder to those implementing features to
check that it works in v2. (unless they add features without documenting
them... which usually doesn't happen)

On Sun, Jul 7, 2019 at 9:51 PM Noble Paul  wrote:

> This is a problem. V2 APIs need a lot more metadata and nobody is doing
> it. This leads to a lot of technical debt
>
> On Fri, May 17, 2019, 3:42 AM David Smiley 
> wrote:
>
>> I'm concerned about Solr's V2 API and the maintenance burden of
>> attempting to maintain consistency with V1.  For example upon looking
>> through the release notes and seeing a new exciting REINDEXCOLLECTION
>> command (a V1 reference), I see no corresponding adjustments in V2 --
>> lucene-solr/solr/solrj/src/resources/apispec/*   It's so easy for this to
>> fall out of sync.  When working on a feature affecting admin API stuff I
>> need to somehow just remember/know and then ask myself if I want to test a
>> new feature with just one API or both. Ugh.  Additionally, the vast
>> majority of our documentation is in V1, and our help in solr-user and
>> elsewhere often uses a one-liner URL to the V1 API as well.
>>
>> As if Solr needed more maintenance challenges than it has already (e.g.
>> tests).   :-(
>>
>> I mainly want to point out this problem right now to see if others also
>> see the problem and if anyone else has thought about it.  While working on
>> Time Routed Aliases, I saw it but didn't call it out.  I thought maybe
>> somehow our implementation of the admin functionality could be done
>> differently so as to nearly require a V2 adjustment, and thus we don't
>> forget.  For example if the V2 API was basically primary, and if it had
>> metadata that described how a virtual V1 API could work based off metadata
>> in the V2 apispec there that does mapping.  In this way, everything would
>> work in V2 and V1 by default, or at least the majority of the time.  V2
>> requires more information than V1, so if we continue to have V1 primary
>> (i.e. do nothing), V2 will always be falling behind.
>>
>> ~ David Smiley
>> Apache Lucene/Solr Search Developer
>> http://www.linkedin.com/in/davidwsmiley
>>
>

-- 
http://www.needhamsoftware.com (work)
http://www.the111shift.com (play)


[jira] [Resolved] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency

2019-07-08 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10181.
---
Resolution: Duplicate

Calling this a "duplicate" since it was fixed in SOLR-11444

> CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
> ---
>
> Key: SOLR-10181
> URL: https://issues.apache.org/jira/browse/SOLR-10181
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.4, 5.5, 6.4.1
>Reporter: Samuel García Martínez
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-10181_testcase.patch
>
>
> When several CREATEALIAS are run at the same time by the OCP it could happen 
> that, even tho the API response is OK, some of those CREATEALIAS request 
> changes are lost.
> h3. The problem
> The problem happens because the CREATEALIAS cmd implementation relies on 
> _zkStateReader.getAliases()_ to create the map that will be stored in ZK. If 
> several threads reach that line at the same time it will happen that only one 
> will be stored correctly and the others will be overridden.
> The code I'm referencing is [this 
> piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65].
>  As an example, let's say that the current aliases map has {a:colA, b:colB}. 
> If two CREATEALIAS (one adding c:colC and other creating d:colD) are 
> submitted to the _tpe_ and reach that line at the same time, the resulting 
> maps will look like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and 
> only one of them will be stored correctly in ZK, resulting in "data loss", 
> meaning that API is returning OK despite that it didn't work as expected.
> On top of this, another concurrency problem could happen when the command 
> checks if the alias has been set using _checkForAlias_ method. if these two 
> CREATEALIAS zk writes had ran at the same time, the alias check fir one of 
> the threads can timeout since only one of the writes has "survived" and has 
> been "committed" to the _zkStateReader.getAliases()_ map.
> h3. How to fix it
> I can post a patch to this if someone gives me directions on how it should be 
> fixed. As I see this, there are two places where the issue can be fixed: in 
> the processor (OverseerCollectionMessageHandler) in a generic way or inside 
> the command itself.
> h5. The processor fix
> The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) should be 
> the place to fix this inside the processor. I thought that adding the 
> operation name instead of only "collection" or "name" to the locking key 
> would fix the issue, but I realized that the problem will happen anyway if 
> the concurrency happens between different operations modifying the same 
> resource (like CREATEALIAS and DELETEALIAS do). So, if this should be the 
> path to follow I don't know what should be used as a locking key.
> h5. The command fix
> Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would 
> be relatively easy. Using optimistic locking, i.e, using the aliases.json zk 
> version in the keeper.setData. To do that, Aliases class should offer the 
> aliases version so the commands can forward that version with the update and 
> retry when it fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk1.8.0_201) - Build # 356 - Still Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/356/
Java: 64bit/jdk1.8.0_201 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ReplicationFactorTest.test

Error Message:
Expected rf=2 because batch should have succeeded on 2 replicas (only one 
replica should be down) but got 1; clusterState: {   "control_collection":{ 
"pullReplicas":"0", "replicationFactor":"1", "shards":{"shard1":{   
  "range":"8000-7fff", "state":"active", 
"replicas":{"core_node2":{ 
"core":"control_collection_shard1_replica_n1", 
"base_url":"http://127.0.0.1:49210/_fo/z;, 
"node_name":"127.0.0.1:49210__fo%2Fz", "state":"active",
 "type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"},   
"repfacttest_c8n_1x3":{ "pullReplicas":"0", "replicationFactor":"3",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node4":{ 
"core":"repfacttest_c8n_1x3_shard1_replica_n2", 
"base_url":"http://127.0.0.1:49252/_fo/z;, 
"node_name":"127.0.0.1:49252__fo%2Fz", "state":"active",
 "type":"NRT"},   "core_node5":{ 
"core":"repfacttest_c8n_1x3_shard1_replica_n3", 
"base_url":"http://127.0.0.1:49210/_fo/z;, 
"node_name":"127.0.0.1:49210__fo%2Fz", "state":"active",
 "type":"NRT"},   "core_node6":{ 
"core":"repfacttest_c8n_1x3_shard1_replica_n1", 
"base_url":"http://127.0.0.1:49296/_fo/z;, 
"node_name":"127.0.0.1:49296__fo%2Fz", "state":"active",
 "type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"3", "tlogReplicas":"0"},   
"collection1":{ "pullReplicas":"0", "replicationFactor":"1", 
"shards":{   "shard1":{ "range":"8000-d554", 
"state":"active", "replicas":{"core_node6":{ 
"core":"collection1_shard1_replica_n3", 
"base_url":"http://127.0.0.1:49296/_fo/z;, 
"node_name":"127.0.0.1:49296__fo%2Fz", "state":"active",
 "type":"NRT", "leader":"true"}}},   "shard2":{ 
"range":"d555-2aa9", "state":"active", 
"replicas":{"core_node5":{ "core":"collection1_shard2_replica_n1",  
   "base_url":"http://127.0.0.1:49252/_fo/z;, 
"node_name":"127.0.0.1:49252__fo%2Fz", "state":"active",
 "type":"NRT", "leader":"true"}}},   "shard3":{ 
"range":"2aaa-7fff", "state":"active", 
"replicas":{"core_node4":{ "core":"collection1_shard3_replica_n2",  
   "base_url":"http://127.0.0.1:49272/_fo/z;, 
"node_name":"127.0.0.1:49272__fo%2Fz", "state":"active",
 "type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"}}

Stack Trace:
java.lang.AssertionError: Expected rf=2 because batch should have succeeded on 
2 replicas (only one replica should be down) but got 1; clusterState: {
  "control_collection":{
"pullReplicas":"0",
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node2":{
"core":"control_collection_shard1_replica_n1",
"base_url":"http://127.0.0.1:49210/_fo/z;,
"node_name":"127.0.0.1:49210__fo%2Fz",
"state":"active",
"type":"NRT",
"leader":"true",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"0"},
  "repfacttest_c8n_1x3":{
"pullReplicas":"0",
"replicationFactor":"3",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node4":{
"core":"repfacttest_c8n_1x3_shard1_replica_n2",
"base_url":"http://127.0.0.1:49252/_fo/z;,
"node_name":"127.0.0.1:49252__fo%2Fz",
"state":"active",
"type":"NRT"},
  "core_node5":{
"core":"repfacttest_c8n_1x3_shard1_replica_n3",
"base_url":"http://127.0.0.1:49210/_fo/z;,
"node_name":"127.0.0.1:49210__fo%2Fz",
"state":"active",
"type":"NRT"},
  "core_node6":{
"core":"repfacttest_c8n_1x3_shard1_replica_n1",

[jira] [Updated] (SOLR-13257) Enable replica routing affinity for better cache usage

2019-07-08 Thread Michael Gibney (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Gibney updated SOLR-13257:
--
Attachment: SOLR-13257.patch
Status: Patch Available  (was: Patch Available)

New patch to address test failures.

> Enable replica routing affinity for better cache usage
> --
>
> Key: SOLR-13257
> URL: https://issues.apache.org/jira/browse/SOLR-13257
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Michael Gibney
>Priority: Minor
> Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch, 
> SOLR-13257.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For each shard in a distributed request, Solr currently routes each request 
> randomly via 
> [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java]
>  to a particular replica. In setups with replication factor >1, this normally 
> results in a situation where subsequent requests (which one would hope/expect 
> to leverage cached results from previous related requests) end up getting 
> routed to a replica that hasn't seen any related requests.
> The problem can be replicated by issuing a relatively expensive query (maybe 
> containing common terms?). The first request initializes the 
> {{queryResultCache}} on the consulted replicas. If replication factor >1 and 
> there are a sufficient number of shards, subsequent requests will likely be 
> routed to at least one replica that _hasn't_ seen the query before. The 
> replicas with uninitialized caches become a bottleneck, and from the client's 
> perspective, many subsequent requests appear not to benefit from caching at 
> all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on a change in pull request #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
msokolov commented on a change in pull request #767: LUCENE-8904: enhance Nori 
DictionaryBuilder tool
URL: https://github.com/apache/lucene-solr/pull/767#discussion_r301200364
 
 

 ##
 File path: 
lucene/analysis/nori/src/tools/java/org/apache/lucene/analysis/ko/util/BinaryDictionaryWriter.java
 ##
 @@ -137,14 +139,17 @@ public int put(String[] entry) {
   flags |= BinaryDictionary.HAS_READING;
 }
 
-assert leftId < 8192; // there are still unused bits
-assert posType.ordinal() < 4;
+if (leftId >= ID_LIMIT) {
+  throw new IllegalArgumentException("leftId >= " + ID_LIMIT + ": " + 
leftId);
+}
+if (posType.ordinal() >= 4) {
+  throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.ordinal());
+}
 buffer.putShort((short)(leftId << 2 | posType.ordinal()));
 buffer.putShort((short) (rightId << 2 | flags));
 buffer.putShort(wordCost);
 
 if (posType == POS.Type.MORPHEME) {
-  assert leftPOS == rightPOS;
 
 Review comment:
   Well, assertions are always unnecessary! I think their value is exactly in 
helping ensure that the correct code path was followed to get here, and 
confirming what we know to be true by analysis or anyway knowledge, in fact 
holds in practice (empirically). For example, if I were to somehow mess up the 
code above that guarantees this, due to my ignorance, this assertion would 
helpfully let me know. So I think we should keep it?  


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on a change in pull request #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
msokolov commented on a change in pull request #767: LUCENE-8904: enhance Nori 
DictionaryBuilder tool
URL: https://github.com/apache/lucene-solr/pull/767#discussion_r301200402
 
 

 ##
 File path: 
lucene/analysis/nori/src/tools/java/org/apache/lucene/analysis/ko/util/BinaryDictionaryWriter.java
 ##
 @@ -137,14 +139,17 @@ public int put(String[] entry) {
   flags |= BinaryDictionary.HAS_READING;
 }
 
-assert leftId < 8192; // there are still unused bits
-assert posType.ordinal() < 4;
+if (leftId >= ID_LIMIT) {
+  throw new IllegalArgumentException("leftId >= " + ID_LIMIT + ": " + 
leftId);
+}
+if (posType.ordinal() >= 4) {
+  throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.ordinal());
 
 Review comment:
   (1), please!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10564) NPE in QueryComponent when RTG

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-10564.
-
   Resolution: Not A Problem
Fix Version/s: (was: 7.0)

> NPE in QueryComponent when RTG
> --
>
> Key: SOLR-10564
> URL: https://issues.apache.org/jira/browse/SOLR-10564
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.5
>Reporter: Markus Jelsma
>Priority: Major
> Attachments: SOLR-10564.patch, SOLR-10564.patch, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png, screenshot-5.png
>
>
> The following URL:
> {code}
> /get?fl=queries,prob_*,view_score,feedback_score=
> {code}
> Kindly returns the document.
> This once, however:
> {code}
> /select?qt=/get=queries,prob_*,view_score,feedback_score=
> {code}
> throws:
> {code}
> 2017-04-25 10:23:26.222 ERROR (qtp1873653341-28693) [c:documents s:shard1 
> r:core_node3 x:documents_shard1_replica1] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.QueryComponent.unmarshalSortValues(QueryComponent.java:1226)
> at 
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:1077)
> at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:777)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:756)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:428)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:298)
> {code}
> This is thrown when i do it manually, but the error does not appear when Solr 
> issues those same queries under the hood.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10377) Improve readability of the explain output for JSON format

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-10377.
-
Resolution: Not A Problem

Based on above comment, closing this

> Improve readability of the explain output for JSON format
> -
>
> Key: SOLR-10377
> URL: https://issues.apache.org/jira/browse/SOLR-10377
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Priority: Minor
>
> Today when I ask solr for the debug query output In json with indent I get 
> this:
> {code}
> 1: " 3.545981 = sum of: 3.545981 = weight(name:dns in 0) [SchemaSimilarity], 
> result of: 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: 
> 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 
> 0.5)) from: 2.0 = docFreq 24.0 = docCount 1.54 = tfNorm, computed as (freq * 
> (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 
> 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 
> 1.0 = fieldLength ",
> 2: " 7.4202514 = sum of: 7.4202514 = sum of: 2.7921112 = weight(name:domain 
> in 1) [SchemaSimilarity], result of: 2.7921112 = score(doc=1,freq=1.0 = 
> termFreq=1.0 ), product of: 2.3025851 = idf, computed as log(1 + (docCount - 
> docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 24.0 = docCount 
> 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * 
> fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 
> 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 2.7921112 = 
> weight(name:name in 1) [SchemaSimilarity], result of: 2.7921112 = 
> score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 2.3025851 = idf, computed 
> as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 
> 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + 
> k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 
> = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 
> 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: 1.8360289 
> = score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 1.5141277 = idf, 
> computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 5.0 = 
> docFreq 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / 
> (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = 
> termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = 
> fieldLength "
> {code}
> When I run the same query with "wt=ruby" I get a much nicer output
> {code}
> '2'=>'
> 7.4202514 = sum of:
>   7.4202514 = sum of:
> 2.7921112 = weight(name:domain in 1) [SchemaSimilarity], result of:
>   2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0
> ), product of:
> 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / 
> (docFreq + 0.5)) from:
>   2.0 = docFreq
>   24.0 = docCount
> 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - 
> b + b * fieldLength / avgFieldLength)) from:
>   1.0 = termFreq=1.0
>   1.2 = parameter k1
>   0.75 = parameter b
>   7.0 = avgFieldLength
>   4.0 = fieldLength
> 2.7921112 = weight(name:name in 1) [SchemaSimilarity], result of:
>   2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0
> ), product of:
> 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / 
> (docFreq + 0.5)) from:
>   2.0 = docFreq
>   24.0 = docCount
> 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - 
> b + b * fieldLength / avgFieldLength)) from:
>   1.0 = termFreq=1.0
>   1.2 = parameter k1
>   0.75 = parameter b
>   7.0 = avgFieldLength
>   4.0 = fieldLength
> 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of:
>   1.8360289 = score(doc=1,freq=1.0 = termFreq=1.0
> ), product of:
> 1.5141277 = idf, computed as log(1 + (docCount - docFreq + 0.5) / 
> (docFreq + 0.5)) from:
>   5.0 = docFreq
>   24.0 = docCount
> 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - 
> b + b * fieldLength / avgFieldLength)) from:
>   1.0 = termFreq=1.0
>   1.2 = parameter k1
>   0.75 = parameter b
>   7.0 = avgFieldLength
>   4.0 = fieldLength
> ',
>   '1'=>'
> 3.545981 = sum of:
>   3.545981 = weight(name:dns in 0) [SchemaSimilarity], result of:
> 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0
> ), product of:
>   2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / 
> (docFreq + 0.5)) from:
> 2.0 = docFreq
> 24.0 = docCount
>   1.54 = tfNorm, computed as (freq * (k1 + 1)) / (freq + 

[jira] [Resolved] (SOLR-7695) ManagedStopFilterFactory throws exception when ignoreCase is specified

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-7695.

Resolution: Not A Problem

ignoreCase should be configured using initArgs via REST API

> ManagedStopFilterFactory throws exception when ignoreCase is specified
> --
>
> Key: SOLR-7695
> URL: https://issues.apache.org/jira/browse/SOLR-7695
> Project: Solr
>  Issue Type: Bug
>Reporter: Mike Thomsen
>Priority: Major
>
> The source code and various tutorials suggest this should work:
>ignoreCase="true"
>   managed="english"/>
> Instead, that throws an IllegalArgumentException



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5797) Explain plan transform does not work in Solr cloud

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-5797.

Resolution: Cannot Reproduce

Not able to reproduce in the latest version

> Explain plan transform does not work in Solr cloud
> --
>
> Key: SOLR-5797
> URL: https://issues.apache.org/jira/browse/SOLR-5797
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Divya Mehta
>Priority: Major
>  Labels: explainPlan, solrcloud
>
> explain plan works as expected on single solr node, After moving to Solr 
> Cloud, it does not show any explanation field in returned documents.
> This is how we ask for explain output in our SolrQuery, as
> SolrQuery sq = new SolrQuery();
> 
> if (args.getExplain()) {
> sq.setParam(CommonParams.DEBUG_QUERY, true);
> sq.addField("explanation:[explain style=text]");
> }
> I checked the logs at both single node and cloud, but request and its 
> parameters are exactly the same.
> Is this a known issue or does it need some other configuration to make it 
> work on solr cloud. We have one main node and one shard and using standalone 
> zookeeper to manage solr cloud.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] danmuzi commented on issue #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
danmuzi commented on issue #767: LUCENE-8904: enhance Nori DictionaryBuilder 
tool
URL: https://github.com/apache/lucene-solr/pull/767#issuecomment-509287011
 
 
   Thank you for your review! @msokolov
   I left some replies :D


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13599) ReplicationFactorTest high failure rate on Windows jenkins VMs after 2019-06-22 OS/java upgrades

2019-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880493#comment-16880493
 ] 

ASF subversion and git services commented on SOLR-13599:


Commit 4fd1850d2ee2976efe4e1ee5645d32dc394714b1 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4fd1850 ]

SOLR-13599: additional 'checkpoint' logging to try and help diagnose strange 
failures

(cherry picked from commit b4a602f6b24196273adbdb7d47bf42fa8d08d807)


> ReplicationFactorTest high failure rate on Windows jenkins VMs after 
> 2019-06-22 OS/java upgrades
> 
>
> Key: SOLR-13599
> URL: https://issues.apache.org/jira/browse/SOLR-13599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: thetaphi_Lucene-Solr-master-Windows_8025.log.txt
>
>
> We've started seeing some weirdly consistent (but not reliably reproducible) 
> failures from ReplicationFactorTest when running on Uwe's Windows jenkins 
> machines.
> The failures all seem to have started on June 22 -- when Uwe upgraded his 
> Windows VMs to upgrade the Java version, but happen across all versions of 
> java tested, and on both the master and branch_8x.
> While this test failed a total of 5 times, in different ways, on various 
> jenkins boxes between 2019-01-01 and 2019-06-21, it seems to have failed on 
> all but 1 or 2 of Uwe's "Windows" jenkins builds since that 2019-06-22, and 
> when it fails the {{reproduceJenkinsFailures.py}} logic used in Uwe's jenkins 
> builds frequently fails anywhere from 1-4 additional times.
> All of these failures occur in the exact same place, with the exact same 
> assertion: that the expected replicationFactor of 2 was not achieved, and an 
> rf=1 (ie: only the master) was returned, when sending a _batch_ of documents 
> to a collection with 1 shard, 3 replicas; while 1 of the replicas was 
> partitioned off due to a closed proxy.
> In the handful of logs I've examined closely, the 2nd "live" replica does in 
> fact log that it recieved & processed the update, but with a QTime of over 30 
> seconds, and it then it immediately logs an 
> {{org.eclipse.jetty.io.EofException: Reset cancel_stream_error}} Exception -- 
> meanwhile, the leader has one ({{updateExecutor}} thread logging copious 
> amount of {{java.net.ConnectException: Connection refused: no further 
> information}} regarding the replica that was partitioned off, before a second 
> {{updateExecutor}} thread ultimately logs 
> {{java.util.concurrent.ExecutionException: 
> java.util.concurrent.TimeoutException: idle_timeout}} regarding the "live" 
> replica.
> 
> What makes this perplexing is that this is not the first time in the test 
> that documents were added to this collection while one replica was 
> partitioned off, but it is the first time that all 3 of the following are 
> true _at the same time_:
> # the collection has recovered after some replicas were partitioned and 
> re-connected
> # a batch of multiple documents is being added
> # one replica has been "re" partitioned.
> ...prior to the point when this failure happens, only individual document 
> adds were tested while replicas where partitioned.  Batches of adds were only 
> tested when all 3 replicas were "live" after the proxies were re-opened and 
> the collection had fully recovered.  The failure also comes from the first 
> update to happen after a replica's proxy port has been "closed" for the 
> _second_ time.
> While this conflagration of events might concievible trigger some weird bug, 
> what makes these failures _particularly_ perplexing is that:
> * the failures only happen on Windows
> * the failures only started after the Windows VM update on June-22.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6672) function results' names should not include trailing whitespace

2019-07-08 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880491#comment-16880491
 ] 

Munendra S N commented on SOLR-6672:


 [^SOLR-6672.patch] 
func string is computed 
[here|https://github.com/apache/lucene-solr/blob/ac209b637d68c84ce1402b6b8967514ce9cf6854/solr/core/src/java/org/apache/solr/search/SolrReturnFields.java#L358].
 Func parser consumes whitespace that follow the func query, hence extra spaces.

Above patch is without tests. Trimming computed func string would solve the 
issue but not sure if that is right way to do it

> function results' names should not include trailing whitespace
> --
>
> Key: SOLR-6672
> URL: https://issues.apache.org/jira/browse/SOLR-6672
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Reporter: Mike Sokolov
>Priority: Minor
> Attachments: SOLR-6672.patch
>
>
> If you include a function as a result field in a list of multiple fields 
> separated by white space, the corresponding key in the result markup includes 
> trailing whitespace; Example:
> {code}
> fl="id field(units_used) archive_id"
> {code}
> ends up returning results like this:
> {code}
>   {
> "id": "nest.epubarchive.1",
> "archive_id": "urn:isbn:97849D42C5A01",
> "field(units_used) ": 123
>   ^
>   }
> {code}
> A workaround is to use comma separators instead of whitespace
> {code} 
> fl="id,field(units_used),archive_id"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4312) Index format to store position length per position

2019-07-08 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880490#comment-16880490
 ] 

Michael Gibney commented on LUCENE-4312:


Thank you for the feedback, [~sokolov] and [~jpountz]!
{quote}Recording position lengths in the index is the easy part of the problem 
in my opinion.
{quote}
Yes, this is my view as well; and looking to the future, _respecting_ position 
length would certainly add complexity to phrase queries. But in terms of 
performance impact, the complexity of query execution would be driven by what's 
actually in the index (so for many use cases performance should be roughly 
equivalent to that of an implementation that ignores position length).

Regarding the challenges of query implementation... I'm taking a fresh look at 
this issue in the context of work done on LUCENE-7398, which seeks to implement 
backtracking phrase queries in an efficient way (including sloppy, nested, 
etc.). Despite that issue being nominally about "nested Span queries", it's 
really more generally about "proximity search over variable-length subclauses", 
and the techniques used in the implementation for LUCENE-7398 would be 
transferable to interval queries as well.

It's a fair point about the arbitrariness of sloppy phrase queries with 
intervening multi-term synonyms, but I wouldn't call such queries 
"meaningless"; in any case, I think that problem already exists for multi-term 
indexed synonyms, and is not exacerbated by the introduction of indexed 
position length. Sloppy phrase queries (and, for that matter, tokenization 
itself) are somewhat arbitrary by nature. Following that tangent, I can imagine 
some potential ways to mitigate such arbitrariness ... all of which themselves 
rely on the ability to index token graph structure (i.e., position length).

> Index format to store position length per position
> --
>
> Key: LUCENE-4312
> URL: https://issues.apache.org/jira/browse/LUCENE-4312
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 6.0
>Reporter: Gang Luo
>Priority: Minor
>  Labels: Suggestion
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Mike Mccandless said:TokenStreams are actually graphs.
> Indexer ignores PositionLengthAttribute.Need change the index format (and 
> Codec APIs) to store an additional int position length per position.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6672) function results' names should not include trailing whitespace

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-6672:
---
Attachment: SOLR-6672.patch

> function results' names should not include trailing whitespace
> --
>
> Key: SOLR-6672
> URL: https://issues.apache.org/jira/browse/SOLR-6672
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Reporter: Mike Sokolov
>Priority: Minor
> Attachments: SOLR-6672.patch
>
>
> If you include a function as a result field in a list of multiple fields 
> separated by white space, the corresponding key in the result markup includes 
> trailing whitespace; Example:
> {code}
> fl="id field(units_used) archive_id"
> {code}
> ends up returning results like this:
> {code}
>   {
> "id": "nest.epubarchive.1",
> "archive_id": "urn:isbn:97849D42C5A01",
> "field(units_used) ": 123
>   ^
>   }
> {code}
> A workaround is to use comma separators instead of whitespace
> {code} 
> fl="id,field(units_used),archive_id"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori 
DictionaryBuilder tool
URL: https://github.com/apache/lucene-solr/pull/767#discussion_r301171274
 
 

 ##
 File path: 
lucene/analysis/nori/src/tools/java/org/apache/lucene/analysis/ko/util/BinaryDictionaryWriter.java
 ##
 @@ -137,14 +139,17 @@ public int put(String[] entry) {
   flags |= BinaryDictionary.HAS_READING;
 }
 
-assert leftId < 8192; // there are still unused bits
-assert posType.ordinal() < 4;
+if (leftId >= ID_LIMIT) {
+  throw new IllegalArgumentException("leftId >= " + ID_LIMIT + ": " + 
leftId);
+}
+if (posType.ordinal() >= 4) {
+  throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.ordinal());
+}
 buffer.putShort((short)(leftId << 2 | posType.ordinal()));
 buffer.putShort((short) (rightId << 2 | flags));
 buffer.putShort(wordCost);
 
 if (posType == POS.Type.MORPHEME) {
-  assert leftPOS == rightPOS;
 
 Review comment:
   It is an unnecessary condition.
   The leftPOS and rightPOS are determined on lines 83-90.
   If posType is POS.Type.MORPHEME, leftPOS and rightPOS are always the same.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori DictionaryBuilder tool

2019-07-08 Thread GitBox
danmuzi commented on a change in pull request #767: LUCENE-8904: enhance Nori 
DictionaryBuilder tool
URL: https://github.com/apache/lucene-solr/pull/767#discussion_r301171182
 
 

 ##
 File path: 
lucene/analysis/nori/src/tools/java/org/apache/lucene/analysis/ko/util/BinaryDictionaryWriter.java
 ##
 @@ -137,14 +139,17 @@ public int put(String[] entry) {
   flags |= BinaryDictionary.HAS_READING;
 }
 
-assert leftId < 8192; // there are still unused bits
-assert posType.ordinal() < 4;
+if (leftId >= ID_LIMIT) {
+  throw new IllegalArgumentException("leftId >= " + ID_LIMIT + ": " + 
leftId);
+}
+if (posType.ordinal() >= 4) {
+  throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.ordinal());
 
 Review comment:
   +1.
   
   Which log style is better?
   1) `throw new IllegalArgumentException("posType.ordinal() >= " + 4 + ": " + 
posType.name());`
   2) `throw new IllegalArgumentException("posType should be MORPHOME or 
COMPOUND or INFLECT or PREANALYSIS" + ": " + posType.name());`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] nknize commented on issue #726: LUCENE-8632: New XYShape Field and Queries for indexing and searching general cartesian geometries

2019-07-08 Thread GitBox
nknize commented on issue #726: LUCENE-8632: New XYShape Field and Queries for 
indexing and searching general cartesian geometries
URL: https://github.com/apache/lucene-solr/pull/726#issuecomment-509283222
 
 
   Cleaned up the API a bit:
   
   * refactors `Tessellator.Triangle` `getLat / getLon` methods to `getY / 
getX`, respectively
   * Changed `XYShape.createIndexableFields` and `XYShape.newBoxQuery` to 
accept floats instead of doubles
   
   Refactored `TestLatLonShapeEncoding` to derive from a new 
`BaseShapeEncodingTestCase` class along with a new `TestXYShapeEncoding` class 
to equally test the `XYShapeEncoding` logic.
   
   I think this PR is just about ready to merge and continue iterating in 
sandbox?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7845) 2 arg "query()" does not exist for all docs, even though second arg specifies a default value

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-7845:
---
Status: Patch Available  (was: Reopened)

> 2 arg "query()" does not exist for all docs, even though second arg specifies 
> a default value
> -
>
> Key: SOLR-7845
> URL: https://issues.apache.org/jira/browse/SOLR-7845
> Project: Solr
>  Issue Type: Bug
>Reporter: Bill Bell
>Priority: Major
> Attachments: SOLR-7845.patch
>
>
> The 2 arg version of the "query()" was designed so that the second argument 
> would specify the value used for any document that does not match the query 
> pecified by the first argument -- but the "exists" property of the resulting 
> ValueSource only takes into consideration wether or not the document matches 
> the query -- and ignores the use of the second argument.
> 
> The work around is to ignore the 2 arg form of the query() function, and 
> instead wrap he query function in def().
> for example:  {{def(query($something), $defaultval)}} instead of 
> {{query($something, $defaultval)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7845) 2 arg "query()" does not exist for all docs, even though second arg specifies a default value

2019-07-08 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880483#comment-16880483
 ] 

Munendra S N commented on SOLR-7845:


 [^SOLR-7845.patch] 
This is happening because objectValue returns null when doc doesn't match. 
Corrected this to return defVal.
objectVal(doc) could just call floatVal(doc) but haven't made this change.


> 2 arg "query()" does not exist for all docs, even though second arg specifies 
> a default value
> -
>
> Key: SOLR-7845
> URL: https://issues.apache.org/jira/browse/SOLR-7845
> Project: Solr
>  Issue Type: Bug
>Reporter: Bill Bell
>Priority: Major
> Attachments: SOLR-7845.patch
>
>
> The 2 arg version of the "query()" was designed so that the second argument 
> would specify the value used for any document that does not match the query 
> pecified by the first argument -- but the "exists" property of the resulting 
> ValueSource only takes into consideration wether or not the document matches 
> the query -- and ignores the use of the second argument.
> 
> The work around is to ignore the 2 arg form of the query() function, and 
> instead wrap he query function in def().
> for example:  {{def(query($something), $defaultval)}} instead of 
> {{query($something, $defaultval)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13599) ReplicationFactorTest high failure rate on Windows jenkins VMs after 2019-06-22 OS/java upgrades

2019-07-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880481#comment-16880481
 ] 

Hoss Man commented on SOLR-13599:
-

this is the epitome of a heisenbug ... 

5 days ago i commit a change to master that adds a bit of extra logging to the 
test, and since then there hasn't been a single master fail -- but in the same 
about of time, 7/10 of the 8x builds have failed, and all but one of those 
reproduced 3x (or more) times.

not sure what to do here except backport the loging changes to 8x, and hope we 
get another failure eventaully so we'll have something to diagnose.


> ReplicationFactorTest high failure rate on Windows jenkins VMs after 
> 2019-06-22 OS/java upgrades
> 
>
> Key: SOLR-13599
> URL: https://issues.apache.org/jira/browse/SOLR-13599
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: thetaphi_Lucene-Solr-master-Windows_8025.log.txt
>
>
> We've started seeing some weirdly consistent (but not reliably reproducible) 
> failures from ReplicationFactorTest when running on Uwe's Windows jenkins 
> machines.
> The failures all seem to have started on June 22 -- when Uwe upgraded his 
> Windows VMs to upgrade the Java version, but happen across all versions of 
> java tested, and on both the master and branch_8x.
> While this test failed a total of 5 times, in different ways, on various 
> jenkins boxes between 2019-01-01 and 2019-06-21, it seems to have failed on 
> all but 1 or 2 of Uwe's "Windows" jenkins builds since that 2019-06-22, and 
> when it fails the {{reproduceJenkinsFailures.py}} logic used in Uwe's jenkins 
> builds frequently fails anywhere from 1-4 additional times.
> All of these failures occur in the exact same place, with the exact same 
> assertion: that the expected replicationFactor of 2 was not achieved, and an 
> rf=1 (ie: only the master) was returned, when sending a _batch_ of documents 
> to a collection with 1 shard, 3 replicas; while 1 of the replicas was 
> partitioned off due to a closed proxy.
> In the handful of logs I've examined closely, the 2nd "live" replica does in 
> fact log that it recieved & processed the update, but with a QTime of over 30 
> seconds, and it then it immediately logs an 
> {{org.eclipse.jetty.io.EofException: Reset cancel_stream_error}} Exception -- 
> meanwhile, the leader has one ({{updateExecutor}} thread logging copious 
> amount of {{java.net.ConnectException: Connection refused: no further 
> information}} regarding the replica that was partitioned off, before a second 
> {{updateExecutor}} thread ultimately logs 
> {{java.util.concurrent.ExecutionException: 
> java.util.concurrent.TimeoutException: idle_timeout}} regarding the "live" 
> replica.
> 
> What makes this perplexing is that this is not the first time in the test 
> that documents were added to this collection while one replica was 
> partitioned off, but it is the first time that all 3 of the following are 
> true _at the same time_:
> # the collection has recovered after some replicas were partitioned and 
> re-connected
> # a batch of multiple documents is being added
> # one replica has been "re" partitioned.
> ...prior to the point when this failure happens, only individual document 
> adds were tested while replicas where partitioned.  Batches of adds were only 
> tested when all 3 replicas were "live" after the proxies were re-opened and 
> the collection had fully recovered.  The failure also comes from the first 
> update to happen after a replica's proxy port has been "closed" for the 
> _second_ time.
> While this conflagration of events might concievible trigger some weird bug, 
> what makes these failures _particularly_ perplexing is that:
> * the failures only happen on Windows
> * the failures only started after the Windows VM update on June-22.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7845) 2 arg "query()" does not exist for all docs, even though second arg specifies a default value

2019-07-08 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-7845:
---
Attachment: SOLR-7845.patch

> 2 arg "query()" does not exist for all docs, even though second arg specifies 
> a default value
> -
>
> Key: SOLR-7845
> URL: https://issues.apache.org/jira/browse/SOLR-7845
> Project: Solr
>  Issue Type: Bug
>Reporter: Bill Bell
>Priority: Major
> Attachments: SOLR-7845.patch
>
>
> The 2 arg version of the "query()" was designed so that the second argument 
> would specify the value used for any document that does not match the query 
> pecified by the first argument -- but the "exists" property of the resulting 
> ValueSource only takes into consideration wether or not the document matches 
> the query -- and ignores the use of the second argument.
> 
> The work around is to ignore the 2 arg form of the query() function, and 
> instead wrap he query function in def().
> for example:  {{def(query($something), $defaultval)}} instead of 
> {{query($something, $defaultval)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24368 - Failure!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24368/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 64174 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj2009143157
 [ecj-lint] Compiling 1280 source files to /tmp/ecj2009143157
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 219)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 788)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 794)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 19)
 [ecj-lint] import javax.naming.Context;
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 20)
 [ecj-lint] import javax.naming.InitialContext;
 [ecj-lint]^^^
 [ecj-lint] The type javax.naming.InitialContext is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 21)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 22)
 [ecj-lint] import javax.naming.NoInitialContextException;
 [ecj-lint]^^
 [ecj-lint] The type javax.naming.NoInitialContextException is not accessible
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^^
 [ecj-lint] Context cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^
 [ecj-lint] InitialContext cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 779)
 [ecj-lint] } catch (NoInitialContextException e) {
 [ecj-lint]  ^
 [ecj-lint] NoInitialContextException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 781)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java
 (at line 

[jira] [Commented] (LUCENE-4312) Index format to store position length per position

2019-07-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880422#comment-16880422
 ] 

Adrien Grand commented on LUCENE-4312:
--

Recording position lengths in the index is the easy part of the problem in my 
opinion. I'm concerned that this will introduce significant complexity to 
phrase queries (they will require backtracking in order to deal with the case 
that a term exists twice at the same position with different position lengths), 
and even make sloppy phrase queries and their spans/intervals counterparts 
meaningless (as terms could be very distant according to the index only because 
there is one term in-between that has a multi-term synonym indexed). 

> Index format to store position length per position
> --
>
> Key: LUCENE-4312
> URL: https://issues.apache.org/jira/browse/LUCENE-4312
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 6.0
>Reporter: Gang Luo
>Priority: Minor
>  Labels: Suggestion
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Mike Mccandless said:TokenStreams are actually graphs.
> Indexer ignores PositionLengthAttribute.Need change the index format (and 
> Codec APIs) to store an additional int position length per position.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-13-ea+26) - Build # 8042 - Still Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8042/
Java: 64bit/jdk-13-ea+26 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testClassifyStream

Error Message:
expected:<0.0> but was:<0.9998245650830389>

Stack Trace:
java.lang.AssertionError: expected:<0.0> but was:<0.9998245650830389>
at 
__randomizedtesting.SeedInfo.seed([BA8ADD4C6B18F5F6:1FC247745240EC62]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:553)
at org.junit.Assert.assertEquals(Assert.java:683)
at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testClassifyStream(StreamDecoratorTest.java:3680)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:830)


FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testClassifyStream

Error Message:
expected:<0.0> but 

[JENKINS] Lucene-Solr-SmokeRelease-8.1 - Build # 55 - Still Failing

2019-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.1/55/

No tests ran.

Build Log:
[...truncated 23880 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2570 links (2103 relative) to 3374 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/package/solr-8.1.2.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[GitHub] [lucene-solr] atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-509207157
 
 
   @jpountz  I have updated the PR per your comments. ant precommit passes.
   
   Apologies, this iteration also got force pushed. I have a local daemon which 
auto squashes and force pushes to my fork each time I create a new commit on a 
branch (this was meant to help the committer merge the PR without needing to 
squash). I will disable that in the next runs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



JDK 13 , JDK 14 & Valhalla Early Access builds are available.

2019-07-08 Thread Rory O'Donnell

 Hi Uwe & Dawid,

**OpenJDK* 13 Early Access build **28 is now available **at : - 
jdk.java.net/13/*


 * These early-access, open-source builds are provided under the GNU
   General Public License, version 2, with the Classpath Exception
   .
 * Changes in this build 28 [1]


*Reminder of a change in b24 - A jrt URI can only encode paths to files 
in /modules tree **(JDK-8224946 
)*


A |jrt| URL is a hierarchical URI with syntax |jrt:/[$MODULE[/$PATH]]|. 
When using the |jrt| file system, a |java.net.URI| object can be created 
with the |java.nio.file.Path::toUri| method to encode a normalized path 
to a file in the |/modules| tree. A |jrt| URL cannot encode a path to a 
file in the |/packages| tree. The |jrt| file system provider has changed 
in this release so that |toUri| fails with |IOError| when it is not 
possible to encode the file path as a jrt URI. *This change may impact 
tools have been making use of URLs that are not compliant with the 
syntax. Tools with paths to files in **|/packages|**can use the 
**|toRealPath()|**method to obtain the real path (in **|/modules|**) 
before attempting to convert the file path to a URI.*


*OpenJDK 14 **Early Access build 4 **is now available **at : - 
jdk.java.net/14/*


 * These early-access, open-source builds are provided under the GNU
   General Public License, version 2, with the Classpath Exception
   .
 * Changes in this build [2]


*Project Valhalla "L-World Inline Types" Early-Access Builds*

 * Build jdk-14-valhalla+1-8
 * These early-access builds are provided under the GNU General Public
   License, version 2, with the Classpath Exception
   .
 * Please send feedback via e-mail to valhalla-...@openjdk.java.net
   . To send e-mail to this
   address you must first subscribe to the mailing list.


*The Skara tooling is now open source *[3]
we are happy to announce that the tooling for project Skara is now open 
source and available at


 * https://github.com/openjdk/skara 

The Skara tooling includes both server-side tools (so called "bots") as 
well as several command-line tools **

If you have any questions, feedback etc. send them to Skara mailing list [4]

Rgds, Rory


[1] JDK 13 - Changes in b28 here 

[2] JDK 14 - Changes in b4 here 


[3] https://mail.openjdk.java.net/pipermail/skara-dev/2019-June/47.html
[4] https://mail.openjdk.java.net/mailman/listinfo/skara-dev

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



[GitHub] [lucene-solr] atris commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
atris commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301065408
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/search/TestLargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+
+import org.apache.lucene.document.Document;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.RandomIndexWriter;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.LuceneTestCase;
+
+public class TestLargeNumHitsTopDocsCollector extends LuceneTestCase {
+  private Directory dir;
+  private IndexReader reader;
+
+  @Override
+  public void setUp() throws Exception {
+super.setUp();
+dir = newDirectory();
+RandomIndexWriter writer = new RandomIndexWriter(random(), dir);
+for (int i = 0; i < 200_000; i++) {
+  writer.addDocument(new Document());
+}
+reader = writer.getReader();
+writer.close();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+reader.close();
+dir.close();
+dir = null;
+super.tearDown();
+  }
+  public void testLargeNumAndSparseHits() throws Exception {
+runNumHits(100_000);
+  }
+
+  public void testSingleNumHit() throws Exception {
+runNumHits(1);
+  }
+
+  public void testLowNumberOfHits() throws Exception {
+runNumHits(25);
+  }
+
+  public void testIllegalArguments() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(15);
+TopScoreDocCollector regularCollector = TopScoreDocCollector.create(15, 
null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+expectThrows(IllegalArgumentException.class, () -> {
+  largeCollector.topDocs(350_000);
+});
+  }
+
+  public void testNoPQBuild() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(250_000);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(250_000, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+assertEquals(largeCollector.pq, null);
+assertEquals(largeCollector.pqTop, null);
+  }
+
+  public void testPQBuild() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(100_000);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(100_000, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+assertNotEquals(largeCollector.pq, null);
+assertNotEquals(largeCollector.pqTop, null);
+  }
+
+  private void runNumHits(int numHits) throws IOException {
+Query q = new MatchAllDocsQuery();
 
 Review comment:
   +1, updated


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] iverase commented on issue #627: LUCENE-8746: Make EdgeTree (aka ComponentTree) support different type of components

2019-07-08 Thread GitBox
iverase commented on issue #627: LUCENE-8746: Make EdgeTree (aka ComponentTree) 
support different type of components
URL: https://github.com/apache/lucene-solr/pull/627#issuecomment-509201337
 
 
   I have opened #770 which probably supersedes this one


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] iverase opened a new pull request #770: LUCENE-8746: Component2D topology library that works on encoded space

2019-07-08 Thread GitBox
iverase opened a new pull request #770: LUCENE-8746: Component2D topology 
library that works on encoded space
URL: https://github.com/apache/lucene-solr/pull/770
 
 
   With the upcoming of a new Shape type working in cartesian space (#726), I 
think we need to put some structure in the objects that contain spatial logic. 
In particular I have tried to remove all the mixed notation between 
latitude/longitude and x/y as well as defined factory methods to create those 
shapes from LatLonShape. 
   
   This library chooses to use X/Y notation as it is mainly cartesian, it works 
on the encoding space and solves problems like the neighbourhood issue 
(https://discuss.elastic.co/t/neighboring-touching-geo-shapes-not-found/175543) 
when not encoded query shapes are used against encoded indexed shapes. It 
potentially can simplify all the query logic as it is only need a query by 
Component2D for this case.
   
   Currently it contains factory methods to create Component2D shapes from 
LatLonShapes, it should be trivial to add a factory class for XYShapes.
   
   @jpountz @nknize @rmuir @dsmiley  let me know what do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-509197617
 
 
   @atris FYI avoiding force pushes would be helpful to reviewers as we could 
then look at what exactly changed compared to the previous PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301056433
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/search/TestLargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+
+import org.apache.lucene.document.Document;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.RandomIndexWriter;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.LuceneTestCase;
+
+public class TestLargeNumHitsTopDocsCollector extends LuceneTestCase {
+  private Directory dir;
+  private IndexReader reader;
+
+  @Override
+  public void setUp() throws Exception {
+super.setUp();
+dir = newDirectory();
+RandomIndexWriter writer = new RandomIndexWriter(random(), dir);
+for (int i = 0; i < 200_000; i++) {
+  writer.addDocument(new Document());
+}
+reader = writer.getReader();
+writer.close();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+reader.close();
+dir.close();
+dir = null;
+super.tearDown();
+  }
+  public void testLargeNumAndSparseHits() throws Exception {
+runNumHits(100_000);
+  }
+
+  public void testSingleNumHit() throws Exception {
+runNumHits(1);
+  }
+
+  public void testLowNumberOfHits() throws Exception {
+runNumHits(25);
+  }
+
+  public void testIllegalArguments() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(15);
+TopScoreDocCollector regularCollector = TopScoreDocCollector.create(15, 
null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+expectThrows(IllegalArgumentException.class, () -> {
+  largeCollector.topDocs(350_000);
+});
+  }
+
+  public void testNoPQBuild() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(250_000);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(250_000, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+assertEquals(largeCollector.pq, null);
+assertEquals(largeCollector.pqTop, null);
+  }
+
+  public void testPQBuild() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(100_000);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(100_000, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+assertNotEquals(largeCollector.pq, null);
+assertNotEquals(largeCollector.pqTop, null);
+  }
+
+  private void runNumHits(int numHits) throws IOException {
+Query q = new MatchAllDocsQuery();
 
 Review comment:
   Maybe use a query that doesn't produce constant scores to make sure that the 
collector orders by score? For instance you could add a term to some documents 
and then build a disjunction here across a query on this term and a 
MatchAllDocsQuery?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: 

[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301054025
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (ScoreDoc scoreDoc : hits) {
+pq.add(scoreDoc);
+  }
+
+  pqTop = pq.top();
 
 Review comment:
   maybe we should also set `hits = null` here to make it eligible to garbage 
collection?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301055146
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (ScoreDoc scoreDoc : hits) {
+pq.add(scoreDoc);
+  }
+
+  pqTop = pq.top();
+}
+
+if (score > pqTop.score) {
+  pqTop.doc = doc + docBase;
+  pqTop.score = score;
+  pqTop = pq.updateTop();
+  updateMinCompetitiveScore(scorer);
+}
+++totalHits;
+  }
+};
+  }
+
+  protected void updateMinCompetitiveScore(Scorable scorer) throws IOException 
{
+if (pqTop != null) {
+  scorer.setMinCompetitiveScore(Math.nextUp(pqTop.score));
+}
+  }
+
+  public TopDocs topDocs(int howMany) {
+
+if (howMany <= 0 || howMany > totalHits) {
+  throw new IllegalArgumentException("Incorrect number of hits requested");
+}
+
+ScoreDoc[] results = new ScoreDoc[howMany];
+
+// Get the requested results from either hits list or PQ
+populateResults(results, howMany);
+
+return newTopDocs(results);
+  }
+
+  /**
+   * Populates the results array with the ScoreDoc instances. This can be
+   * overridden in case a different ScoreDoc type should be returned.
+   */
+  protected void populateResults(ScoreDoc[] results, int howMany) {
+if (pq != null) {
+  assert totalHits >= numHits;
+  for (int i = howMany - 1; i >= 0; i--) {
+results[i] = pq.pop();
+  }
+  return;
+}
+
+// Total number of hits collected were less than numHits
+assert totalHits < numHits;
+Collections.sort(hits, Comparator.comparing((ScoreDoc scoreDoc) ->
+

[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301056023
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/search/TestLargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+
+import org.apache.lucene.document.Document;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.RandomIndexWriter;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.LuceneTestCase;
+
+public class TestLargeNumHitsTopDocsCollector extends LuceneTestCase {
+  private Directory dir;
+  private IndexReader reader;
+
+  @Override
+  public void setUp() throws Exception {
+super.setUp();
+dir = newDirectory();
+RandomIndexWriter writer = new RandomIndexWriter(random(), dir);
+for (int i = 0; i < 200_000; i++) {
+  writer.addDocument(new Document());
+}
+reader = writer.getReader();
+writer.close();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+reader.close();
+dir.close();
+dir = null;
+super.tearDown();
+  }
+  public void testLargeNumAndSparseHits() throws Exception {
+runNumHits(100_000);
+  }
+
+  public void testSingleNumHit() throws Exception {
+runNumHits(1);
+  }
+
+  public void testLowNumberOfHits() throws Exception {
+runNumHits(25);
+  }
+
+  public void testIllegalArguments() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(15);
+TopScoreDocCollector regularCollector = TopScoreDocCollector.create(15, 
null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+expectThrows(IllegalArgumentException.class, () -> {
+  largeCollector.topDocs(350_000);
+});
+  }
+
+  public void testNoPQBuild() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(250_000);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(250_000, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+assertEquals(largeCollector.pq, null);
+assertEquals(largeCollector.pqTop, null);
+  }
+
+  public void testPQBuild() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(100_000);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(100_000, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+assertNotEquals(largeCollector.pq, null);
+assertNotEquals(largeCollector.pqTop, null);
+  }
+
+  private void runNumHits(int numHits) throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(numHits);
+TopScoreDocCollector regularCollector = 
TopScoreDocCollector.create(numHits, null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+TopDocs firstTopDocs = largeCollector.topDocs();
+TopDocs secondTopDocs = regularCollector.topDocs();
+
+assertEquals(firstTopDocs.scoreDocs.length, 
secondTopDocs.scoreDocs.length);
+
+for (int i = 0; i < firstTopDocs.scoreDocs.length; i++) {
+  ScoreDoc firstScoreDoc = 

[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301054359
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (int i = 0; i < hits.size(); i++) {
+pq.add(hits.get(i));
+  }
+
+  pqTop = pq.top();
+}
+
+if (score > pqTop.score) {
+  pqTop.doc = doc + docBase;
+  pqTop.score = score;
+  pqTop = pq.updateTop();
+  updateMinCompetitiveScore(scorer);
+}
+++totalHits;
+  }
+};
+  }
+
+  protected void updateMinCompetitiveScore(Scorable scorer) throws IOException 
{
+if (pqTop != null) {
+  scorer.setMinCompetitiveScore(Math.nextUp(pqTop.score));
+}
+totalHitsRelation = TotalHits.Relation.GREATER_THAN_OR_EQUAL_TO;
+  }
 
 Review comment:
   I was thinking of removing the entire method actually. By the way I'm 
surprised that no tests fail since AssertingScorer fails if 
`setMinCompetitiveScore` is called and the score mode is not `TOP_SCORES`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301056532
 
 

 ##
 File path: 
lucene/sandbox/src/test/org/apache/lucene/search/TestLargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+
+import org.apache.lucene.document.Document;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.RandomIndexWriter;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.util.LuceneTestCase;
+
+public class TestLargeNumHitsTopDocsCollector extends LuceneTestCase {
+  private Directory dir;
+  private IndexReader reader;
+
+  @Override
+  public void setUp() throws Exception {
+super.setUp();
+dir = newDirectory();
+RandomIndexWriter writer = new RandomIndexWriter(random(), dir);
+for (int i = 0; i < 200_000; i++) {
+  writer.addDocument(new Document());
+}
+reader = writer.getReader();
+writer.close();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+reader.close();
+dir.close();
+dir = null;
+super.tearDown();
+  }
+  public void testLargeNumAndSparseHits() throws Exception {
+runNumHits(100_000);
+  }
+
+  public void testSingleNumHit() throws Exception {
+runNumHits(1);
+  }
+
+  public void testLowNumberOfHits() throws Exception {
+runNumHits(25);
+  }
+
+  public void testIllegalArguments() throws IOException {
+Query q = new MatchAllDocsQuery();
+IndexSearcher searcher = newSearcher(reader);
+LargeNumHitsTopDocsCollector largeCollector = new 
LargeNumHitsTopDocsCollector(15);
+TopScoreDocCollector regularCollector = TopScoreDocCollector.create(15, 
null, Integer.MAX_VALUE);
+
+searcher.search(q, largeCollector);
+searcher.search(q, regularCollector);
+
+assertEquals(largeCollector.totalHits, regularCollector.totalHits);
+
+expectThrows(IllegalArgumentException.class, () -> {
+  largeCollector.topDocs(350_000);
+});
 
 Review comment:
   can you check the error message?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris opened a new pull request #769: LUCENE-8905: Better Error Handling For Illegal Arguments

2019-07-08 Thread GitBox
atris opened a new pull request #769: LUCENE-8905: Better Error Handling For 
Illegal Arguments
URL: https://github.com/apache/lucene-solr/pull/769
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 224 - Still Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/224/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeAdded/127.0.0.1:10085_solr wasn't created

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeAdded/127.0.0.1:10085_solr 
wasn't created
at 
__randomizedtesting.SeedInfo.seed([7271B145F16FDF45:6ACB3949FF5A12AA]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testNodeMarkersRegistration(TestSimTriggerIntegration.java:988)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testTriggerThrottling

Error Message:
Both triggers did not fire event after 

[JENKINS] Lucene-Solr-Tests-master - Build # 3426 - Failure

2019-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3426/

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionWithTlogReplicasTest.test

Error Message:
Timeout occurred while waiting response from server at: 
http://127.0.0.1:45513/xy_/z

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: http://127.0.0.1:45513/xy_/z
at 
__randomizedtesting.SeedInfo.seed([98A88E12536A50DA:10FCB1C8FD963D22]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1792)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1813)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1730)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollectionRetry(AbstractFullDistribZkTestBase.java:2042)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:214)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:135)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[GitHub] [lucene-solr] atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-509169211
 
 
   @jpountz Thanks for the comments, update the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
atris commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301020065
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (int i = 0; i < hits.size(); i++) {
+pq.add(hits.get(i));
+  }
+
+  pqTop = pq.top();
+}
+
+if (score > pqTop.score) {
+  pqTop.doc = doc + docBase;
+  pqTop.score = score;
+  pqTop = pq.updateTop();
+  updateMinCompetitiveScore(scorer);
+}
+++totalHits;
+  }
+};
+  }
+
+  protected void updateMinCompetitiveScore(Scorable scorer) throws IOException 
{
+if (pqTop != null) {
+  scorer.setMinCompetitiveScore(Math.nextUp(pqTop.score));
+}
+totalHitsRelation = TotalHits.Relation.GREATER_THAN_OR_EQUAL_TO;
+  }
+
+  public TopDocs topDocs(int howMany) {
+
+if (howMany <= 0 || howMany > totalHits) {
+  throw new IllegalArgumentException("Incorrect number of hits requested");
+}
+
+ScoreDoc[] results = new ScoreDoc[howMany];
+
+// Get the requested results from either hits list or PQ
+populateResults(results, howMany);
+
+return newTopDocs(results);
+  }
+
+  /**
+   * Populates the results array with the ScoreDoc instances. This can be
+   * overridden in case a different ScoreDoc type should be returned.
+   */
+  protected void populateResults(ScoreDoc[] results, int howMany) {
+if (pq != null) {
+  assert totalHits >= numHits;
+  for (int i = howMany - 1; i >= 0; i--) {
+results[i] = pq.pop();
+  }
+  return;
+}
+
+// Total number of hits collected were less than numHits
+assert totalHits < numHits;
+Collections.sort(hits, new 

[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 841 - Unstable!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/841/
Java: 32bit/jdk1.8.0_201 -client -XX:+UseSerialGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.document.TestFeatureSort

Error Message:
1 thread leaked from SUITE scope at org.apache.lucene.document.TestFeatureSort: 
1) Thread[id=11, name=LuceneTestCase-1-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort] at sun.misc.Unsafe.park(Native Method)  
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.lucene.document.TestFeatureSort: 
   1) Thread[id=11, name=LuceneTestCase-1-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([3EDD6643FFC6739B]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.document.TestFeatureSort

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=11, 
name=LuceneTestCase-1-thread-1, state=WAITING, group=TGRP-TestFeatureSort]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=11, name=LuceneTestCase-1-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([3EDD6643FFC6739B]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.document.TestFeatureSort

Error Message:
1 thread leaked from SUITE scope at org.apache.lucene.document.TestFeatureSort: 
1) Thread[id=1020, name=LuceneTestCase-446-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort] at sun.misc.Unsafe.park(Native Method)  
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at 

Re: [JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24366 - Failure!

2019-07-08 Thread Dawid Weiss
This looks like a bug/ race in ECJ that is used for linting here. I
didn't investigate, but I have such an impression -- it cannot be
reliably reproduced, yet happens from time to time.

D.

On Mon, Jul 8, 2019 at 11:57 AM Adrien Grand  wrote:
>
> Does anyone know why we are getting these accessibility issues?
>
> On Mon, Jul 8, 2019 at 10:09 AM Policeman Jenkins Server
>  wrote:
> >
> > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24366/
> > Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
> >
> > All tests passed
> >
> > Build Log:
> > [...truncated 2030 lines...]
> >[junit4] JVM J1: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190708_063127_7027301280393236134214.syserr
> >[junit4] >>> JVM J1 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J1: EOF 
> >
> > [...truncated 3 lines...]
> >[junit4] JVM J0: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190708_063127_7024504749479836881192.syserr
> >[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J0: EOF 
> >
> > [...truncated 5 lines...]
> >[junit4] JVM J2: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190708_063127_70212783206905665212676.syserr
> >[junit4] >>> JVM J2 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J2: EOF 
> >
> > [...truncated 304 lines...]
> >[junit4] JVM J1: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190708_064432_68310171217942484809105.syserr
> >[junit4] >>> JVM J1 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J1: EOF 
> >
> > [...truncated 3 lines...]
> >[junit4] JVM J2: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190708_064432_68314573784609112931613.syserr
> >[junit4] >>> JVM J2 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J2: EOF 
> >
> >[junit4] JVM J0: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190708_064432_6831859223900288849170.syserr
> >[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J0: EOF 
> >
> > [...truncated 1085 lines...]
> >[junit4] JVM J1: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190708_064634_37214848669064955530728.syserr
> >[junit4] >>> JVM J1 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J1: EOF 
> >
> > [...truncated 3 lines...]
> >[junit4] JVM J2: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190708_064634_3728464235375514233638.syserr
> >[junit4] >>> JVM J2 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J2: EOF 
> >
> >[junit4] JVM J0: stderr was not empty, see: 
> > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190708_064634_37212451067559916694570.syserr
> >[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
> >[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> > deprecated in version 9.0 and will likely be removed in a future release.
> >[junit4] <<< JVM J0: EOF 
> >
> > 

Re: [JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 142 - Still Failing

2019-07-08 Thread Adrien Grand
For those who haven't followed the 8.1.2 release thread, we are asking
infra for help about this issue at
https://issues.apache.org/jira/browse/INFRA-18701.

On Sun, Jul 7, 2019 at 7:24 PM Apache Jenkins Server
 wrote:
>
> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/142/
>
> No tests ran.
>
> Build Log:
> [...truncated 24989 lines...]
> [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
> invalid part, must have at least one section (e.g., chapter, appendix, etc.)
> [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
> part, must have at least one section (e.g., chapter, appendix, etc.)
>  [java] Processed 2587 links (2117 relative) to 3396 anchors in 259 files
>  [echo] Validated Links & Anchors via: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/
>
> -dist-changes:
>  [copy] Copying 4 files to 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes
>
> package:
>
> -unpack-solr-tgz:
>
> -ensure-solr-tgz-exists:
> [mkdir] Created dir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
> [untar] Expanding: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.2.0.tgz
>  into 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
>
> generate-maven-artifacts:
>
> resolve:
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>

Re: [JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24366 - Failure!

2019-07-08 Thread Adrien Grand
Does anyone know why we are getting these accessibility issues?

On Mon, Jul 8, 2019 at 10:09 AM Policeman Jenkins Server
 wrote:
>
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24366/
> Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
>
> All tests passed
>
> Build Log:
> [...truncated 2030 lines...]
>[junit4] JVM J1: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190708_063127_7027301280393236134214.syserr
>[junit4] >>> JVM J1 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J1: EOF 
>
> [...truncated 3 lines...]
>[junit4] JVM J0: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190708_063127_7024504749479836881192.syserr
>[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J0: EOF 
>
> [...truncated 5 lines...]
>[junit4] JVM J2: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190708_063127_70212783206905665212676.syserr
>[junit4] >>> JVM J2 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J2: EOF 
>
> [...truncated 304 lines...]
>[junit4] JVM J1: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190708_064432_68310171217942484809105.syserr
>[junit4] >>> JVM J1 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J1: EOF 
>
> [...truncated 3 lines...]
>[junit4] JVM J2: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190708_064432_68314573784609112931613.syserr
>[junit4] >>> JVM J2 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J2: EOF 
>
>[junit4] JVM J0: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190708_064432_6831859223900288849170.syserr
>[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J0: EOF 
>
> [...truncated 1085 lines...]
>[junit4] JVM J1: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190708_064634_37214848669064955530728.syserr
>[junit4] >>> JVM J1 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J1: EOF 
>
> [...truncated 3 lines...]
>[junit4] JVM J2: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190708_064634_3728464235375514233638.syserr
>[junit4] >>> JVM J2 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J2: EOF 
>
>[junit4] JVM J0: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190708_064634_37212451067559916694570.syserr
>[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>[junit4] <<< JVM J0: EOF 
>
> [...truncated 236 lines...]
>[junit4] JVM J0: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J0-20190708_064925_0766912284488591662497.syserr
>[junit4] >>> JVM J0 emitted unexpected output (verbatim) 
>[junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be 

[jira] [Created] (SOLR-13612) Error 500 with update extract handler on Solr 7.4.0

2019-07-08 Thread Julien Massiera (JIRA)
Julien Massiera created SOLR-13612:
--

 Summary: Error 500 with update extract handler on Solr 7.4.0
 Key: SOLR-13612
 URL: https://issues.apache.org/jira/browse/SOLR-13612
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: UpdateRequestProcessors
Affects Versions: 7.4
Reporter: Julien Massiera


When sending a document via multipart POST update request, if a doc parameter 
name contains too much chars, the POST method fails with a 500 code error and 
one can see the following exception in the logs : 


{code:java}
ERROR 2019-06-20T09:43:41,089 (qtp1625082366-13) - 
Solr|Solr|solr.servlet.HttpSolrCall|[c:FileShare s:shard1 r:core_node2 
x:FileShare_shard1_replica_n1] o.a.s.s.HttpSolrCall 
null:org.apache.commons.fileupload.FileUploadException: Header section has more 
than 10240 bytes (maybe it is not properly terminated)

    at 
org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362)

    at 
org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:115)

    at 
org.apache.solr.servlet.SolrRequestParsers$MultipartRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:602)

    at 
org.apache.solr.servlet.SolrRequestParsers$StandardRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:784)

    at 
org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:167)

    at 
org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:317)

    at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)

    at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)

    at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)

    at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)

    at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)

    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)

    at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)

    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)

    at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)

    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)

    at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)

    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)

    at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)

    at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)

    at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)

    at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)

    at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)

    at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)

    at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)

    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

    at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)

    at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

    at org.eclipse.jetty.server.Server.handle(Server.java:531)

    at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)

    at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)

    at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)

    at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)

    at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)

    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)

    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)

    at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)

    at 

[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r301000239
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (int i = 0; i < hits.size(); i++) {
+pq.add(hits.get(i));
+  }
+
+  pqTop = pq.top();
+}
+
+if (score > pqTop.score) {
+  pqTop.doc = doc + docBase;
+  pqTop.score = score;
+  pqTop = pq.updateTop();
+  updateMinCompetitiveScore(scorer);
+}
+++totalHits;
+  }
+};
+  }
+
+  protected void updateMinCompetitiveScore(Scorable scorer) throws IOException 
{
+if (pqTop != null) {
+  scorer.setMinCompetitiveScore(Math.nextUp(pqTop.score));
+}
+totalHitsRelation = TotalHits.Relation.GREATER_THAN_OR_EQUAL_TO;
+  }
+
+  public TopDocs topDocs(int howMany) {
+
+if (howMany <= 0 || howMany > totalHits) {
+  throw new IllegalArgumentException("Incorrect number of hits requested");
+}
+
+ScoreDoc[] results = new ScoreDoc[howMany];
+
+// Get the requested results from either hits list or PQ
+populateResults(results, howMany);
+
+return newTopDocs(results);
+  }
+
+  /**
+   * Populates the results array with the ScoreDoc instances. This can be
+   * overridden in case a different ScoreDoc type should be returned.
+   */
+  protected void populateResults(ScoreDoc[] results, int howMany) {
+if (pq != null) {
+  assert totalHits >= numHits;
+  for (int i = howMany - 1; i >= 0; i--) {
+results[i] = pq.pop();
+  }
+  return;
+}
+
+// Total number of hits collected were less than numHits
+assert totalHits < numHits;
+Collections.sort(hits, new 

[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r300999281
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (int i = 0; i < hits.size(); i++) {
+pq.add(hits.get(i));
+  }
+
+  pqTop = pq.top();
+}
+
+if (score > pqTop.score) {
+  pqTop.doc = doc + docBase;
+  pqTop.score = score;
+  pqTop = pq.updateTop();
+  updateMinCompetitiveScore(scorer);
+}
+++totalHits;
+  }
+};
+  }
+
+  protected void updateMinCompetitiveScore(Scorable scorer) throws IOException 
{
+if (pqTop != null) {
+  scorer.setMinCompetitiveScore(Math.nextUp(pqTop.score));
+}
+totalHitsRelation = TotalHits.Relation.GREATER_THAN_OR_EQUAL_TO;
+  }
 
 Review comment:
   Maybe remove this logic, this is unlikely to help when fetching lots of hits.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r300999896
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (int i = 0; i < hits.size(); i++) {
+pq.add(hits.get(i));
+  }
+
+  pqTop = pq.top();
+}
+
+if (score > pqTop.score) {
+  pqTop.doc = doc + docBase;
+  pqTop.score = score;
+  pqTop = pq.updateTop();
+  updateMinCompetitiveScore(scorer);
+}
+++totalHits;
+  }
+};
+  }
+
+  protected void updateMinCompetitiveScore(Scorable scorer) throws IOException 
{
+if (pqTop != null) {
+  scorer.setMinCompetitiveScore(Math.nextUp(pqTop.score));
+}
+totalHitsRelation = TotalHits.Relation.GREATER_THAN_OR_EQUAL_TO;
+  }
+
+  public TopDocs topDocs(int howMany) {
+
+if (howMany <= 0 || howMany > totalHits) {
+  throw new IllegalArgumentException("Incorrect number of hits requested");
+}
+
+ScoreDoc[] results = new ScoreDoc[howMany];
+
+// Get the requested results from either hits list or PQ
+populateResults(results, howMany);
+
+return newTopDocs(results);
+  }
+
+  /**
+   * Populates the results array with the ScoreDoc instances. This can be
+   * overridden in case a different ScoreDoc type should be returned.
+   */
+  protected void populateResults(ScoreDoc[] results, int howMany) {
+if (pq != null) {
+  assert totalHits >= numHits;
+  for (int i = howMany - 1; i >= 0; i--) {
+results[i] = pq.pop();
+  }
+  return;
+}
+
+// Total number of hits collected were less than numHits
+assert totalHits < numHits;
+Collections.sort(hits, new 

[GitHub] [lucene-solr] jpountz commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
jpountz commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r300998906
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+import static org.apache.lucene.search.TopDocsCollector.EMPTY_TOPDOCS;
+
+/**
+ * Optimized collector for large number of hits.
+ * The collector maintains an ArrayList of hits until it accumulates
+ * the requested number of hits. Post that, it builds a Priority Queue
+ * and starts filtering further hits based on the minimum competitive
+ * score.
+ */
+public class LargeNumHitsTopDocsCollector implements Collector {
+  private final List hits = new ArrayList<>();
+  private final int numHits;
+  HitQueue pq;
+  ScoreDoc pqTop;
+  int totalHits;
+  /** Whether {@link #totalHits} is exact or a lower bound. */
+  protected TotalHits.Relation totalHitsRelation = TotalHits.Relation.EQUAL_TO;
+
+  public LargeNumHitsTopDocsCollector(int numHits) {
+this.numHits = numHits;
+this.totalHits = 0;
+  }
+
+  // We always return COMPLETE since this collector should ideally
+  // be used only with large number of hits case
+  @Override
+  public ScoreMode scoreMode() {
+return ScoreMode.COMPLETE;
+  }
+
+  @Override
+  public LeafCollector getLeafCollector(LeafReaderContext context) {
+final int docBase = context.docBase;
+return new TopScoreDocCollector.ScorerLeafCollector() {
+
+  @Override
+  public void setScorer(Scorable scorer) throws IOException {
+super.setScorer(scorer);
+updateMinCompetitiveScore(scorer);
+  }
+
+  @Override
+  public void collect(int doc) throws IOException {
+float score = scorer.score();
+
+// This collector relies on the fact that scorers produce positive 
values:
+assert score >= 0; // NOTE: false for NaN
+
+if (totalHits < numHits) {
+  hits.add(new ScoreDoc(doc, score));
+  totalHits++;
+  return;
+} else if (totalHits == numHits) {
+  // Convert the list to a priority queue
+
+  // We should get here only when priority queue
+  // has been built
+  assert pq == null;
+  assert pqTop == null;
+  pq = new HitQueue(numHits, false);
+
+  for (int i = 0; i < hits.size(); i++) {
+pq.add(hits.get(i));
 
 Review comment:
   use `for (ScoreDoc scoreDoc : hits)`  instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] thomaswoeckinger commented on issue #665: Fixes SOLR-13539

2019-07-08 Thread GitBox
thomaswoeckinger commented on issue #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-509148931
 
 
   Anything new?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] thomaswoeckinger commented on issue #755: SOLR-13592: Introduce EmbeddedSolrTestBase for better integration tests

2019-07-08 Thread GitBox
thomaswoeckinger commented on issue #755: SOLR-13592: Introduce 
EmbeddedSolrTestBase for better integration tests
URL: https://github.com/apache/lucene-solr/pull/755#issuecomment-509148592
 
 
   @gerlowskija Anything new?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13257) Enable replica routing affinity for better cache usage

2019-07-08 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880136#comment-16880136
 ] 

Lucene/Solr QA commented on SOLR-13257:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  1m 57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 44s{color} 
| {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m  2s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.TestRandomFaceting |
|   | solr.search.similarities.TestNonDefinedSimilarityFactory |
|   | solr.search.similarities.TestLMJelinekMercerSimilarityFactory |
|   | solr.response.TestRawTransformer |
|   | solr.client.solrj.embedded.SolrExampleEmbeddedTest |
|   | solr.client.solrj.GetByIdTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973864/SOLR-13257.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  validaterefguide  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / ac209b6 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/482/artifact/out/patch-unit-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/482/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/482/testReport/ |
| modules | C: solr/core solr/solrj solr/solr-ref-guide U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/482/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Enable replica routing affinity for better cache usage
> --
>
> Key: SOLR-13257
> URL: https://issues.apache.org/jira/browse/SOLR-13257
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Michael Gibney
>Priority: Minor
> Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For each shard in a distributed request, Solr currently routes each request 
> randomly via 
> [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java]
>  to a particular replica. In setups with replication factor >1, this normally 
> results in a situation where subsequent requests (which one would hope/expect 
> to leverage cached results from previous related requests) end up getting 
> routed to a replica that hasn't seen any related requests.
> The problem can be replicated by issuing a relatively expensive query (maybe 
> containing common terms?). The first request initializes the 
> {{queryResultCache}} on the consulted replicas. If replication factor >1 and 
> 

[jira] [Commented] (LUCENE-8803) Provide a FieldComparator to allow sorting by a feature from a FeatureField

2019-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880119#comment-16880119
 ] 

ASF subversion and git services commented on LUCENE-8803:
-

Commit a329953429b6fa40ccf94a0253cf892b329edc3c in lucene-solr's branch 
refs/heads/branch_8x from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a329953 ]

LUCENE-8803: Change the way that reverse ordering is implemented.

This addresses some test failures when IndexSearcher is created with an executor
and merges hits with TopDocs#merge.


> Provide a FieldComparator to allow sorting by a feature from a FeatureField
> ---
>
> Key: LUCENE-8803
> URL: https://issues.apache.org/jira/browse/LUCENE-8803
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Colin Goodheart-Smithe
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> It would be useful to be able to sort search hits by the value of a feature 
> from a feature field (e.g. pagerank). A FieldComparatorSource implementation 
> that enables this would create a convenient generic way to sort using values 
> from feature fields.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8803) Provide a FieldComparator to allow sorting by a feature from a FeatureField

2019-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880120#comment-16880120
 ] 

ASF subversion and git services commented on LUCENE-8803:
-

Commit ac209b637d68c84ce1402b6b8967514ce9cf6854 in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ac209b6 ]

LUCENE-8803: Change the way that reverse ordering is implemented.

This addresses some test failures when IndexSearcher is created with an executor
and merges hits with TopDocs#merge.


> Provide a FieldComparator to allow sorting by a feature from a FeatureField
> ---
>
> Key: LUCENE-8803
> URL: https://issues.apache.org/jira/browse/LUCENE-8803
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Colin Goodheart-Smithe
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> It would be useful to be able to sort search hits by the value of a feature 
> from a feature field (e.g. pagerank). A FieldComparatorSource implementation 
> that enables this would create a convenient generic way to sort using values 
> from feature fields.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24366 - Failure!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24366/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2030 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190708_063127_7027301280393236134214.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190708_063127_7024504749479836881192.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 5 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190708_063127_70212783206905665212676.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 304 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190708_064432_68310171217942484809105.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190708_064432_68314573784609112931613.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190708_064432_6831859223900288849170.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 1085 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190708_064634_37214848669064955530728.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190708_064634_3728464235375514233638.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190708_064634_37212451067559916694570.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 236 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J0-20190708_064925_0766912284488591662497.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J1-20190708_064925_0768688495461911691824.syserr
   [junit4] >>> JVM J1 emitted 

[GitHub] [lucene-solr] atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-08 Thread GitBox
atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-509109583
 
 
   @jpountz I have pushed a new iteration which does as discussed i.e. builds a 
hits list and populates hits as long as the number of collected hits are lesser 
than number of hits requested. Once that threshold is reached, a priority queue 
is built and minimum competitive score is set and used to filter further hits.
   
   Please let me know if it looks fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8905) TopDocsCollector Should Have Better Error Handling For Illegal Arguments

2019-07-08 Thread Atri Sharma (JIRA)
Atri Sharma created LUCENE-8905:
---

 Summary: TopDocsCollector Should Have Better Error Handling For 
Illegal Arguments
 Key: LUCENE-8905
 URL: https://issues.apache.org/jira/browse/LUCENE-8905
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Atri Sharma


While writing some tests, I realised that TopDocsCollector does not behave well 
when illegal arguments are passed in (for eg, requesting more hits than the 
number of hits collected). Instead, we return a TopDocs instance with 0 hits.

 

This can be problematic when queries are being formed by applications. This can 
hide bugs where malformed queries return no hits and that is surfaced upstream 
to client applications.

 

I found a TODO at the relevant code space, so I believe it is time to fix the 
problem and throw an IllegalArgumentsException.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.3) - Build # 5246 - Still Failing!

2019-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5246/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.MetricsHistoryWithAuthIntegrationTest.testValuesAreCollected

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E0CE7182081945C7:C8332CE5D9FDDA82]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:712)
at org.junit.Assert.assertNotNull(Assert.java:722)
at 
org.apache.solr.cloud.MetricsHistoryWithAuthIntegrationTest.testValuesAreCollected(MetricsHistoryWithAuthIntegrationTest.java:86)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:
[...truncated 13257 lines...]
   [junit4] Suite: org.apache.solr.cloud.MetricsHistoryWithAuthIntegrationTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1384 - Still Failing

2019-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1384/

No tests ran.

Build Log:
[...truncated 24573 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2587 links (2117 relative) to 3397 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail: