[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1953 - Still Failing

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1953/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
    at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
    at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
    at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
    at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
    at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
    at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
    at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
    at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
    at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
    at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
    at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
    at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
    at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
    at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
    at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
    at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
    at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
    at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
    at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
    at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
    at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
    at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
    at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
    at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
    at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
    at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
    at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
    at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
    at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
    at hudson.remoting.UserRequest.perform(UserRequest.java:212)
    at hudson.remoting.UserRequest.perform(UserRequest.java:54)
    at hudson.remoting.Request$2.run(Request.java:369)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    ... 4 more
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

[jira] [Commented] (SOLR-13745) Test should close resources: AtomicUpdateProcessorFactoryTest

2019-09-06 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924690#comment-16924690
 ] 

Hoss Man commented on SOLR-13745:
-

Interesting...

David: i suspect the reason these test bugs didn't manifest until after your 
commits in SOLR-13728 is because the new code you added in that issue causes 
DistributedUpdateProcessor to now call {{req.getSearcher().count(...)}} – 
resulting in {{SolrQueryRequestBase.searcherHolder}} getting populated in a way 
that it wouldn't have been previously for some of the {{LocalSolrQueryRequest}} 
instances used in this test.
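The failure mode Hoss describes can be sketched in plain, standalone Java (all names here are illustrative stand-ins, not Solr's actual classes): a request object that lazily acquires a reference on first use leaks that reference unless the request itself is closed.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: mimics a request object (in the spirit of
// SolrQueryRequestBase) that lazily acquires a searcher reference the
// first time it is used, and must be closed to release it.
public class LazyRequestDemo {
    // Stand-in for an object-release tracker: counts live references.
    static final AtomicInteger openRefs = new AtomicInteger();

    static class Request implements AutoCloseable {
        private boolean holdsSearcher = false;

        int count() {
            if (!holdsSearcher) {           // lazy acquisition on first use
                openRefs.incrementAndGet();
                holdsSearcher = true;
            }
            return 42;                      // dummy result
        }

        @Override public void close() {
            if (holdsSearcher) {
                openRefs.decrementAndGet(); // release the reference
                holdsSearcher = false;
            }
        }
    }

    public static void main(String[] args) {
        Request leaky = new Request();
        leaky.count();                      // acquires a reference...
        System.out.println("open refs while leaked: " + openRefs.get()); // 1

        leaky.close();                      // the fix: always close the request
        try (Request r = new Request()) {   // or use try-with-resources
            r.count();
        }
        System.out.println("open refs after close: " + openRefs.get()); // 0
    }
}
```

Before the SOLR-13728 change, the lazy acquisition simply never fired for these requests, so the missing close was unobservable; once it started firing, every unclosed request became a tracked leak.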

As for why it didn't fail when you ran tests before committing SOLR-13728 ... 
i'm guessing that maybe this is because of SOLR-13747 / SOLR-12988 ?

(I've already confirmed SOLR-13746 is the reason [yetus's patch review build of 
SOLR-13728|https://builds.apache.org/job/PreCommit-SOLR-Build/543/testReport/] 
didn't catch this either)

> Test should close resources: AtomicUpdateProcessorFactoryTest 
> --
>
> Key: SOLR-13745
> URL: https://issues.apache.org/jira/browse/SOLR-13745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.3
>
>
> This tests hangs after the test runs because there are directory or request 
> resources (not sure yet) that are not closed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13747) 'ant test' should fail on JVM's w/known SSL bugs

2019-09-06 Thread Hoss Man (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-13747:

Attachment: SOLR-13747.patch
Status: Open  (was: Open)


Some background...

In SOLR-12988, during the discussion of re-enabling SSL testing under java11 
knowing that some java 11 versions were broken, I made the following comments...

{quote}
(on the Junit tests side, having assumes around JVM version is fine – because 
even then it's not a "silent" behavior change, it's an explicitly "test ignored 
because XYZ")
{quote}

{quote}
if devs are running tests with a broken JVM, then the tests can & should fail 
... that's the job of the tests. it's a bad idea to make the tests "hide" the 
failure by "faking" that things work using a degraded cipher, or skipping SSL 
completely (yes, i also think mark's changes to SSLTestConfig in December as 
part of his commit on this issue was a terrible idea as well) ... the *ONLY* 
thing we should _consider_ allowing tests to change about their behavior if 
they see a JVM is "broken" is to SKIP ie: 
assume(SomethingThatIsFalseForTheBrokenJVM)
{quote}

Ultimately, adding an {{SSLTestConfig.assumeSslIsSafeToTest()}} method seemed 
better than doing a hard {{fail(..)}} in any test that wanted to use SSL -- 
particularly once we realized that (at that time) every available version of 
Java 13 was affected by SSL bugs.  {{SKIP}}ing tests (instead of failing 
outright) meant we could still have jenkins jobs running the latest jdk13-ea 
available looking for _other_ bugs, w/o getting noise due to known SSL bugs.

But the fact that SOLR-13746 slipped through the cracks has caused me to 
seriously regret that decision -- and led me to wonder:

* Do we have committers who are _still_ running {{ant test}} with "bad" JDKs 
that don't realize thousands of tests are getting skipped?
* What if down the road a jenkins node gets rebuilt/reverted to use an older 
jdk11 version, would anyone notice?



The attached patch adds a new 
{{TestSSLTestConfig.testFailIfUserRunsTestsWithJVMThatHasKnownSSLBugs}} to the 
{{solr/test-framework}} module that does what its name implies (with an 
informative message) when it detects that 
{{SSLTestConfig.assumeSslIsSafeToTest()}} throws an assumption in the 
current JVM.

I considered just replacing {{SSLTestConfig.assumeSslIsSafeToTest()}} with a 
{{SSLTestConfig.failTheBuildUnlessSslIsSafeToTest()}} but realized that the 
potential deluge of thousands of test failures that might occur for an aspiring 
contributor who attempts to run Solr tests w/no idea their JDK is broken could 
be overwhelming and scare people off before they even begin.  A single 
clear-cut error (in addition to thousands of tests being {{SKIP}}ed) seemed 
more inviting.
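The trade-off described above can be sketched in standalone Java (an approximation of the idea, not the actual patch; {{AssumptionViolated}} and the hard-coded version list are stand-ins for JUnit's AssumptionViolatedException and the real SSLTestConfig logic):

```java
// Sketch: one dedicated test turns the "skip" signal into a hard failure
// with an informative message, while ordinary SSL tests keep skipping
// quietly on broken JVMs.
public class FailOnKnownSslBugs {
    // Stand-in for JUnit's AssumptionViolatedException.
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    // Stand-in for SSLTestConfig.assumeSslIsSafeToTest(): throws an
    // assumption on JVM versions with known SSL bugs (versions are examples).
    static void assumeSslIsSafeToTest(String jvmVersion) {
        if ("11.0.1".equals(jvmVersion) || "11.0.2".equals(jvmVersion)) {
            throw new AssumptionViolated("JVM " + jvmVersion + " has known SSL bugs");
        }
    }

    // The dedicated check: an assumption here is converted into a failure
    // message instead of a silent SKIP.
    static String checkJvm(String jvmVersion) {
        try {
            assumeSslIsSafeToTest(jvmVersion);
            return "OK";
        } catch (AssumptionViolated e) {
            return "BUILD FAILED: " + e.getMessage() + " -- please upgrade your JVM";
        }
    }

    public static void main(String[] args) {
        System.out.println(checkJvm("11.0.4")); // OK
        System.out.println(checkJvm("11.0.1")); // BUILD FAILED: ...
    }
}
```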

I should note: It's possible that down the road we will again find ourselves in 
this situation...

bq. ...particularly once we realized that (at that time) every available 
version of Java 13 was affected by SSL bugs...

...with some future "Java XX", whose every available 'ea' build we recognize as 
being completely broken for SSL -- but we still want to let jenkins try to 
look for _other_ bugs w/o the "noise" of this test failing every build.  If 
that day comes, we can update {{SSLTestConfig.assumeSslIsSafeToTest()}} to 
{{SKIP}} SSL on those JVM builds, and "whitelist" them in 
{{TestSSLTestConfig.testFailIfUserRunsTestsWithJVMThatHasKnownSSLBugs}}.




> 'ant test' should fail on JVM's w/known SSL bugs
> 
>
> Key: SOLR-13747
> URL: https://issues.apache.org/jira/browse/SOLR-13747
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13747.patch
>
>
> If {{ant test}} (or the future gradle equivalent) is run w/a JVM that has 
> known SSL bugs, there should be an obvious {{BUILD FAILED}} because of this 
> -- so the user knows they should upgrade their JVM (rather than relying on 
> the user to notice that SSL tests were {{SKIP}} ed)






[jira] [Updated] (SOLR-13747) 'ant test' should fail on JVM's w/known SSL bugs

2019-09-06 Thread Hoss Man (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-13747:

Description: If {{ant test}} (or the future gradle equivalent) is run w/a 
JVM that has known SSL bugs, there should be an obvious {{BUILD FAILED}} 
because of this -- so the user knows they should upgrade their JVM (rather than 
relying on the user to notice that SSL tests were {{SKIP}} ed)  (was: 
If {{ant test}} (or the future gradle equivalent) is run w/a JVM that has known 
SSL bugs, there should be an obvious {{BUILD FAILED}} because of this -- so the 
user knows they should upgrade their JVM (rather than relying on the user to 
notice that SSL tests were {{SKIP}}ed))

> 'ant test' should fail on JVM's w/known SSL bugs
> 
>
> Key: SOLR-13747
> URL: https://issues.apache.org/jira/browse/SOLR-13747
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> If {{ant test}} (or the future gradle equivalent) is run w/a JVM that has 
> known SSL bugs, there should be an obvious {{BUILD FAILED}} because of this 
> -- so the user knows they should upgrade their JVM (rather than relying on 
> the user to notice that SSL tests were {{SKIP}} ed)






[jira] [Created] (SOLR-13747) 'ant test' should fail on JVM's w/known SSL bugs

2019-09-06 Thread Hoss Man (Jira)
Hoss Man created SOLR-13747:
---

 Summary: 'ant test' should fail on JVM's w/known SSL bugs
 Key: SOLR-13747
 URL: https://issues.apache.org/jira/browse/SOLR-13747
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man



If {{ant test}} (or the future gradle equivalent) is run w/a JVM that has known 
SSL bugs, there should be an obvious {{BUILD FAILED}} because of this -- so the 
user knows they should upgrade their JVM (rather than relying on the user to 
notice that SSL tests were {{SKIP}}ed)






[jira] [Commented] (SOLR-13746) Apache jenkins needs JVM 11 upgraded to at least 11.0.3 (SSL bugs)

2019-09-06 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924661#comment-16924661
 ] 

Hoss Man commented on SOLR-13746:
-

[~thetaphi] / [~steve_rowe] - is this still something you guys have control 
over, or do we need to get infra involved?

> Apache jenkins needs JVM 11 upgraded to at least 11.0.3 (SSL bugs)
> --
>
> Key: SOLR-13746
> URL: https://issues.apache.org/jira/browse/SOLR-13746
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> I just realized that back in June, there was a miscommunication between 
> myself & Uwe (and a lack of double checking on my part!) regarding upgrading 
> the JVM versions on our jenkins machines...
>  * 
> [http://mail-archives.apache.org/mod_mbox/lucene-dev/201906.mbox/%3calpine.DEB.2.11.1906181434350.23523@tray%3e]
>  * 
> [http://mail-archives.apache.org/mod_mbox/lucene-dev/201906.mbox/%3C00b301d52918$d27b2f60$77718e20$@thetaphi.de%3E]
> ...Uwe only updated the JVMs on _his_ policeman jenkins machines - the JVM 
> used on the _*apache*_  jenkins nodes is still (as of 2019-09-06)  
> "11.0.1+13-LTS" ...
> [https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-Tests-master/3689/consoleText]
> {noformat}
> ...
> [java-info] java version "11.0.1"
> [java-info] Java(TM) SE Runtime Environment (11.0.1+13-LTS, Oracle 
> Corporation)
> [java-info] Java HotSpot(TM) 64-Bit Server VM (11.0.1+13-LTS, Oracle 
> Corporation)
> ...
> {noformat}
> This means that even after the changes made in SOLR-12988 to re-enable SSL 
> testing on java11, all Apache jenkins 'master' builds, (including, AFAICT the 
> yetus / 'Patch Review' builds) are still SKIPping thousands of tests that use 
> SSL (either explicitly, or due to randomization) because of the logic in 
> SSLTestConfig to detect bad JVM versions and prevent confusion/spurious 
> failures.
> We really need to get the jenkins nodes updated to openjdk 11.0.3 or 11.0.4 
> ASAP.






[jira] [Created] (SOLR-13746) Apache jenkins needs JVM 11 upgraded to at least 11.0.3 (SSL bugs)

2019-09-06 Thread Hoss Man (Jira)
Hoss Man created SOLR-13746:
---

 Summary: Apache jenkins needs JVM 11 upgraded to at least 11.0.3 
(SSL bugs)
 Key: SOLR-13746
 URL: https://issues.apache.org/jira/browse/SOLR-13746
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


I just realized that back in June, there was a miscommunication between myself 
& Uwe (and a lack of double checking on my part!) regarding upgrading the JVM 
versions on our jenkins machines...
 * 
[http://mail-archives.apache.org/mod_mbox/lucene-dev/201906.mbox/%3calpine.DEB.2.11.1906181434350.23523@tray%3e]
 * 
[http://mail-archives.apache.org/mod_mbox/lucene-dev/201906.mbox/%3C00b301d52918$d27b2f60$77718e20$@thetaphi.de%3E]

...Uwe only updated the JVMs on _his_ policeman jenkins machines - the JVM used 
on the _*apache*_  jenkins nodes is still (as of 2019-09-06)  "11.0.1+13-LTS" 
...

[https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-Tests-master/3689/consoleText]
{noformat}
...
[java-info] java version "11.0.1"
[java-info] Java(TM) SE Runtime Environment (11.0.1+13-LTS, Oracle Corporation)
[java-info] Java HotSpot(TM) 64-Bit Server VM (11.0.1+13-LTS, Oracle 
Corporation)
...
{noformat}
This means that even after the changes made in SOLR-12988 to re-enable SSL 
testing on java11, all Apache jenkins 'master' builds, (including, AFAICT the 
yetus / 'Patch Review' builds) are still SKIPping thousands of tests that use 
SSL (either explicitly, or due to randomization) because of the logic in 
SSLTestConfig to detect bad JVM versions and prevent confusion/spurious 
failures.

We really need to get the jenkins nodes updated to openjdk 11.0.3 or 11.0.4 
ASAP.






[jira] [Commented] (SOLR-13661) A package management system for Solr

2019-09-06 Thread Noble Paul (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924657#comment-16924657
 ] 

Noble Paul commented on SOLR-13661:
---

[~janhoy] 

I think we can leverage the package discovery and dependency management done in 
your PoC. My efforts are mainly focussed on efficiently loading/reloading the 
binaries inside Solr so that there is no disruption to the cluster and the 
requests in flight. 

 

I'm glad that [~ichattopadhyaya] is looking into this

> A package management system for Solr
> 
>
> Key: SOLR-13661
> URL: https://issues.apache.org/jira/browse/SOLR-13661
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: package
>
> Solr needs a unified cohesive package management system so that users can 
> deploy/redeploy plugins in a safe manner. This is an umbrella issue to 
> eventually build that solution






[GitHub] [lucene-solr] msokolov opened a new pull request #862: LUCENE-8971: Enable constructing JapaneseTokenizer with custom dictio…

2019-09-06 Thread GitBox
msokolov opened a new pull request #862: LUCENE-8971: Enable constructing 
JapaneseTokenizer with custom dictio…
URL: https://github.com/apache/lucene-solr/pull/862
 
 
   …nary
   
   # Description
   
   Extends the API of JapaneseTokenizer so it can accept a dictionary other 
than the built-in one. The built-in dictionary remains the default, so existing 
usage is unchanged; this just opens up the possibility of supplying a different 
dictionary/language model to use when tokenizing Japanese.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-09-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924616#comment-16924616
 ] 

David Smiley commented on SOLR-13677:
-

+Yes, lets revert this now then+.  I appreciate that [~ab] is setting a good 
bar for quality software!

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 8.3
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers are managed by the core (mostly). So, if 
> the lifecycle of the object is different from that of the core itself, these 
> objects will never be unregistered from the metrics registry. This will lead 
> to memory leaks






Re: Alias Id condundrum

2019-09-06 Thread David Smiley
On Wed, Sep 4, 2019 at 11:26 PM Gus Heck  wrote:

> It seems that the real time get handler doesn't play nice with aliases.
> The current (and past) behavior seems to be that it only works for the
> first collection listed in the alias. This seems to be pretty clearly a
> bug, as one certainly would expect the /get executed against an alias to
> either refuse to work with aliases or work across all collections in the
> alias rather than silently working only on the first collection.
>

I think it should just refuse to work (throw an exception) if there are
multiple collections in the alias -- simple.  It's okay for components to
have a limitation.

Solr's internal use of RTG isn't affected by this scenario.  I believe few
users even use RTG but yes of course some do and I know of at least one.
In the one case I saw RTG used, it was a nice optimization that replaced
its former mode of operation that worked fine.

~ David

>


(Oh my) Spans in Solr json request

2019-09-06 Thread Mikhail Khludnev
Hello,

Finally we let users send span queries via the XML (yeah) query parser. But
I feel awkward invoking XML under Json. The straightforward approach leads
us to a bunch of span[Or|And|Not|Etc] QParser plugins. Are there any more
elegant ideas?

-- 
Sincerely yours
Mikhail Khludnev


Re: precommit fail or is it me?

2019-09-06 Thread Michael Sokolov
I replied to a separate thread - seems that I had dangling symlinks
left over from removing my Maven repo at some point in the past. I
hadn't used this particular folder in a while ...

By the way

find -L solr -type l  -exec rm -fr {} \;

will remove broken symlinks
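A cautious variant of the same idea (assuming GNU or BSD find; the demo directory name is made up) lists the dangling links before deleting anything. With -L, a path that still tests as type `l` must be a symlink whose target is missing:

```shell
# Demo in a scratch directory: one real file, one dangling symlink.
mkdir -p symlink-demo
touch symlink-demo/real.jar
ln -sf /nonexistent/path.jar symlink-demo/dangling.jar

# 1. Review: with -L, only broken symlinks still report as type `l`.
find -L symlink-demo -type l -print

# 2. Delete them once the list looks right.
find -L symlink-demo -type l -exec rm -f {} +

ls symlink-demo   # only real.jar remains
```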

On Fri, Sep 6, 2019 at 10:15 AM Christine Poerschke (BLOOMBERG/
LONDON)  wrote:
>
> Interesting. Haven't come across this one myself, only the "javax.naming.*" 
> precommit errors (already mentioned previously) appear and disappear for me.
>
> The jetty version in the below seems less than the latest, could that be 
> related? 
> https://github.com/apache/lucene-solr/commit/0c24aa6c75a288e8d42c436162ca221518287d46#diff-020d173883031924455d3daf40e70d93
>
> Christine
>
> From: dev@lucene.apache.org At: 09/06/19 14:53:47
> To: java-...@lucene.apache.org
> Subject: precommit fail or is it me?
>
> Is anybody else seeing this error:
>
> ...workspace/lucene/lucene_baseline/build.xml:117: The following error
> occurred while executing this line:
> .../workspace/lucene/lucene_baseline/lucene/build.xml:90: The
> following error occurred while executing this line:
> .../workspace/lucene/lucene_baseline/lucene/tools/custom-tasks.xml:62:
> JAR resource does not exist:
> replicator/lib/jetty-continuation-9.4.14.v20181114.jar
>
> Should I be using gradle instead of ant precommit??
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>




[jira] [Commented] (SOLR-13661) A package management system for Solr

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924582#comment-16924582
 ] 

Jan Høydahl commented on SOLR-13661:


It is a branch in my git fork. See the PR I added to 13662 but it is hopelessly 
out of sync with master... There was not much interest in the POC and it was too 
big an effort for me to run alone. Happy to help bring parts of it into the 
current package effort. I’m particularly happy with the package discovery, 
download and dependency handling of the POC.

> A package management system for Solr
> 
>
> Key: SOLR-13661
> URL: https://issues.apache.org/jira/browse/SOLR-13661
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: package
>
> Solr needs a unified cohesive package management system so that users can 
> deploy/redeploy plugins in a safe manner. This is an umbrella issue to 
> eventually build that solution






Re: precommit fail or is it me?

2019-09-06 Thread Michael Sokolov
ant clean clean-jars did not fix it;

sokolovm@ski4➜workspace/lucene/lucene_baseline(msokolov)» find -name
jetty-continuation\*

./solr/licenses/jetty-continuation-9.4.19.v20190610.jar.sha1
./solr/server/lib/jetty-continuation-9.4.14.v20181114.jar
./lucene/licenses/jetty-continuation-9.4.19.v20190610.jar.sha1
./lucene/replicator/lib/jetty-continuation-9.4.14.v20181114.jar
./lucene/replicator/lib/jetty-continuation-9.4.19.v20190610.jar

seems weird that I would have two versions

Looking in solr/server/lib and lucene/replicator/lib I see lots of
broken symlinks - I think I must have cleaned out my maven cache at
some point leaving these dangling, and then something isn't able to
clean up broken symlinks? Anyway after I removed all broken symlinks
(from other places too), build is progressing normally

Thanks for the pointer, Hoss

-Mike

On Fri, Sep 6, 2019 at 11:44 AM Chris Hostetter
 wrote:
>
>
> My guess is you somehow have a very old jar (or sha1 file that is leading
> it to look for a jar you don't have) for an outdated version of jetty --
> we are certainly not using jetty 9.4.14.v20181114 in master or branch_8x
>
> what does `find -name \*jetty-continuation\*` report on your system?
>
> does `ant clean clean-jars` help?
>
> what does `git clean --dry-run -dx` say after you try to run ant clean
> clean-jars?
>
> (it won't delete anything with --dry-run, but it might tell you if you have
> unexpected stuff)
>
> : Date: Fri, 6 Sep 2019 09:53:30 -0400
> : From: Michael Sokolov 
> : Reply-To: dev@lucene.apache.org
> : To: java-...@lucene.apache.org
> : Subject: precommit fail or is it me?
> :
> : Is anybody else seeing this error:
> :
> : ...workspace/lucene/lucene_baseline/build.xml:117: The following error
> : occurred while executing this line:
> : .../workspace/lucene/lucene_baseline/lucene/build.xml:90: The
> : following error occurred while executing this line:
> : .../workspace/lucene/lucene_baseline/lucene/tools/custom-tasks.xml:62:
> : JAR resource does not exist:
> : replicator/lib/jetty-continuation-9.4.14.v20181114.jar
> :
> : Should I be using gradle instead of ant precommit??
> :
> : -
> : To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> : For additional commands, e-mail: dev-h...@lucene.apache.org
> :
> :
>
> -Hoss
> http://www.lucidworks.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[JENKINS] Lucene-Solr-Tests-master - Build # 3689 - Still unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3689/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [SolrIndexSearcher, SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, SolrIndexSearcher, MockDirectoryWrapper]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.search.SolrIndexSearcher
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:308)
    at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2132)
    at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2305)
    at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2041)
    at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:702)
    at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:102)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1079)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1066)
    at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
    at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:281)
    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:200)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2598)
    at org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:125)
    at org.apache.solr.util.TestHarness.update(TestHarness.java:286)
    at org.apache.solr.util.BaseTestHarness.checkUpdateStatus(BaseTestHarness.java:274)
    at org.apache.solr.util.BaseTestHarness.validateUpdate(BaseTestHarness.java:244)
    at org.apache.solr.SolrTestCaseJ4.checkUpdateU(SolrTestCaseJ4.java:943)
    at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:922)
    at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:916)
    at org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testBasics(AtomicUpdateProcessorFactoryTest.java:113)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
  at 

[GitHub] [lucene-solr] janhoy opened a new pull request #861: SOLR-10665 POC for a PF4J based plugin system

2019-09-06 Thread GitBox
janhoy opened a new pull request #861: SOLR-10665 POC for a PF4J based plugin 
system
URL: https://github.com/apache/lucene-solr/pull/861
 
 
   Creating PR for this old issue


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13745) Test should close resources: AtomicUpdateProcessorFactoryTest

2019-09-06 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-13745.
-
Fix Version/s: 8.3
   Resolution: Fixed

> Test should close resources: AtomicUpdateProcessorFactoryTest 
> --
>
> Key: SOLR-13745
> URL: https://issues.apache.org/jira/browse/SOLR-13745
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.3
>
>
> This test hangs after it runs because there are directory or request 
> resources (not sure yet) that are not closed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Resolved] (LUCENE-8753) New PostingFormat - UniformSplit

2019-09-06 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-8753.
--
Fix Version/s: 8.3
   Resolution: Fixed

Thanks [~bruno.roustant] and [~juan.duran]!

BTW in the 8.x backport, precommit failed because JDK 8 doesn't like the stray 
"" in the package-info.java files so I removed them.

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Major
> Fix For: 8.3
>
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (the attached PDF explains the technique visually in more detail)
>  The principle is to split the list of terms into blocks and use an FST to 
> access the blocks, not as a prefix trie but with a seek-floor pattern. 
> Blocks are selected against a target average block size (number of terms) 
> with an allowed delta variation (10%); within that window the terms are 
> compared and the split point with the minimal distinguishing prefix is chosen.
>  There are also several optimizations inside the block to make it more 
> compact and speed up the loading/scanning.
> The performance obtained is interesting with the luceneutil benchmark, 
> comparing UniformSplit with BlockTree. Find it in the first comment and also 
> attached for better formatting.
> Although the precise percentages vary between runs, three main points stand out:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is so 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15%, and segment writing time 
> is reduced by 20%. So this PostingsFormat scales to as many docs as 
> BlockTree does.
> This initial version passes all Lucene tests. Use "ant test 
> -Dtests.codec=UniformSplitTesting" to test with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity. And we 
> have already exercised this PostingsFormat extensibility to create a 
> different flavor for our own use-case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley
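To make the block-selection rule above concrete, here is a toy Python sketch (invented helper names; the real implementation is Java inside Lucene and differs in detail). It splits a sorted term list into blocks near a target size and, within the allowed delta window, picks the split point whose first term has the minimal distinguishing prefix, so the block key kept in the FST stays short:

```python
def distinguishing_prefix_len(prev: str, term: str) -> int:
    """Length of the shortest prefix of `term` that distinguishes it from `prev`."""
    i = 0
    while i < min(len(prev), len(term)) and prev[i] == term[i]:
        i += 1
    return i + 1  # one char past the common prefix


def split_into_blocks(terms, target=32, delta=3):
    """Split a sorted term list into blocks of roughly `target` terms.

    Within the window [target - delta, target + delta), pick the split point
    whose first term has the minimal distinguishing prefix versus the term
    before it (a toy version of the selection described in the issue).
    """
    blocks, start = [], 0
    while start < len(terms):
        lo = start + target - delta
        hi = min(start + target + delta, len(terms))
        if lo >= len(terms):               # not enough terms left: tail block
            blocks.append(terms[start:])
            break
        best = min(range(lo, hi),
                   key=lambda i: distinguishing_prefix_len(terms[i - 1], terms[i]))
        blocks.append(terms[start:best])
        start = best
    return blocks
```

The real format then stores one FST entry per block and uses a seek-floor lookup to find the block that may contain a term.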






[jira] [Commented] (SOLR-13728) Fail partial updates if it would inadvertently remove nested docs

2019-09-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924482#comment-16924482
 ] 

David Smiley commented on SOLR-13728:
-

I'm puzzled, but let's discuss further in SOLR-13745 -- an issue I filed and 
fixed within the last half hour for the problem you identified.

> Fail partial updates if it would inadvertently remove nested docs
> -
>
> Key: SOLR-13728
> URL: https://issues.apache.org/jira/browse/SOLR-13728
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.3
>
> Attachments: SOLR-13728.patch
>
>
> In SOLR-12638 Solr gained the ability to do partial updates (aka atomic 
> updates) to nested documents.  However, this feature only works if the schema 
> meets certain conditions.  We can detect when we don't support it and fail the 
> request – what I propose here.  This is much friendlier than wiping out 
> existing documents.
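The proposal above — detect an unsupported schema and fail the request instead of destroying child documents — can be sketched as follows. This is a hypothetical Python helper, not Solr's actual code path; `_root_` and `_nest_path_` are the nested-document fields Solr relies on, but the function and field-set check here are illustrative only:

```python
# Hypothetical: fields a schema would need for nested-doc partial updates.
REQUIRED_NESTED_FIELDS = {"_root_", "_nest_path_"}


def apply_partial_update(schema_fields, doc, update):
    """Merge a partial (atomic) update into `doc`, but fail fast when the
    schema cannot support nested documents rather than silently wiping the
    children -- the behavior this issue proposes."""
    if "_childDocuments_" in doc and not REQUIRED_NESTED_FIELDS <= set(schema_fields):
        raise ValueError(
            "schema lacks fields required for partial updates to nested docs")
    merged = dict(doc)
    merged.update(update)
    return merged
```

The point of the guard is that an explicit error is recoverable, while a silent overwrite that drops child documents is not.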






[jira] [Commented] (SOLR-13745) Test should close resources: AtomicUpdateProcessorFactoryTest

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924478#comment-16924478
 ] 

ASF subversion and git services commented on SOLR-13745:


Commit 454db9831ebc9437ea4afa39dc78422121eb00e7 in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=454db98 ]

SOLR-13745: AtomicUpdateProcessorFactoryTest should close request

(cherry picked from commit da158ab22924bf9b2d6d14bbc69338c01fe77a7a)








[jira] [Commented] (SOLR-13745) Test should close resources: AtomicUpdateProcessorFactoryTest

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924476#comment-16924476
 ] 

ASF subversion and git services commented on SOLR-13745:


Commit da158ab22924bf9b2d6d14bbc69338c01fe77a7a in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=da158ab ]

SOLR-13745: AtomicUpdateProcessorFactoryTest should close request








[jira] [Created] (SOLR-13745) Test should close resources: AtomicUpdateProcessorFactoryTest

2019-09-06 Thread David Smiley (Jira)
David Smiley created SOLR-13745:
---

 Summary: Test should close resources: 
AtomicUpdateProcessorFactoryTest 
 Key: SOLR-13745
 URL: https://issues.apache.org/jira/browse/SOLR-13745
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
Assignee: David Smiley


This test hangs after it runs because there are directory or request 
resources (not sure yet) that are not closed.






[jira] [Commented] (SOLR-13728) Fail partial updates if it would inadvertently remove nested docs

2019-09-06 Thread Hoss Man (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924463#comment-16924463
 ] 

Hoss Man commented on SOLR-13728:
-


Huh?

No, I'm referring directly to commit c8203e4787b8ad21e1270781ba4e09fd7f3acb00 ...

{noformat}
hossman@slate:~/lucene/dev [j11] [master] $ git co 
c8203e4787b8ad21e1270781ba4e09fd7f3acb00 && ant clean && cd solr/core/ && ant 
test -Dtestcase=AtomicUpdateProcessorFactoryTest
...
   [junit4]   2> NOTE: Linux 5.0.0-27-generic amd64/AdoptOpenJDK 11.0.4 
(64-bit)/cpus=8,threads=2,free=199278080,total=522190848
   [junit4]   2> NOTE: All tests run in this JVM: 
[AtomicUpdateProcessorFactoryTest]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=AtomicUpdateProcessorFactoryTest -Dtests.seed=9CA837338CB8D055 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=eu-ES 
-Dtests.timezone=Indian/Kerguelen -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s | AtomicUpdateProcessorFactoryTest (suite) <<<
   [junit4]> Throwable #1: java.lang.AssertionError: ObjectTracker found 6 
object(s) that were not released!!! [SolrCore, SolrIndexSearcher, 
MockDirectoryWrapper, MockDirectoryWrapper, SolrIndexSearcher, 
MockDirectoryWrapper]
   [junit4]    > org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.core.SolrCore
   [junit4]    >        at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
   [junit4]    >        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1093)
...



hossman@slate:~/lucene/dev/solr/core [j11] [c8203e4787b] $ cd ../../ && git co 
c8203e4787b8ad21e1270781ba4e09fd7f3acb00~1
Previous HEAD position was c8203e4787b SOLR-13728: fail partial updates to 
child docs when not supported.
HEAD is now at 2552986e872 LUCENE-8917: Fix Solr's TestCodecSupport to stop 
trying to use the now-removed Direct docValues format


hossman@slate:~/lucene/dev [j11] [2552986e872] $ ant clean && cd solr/core/ && 
ant test -Dtestcase=AtomicUpdateProcessorFactoryTest
...
common.test:

BUILD SUCCESSFUL
Total time: 1 minute 10 seconds



hossman@slate:~/lucene/dev/solr/core [j11] [2552986e872] $ ant test  
-Dtestcase=AtomicUpdateProcessorFactoryTest -Dtests.seed=9CA837338CB8D055 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=eu-ES 
-Dtests.timezone=Indian/Kerguelen -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
...
common.test:

BUILD SUCCESSFUL
Total time: 19 seconds
{noformat}
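The "ObjectTracker found 6 object(s) that were not released" failure above comes from Solr's ObjectReleaseTracker, whose idea can be sketched in a few lines. This is a toy Python analogue, not Solr's implementation: objects register themselves on construction, deregister on close, and the test framework asserts the registry is empty at suite teardown:

```python
class ObjectReleaseTracker:
    """Toy track/release registry: every tracked object must be released
    before the suite ends, mirroring the check that failed above."""

    def __init__(self):
        self._tracked = {}  # id(obj) -> type name

    def track(self, obj):
        self._tracked[id(obj)] = type(obj).__name__
        return obj

    def release(self, obj):
        self._tracked.pop(id(obj), None)

    def assert_all_released(self):
        # Called at suite teardown; anything still registered is a leak.
        if self._tracked:
            raise AssertionError(
                f"ObjectTracker found {len(self._tracked)} object(s) "
                f"that were not released!!! {sorted(self._tracked.values())}")
```

Here the leaked SolrCore/SolrIndexSearcher/MockDirectoryWrapper objects correspond to a request the test never closed, which is what SOLR-13745 fixed.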







[jira] [Commented] (LUCENE-8951) Create issues@ and builds@ lists and update notifications

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/LUCENE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924460#comment-16924460
 ] 

Jan Høydahl commented on LUCENE-8951:
-

Who has the karma to redirect Jira, GitHub and Apache Jenkins traffic?

> Create issues@ and builds@ lists and update notifications
> -
>
> Key: LUCENE-8951
> URL: https://issues.apache.org/jira/browse/LUCENE-8951
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> Issue to plan and execute decision from dev mailing list 
> [https://lists.apache.org/thread.html/762d72a9045642dc488dc7a2fd0a525707e5fa5671ac0648a3604c9b@%3Cdev.lucene.apache.org%3E]
>  # Create mailing lists as an announce only list (/)
>  # Subscribe all emails that will be allowed to post (/)
>  # Update websites with info about the new lists (/)
>  # Announce to dev@ list that the change will happen
>  # Modify Jira and Github bots to post to issues@ list instead of dev@
>  # Modify Jenkins (including Policeman and other) to post to builds@
>  # Announce to dev@ list that the change is effective






[jira] [Commented] (LUCENE-8951) Create issues@ and builds@ lists and update notifications

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/LUCENE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924459#comment-16924459
 ] 

Jan Høydahl commented on LUCENE-8951:
-

Also updated these pages:
 * [https://lucene.apache.org/index.html#6-september-2019-new-mailing-lists]
 * [https://lucene.apache.org/core/discussion.html] 

Will send this ANNOUNCE email to general@ and dev@:
{quote}
*[ANNOUNCE] New builds@ and issues@ mailing lists*

The Lucene project has added two new announce mailing lists, 
`iss...@lucene.apache.org` and `bui...@lucene.apache.org`. 
High-volume automated emails from our bug tracker, JIRA and GitHub will be 
moved from the `dev@` list to `issues@` and
automated emails from our Jenkins CI build servers will be moved from the 
`dev@` list to `builds@`.

This is an effort to reduce the sometimes overwhelming email volume on our main 
development mailing list and thus make it
easier for the community to follow important discussions by humans on the 
`dev@lucene.apache.org` list.

Everyone who wants to continue receiving these automated emails should sign up 
for one or both of the two new lists.
Sign-up instructions can be found on the Lucene-java[1] and Solr[2] web sites.

[1] https://lucene.apache.org/core/discussion.html

[2] https://lucene.apache.org/solr/community.html
{quote}







[jira] [Resolved] (SOLR-13728) Fail partial updates if it would inadvertently remove nested docs

2019-09-06 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-13728.
-
Resolution: Fixed

[~hossman] I'm confident you meant to comment on SOLR-13523 (June 20th), not 
this one.







[jira] [Commented] (SOLR-13728) Fail partial updates if it would inadvertently remove nested docs

2019-09-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924433#comment-16924433
 ] 

David Smiley commented on SOLR-13728:
-

I'll investigate [~hossman].  I had run the tests right before committing, and 
looked for CI failures this morning.  So this is a mystery.







[GitHub] [lucene-solr] dsmiley commented on issue #828: LUCENE-8753: UniformSplitPostingsFormat

2019-09-06 Thread GitBox
dsmiley commented on issue #828: LUCENE-8753: UniformSplitPostingsFormat
URL: https://github.com/apache/lucene-solr/pull/828#issuecomment-528935783
 
 
   Merged: 
https://github.com/apache/lucene-solr/commit/b963b7c3dbecda86c2917ad341caee63b93815ac





[GitHub] [lucene-solr] dsmiley closed pull request #828: LUCENE-8753: UniformSplitPostingsFormat

2019-09-06 Thread GitBox
dsmiley closed pull request #828: LUCENE-8753: UniformSplitPostingsFormat
URL: https://github.com/apache/lucene-solr/pull/828
 
 
   





[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924394#comment-16924394
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 665273ccbe3237775771e4daade8253a604b2c70 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=665273c ]

SOLR-13105: More search, sample, agg copy 2


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-09-06 Thread Mark Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924391#comment-16924391
 ] 

Mark Miller commented on SOLR-13677:


Reverts should take place right away, else we can easily end up in a bad 
situation. Development should be worked out on the branch, not in our release 
branches.

Please let's do a simple and fast revert and then commit when we have consensus.

> All Metrics Gauges should be unregistered by the objects that registered them
> -
>
> Key: SOLR-13677
> URL: https://issues.apache.org/jira/browse/SOLR-13677
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 8.3
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The life cycle of Metrics producers is managed by the core (mostly). So, if 
> the lifecycle of an object is different from that of the core itself, the 
> object will never be unregistered from the metrics registry. This will lead 
> to memory leaks.






[JENKINS] Lucene-Solr-repro - Build # 3595 - Still Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3595/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/33/consoleText

[repro] Revision: 3a20ebc3a66d60645c6902ba24a9ffa4a16841d0

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.seed=FF9E4FAF2D584ED0 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=is-IS -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=DimensionalRoutedAliasUpdateProcessorTest -Dtests.method=testTimeCat 
-Dtests.seed=FF9E4FAF2D584ED0 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=da -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
b963b7c3dbecda86c2917ad341caee63b93815ac
[repro] git fetch
[repro] git checkout 3a20ebc3a66d60645c6902ba24a9ffa4a16841d0

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro]   DimensionalRoutedAliasUpdateProcessorTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest|*.DimensionalRoutedAliasUpdateProcessorTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=FF9E4FAF2D584ED0 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=is-IS -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 3549 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

[repro] Re-testing 100% failures at the tip of branch_8x
[repro] git fetch
[repro] git checkout branch_8x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 96 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3581 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=FF9E4FAF2D584ED0 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=is-IS -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 3658 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

[repro] Re-testing 100% failures at the tip of branch_8x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3581 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=is-IS -Dtests.timezone=America/St_Lucia -Dtests.asserts=true 

[JENKINS] Lucene-Solr-repro - Build # 3594 - Still Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3594/

[...truncated 33 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-8.x/532/consoleText

[repro] Revision: 3a20ebc3a66d60645c6902ba24a9ffa4a16841d0

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.seed=FE6D055DF05ECE -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=zh -Dtests.timezone=JST -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
b963b7c3dbecda86c2917ad341caee63b93815ac
[repro] git fetch
[repro] git checkout 3a20ebc3a66d60645c6902ba24a9ffa4a16841d0

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=FE6D055DF05ECE -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=zh -Dtests.timezone=JST -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 3695 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

[repro] Re-testing 100% failures at the tip of branch_8x
[repro] git fetch
[repro] git checkout branch_8x

[...truncated 3 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=FE6D055DF05ECE -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=zh -Dtests.timezone=JST -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 3795 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

[repro] Re-testing 100% failures at the tip of branch_8x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=zh -Dtests.timezone=JST 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 3756 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x without a seed:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro] git checkout b963b7c3dbecda86c2917ad341caee63b93815ac

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924365#comment-16924365
 ] 

ASF subversion and git services commented on LUCENE-8753:
-

Commit b8a1857b0bd235bc9d4833276b1e60c9865aa04b in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b8a1857 ]

LUCENE-8753: New UniformSplit and SharedTermsUniformSplit PostingsFormats

(cherry picked from commit b963b7c3dbecda86c2917ad341caee63b93815ac)


> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (the attached pdf explains the technique visually in more detail)
>  The principle is to split the list of terms into blocks and use an FST to 
> access the blocks, not as a prefix trie but with a seek-floor pattern. 
> For the selection of the blocks, there is a target average block size (number 
> of terms), with an allowed delta variation (10%) to compare the terms and 
> select the one with the minimal distinguishing prefix.
>  There are also several optimizations inside the block to make it more 
> compact and speed up the loading/scanning.
> The performance obtained is interesting with the luceneutil benchmark, 
> comparing UniformSplit with BlockTree. Find it in the first comment and also 
> attached for better formatting.
> Although the precise percentages vary between runs, three main points stand out:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is so 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15%, and segment writing time 
> is reduced by 20%. So this PostingsFormat scales to large document counts, 
> just as BlockTree does.
> This initial version passes all Lucene tests. Use “ant test 
> -Dtests.codec=UniformSplitTesting” to test with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity. And we 
> have already exercised this PostingsFormat extensibility to create a 
> different flavor for our own use-case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley
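The block-selection rule described above (a target block size with a 10% delta, splitting at the term with the minimal distinguishing prefix) can be sketched in plain Java. This is an illustrative reconstruction from the description only, not the actual UniformSplit code; `selectSplit` and `mdpLength` are hypothetical helper names.

```java
/**
 * Illustrative sketch of UniformSplit-style block selection: within a
 * window of target +/- delta terms, split at the term whose minimal
 * distinguishing prefix (MDP) against its predecessor is shortest.
 * Hypothetical helpers; not the actual Lucene implementation.
 */
public class UniformSplitSketch {

    /** Length of the shortest prefix of term that distinguishes it from prev. */
    static int mdpLength(String prev, String term) {
        int i = 0;
        int max = Math.min(prev.length(), term.length());
        while (i < max && prev.charAt(i) == term.charAt(i)) {
            i++;
        }
        return i + 1; // common prefix plus one distinguishing character
    }

    /**
     * Pick a split index in the sorted terms array, starting at start,
     * scanning the window [target*(1-delta), target*(1+delta)] and keeping
     * the first index with the smallest MDP.
     */
    static int selectSplit(String[] terms, int start, int target, double delta) {
        int lo = Math.max(start + 1, start + (int) Math.floor(target * (1 - delta)));
        int hi = Math.min(terms.length - 1, start + (int) Math.ceil(target * (1 + delta)));
        int best = lo;
        int bestMdp = Integer.MAX_VALUE;
        for (int i = lo; i <= hi; i++) {
            int mdp = mdpLength(terms[i - 1], terms[i]);
            if (mdp < bestMdp) {
                bestMdp = mdp;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        String[] terms = {"aa", "ab", "b", "ba", "bb", "ca"};
        // The window around target=3 covers indices 1..5; "b" at index 2 needs
        // only one character to distinguish it from "ab", so the split lands there.
        System.out.println(selectSplit(terms, 0, 3, 0.34)); // prints 2
    }
}
```

The FST then only needs to store each block's minimal distinguishing prefix, which is what keeps it compact compared to a full prefix trie.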



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[JENKINS] Lucene-Solr-Tests-8.x - Build # 533 - Still Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/533/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [SolrCore, 
SolrIndexSearcher, MockDirectoryWrapper, MockDirectoryWrapper, 
SolrIndexSearcher] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1093)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:914)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1252)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.(SolrIndexSearcher.java:308)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2143)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2316)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2052)  at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:702)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:102)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1079)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1066)
  at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
  at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:281) 
 at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)  at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:200)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2609)  at 
org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:125)
  at org.apache.solr.util.TestHarness.update(TestHarness.java:286)  at 
org.apache.solr.util.BaseTestHarness.checkUpdateStatus(BaseTestHarness.java:274)
  at 
org.apache.solr.util.BaseTestHarness.validateUpdate(BaseTestHarness.java:244)  
at org.apache.solr.SolrTestCaseJ4.checkUpdateU(SolrTestCaseJ4.java:943)  at 
org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:922)  at 
org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:916)  at 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testBasics(AtomicUpdateProcessorFactoryTest.java:113)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at 

[jira] [Reopened] (SOLR-13728) Fail partial updates if it would inadvertently remove nested docs

2019-09-06 Thread Hoss Man (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-13728:
-

these commits appear to be the cause of a 100% failure rate in {{ant test 
-Dtestcase=AtomicUpdateProcessorFactoryTest}} in recent jenkins builds.

the failures reproduce for me on master, regardless of seed or any other jvm 
options (haven't tested branch_8x yet).

the failures relate to tracking of unclosed directories...

{noformat}
   [junit4]   2> 17393 ERROR (coreCloseExecutor-15-thread-1) [x:collection1 
] o.a.s.c.CachingDirectoryFactory Timeout waiting for all directory ref counts 
to be released - gave up waiting on 
CachedDir<>
   [junit4]   2> 17397 ERROR (coreCloseExecutor-15-thread-1) [x:collection1 
] o.a.s.c.CachingDirectoryFactory Error closing 
directory:org.apache.solr.common.SolrException: Timeout waiting for all 
directory ref counts to be released - gave up waiting on 
CachedDir<>
   [junit4]   2>at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:178)
   [junit4]   2>at 
org.apache.solr.core.SolrCore.close(SolrCore.java:1699)
   [junit4]   2>at 
org.apache.solr.core.SolrCores.lambda$close$0(SolrCores.java:139)
   [junit4]   2>at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
   [junit4]   2>at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   [junit4]   2>at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   [junit4]   2>at java.base/java.lang.Thread.run(Thread.java:834)
   [junit4]   2> 
   [junit4]   2> 17399 ERROR (coreCloseExecutor-15-thread-1) [x:collection1 
] o.a.s.c.SolrCore java.lang.AssertionError: 2
   [junit4]   2>at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:192)
   [junit4]   2>at 
org.apache.solr.core.SolrCore.close(SolrCore.java:1699)
   [junit4]   2>at 
org.apache.solr.core.SolrCores.lambda$close$0(SolrCores.java:139)
   [junit4]   2>at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
   [junit4]   2>at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   [junit4]   2>at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   [junit4]   2>at java.base/java.lang.Thread.run(Thread.java:834)
   [junit4]   2> 
   [junit4]   2> 17399 ERROR (coreCloseExecutor-15-thread-1) [x:collection1 
] o.a.s.c.SolrCores Error shutting down core:java.lang.AssertionError: 2
   [junit4]   2>at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:192)
   [junit4]   2>at 
org.apache.solr.core.SolrCore.close(SolrCore.java:1699)
   [junit4]   2>at 
org.apache.solr.core.SolrCores.lambda$close$0(SolrCores.java:139)
   [junit4]   2>at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
   [junit4]   2>at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   [junit4]   2>at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   [junit4]   2>at java.base/java.lang.Thread.run(Thread.java:834)
   [junit4]   2> 
...
   [junit4]   2> 78497 INFO  
(SUITE-AtomicUpdateProcessorFactoryTest-seed#[4E875A6AF0417D9C]-worker) [ ] 
o.a.s.SolrTestCaseJ4 --- 
Done waiting for tracked resources to be released
   [junit4]   2> NOTE: test params are: codec=Lucene80, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@917add1),
 locale=sr-Cyrl-ME, timezone=Canada/Saskatchewan
   [junit4]   2> NOTE: Linux 5.0.0-27-generic amd64/AdoptOpenJDK 11.0.4 
(64-bit)/cpus=8,threads=2,free=407897088,total=522190848
   [junit4]   2> NOTE: All tests run in this JVM: 
[AtomicUpdateProcessorFactoryTest]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=AtomicUpdateProcessorFactoryTest -Dtests.seed=4E875A6AF0417D9C 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr-Cyrl-ME 
-Dtests.timezone=Canada/Saskatchewan -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.00s | AtomicUpdateProcessorFactoryTest (suite) <<<
   [junit4]> Throwable #1: java.lang.AssertionError: ObjectTracker found 6 
object(s) that were not released!!! [SolrCore, 
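The ObjectTracker errors above come from Solr's test-time bookkeeping of resources that were opened but never released. A stripped-down stand-in for that pattern (this is a simplified sketch, not Solr's actual `ObjectReleaseTracker` class) looks like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Simplified stand-in for Solr's ObjectReleaseTracker: every tracked
 * object must be released before the suite ends, or it is reported as a
 * leak (like the SolrCore/SolrIndexSearcher leaks in the log above).
 */
public class LeakTracker {
    // Map from tracked object to the stack trace captured at track() time,
    // so the leak report can show where the object was created.
    private final Map<Object, Exception> live = new ConcurrentHashMap<>();

    public void track(Object resource) {
        live.put(resource, new Exception("tracked at"));
    }

    public void release(Object resource) {
        live.remove(resource);
    }

    /** Number of objects still unreleased; a non-zero value fails the suite. */
    public int unreleasedCount() {
        for (Map.Entry<Object, Exception> e : live.entrySet()) {
            System.err.println("unreleased: " + e.getKey().getClass().getSimpleName());
        }
        return live.size();
    }

    public static void main(String[] args) {
        LeakTracker tracker = new LeakTracker();
        Object core = new Object();
        Object searcher = new Object();
        tracker.track(core);
        tracker.track(searcher);
        tracker.release(core); // searcher is never released
        System.out.println(tracker.unreleasedCount()); // prints 1
    }
}
```

A commit that opens an extra SolrIndexSearcher (or keeps a directory ref-counted) without a matching close shows up exactly this way: the suite itself fails even when every test method passed.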

[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924352#comment-16924352
 ] 

ASF subversion and git services commented on LUCENE-8753:
-

Commit b963b7c3dbecda86c2917ad341caee63b93815ac in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b963b7c ]

LUCENE-8753: New UniformSplit and SharedTermsUniformSplit PostingsFormats








Re: precommit fail or is it me?

2019-09-06 Thread Chris Hostetter


My guess is you somehow have a very old jar (or sha1 file that is leading 
it to look for a jar you don't have) for an outdated version of jetty -- 
we are certainly not using jetty 9.4.14.v20181114 in master or branch_8x

what does `find -name \*jetty-continuation\*` report on your system?

does `ant clean clean-jars` help?

what does `git clean --dry-run -dx` say after you try to run ant clean 
clean-jars?

(it won't delete anything with --dry-run, but it might tell you if you have 
unexpected stuff)
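The jar/sha1 mismatch Hoss describes (a stale `.sha1` file pointing at a jar that no longer exists on disk) can be spotted mechanically. A hypothetical checker sketch using only `java.nio.file` (not part of the Lucene build; `findOrphans` is an invented helper name):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch: find .sha1 files in a lib directory whose
 * corresponding jar is missing -- the situation that makes the build
 * report "JAR resource does not exist".
 */
public class OrphanSha1Finder {
    static List<Path> findOrphans(Path libDir) throws IOException {
        List<Path> orphans = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(libDir, "*.sha1")) {
            for (Path sha1 : stream) {
                String name = sha1.getFileName().toString();
                // e.g. jetty-continuation-9.4.14.v20181114.jar.sha1 -> ...jar
                Path jar = sha1.resolveSibling(name.substring(0, name.length() - ".sha1".length()));
                if (!Files.exists(jar)) {
                    orphans.add(sha1);
                }
            }
        }
        return orphans;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("lib");
        Files.createFile(dir.resolve("present.jar"));
        Files.createFile(dir.resolve("present.jar.sha1"));
        Files.createFile(dir.resolve("stale.jar.sha1")); // no stale.jar on disk
        System.out.println(findOrphans(dir).size()); // prints 1
    }
}
```

In practice `ant clean clean-jars` followed by a fresh jar resolution is the simpler fix, as suggested above; the sketch just shows what the build's consistency check is complaining about.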

: Date: Fri, 6 Sep 2019 09:53:30 -0400
: From: Michael Sokolov 
: Reply-To: dev@lucene.apache.org
: To: java-...@lucene.apache.org
: Subject: precommit fail or is it me?
: 
: Is anybody else seeing this error:
: 
: ...workspace/lucene/lucene_baseline/build.xml:117: The following error
: occurred while executing this line:
: .../workspace/lucene/lucene_baseline/lucene/build.xml:90: The
: following error occurred while executing this line:
: .../workspace/lucene/lucene_baseline/lucene/tools/custom-tasks.xml:62:
: JAR resource does not exist:
: replicator/lib/jetty-continuation-9.4.14.v20181114.jar
: 
: Should I be using gradle instead of ant precommit??
: 
: -
: To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
: For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 

-Hoss
http://www.lucidworks.com/




[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924329#comment-16924329
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit cad3e6f8563cb5bc42b70a8351d7180bee92cead in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cad3e6f ]

SOLR-13105: More search, sample, agg copy 1


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.






[GitHub] [lucene-solr] nknize commented on issue #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on issue #771: LUCENE-8620: Update Tessellator logic to label 
if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#issuecomment-528893586
 
 
   Awesome. LGTM! Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (LUCENE-8951) Create issues@ and builds@ lists and update notifications

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/LUCENE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924301#comment-16924301
 ] 

Jan Høydahl commented on LUCENE-8951:
-

Updated website to list the two new lists. Also added our Slack channel and 
fixed some old wiki links:

[https://lucene.apache.org/solr/community.html] 

> Create issues@ and builds@ lists and update notifications
> -
>
> Key: LUCENE-8951
> URL: https://issues.apache.org/jira/browse/LUCENE-8951
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> Issue to plan and execute decision from dev mailing list 
> [https://lists.apache.org/thread.html/762d72a9045642dc488dc7a2fd0a525707e5fa5671ac0648a3604c9b@%3Cdev.lucene.apache.org%3E]
>  # Create mailing lists as an announce only list (/)
>  # Subscribe all emails that will be allowed to post (/)
>  # Update websites with info about the new lists (/)
>  # Announce to dev@ list that the change will happen
>  # Modify Jira and Github bots to post to issues@ list instead of dev@
>  # Modify Jenkins (including Policeman and other) to post to builds@
>  # Announce to dev@ list that the change is effective






[jira] [Updated] (LUCENE-8951) Create issues@ and builds@ lists and update notifications

2019-09-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/LUCENE-8951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-8951:

Description: 
Issue to plan and execute decision from dev mailing list 
[https://lists.apache.org/thread.html/762d72a9045642dc488dc7a2fd0a525707e5fa5671ac0648a3604c9b@%3Cdev.lucene.apache.org%3E]
 # Create mailing lists as an announce only list (/)
 # Subscribe all emails that will be allowed to post (/)
 # Update websites with info about the new lists (/)
 # Announce to dev@ list that the change will happen
 # Modify Jira and Github bots to post to issues@ list instead of dev@
 # Modify Jenkins (including Policeman and other) to post to builds@
 # Announce to dev@ list that the change is effective

  was:
Issue to plan and execute decision from dev mailing list 
[https://lists.apache.org/thread.html/762d72a9045642dc488dc7a2fd0a525707e5fa5671ac0648a3604c9b@%3Cdev.lucene.apache.org%3E]
 # Create mailing lists as an announce only list (/)
 # Subscribe all emails that will be allowed to post (/)
 # Update websites with info about the new lists
 # Announce to dev@ list that the change will happen
 # Modify Jira and Github bots to post to issues@ list instead of dev@
 # Modify Jenkins (including Policeman and other) to post to builds@
 # Announce to dev@ list that the change is effective








[jira] [Commented] (LUCENE-8932) Allow BKDReader packedIndex to be off heap

2019-09-06 Thread Jack Conradson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924293#comment-16924293
 ] 

Jack Conradson commented on LUCENE-8932:


[~jpountz] Thanks for the thorough review!  I will incorporate your feedback 
into a patch next week and try to move forward from there.

> Allow BKDReader packedIndex to be off heap
> --
>
> Key: LUCENE-8932
> URL: https://issues.apache.org/jira/browse/LUCENE-8932
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jack Conradson
>Priority: Minor
> Attachments: LUCENE-8932.patch
>
>
> This change modifies BKDReader to read the packedIndex bytes off heap rather 
> than load them all on heap at a single time.
> Questions for discussion:
>  # Should BKDReader only support packedIndex off heap?
>  # If not, how should the choice be made?
> Using luceneutils IndexAndSearchOpenStreetMaps present the following test 
> results:
> with -box -points (patch)
> READER MB: 1.1345596313476562
> BEST M hits/sec: 73.34277344984474
> BEST QPS: 74.63011169783009
> with -box -points (original)
> READER MB: 1.7249317169189453
> BEST M hits/sec: 73.77125157623486
> BEST QPS: 75.06611062353801
> with -nearest 10 -points (patch)
> READER MB: 1.1345596313476562
> BEST M hits/sec: 0.013586298373879497
> BEST QPS: 1358.6298373879497
> with -nearest 10 -points (original)
> READER MB: 1.7249317169189453
> BEST M hits/sec: 0.01445208197367343
> BEST QPS: 1445.208197367343
> with -box -geo3d (patch)
> READER MB: 1.1345596313476562
> BEST M hits/sec: 39.84968715299074
> BEST QPS: 40.54914292796736
> with -box -geo3d (original)
> READER MB: 1.7456226348876953
> BEST M hits/sec: 40.45051734329004
> BEST QPS: 41.160519101846695
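The heap saving in the READER MB numbers above comes from keeping the packed index bytes out of the Java heap. The underlying trade-off can be illustrated with plain `java.nio` (an illustration only, not BKDReader's actual code): the same bytes can live in a heap buffer or a direct (off-heap) buffer, read through the same accessor.

```java
import java.nio.ByteBuffer;

/**
 * Illustration of the on-heap vs off-heap choice discussed above: the
 * same packed ints can live in a heap byte[] or a direct (off-heap)
 * ByteBuffer, accessed through identical random-access reads.
 */
public class OffHeapSketch {
    // Absolute read: does not move the buffer's position.
    static int readInt(ByteBuffer buf, int index) {
        return buf.getInt(index * Integer.BYTES);
    }

    public static void main(String[] args) {
        int[] values = {7, 42, -1};

        ByteBuffer onHeap = ByteBuffer.allocate(values.length * Integer.BYTES);
        ByteBuffer offHeap = ByteBuffer.allocateDirect(values.length * Integer.BYTES);
        for (int v : values) {
            onHeap.putInt(v);
            offHeap.putInt(v); // direct buffer: bytes live outside the Java heap
        }

        // Both buffers answer random-access reads identically; only the
        // residency (heap vs native memory, hence GC pressure) differs.
        System.out.println(readInt(onHeap, 1) == readInt(offHeap, 1)); // prints true
    }
}
```

This is why the patched reader shows a smaller READER MB figure at a modest QPS cost: reads go through the off-heap indirection instead of a resident heap array.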






Re: precommit fail or is it me?

2019-09-06 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Interesting. I haven't come across this one myself; only the "javax.naming.*" 
precommit errors (already mentioned previously) appear and disappear for me.

The jetty version in the commit below seems older than the latest; could that 
be related?
https://github.com/apache/lucene-solr/commit/0c24aa6c75a288e8d42c436162ca221518287d46#diff-020d173883031924455d3daf40e70d93

Christine

From: dev@lucene.apache.org At: 09/06/19 14:53:47To:  java-...@lucene.apache.org
Subject: precommit fail or is it me?

Is anybody else seeing this error:

...workspace/lucene/lucene_baseline/build.xml:117: The following error
occurred while executing this line:
.../workspace/lucene/lucene_baseline/lucene/build.xml:90: The
following error occurred while executing this line:
.../workspace/lucene/lucene_baseline/lucene/tools/custom-tasks.xml:62:
JAR resource does not exist:
replicator/lib/jetty-continuation-9.4.14.v20181114.jar

Should I be using gradle instead of ant precommit??





[GitHub] [lucene-solr] cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if 
group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#issuecomment-528868837
 
 
   > ... Will also annotate specific observations (and spoiler alert) one 
unexpected test failure mystery. ...
   
   Alright, annotations are complete, let me know what you think?
   
   And here's the test failure mystery i.e. for
   
   ```
   query("q", "{!func}id_i1", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1, "group.limit", 1, "sort", i1+" desc");
   ```
   
   it mostly gives this error
   
   ```
   [junit4]   2> 15996 ERROR (TEST-TestDistributedGrouping.test-seed#[722F68DB7C831C15]) [ ] o.a.s.BaseDistributedSearchTestCase Mismatched responses:
   [junit4]   2> {responseHeader={status=0,QTime=3},grouped={a_i1={matches=272,groups=[{groupValue=9,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{a_i1=9, id=1000}]}}, {groupValue=,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{a_i1=, id=500}]}}, {groupValue=4321,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=10, a_i1=4321}]}}, {groupValue=876,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=8, a_i1=876}]}}, {groupValue=500,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=5, a_i1=500}]}}, {groupValue=379,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=12, a_i1=379}]}}, {groupValue=233,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=23, a_i1=233}]}}, {groupValue=232,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=19, a_i1=232}]}}, {groupValue=123,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=7, a_i1=123}]}}, {groupValue=100,doclist={numFound=1,start=0,maxScore=-1.0,docs=[SolrDocument{id=1, a_i1=100}]}}]}}}
   [junit4]   2> {responseHeader={status=0,QTime=0},grouped={a_i1={matches=272,groups=[{groupValue=9,doclist={numFound=100,start=0,docs=[SolrDocument{a_i1=9, id=1000}]}}, {groupValue=,doclist={numFound=100,start=0,docs=[SolrDocument{a_i1=, id=500}]}}, {groupValue=4321,doclist={numFound=1,start=0,docs=[SolrDocument{id=10, a_i1=4321}]}}, {groupValue=876,doclist={numFound=1,start=0,docs=[SolrDocument{id=8, a_i1=876}]}}, {groupValue=500,doclist={numFound=1,start=0,docs=[SolrDocument{id=5, a_i1=500}]}}, {groupValue=379,doclist={numFound=1,start=0,docs=[SolrDocument{id=12, a_i1=379}]}}, {groupValue=233,doclist={numFound=1,start=0,docs=[SolrDocument{id=23, a_i1=233}]}}, {groupValue=232,doclist={numFound=5,start=0,docs=[SolrDocument{id=18, a_i1=232}]}}, {groupValue=123,doclist={numFound=1,start=0,docs=[SolrDocument{id=7, a_i1=123}]}}, {groupValue=100,doclist={numFound=1,start=0,docs=[SolrDocument{id=1, a_i1=100}]}}]}}}
   ...
   [junit4] FAILURE 14.4s | TestDistributedGrouping.test <<<
   [junit4]> Throwable #1: junit.framework.AssertionFailedError: .grouped[a_i1].groups[7].doclist[0][id][0]:19!=18
   ...
   ```
   
   but sometimes it succeeds (and nothing unusual jumps out about how documents 18 and 19 in group 232 are indexed, e.g. at https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.2.0/solr/core/src/test/org/apache/solr/TestDistributedGrouping.java#L115-L144).
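   On the 19!=18 mismatch: both documents in group 232 carry the same sort value (`a_i1=232`), so which one becomes the group head can depend on the order in which documents happen to be encountered, unless the sort carries an explicit tiebreaker. A standalone, hypothetical sketch (not Solr code; names are illustrative) of that effect:

   ```java
   import java.util.ArrayList;
   import java.util.Comparator;
   import java.util.List;

   public class GroupTieDemo {
       record Doc(int id, int a_i1) {}

       /** Returns the id of the "group head" after sorting the given arrival order. */
       static int headId(List<Doc> arrivalOrder, boolean tieBreakById) {
           Comparator<Doc> cmp = Comparator.comparingInt(Doc::a_i1).reversed();
           if (tieBreakById) {
               cmp = cmp.thenComparingInt(Doc::id);
           }
           List<Doc> docs = new ArrayList<>(arrivalOrder);
           docs.sort(cmp); // List.sort is stable: tied docs keep their arrival order
           return docs.get(0).id();
       }

       public static void main(String[] args) {
           List<Doc> orderA = List.of(new Doc(18, 232), new Doc(19, 232));
           List<Doc> orderB = List.of(new Doc(19, 232), new Doc(18, 232));
           // Without a tiebreaker the winner depends on arrival order:
           System.out.println(headId(orderA, false) + " / " + headId(orderB, false)); // 18 / 19
           // With an explicit id tiebreaker both arrival orders agree:
           System.out.println(headId(orderA, true) + " / " + headId(orderB, true));   // 18 / 18
       }
   }
   ```

   If the first-pass collectors on different shards (or runs) see the tied documents in different orders, a sort without a tiebreaker could plausibly produce exactly this kind of intermittent 18-vs-19 flip.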
   
   
   
   And last but not least, thank you @diegoceccarelli for continuing to work on 
this pull request!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




precommit fail or is it me?

2019-09-06 Thread Michael Sokolov
Is anybody else seeing this error:

...workspace/lucene/lucene_baseline/build.xml:117: The following error
occurred while executing this line:
.../workspace/lucene/lucene_baseline/lucene/build.xml:90: The
following error occurred while executing this line:
.../workspace/lucene/lucene_baseline/lucene/tools/custom-tasks.xml:62:
JAR resource does not exist:
replicator/lib/jetty-continuation-9.4.14.v20181114.jar

Should I be using gradle instead of ant precommit??




[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321742912
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/TestDistributedGrouping.java
 ##
 @@ -425,6 +426,65 @@ public void test() throws Exception {
 
 //Debug
 simpleQuery("q", "*:*", "rows", 10, "fl", "id," + i1, "group", "true", 
"group.field", i1, "debug", "true");
+doTestGroupSkipSecondStep();
+  }
+
+  /*
+SOLR-11831, test skipping the second grouping step if the query only retrieves one document per group
+   */
+  private void doTestGroupSkipSecondStep() throws Exception {
+ignoreException(GroupParams.GROUP_SKIP_DISTRIBUTED_SECOND); // don't print 
stack trace for exception raised by group.skip.second.step
+// Ignore numFound if group.skip.second.step is enabled because the number 
of documents per group will not be computed (will default to 1)
 
 Review comment:
   > ... // Ignore numFound if group.skip.second.step is enabled because the 
number of documents per group will not be computed (will default to 1) ...
   
   Curiosity only at this point: might it be possible to (somehow) test that the `numFound` being returned is `1`?
   
   And as you already mentioned elsewhere in this pull request, the `numFound` 
always being `1` for `group.skip.second.step=true` needs to be clearly 
documented.





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321740457
 
 

 ##
 File path: 
lucene/grouping/src/java/org/apache/lucene/search/grouping/FirstPassGroupingCollector.java
 ##
 @@ -139,10 +139,18 @@ public ScoreMode scoreMode() {
   // System.out.println("  group=" + (group.groupValue == null ? "null" : 
group.groupValue.toString()));
   SearchGroup searchGroup = new SearchGroup<>();
   searchGroup.groupValue = group.groupValue;
+  // We pass this around so that we can get the corresponding solr id when 
serializing the search group to send to the federator
+  searchGroup.topDocLuceneId = group.topDoc;
   searchGroup.sortValues = new Object[sortFieldCount];
   for(int sortFieldIDX=0;sortFieldIDX<sortFieldCount;sortFieldIDX++) {
 
 Review comment:
   The https://lucene.apache.org/solr/guide/8_1/common-query-parameters.html#sort-parameter documentation reminded me that sorting by functions is possible. If (as the code snippet here seems to suggest) `group.skip.second.step=true` currently assumes that the sort element is a field (and not a function) then let's include that in the documentation, validation code and test coverage. Support for the not-yet-supported things could of course be added subsequently. Does that kind of make sense?





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321734947
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/TestDistributedGrouping.java
 ##
 @@ -425,6 +426,65 @@ public void test() throws Exception {
 
 //Debug
 simpleQuery("q", "*:*", "rows", 10, "fl", "id," + i1, "group", "true", 
"group.field", i1, "debug", "true");
+doTestGroupSkipSecondStep();
+  }
+
+  /*
+SOLR-11831, test skipping the second grouping step if the query only retrieves one document per group
+   */
+  private void doTestGroupSkipSecondStep() throws Exception {
+ignoreException(GroupParams.GROUP_SKIP_DISTRIBUTED_SECOND); // don't print 
stack trace for exception raised by group.skip.second.step
+// Ignore numFound if group.skip.second.step is enabled because the number 
of documents per group will not be computed (will default to 1)
+handle.put("numFound", SKIP);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true, 
"group.limit", 1,  "fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "kings", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "1234doesnotmatchanything1234", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1);
+
+ignoreException("Illegal grouping specification");
+// ngroups will return the correct results; the problem is that numFound for each group might be wrong in a multi-shard setup - but there is no way to
+// enable/disable it.
+//assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.ngroups", 
true);
+assertSimpleQueryThrows("q", "{!func}id", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 5);
+assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 0);
+// group sorted in a different way should fail
+assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 0, 
"sort", i1+" desc");
+assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 0, 
"group.sort", i1+" desc");
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1, "sort", tlong+" desc,"+i1+" asc", "group.sort", 
tlong+" desc");
+
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1, "sort", tlong+" desc,"+i1+" asc", "group.sort", 
tlong+" desc");
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1, "sort", tlong+" desc,"+i1+" asc", "group.sort", 
tlong+" desc,"+ i1+" asc");
+// not a prefix, should fail
+assertSimpleQueryThrows("q", "{!func}id_i1", "rows", 3, 
"group.skip.second.step", true,  "fl",  "id," + i1, "group", "true",
+"group.field", i1, "sort", tlong+" desc,"+i1+" asc", "group.sort",i1+" 
asc,"+tlong+" desc");
+
+// check group.main == true
 
 Review comment:
   minor: let's add comments as to why the `group.main` and `group.format == simple` test coverage is of interest. I only vaguely recall something about a different end-result(?) transformer code path and the response not including any (per-group) `numFound` figure. If there is no numFound figure, then perhaps these tests could happen after the `handle.remove("numFound")` call?





[JENKINS] Lucene-Solr-repro - Build # 3593 - Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3593/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/207/consoleText

[repro] Revision: 3a20ebc3a66d60645c6902ba24a9ffa4a16841d0

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.seed=B6096B61CEEA9FD8 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=America/Virgin 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
6574ae63d43f1a5a60c126a6d766d242883bf806
[repro] git fetch
[repro] git checkout 3a20ebc3a66d60645c6902ba24a9ffa4a16841d0

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=B6096B61CEEA9FD8 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=America/Virgin 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 3801 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

[repro] Re-testing 100% failures at the tip of branch_8x
[repro] git fetch
[repro] git checkout branch_8x

[...truncated 3 lines...]
[repro] git merge --ff-only

[...truncated 8 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=B6096B61CEEA9FD8 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=fr-BE -Dtests.timezone=America/Virgin 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 3641 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

[repro] Re-testing 100% failures at the tip of branch_8x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=fr-BE -Dtests.timezone=America/Virgin -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 3612 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x without a seed:
[repro]   5/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro] git checkout 6574ae63d43f1a5a60c126a6d766d242883bf806

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321732406
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/TestDistributedGrouping.java
 ##
 @@ -425,6 +426,65 @@ public void test() throws Exception {
 
 //Debug
 simpleQuery("q", "*:*", "rows", 10, "fl", "id," + i1, "group", "true", 
"group.field", i1, "debug", "true");
+doTestGroupSkipSecondStep();
+  }
+
+  /*
+SOLR-11831, test skipping the second grouping step if the query only retrieves one document per group
+   */
+  private void doTestGroupSkipSecondStep() throws Exception {
+ignoreException(GroupParams.GROUP_SKIP_DISTRIBUTED_SECOND); // don't print 
stack trace for exception raised by group.skip.second.step
+// Ignore numFound if group.skip.second.step is enabled because the number 
of documents per group will not be computed (will default to 1)
+handle.put("numFound", SKIP);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true, 
"group.limit", 1,  "fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "kings", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "1234doesnotmatchanything1234", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1);
+
+ignoreException("Illegal grouping specification");
+// ngroups will return the correct results; the problem is that numFound for each group might be wrong in a multi-shard setup - but there is no way to
+// enable/disable it.
+//assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.ngroups", 
true);
+assertSimpleQueryThrows("q", "{!func}id", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 5);
+assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 0);
+// group sorted in a different way should fail
 
 Review comment:
   Following on from the above reasoning, test coverage for `sort` and 
`group.sort` interaction with `group.skip.second.step` could include something 
like
   
   ```
   query("q", "{!func}id_i1", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1, "group.limit", 1,
 "sort", i1+" desc");
   query("q", "{!func}id_i1", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1, "group.limit", 1,
 "sort", i1+" desc", "group.sort", i1+" desc");
   ```
   
   i.e. `group.sort` being absent is fine and `group.sort` matching `sort` is 
fine too. These should pass, I think, no?
   
   
https://github.com/cpoerschke/lucene-solr/commit/3690cb3d4ed537546d2d876eb00dcc0fb735a557
 has those two tests commented out because mysteriously most of the time they 
fail (but part of the time they succeed!). Will add failure example output.





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321730287
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/TestDistributedGrouping.java
 ##
 @@ -425,6 +426,65 @@ public void test() throws Exception {
 
 //Debug
 simpleQuery("q", "*:*", "rows", 10, "fl", "id," + i1, "group", "true", 
"group.field", i1, "debug", "true");
+doTestGroupSkipSecondStep();
+  }
+
+  /*
+SOLR-11831, test skipping the second grouping step if the query only retrieves one document per group
+   */
+  private void doTestGroupSkipSecondStep() throws Exception {
+ignoreException(GroupParams.GROUP_SKIP_DISTRIBUTED_SECOND); // don't print 
stack trace for exception raised by group.skip.second.step
+// Ignore numFound if group.skip.second.step is enabled because the number 
of documents per group will not be computed (will default to 1)
+handle.put("numFound", SKIP);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true, 
"group.limit", 1,  "fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "kings", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "1234doesnotmatchanything1234", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1);
+
+ignoreException("Illegal grouping specification");
+// ngroups will return the correct results; the problem is that numFound for each group might be wrong in a multi-shard setup - but there is no way to
+// enable/disable it.
+//assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.ngroups", 
true);
+assertSimpleQueryThrows("q", "{!func}id", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 5);
+assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.limit", 0);
+// group sorted in a different way should fail
 
 Review comment:
   These two tests after the `group sorted in a different way should fail` comment are interesting:
   * the comment suggests that failure is expected because of the `sort`/`group.sort` parameters, but wouldn't it also fail because of `group.limit=0`? I.e., to truly test the sorting, group.limit should be absent (the default is 1) or explicitly set to 1.
   * eliminating the group.limit would give the queries below, but shouldn't the first of the two queries pass?
 * query 1: if sort is present and group.sort is absent then sort would also be used for group.sort -- it should pass?
 * query 2: sort is absent, so the default is `sort=score desc` (I think), of which the `group.sort` is not a prefix match -- it should fail.
   ```
   assertSimpleQueryThrows(...,"sort", i1+" desc");
   assertSimpleQueryThrows(..., "group.sort", i1+" desc");
   ```
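   The prefix rule reasoned about above could be expressed as a small standalone check. This is a hypothetical helper (the method name, the simplified comma-separated sort-spec model, and the `score desc` default are my assumptions, not Solr's actual validation code):

   ```java
   import java.util.Arrays;
   import java.util.List;

   public class SortPrefixCheck {
       /**
        * Skipping the second grouping step is only safe when group.sort is a
        * prefix of sort, with group.sort defaulting to sort and sort
        * defaulting to "score desc". Sort specs are modelled here as simple
        * comma-separated lists of "field direction" clauses.
        */
       static boolean skipAllowed(String sortSpec, String groupSortSpec) {
           List<String> sort = split(sortSpec == null ? "score desc" : sortSpec);
           List<String> groupSort = groupSortSpec == null ? sort : split(groupSortSpec);
           if (groupSort.size() > sort.size()) {
               return false;
           }
           return sort.subList(0, groupSort.size()).equals(groupSort);
       }

       private static List<String> split(String spec) {
           return Arrays.asList(spec.trim().split("\\s*,\\s*"));
       }

       public static void main(String[] args) {
           System.out.println(skipAllowed("a_i1 desc", null));        // true: group.sort inherits sort
           System.out.println(skipAllowed("a_i1 desc", "a_i1 desc")); // true: exact match
           System.out.println(skipAllowed(null, "a_i1 desc"));        // false: not a prefix of "score desc"
           System.out.println(skipAllowed("tlong desc,a_i1 asc", "a_i1 asc,tlong desc")); // false: reordered
       }
   }
   ```

   Under this model, query 1 above (sort present, group.sort absent) passes and query 2 (sort absent, non-matching group.sort) fails, matching the reasoning in the two bullet points.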





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321726345
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/TestDistributedGrouping.java
 ##
 @@ -425,6 +426,65 @@ public void test() throws Exception {
 
 //Debug
 simpleQuery("q", "*:*", "rows", 10, "fl", "id," + i1, "group", "true", 
"group.field", i1, "debug", "true");
+doTestGroupSkipSecondStep();
+  }
+
+  /*
+SOLR-11831, test skipping the second grouping step if the query only retrieves one document per group
+   */
+  private void doTestGroupSkipSecondStep() throws Exception {
+ignoreException(GroupParams.GROUP_SKIP_DISTRIBUTED_SECOND); // don't print 
stack trace for exception raised by group.skip.second.step
+// Ignore numFound if group.skip.second.step is enabled because the number 
of documents per group will not be computed (will default to 1)
+handle.put("numFound", SKIP);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true, 
"group.limit", 1,  "fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "kings", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1);
 
 Review comment:
   The intent of the two `{!func}id_i1` tests
   
   ```
   query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true, 
"group.limit", 1,  "fl",  "id," + i1, "group", "true", "group.field", i1);
   ...
   query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  "fl", 
 "id," + i1, "group", "true", "group.field", i1);
   ```
   
   looks to be related to the `group.limit` parameter's absence or presence (with value `1`). The recently added `variantQuery` helper method could potentially make that intent clearer, assuming that was the intent. https://github.com/cpoerschke/lucene-solr/commit/3690cb3d4ed537546d2d876eb00dcc0fb735a557 has some scribbles in the `doTestGroupSkipSecondStepAlt` method. What do you think?





[JENKINS] Lucene-Solr-Tests-master - Build # 3688 - Failure

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3688/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrIndexSearcher, SolrCore, SolrIndexSearcher, 
MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:544)  
at org.apache.solr.core.SolrCore.(SolrCore.java:995)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:914)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1241)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.(SolrIndexSearcher.java:308)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2154)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2305)  at 
org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1147)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:1029)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:914)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1241)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1093)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:914)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1241)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.(SolrIndexSearcher.java:308)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2132)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2305)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2041)  at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:702)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:102)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1079)
  at 

[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321719110
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -920,6 +1055,8 @@ public static final boolean pointInPolygon(final 
List tessellation, do
 private Node previousZ;
 // next z node
 private Node nextZ;
+// if the edge from this node to the next node is part of the polygon edges
+private boolean nextEdgeFromPolygon;
 
 Review comment:
   ```suggestion
   private boolean isNextEdgeFromPolygon;
   ```





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321719788
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -979,9 +1118,11 @@ public String toString() {
   /** Triangle in the tessellated mesh */
   public final static class Triangle {
 Node[] vertex;
+boolean[] edgeFromPolygon;
 
-protected Triangle(Node a, Node b, Node c) {
+protected Triangle(Node a, boolean abFromPolygon, Node b, boolean 
bcFromPolygon, Node c, boolean caFromPolygon) {
 
 Review comment:
   ```suggestion
   protected Triangle(Node a, boolean isABfromPolygon, Node b, boolean 
isBCfromPolygon, Node c, boolean isCAfromPolygon) {
   ```





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321718297
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -590,14 +601,123 @@ private static final boolean splitEarcut(Object 
polygon, final Node start, final
 return false;
   }
 
+  /** Computes if edge defined by a and b overlaps with a polygon edge **/
+  private static boolean edgeFromPolygon(final Node a, final Node b, final 
boolean isMorton) {
+if (isMorton) {
+  return mortonEdgeFromPolygon(a, b);
+}
+Node next = a;
+do {
+  if (pointInLine(next, next.next, a) && pointInLine(next, next.next, b)) {
+return next.nextEdgeFromPolygon;
+  }
+  if (pointInLine(next, next.previous, a) && pointInLine(next, 
next.previous, b)) {
+return next.previous.nextEdgeFromPolygon;
+  }
+  next = next.next;
+} while(next != a);
+return false;
+  }
+
+  /** Uses morton code for speed to determine whether or not an edge defined by a and b overlaps with a polygon edge */
+  private static final boolean mortonEdgeFromPolygon(final Node a, final Node b) {
 
 Review comment:
   ```suggestion
 private static final boolean isMortonEdgeFromPolygon(final Node a, final 
Node b) {
   ```





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321718707
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -590,14 +601,123 @@ private static final boolean splitEarcut(Object 
polygon, final Node start, final
 return false;
   }
 
+  /** Computes if edge defined by a and b overlaps with a polygon edge **/
+  private static boolean edgeFromPolygon(final Node a, final Node b, final 
boolean isMorton) {
+if (isMorton) {
+  return mortonEdgeFromPolygon(a, b);
+}
+Node next = a;
+do {
+  if (pointInLine(next, next.next, a) && pointInLine(next, next.next, b)) {
+return next.nextEdgeFromPolygon;
+  }
+  if (pointInLine(next, next.previous, a) && pointInLine(next, 
next.previous, b)) {
+return next.previous.nextEdgeFromPolygon;
+  }
+  next = next.next;
+} while(next != a);
+return false;
+  }
+
+  /** Uses morton code for speed to determine whether or not an edge defined by a and b overlaps with a polygon edge */
+  private static final boolean mortonEdgeFromPolygon(final Node a, final Node b) {
+// edge bbox (flip the bits so negative encoded values are < positive 
encoded values)
+final int minTX = StrictMath.min(a.x, b.x) ^ 0x8000;
+final int minTY = StrictMath.min(a.y, b.y) ^ 0x8000;
+final int maxTX = StrictMath.max(a.x, b.x) ^ 0x8000;
+final int maxTY = StrictMath.max(a.y, b.y) ^ 0x8000;
+
+// z-order range for the current edge;
+final long minZ = BitUtil.interleave(minTX, minTY);
+final long maxZ = BitUtil.interleave(maxTX, maxTY);
+
+// now make sure we don't have other points inside the potential ear;
+
+// look for points inside edge in both directions
+Node p = a.previousZ;
+Node n = a.nextZ;
+while (p != null && Long.compareUnsigned(p.morton, minZ) >= 0
+&& n != null && Long.compareUnsigned(n.morton, maxZ) <= 0) {
+  if (pointInLine(p, p.next, a) && pointInLine(p, p.next, b)) {
+return p.nextEdgeFromPolygon;
+  }
+  if (pointInLine(p, p.previous, a) && pointInLine(p, p.previous, b)) {
+return p.previous.nextEdgeFromPolygon;
+  }
+
+  p = p.previousZ;
+
+  if (pointInLine(n, n.next, a) && pointInLine(n, n.next, b)) {
+return n.nextEdgeFromPolygon;
+  }
+  if (pointInLine(n, n.previous, a) && pointInLine(n, n.previous, b)) {
+return n.previous.nextEdgeFromPolygon;
+  }
+
+  n = n.nextZ;
+}
+
+// first look for points inside the edge in decreasing z-order
+while (p != null && Long.compareUnsigned(p.morton, minZ) >= 0) {
+  if (pointInLine(p, p.next, a) && pointInLine(p, p.next, b)) {
+return p.nextEdgeFromPolygon;
+  }
+  if (pointInLine(p, p.previous, a) && pointInLine(p, p.previous, b)) {
+return p.previous.nextEdgeFromPolygon;
+  }
+  p = p.previousZ;
+}
+// then look for points in increasing z-order
+while (n != null &&
+Long.compareUnsigned(n.morton, maxZ) <= 0) {
+  if (pointInLine(n, n.next, a) && pointInLine(n, n.next, b)) {
+return n.nextEdgeFromPolygon;
+  }
+  if (pointInLine(n, n.previous, a) && pointInLine(n, n.previous, b)) {
+return n.previous.nextEdgeFromPolygon;
+  }
+  n = n.nextZ;
+}
+return false;
+  }
+
+  private static boolean pointInLine(final Node a, final Node b, final Node 
point) {
+return pointInLine(a, b, point.getX(), point.getY());
+  }
+
+  private static boolean pointInLine(final Node a, final Node b, final double 
lon, final double lat) {
+final double dxc = lon - a.getX();
+final double dyc = lat - a.getY();
+
+final double dxl = b.getX() - a.getX();
+final double dyl = b.getY() - a.getY();
+
+if (dxc * dyl - dyc * dxl == 0) {
+  if (Math.abs(dxl) >= Math.abs(dyl)) {
+return dxl > 0 ?
+a.getX() <= lon && lon <= b.getX() :
+b.getX() <= lon && lon <= a.getX();
+  } else {
+return dyl > 0 ?
+a.getY() <= lat && lat <= b.getY() :
+b.getY() <= lat && lat <= a.getY();
+  }
+}
+return false;
+  }
+
+
+  /** Links two polygon vertices using a bridge. **/  /** Links 
two polygon vert
 
 Review comment:
   dangling comment?
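For reference, the collinearity-plus-containment test in the quoted `pointInLine` can be exercised on its own. A minimal standalone sketch on plain doubles (the patch's `Node` class is omitted, so this is an illustration rather than the actual patch code):

```java
public class PointInLine {
    /** true if (x, y) lies on the closed segment (ax, ay)-(bx, by) */
    public static boolean isPointInLine(double ax, double ay, double bx, double by,
                                        double x, double y) {
        double dxc = x - ax, dyc = y - ay;   // vector a -> candidate point
        double dxl = bx - ax, dyl = by - ay; // vector a -> b
        if (dxc * dyl - dyc * dxl != 0) {
            return false; // non-zero cross product: point is not collinear with a-b
        }
        // collinear: check containment along the dominant axis to avoid precision issues
        if (Math.abs(dxl) >= Math.abs(dyl)) {
            return dxl > 0 ? ax <= x && x <= bx : bx <= x && x <= ax;
        }
        return dyl > 0 ? ay <= y && y <= by : by <= y && y <= ay;
    }
}
```

For example, on the segment (0,0)-(4,4) the point (2,2) passes, (5,5) is collinear but outside the segment, and (2,3) fails the cross-product test.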



[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321718778
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -590,14 +601,123 @@ private static final boolean splitEarcut(Object 
polygon, final Node start, final
 return false;
   }
 
+  /** Links two polygon vertices using a bridge. **/  /** Links 
two polygon vert
+
   /** Links two polygon vertices using a bridge. **/
-  private static final Node splitPolygon(final Node a, final Node b) {
+  private static final Node splitPolygon(final Node a, final Node b, boolean 
edgeFromPolygon) {
 
 Review comment:
   ```suggestion
 private static final Node splitPolygon(final Node a, final Node b, boolean 
isEdgeFromPolygon) {
   ```



[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321718513
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -590,14 +601,123 @@ private static final boolean splitEarcut(Object 
polygon, final Node start, final
 return false;
   }
 
+  private static boolean pointInLine(final Node a, final Node b, final Node 
point) {
+return pointInLine(a, b, point.getX(), point.getY());
+  }
+
+  private static boolean pointInLine(final Node a, final Node b, final double 
lon, final double lat) {
 
 Review comment:
   ```suggestion
 private static boolean isPointInLine(final Node a, final Node b, final 
double lon, final double lat) {
   ```





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321719001
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -873,13 +998,23 @@ private static boolean pointInEar(final double x, final 
double y, final double a
 
   /** compute whether the given x, y point is in a triangle; uses the winding 
order method */
   public static boolean pointInTriangle (double x, double y, double ax, double 
ay, double bx, double by, double cx, double cy) {
-int a = orient(x, y, ax, ay, bx, by);
-int b = orient(x, y, bx, by, cx, cy);
-if (a == 0 || b == 0 || a < 0 == b < 0) {
-  int c = orient(x, y, cx, cy, ax, ay);
-  return c == 0 || (c < 0 == (b < 0 || a < 0));
+double minX = StrictMath.min(ax, StrictMath.min(bx, cx));
+double minY = StrictMath.min(ay, StrictMath.min(by, cy));
+double maxX = StrictMath.max(ax, StrictMath.max(bx, cx));
+double maxY = StrictMath.max(ay, StrictMath.max(by, cy));
+//check the bounding box because if the triangle is degenerate, e.g. points and lines, we need to filter out
 
 Review comment:
   :+1: 
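The change being approved here puts a bounding-box pre-filter in front of the winding-order test so degenerate triangles (points and lines) are rejected. A self-contained sketch of that shape, with `orient` reimplemented inline rather than taken from Lucene's `GeoUtils` (so signs and structure here are illustrative, not the committed code):

```java
public class PointInTriangle {
    /** orientation of triangle (ax,ay),(bx,by),(cx,cy): 1 = CCW, -1 = CW, 0 = collinear */
    static int orient(double ax, double ay, double bx, double by, double cx, double cy) {
        double v = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        return v > 0 ? 1 : (v < 0 ? -1 : 0);
    }

    /** winding-order point-in-triangle with a bounding-box pre-filter */
    public static boolean pointInTriangle(double x, double y, double ax, double ay,
                                          double bx, double by, double cx, double cy) {
        double minX = Math.min(ax, Math.min(bx, cx));
        double minY = Math.min(ay, Math.min(by, cy));
        double maxX = Math.max(ax, Math.max(bx, cx));
        double maxY = Math.max(ay, Math.max(by, cy));
        // outside the bbox: also rejects points far from degenerate (collapsed) triangles
        if (x < minX || x > maxX || y < minY || y > maxY) {
            return false;
        }
        int a = orient(x, y, ax, ay, bx, by);
        int b = orient(x, y, bx, by, cx, cy);
        if (a == 0 || b == 0 || a < 0 == b < 0) {
            int c = orient(x, y, cx, cy, ax, ay);
            return c == 0 || (c < 0 == (b < 0 || a < 0));
        }
        return false;
    }
}
```

Without the bbox check, a triangle collapsed to a single point would make every `orient` call return 0 and the winding test alone would accept arbitrary query points.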





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321718163
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -590,14 +601,123 @@ private static final boolean splitEarcut(Object 
polygon, final Node start, final
 return false;
   }
 
+  /** Computes if edge defined by a and b overlaps with a polygon edge **/
+  private static boolean edgeFromPolygon(final Node a, final Node b, final 
boolean isMorton) {
 
 Review comment:
   ```suggestion
 private static boolean isEdgeFromPolygon(final Node a, final Node b, final 
boolean isMortonOptimized) {
   ```
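The morton fast path that `isMortonOptimized` selects depends on z-order codes built by interleaving the two coordinate ints, after flipping each sign bit so negative encoded values sort below positive ones under unsigned comparison. A rough standalone sketch of those two tricks (this mimics what Lucene's `BitUtil.interleave` does, but is an independent reimplementation):

```java
public class ZOrder {
    /** spread the low 32 bits of v across the even bit positions of a long */
    static long spread(long v) {
        v &= 0xFFFFFFFFL;
        v = (v | (v << 16)) & 0x0000FFFF0000FFFFL;
        v = (v | (v << 8))  & 0x00FF00FF00FF00FFL;
        v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0FL;
        v = (v | (v << 2))  & 0x3333333333333333L;
        v = (v | (v << 1))  & 0x5555555555555555L;
        return v;
    }

    /** morton code: bits of 'even' land on even positions, bits of 'odd' on odd positions */
    public static long interleave(int even, int odd) {
        return spread(even) | (spread(odd) << 1);
    }

    /** flip the sign bit so negative encoded ints compare below positive ones as unsigned */
    public static int flip(int v) {
        return v ^ 0x80000000;
    }
}
```

With codes built this way, the quoted `minZ`/`maxZ` range over the edge's bounding box bounds the unsigned `Long.compareUnsigned` walk along the `previousZ`/`nextZ` chain.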





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321721428
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -1004,6 +1145,11 @@ public double getX(int vertex) {
   return this.vertex[vertex].getX();
 }
 
+/** get if edge is shared with the polygon for the given edge */
+public boolean fromPolygon(int vertex) {
 
 Review comment:
   ```suggestion
   public boolean isEdgeFromPolygon(int startVertex) {
   ```





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321718423
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -590,14 +601,123 @@ private static final boolean splitEarcut(Object 
polygon, final Node start, final
 return false;
   }
 
+  private static boolean pointInLine(final Node a, final Node b, final Node 
point) {
 
 Review comment:
   ```suggestion
 private static boolean isPointInLine(final Node a, final Node b, final 
Node point) {
   ```





[GitHub] [lucene-solr] nknize commented on a change in pull request #771: LUCENE-8620: Update Tessellator logic to label if triangle edges belongs to the original polygon

2019-09-06 Thread GitBox
nknize commented on a change in pull request #771: LUCENE-8620: Update 
Tessellator logic to label if triangle edges belongs to the original polygon
URL: https://github.com/apache/lucene-solr/pull/771#discussion_r321722381
 
 

 ##
 File path: lucene/sandbox/src/test/org/apache/lucene/geo/TestTessellator.java
 ##
 @@ -578,4 +581,77 @@ private double area(List triangles) 
{
 }
 return area;
   }
+
+  private void checkTriangleEdgesFromPolygon(Polygon p, Tessellator.Triangle 
t) {
+// first edge
+assertEquals(t.fromPolygon(0), edgeFromPolygon(p, t.getX(0), t.getY(0), 
t.getX(1), t.getY(1)));
+// second edge
+assertEquals(t.fromPolygon(1), edgeFromPolygon(p, t.getX(1), t.getY(1), 
t.getX(2), t.getY(2)));
+// third edge
+assertEquals(t.fromPolygon(2), edgeFromPolygon(p, t.getX(2), t.getY(2), 
t.getX(0), t.getY(0)));
+  }
+
+  private boolean edgeFromPolygon(Polygon p, double aLon, double aLat, double 
bLon, double bLat) {
 
 Review comment:
   ```suggestion
 private boolean isEdgeFromPolygon(Polygon p, double aLon, double aLat, 
double bLon, double bLat) {
   ```





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321724239
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/TestDistributedGrouping.java
 ##
 @@ -425,6 +426,65 @@ public void test() throws Exception {
 
 //Debug
 simpleQuery("q", "*:*", "rows", 10, "fl", "id," + i1, "group", "true", 
"group.field", i1, "debug", "true");
+doTestGroupSkipSecondStep();
+  }
+
+  /*
+SOLR-11831, test skipping the second grouping step if the query only retrieves one document per group
+   */
+  private void doTestGroupSkipSecondStep() throws Exception {
+ignoreException(GroupParams.GROUP_SKIP_DISTRIBUTED_SECOND); // don't print 
stack trace for exception raised by group.skip.second.step
+// Ignore numFound if group.skip.second.step is enabled because the number 
of documents per group will not be computed (will default to 1)
+handle.put("numFound", SKIP);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true, 
"group.limit", 1,  "fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "kings", "group.skip.second.step", true, "fl", "id," + i1, 
"group", "true", "group.field", i1);
+query("q", "{!func}id_i1", "rows", 3, "group.skip.second.step", true,  
"fl",  "id," + i1, "group", "true",
+"group.field", i1);
+query("q", "1234doesnotmatchanything1234", "group.skip.second.step", true, 
"fl", "id," + i1, "group", "true", "group.field", i1);
+
+ignoreException("Illegal grouping specification");
+// ngroups will return the correct results; the problem is that numFound for each group might be wrong in a multi-shard setting, but there is no way to
+// enable/disable it.
+//assertSimpleQueryThrows("q", "{!func}id_i1", "group.skip.second.step", 
true, "fl", "id," + i1, "group", "true", "group.field", i1, "group.ngroups", 
true);
 
 Review comment:
   The commented upon and commented out test here jumped out. How about 
disallowing `group.ngroups=true` when `group.skip.second.step=true` is used? In 
the multi-shard case the numFound values for each group would be wrong, in the 
single-shard case the numFound values would be right but then in a 
single-sharded setup the usage of distributed grouping would be less likely 
(though not impossible) presumably? 
https://github.com/cpoerschke/lucene-solr/commit/f3d715c5b0ea708c15cace8f889e12e48292d79b
 has a potential code change (but no documentation edit yet).





[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r321722336
 
 

 ##
 File path: solr/solr-ref-guide/src/result-grouping.adoc
 ##
 @@ -114,6 +114,11 @@ Setting this parameter to a number greater than 0 enables 
caching for result gro
 +
 Testing has shown that group caching only improves search time with Boolean, 
wildcard, and fuzzy queries. For simple queries like term or "match all" 
queries, group caching degrades performance.
 
+`group.skip.second.step`::
+This parameter can be set to `true` if only one document per group needs to be 
retrieved. Result Grouping executes two searches; if enabled this option will 
disable the second search improving the performance. By default the value is 
set to `false`. It can be set to `true` of if `group.limit` is 1, and 
`group.sort` fields list is a prefix of `sort` fields list (e.g., if `sort=id 
asc,name desc` and `group.sort=id asc` is fine, but  `sort=id asc,name desc` 
and `group.sort=name desc` is not). Also it cannot be used together with 
<>.
 
 Review comment:
   Thanks for adding the detailed documentation here!
   
   
https://github.com/cpoerschke/lucene-solr/commit/5af6f12f8efb20f9866d1b4c1fc94747b689ccca
 has one `s/of if/if` edit and suggests turning the `sort=id...` into 
`sort=price...` or similar, in case the use of 'id' could lead to confusion, 
since sorting by (document) id is perhaps a bit unusual. What do you think?
   
   For the YouTube link, would it make sense to include wording with the 
description e.g. the `Learning to Rank: From Theory to Production` title so 
that readers have more clarity about what they are about to click on?





[GitHub] [lucene-solr] cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-09-06 Thread GitBox
cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if 
group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#issuecomment-528843185
 
 
   Thanks @diegoceccarelli for the pushes and the documentation additions too!
   
   > ... precommit is failing at the moment due to "Rat problems" ...
   
   Interesting and strange; it seemed to be fine for me locally.
   
   I've started looking at the tests and 
https://github.com/cpoerschke/lucene-solr/commits/github-bloomberg-SOLR-11831-cpoerschke-12
 shares some work-in-progress scribbles. A `variantQuery` helper method was 
recently added in the `TestDistributedGrouping` class; I wonder if it and/or 
additional comments or formatting could help clarify the intent and rationale 
behind some of the test queries. I will also annotate specific observations 
and (spoiler alert) one unexpected test failure mystery.





[JENKINS] Lucene-Solr-BadApples-NightlyTests-8.x - Build # 33 - Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/33/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
SolrIndexSearcher, MockDirectoryWrapper, SolrIndexSearcher] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1093)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:914)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1252)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.search.SolrIndexSearcher  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.search.SolrIndexSearcher.&lt;init&gt;(SolrIndexSearcher.java:308)  at 
org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2143)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2316)  at 
org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2052)  at 
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:702)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:102)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1079)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1066)
  at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
  at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:281) 
 at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)  at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:200)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2609)  at 
org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:125)
  at org.apache.solr.util.TestHarness.update(TestHarness.java:286)  at 
org.apache.solr.util.BaseTestHarness.checkUpdateStatus(BaseTestHarness.java:274)
  at 
org.apache.solr.util.BaseTestHarness.validateUpdate(BaseTestHarness.java:244)  
at org.apache.solr.SolrTestCaseJ4.checkUpdateU(SolrTestCaseJ4.java:943)  at 
org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:922)  at 
org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:916)  at 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testBasics(AtomicUpdateProcessorFactoryTest.java:113)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)  at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
  at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
  at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at 

[jira] [Commented] (LUCENE-8964) Allow GeoJSON parser to properly skip string arrays

2019-09-06 Thread Ignacio Vera (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924220#comment-16924220
 ] 

Ignacio Vera commented on LUCENE-8964:
--

Thanks Alex! Patch looks good I will commit soon.

> Allow GeoJSON parser to properly skip string arrays
> ---
>
> Key: LUCENE-8964
> URL: https://issues.apache.org/jira/browse/LUCENE-8964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: trunk
>Reporter: Alexander Reelsen
>Assignee: Ignacio Vera
>Priority: Trivial
> Attachments: lucene-parse-geojson-arrays-0.patch
>
>
> The GeoJSON parser throws an exception when trying to parse an array of 
> strings, which is somewhat common in some free geojson services like 
> [https://whosonfirst.org|https://whosonfirst.org/]
> An example file can be seen at 
> [https://data.whosonfirst.org/101/748/479/101748479.geojson]
> This fixes the parser to also parse a string array.
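As an illustration of the fix described above (a hypothetical sketch, not Lucene's GeoJSON parser, and the function names are made up): a tolerant walk over a GeoJSON document should traverse property arrays regardless of element type, such as the string arrays found in Who's On First documents:

```python
# Illustrative sketch: walk a GeoJSON document and collect geometry types,
# skipping over property arrays whatever their element type (numbers or
# strings). Not Lucene's parser -- just the idea behind the fix.
import json

def collect_geometries(geojson_text):
    doc = json.loads(geojson_text)
    found = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("type") in ("Point", "Polygon", "MultiPolygon"):
                found.append(node["type"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            # string arrays (e.g. alternate-name lists) must not abort the walk
            for item in node:
                walk(item)

    walk(doc)
    return found
```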



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8964) Allow GeoJSON parser to properly skip string arrays

2019-09-06 Thread Ignacio Vera (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera reassigned LUCENE-8964:


Assignee: Ignacio Vera

> Allow GeoJSON parser to properly skip string arrays
> ---
>
> Key: LUCENE-8964
> URL: https://issues.apache.org/jira/browse/LUCENE-8964
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: trunk
>Reporter: Alexander Reelsen
>Assignee: Ignacio Vera
>Priority: Trivial
> Attachments: lucene-parse-geojson-arrays-0.patch
>
>
> The GeoJSON parser throws an exception when trying to parse an array of 
> strings, which is somewhat common in some free geojson services like 
> [https://whosonfirst.org|https://whosonfirst.org/]
> An example file can be seen at 
> [https://data.whosonfirst.org/101/748/479/101748479.geojson]
> This fixes the parser to also parse a string array.






[jira] [Updated] (LUCENE-8971) Enable constructing JapaneseTokenizer from custom dictionary

2019-09-06 Thread Mike Sokolov (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Sokolov updated LUCENE-8971:
-
Description: This is basically just finishing up what was started in 
LUCENE-8863. It adds a public constructor to {{JapaneseTokenizer}} that lets 
you bring-your-own dictionary, plus exposing the necessary constructors for 
{{UnknownDictionary}}, {{TokenInfoDictionary}}, and {{ConnectionCosts}}.  (was: This 
is basically just finishing up what was started in LUCENE-8863. It adds a 
public constructor to {JapaneseTokenizer} that lets you bring-your-own 
dictionary, plus exposing the necessary constructors for {UnknownDictionary}, 
{TokenInfoDictionary}, and {ConnectionCosts}.)

> Enable constructing JapaneseTokenizer from custom dictionary 
> -
>
> Key: LUCENE-8971
> URL: https://issues.apache.org/jira/browse/LUCENE-8971
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Major
>
> This is basically just finishing up what was started in LUCENE-8863. It adds 
> a public constructor to {{JapaneseTokenizer}} that lets you bring-your-own 
> dictionary, plus exposing the necessary constructors for 
> {{UnknownDictionary}}, {{TokenInfoDictionary}}, and {{ConnectionCosts}}.






[jira] [Created] (LUCENE-8971) Enable constructing JapaneseTokenizer from custom dictionary

2019-09-06 Thread Mike Sokolov (Jira)
Mike Sokolov created LUCENE-8971:


 Summary: Enable constructing JapaneseTokenizer from custom 
dictionary 
 Key: LUCENE-8971
 URL: https://issues.apache.org/jira/browse/LUCENE-8971
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mike Sokolov


This is basically just finishing up what was started in LUCENE-8863. It adds a 
public constructor to {JapaneseTokenizer} that lets you bring-your-own 
dictionary, plus exposing the necessary constructors for {UnknownDictionary}, 
{TokenInfoDictionary}, and {ConnectionCosts}.






[jira] [Commented] (LUCENE-8966) KoreanTokenizer should split unknown words on digits

2019-09-06 Thread Mike Sokolov (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924206#comment-16924206
 ] 

Mike Sokolov commented on LUCENE-8966:
--

> For complex number grouping and normalization, Namgyu Kim added a 
> KoreanNumberFilter in https://issues.apache.org/jira/browse/LUCENE-8812

Ah thanks, I'll have a look

> KoreanTokenizer should split unknown words on digits
> 
>
> Key: LUCENE-8966
> URL: https://issues.apache.org/jira/browse/LUCENE-8966
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8966.patch, LUCENE-8966.patch
>
>
> Since https://issues.apache.org/jira/browse/LUCENE-8548 the Korean tokenizer 
> groups characters of unknown words if they belong to the same script or an 
> inherited one. This is ok for inputs like Мoscow (with a Cyrillic М and the 
> rest in Latin) but this rule doesn't work well on digits since they are 
> considered common with other scripts. For instance the input "44사이즈" is kept 
> as is even though "사이즈" is part of the dictionary. We should restore the 
> original behavior and splits any unknown words if a digit is followed by 
> another type.
> This issue was first discovered in 
> [https://github.com/elastic/elasticsearch/issues/46365]
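The splitting rule described in the issue can be sketched as follows (an illustrative function, not the KoreanTokenizer code): break an unknown token whenever the character class switches between digit and non-digit:

```python
# Illustrative sketch of the proposed rule: split an unknown word at every
# boundary where a digit is followed by a non-digit or vice versa, so that
# "44" and a dictionary word like the one quoted in the issue separate.

def split_on_digit_boundaries(text):
    chunks = []
    for ch in text:
        if chunks and chunks[-1][-1].isdigit() == ch.isdigit():
            chunks[-1] += ch   # same class: extend the current chunk
        else:
            chunks.append(ch)  # class changed: start a new chunk
    return chunks
```

Under this rule "44사이즈" splits into "44" and "사이즈", while a mixed-script word with no digits (like the Мoscow example) stays whole.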






[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-09-06 Thread Mike Sokolov (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924203#comment-16924203
 ] 

Mike Sokolov commented on LUCENE-8920:
--

If I understand you correctly, T1 is the threshold we introduced earlier this 
year (or its inverse, DIRECT_ARC_LOAD_FACTOR, in fst.Builder). It's currently 
set to 4 (i.e. 1/4 as T1 in your formulation). There was pre-existing logic to 
decide (var-encoded) list vs. the (fixed-size, packed) array encoding; my 
change was piggy-backed on that. It's a threshold on N that depends on the 
depth in the FST. See FST.shouldExpand.

If you want to write up the open addressing idea in more detail, it's fine to 
add comments here unless you think they are too long / inconvenient to write in 
this form, then maybe attach a doc? I think that goes directly to the point of 
reducing space consumption, so this issue seems like a fine place for it.
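A minimal sketch of the density check being discussed, assuming the load factor of 4 mentioned above; the real FST.shouldExpand logic also depends on depth in the FST, which is omitted here:

```python
# Illustrative sketch (not the Lucene code): direct addressing is worthwhile
# only when the arcs fill enough of the label range, i.e.
# N / (max label - min label + 1) >= 1/DIRECT_ARC_LOAD_FACTOR.

DIRECT_ARC_LOAD_FACTOR = 4  # value quoted in the comment above

def should_use_direct_addressing(labels):
    """labels: sorted list of arc labels (ints) leaving one FST node."""
    if not labels:
        return False
    label_range = labels[-1] - labels[0] + 1
    # dense enough when the range is at most 4x the number of arcs
    return label_range <= DIRECT_ARC_LOAD_FACTOR * len(labels)
```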

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Blocker
> Fix For: 8.3
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which make gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?






[jira] [Updated] (SOLR-13240) UTILIZENODE action results in an exception

2019-09-06 Thread Christine Poerschke (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13240:
---
Fix Version/s: 8.3
   master (9.0)
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks everyone!

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Assignee: Christine Poerschke
>Priority: Major
> Fix For: master (9.0), 8.3
>
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> 

[jira] [Created] (SOLR-13744) Extend REST API for JWTAuthPlugin

2019-09-06 Thread Jira
Jan Høydahl created SOLR-13744:
--

 Summary: Extend REST API for JWTAuthPlugin
 Key: SOLR-13744
 URL: https://issues.apache.org/jira/browse/SOLR-13744
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: security
Reporter: Jan Høydahl


JWTAuthPlugin now supports multiple issuers in the config.

Add support to REST API for adding and removing these by name, e.g.
{code:java}
{
 "add-issuer": {"name": "myIss", "aud": "myAud" ...},
 "remove-issuer": {"name": "myIss"}
}{code}
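A sketch of how a client might assemble these commands. The command names come from the issue text; any endpoint path and the exact issuer fields are assumptions about an API that is only being proposed here:

```python
# Illustrative helper (hypothetical, not SolrJ) that builds the request body
# for the proposed add-issuer / remove-issuer commands.
import json

def build_issuer_commands(add=None, remove=None):
    body = {}
    if add:
        body["add-issuer"] = add        # e.g. {"name": "myIss", "aud": "myAud"}
    if remove:
        body["remove-issuer"] = remove  # e.g. {"name": "myIss"}
    return json.dumps(body)

# The resulting JSON would be POSTed to Solr's authentication API
# (exact endpoint path assumed, pending the actual implementation):
payload = build_issuer_commands(add={"name": "myIss", "aud": "myAud"})
```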






[jira] [Comment Edited] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-09-06 Thread Bruno Roustant (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924178#comment-16924178
 ] 

Bruno Roustant edited comment on LUCENE-8920 at 9/6/19 11:57 AM:
-

I'd love to work on that, but I'm pretty busy so I can't start immediately. If 
you can start on it soon I'll be happy to help and review.

I'll try to think more about the subject. Where should I post my remarks/ideas? 
Here in the thread or in an attached doc?

Some additional thoughts:
 * Threshold T1 to find, deciding when direct-addressing is best (N / (max 
label - min label) >= T1). E.g. with T1 = 50% the worst case is memory x2, 
right? (although there is the var-length encoding difference...). Did you try 
that, and what is the perf?
 * Threshold T2 to find, deciding when a list is better (N < T2) or when 
open-addressing is more appropriate.
 * If N is close to 2^p, the probability that open-addressing aborts (can't 
store a label in less than L tries) is high. Do we double the array size 
(2^(p+1)) or can we take 1.5x2^p to save memory? (my intuition is the second, 
but need some testing about the load factor)


was (Author: bruno.roustant):
I'd love to work on that, but I'm pretty busy so I can't start immediately. If 
you can start on it soon I'll be happy to help and review.

I'll try to think more about the subject. Where should I post my remarks/ideas? 
Here in the thread or in an attached doc?

Some additional thoughts:
 * Threshold T1 to find, deciding when direct-addressing is best (N / (max 
label - min label) >= T1). E.g. with T1 = 50% the worst case is memory x2, 
right? (although there is the var-length encoding difference...). Did you try 
that, and what is the perf?
 * Threshold T2 to find, deciding when a list is better (N < T2) or when 
open-addressing is more appropriate.
 * If N is close to 2^p, the probability that open-addressing aborts (can't 
store a label in less than L tries) is high. Do we double the array size 
(2^(p+1)) or can we take 1.5x2^p to save memory? (my intuition is the second, 
but need some testing about the load factor)
 * I think var-length List and fixed-length Binary-Search options could be 
merged to always have a var-length List that can be binary searched with low 
impact on perf. This is a work in itself, but it can help reduce the FST memory 
and thus free some bytes for the faster options.
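The sizing question in the bullet above (double to 2^(p+1) versus grow to 1.5 x 2^p) can be illustrated with simple arithmetic; this is an illustrative sketch, not FST code:

```python
# Compare the resulting capacities and load factors for the two growth
# choices mentioned above, when N is close to 2^p.

def load_factor(n, capacity):
    return n / capacity

def grown_capacities(p):
    """Capacities considered when a table of size 2^p is nearly full."""
    return {"double": 2 ** (p + 1), "x1.5": (3 * 2 ** p) // 2}
```

For N = 16 and p = 4, doubling gives capacity 32 (load factor 0.5) while the 1.5x option gives 24 (load factor about 0.67), which is where the testing suggested in the comment would come in.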

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Blocker
> Fix For: 8.3
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) 
> which make gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?






[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924180#comment-16924180
 ] 

ASF subversion and git services commented on SOLR-13240:


Commit 6b5759efaf2e96042e247fa86b34bd9d8297abb8 in lucene-solr's branch 
refs/heads/branch_8x from Christine Poerschke
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6b5759e ]

SOLR-13240: Fixed UTILIZENODE action resulting in IllegalArgumentException.
(Hendrik Haddorp, Richard Goodman, Tim Owen, shalin, noble, Christine Poerschke)


> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Assignee: Christine Poerschke
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> 

[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-09-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924179#comment-16924179
 ] 

ASF subversion and git services commented on SOLR-13240:


Commit 6574ae63d43f1a5a60c126a6d766d242883bf806 in lucene-solr's branch 
refs/heads/master from Christine Poerschke
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6574ae6 ]

SOLR-13240: Fixed UTILIZENODE action resulting in IllegalArgumentException.
(Hendrik Haddorp, Richard Goodman, Tim Owen, shalin, noble, Christine Poerschke)


> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Assignee: Christine Poerschke
>Priority: Major
> Attachments: SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, SOLR-13240.patch, 
> SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> 

[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding

2019-09-06 Thread Bruno Roustant (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924178#comment-16924178
 ] 

Bruno Roustant commented on LUCENE-8920:


I'd love to work on that, but I'm pretty busy so I can't start immediately. If 
you can start on it soon I'll be happy to help and review.

I'll try to think more about the subject. Where should I post my remarks/ideas? 
Here in the thread or in an attached doc?

Some additional thoughts:
 * Threshold T1, to be determined, for deciding when direct addressing is best 
(N / (max label - min label) >= T1). E.g. with T1 = 50% the worst case is 2x 
memory, right? (although there is the var-length encoding difference...). Did 
you try that, and what is the perf?
 * Threshold T2, to be determined, for deciding whether a plain list is better 
(N < T2) or open addressing is more appropriate.
 * If N is close to 2^p, the probability that open addressing aborts (can't 
store a label in fewer than L tries) is high. Do we double the array size 
(2^(p+1)) or can we take 1.5x2^p to save memory? (my intuition is the second, 
but this needs some testing around the load factor)
 * I think the var-length List and fixed-length Binary-Search options could be 
merged to always have a var-length List that can be binary searched with low 
impact on perf. This is a work item in itself, but it could help reduce the FST 
memory and thus free some bytes for the faster options.
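
The T1/T2 decision described above might be sketched like this; the threshold values and names are hypothetical placeholders pending the benchmarking discussed:

```java
public class FstEncodingChoice {
    enum Encoding { LIST, BINARY_SEARCH, DIRECT_ADDRESSING }

    // Hypothetical thresholds from the discussion above; real values
    // would need benchmarking.
    static final double T1 = 0.5; // min label density for direct addressing
    static final int T2 = 4;      // max arc count where a plain list wins

    static Encoding choose(int numArcs, int minLabel, int maxLabel) {
        // Very few arcs: a linear scan over a list is cheapest.
        if (numArcs <= T2) {
            return Encoding.LIST;
        }
        int range = maxLabel - minLabel + 1;
        double density = (double) numArcs / range;
        // Dense label space: O(1) lookup at a bounded memory overhead
        // (at most ~1/T1 slots per real arc).
        if (density >= T1) {
            return Encoding.DIRECT_ADDRESSING;
        }
        return Encoding.BINARY_SEARCH;
    }

    public static void main(String[] args) {
        System.out.println(choose(3, 'a', 'z'));  // LIST
        System.out.println(choose(20, 'a', 'z')); // DIRECT_ADDRESSING (20/26 ~ 0.77)
        System.out.println(choose(10, 0, 255));   // BINARY_SEARCH (10/256 ~ 0.04)
    }
}
```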

> Reduce size of FSTs due to use of direct-addressing encoding 
> -
>
> Key: LUCENE-8920
> URL: https://issues.apache.org/jira/browse/LUCENE-8920
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Priority: Blocker
> Fix For: 8.3
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some data can lead to worst-case ~4x RAM usage due to this optimization. 
> Several ideas were suggested to combat this on the mailing list:
> bq. I think we can improve the situation here by tracking, per-FST instance, 
> the size increase we're seeing while building (or perhaps do a preliminary 
> pass before building) in order to decide whether to apply the encoding. 
> bq. we could also make the encoding a bit more efficient. For instance I 
> noticed that arc metadata is pretty large in some cases (in the 10-20 byte 
> range) which makes gaps very costly. Associating each label with a dense id and 
> having an intermediate lookup, ie. lookup label -> id and then id->arc offset 
> instead of doing label->arc directly could save a lot of space in some cases? 
> Also it seems that we are repeating the label in the arc metadata when 
> array-with-gaps is used, even though it shouldn't be necessary since the 
> label is implicit from the address?
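
The label->id indirection suggested in the quoted text could be sketched like this (hypothetical classes, assuming at most 127 arcs per node so a byte id suffices):

```java
import java.util.Arrays;

public class LabelIdLookupSketch {
    // Hypothetical two-level lookup from the suggestion above:
    // label -> dense id (small table over the label range), then
    // id -> arc offset (one entry per real arc, no gaps).
    final int minLabel;
    final byte[] labelToId;   // -1 means "no arc for this label"
    final long[] idToOffset;  // dense: one slot per actual arc

    LabelIdLookupSketch(int minLabel, int maxLabel, int[] labels, long[] offsets) {
        this.minLabel = minLabel;
        labelToId = new byte[maxLabel - minLabel + 1];
        Arrays.fill(labelToId, (byte) -1);
        idToOffset = offsets;
        for (int id = 0; id < labels.length; id++) {
            labelToId[labels[id] - minLabel] = (byte) id;
        }
    }

    long arcOffset(int label) {
        if (label < minLabel || label - minLabel >= labelToId.length) return -1;
        byte id = labelToId[label - minLabel];
        return id < 0 ? -1 : idToOffset[id];
    }

    public static void main(String[] args) {
        // Arcs for labels 'a', 'c', 'z' only; each gap costs 1 byte here
        // instead of a full arc-metadata slot (10-20 bytes in the cases
        // cited above).
        LabelIdLookupSketch lookup = new LabelIdLookupSketch(
            'a', 'z', new int[] {'a', 'c', 'z'}, new long[] {100, 220, 340});
        System.out.println(lookup.arcOffset('c')); // 220
        System.out.println(lookup.arcOffset('b')); // -1
    }
}
```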



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on issue #860: SOLR-13734 JWTAuthPlugin to support multiple issuers

2019-09-06 Thread GitBox
janhoy commented on issue #860: SOLR-13734 JWTAuthPlugin to support multiple 
issuers
URL: https://github.com/apache/lucene-solr/pull/860#issuecomment-528818950
 
 
   Reviewers: This PR includes changes from #852 and therefore looks bigger 
than it is. A diff between the two can be seen here:
   
https://github.com/cominvent/lucene-solr/compare/SOLR-13713-multiple-jwks...cominvent:SOLR-13734-jwt-multiple-issuers
 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13734:
---
Fix Version/s: 8.3

> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
> Fix For: 8.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choosing between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc.) in the OAuth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.






[jira] [Updated] (SOLR-13713) JWTAuthPlugin to support multiple JWKS endpoints

2019-09-06 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13713:
---
Fix Version/s: 8.3

> JWTAuthPlugin to support multiple JWKS endpoints
> 
>
> Key: SOLR-13713
> URL: https://issues.apache.org/jira/browse/SOLR-13713
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 8.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT
> Fix For: 8.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some [Identity Providers|https://en.wikipedia.org/wiki/Identity_provider] do 
> not expose all JWK keys used to sign access tokens through the main [JWKS 
> |https://auth0.com/docs/jwks] endpoint exposed through OIDC Discovery. For 
> instance Ping Federate can have multiple Token Providers, each exposing its 
> signing keys through separate JWKS endpoints. 
> To support these, the JWT plugin should optionally accept an array of URLs for 
> the {{jwkUrl}} configuration option. If an array is provided, then we'll 
> fetch all the JWKS and validate the JWT against all before we fail the 
> request.
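
For illustration, the proposed array form might look like this in {{security.json}} (host names are hypothetical):

{code:javascript}
{
  "authentication": {
    "class": "solr.JWTAuthPlugin",
    "jwkUrl": ["https://idp.example.com/provider1/jwks",
               "https://idp.example.com/provider2/jwks"]
  }
}{code}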






[jira] [Commented] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924158#comment-16924158
 ] 

Jan Høydahl commented on SOLR-13734:


Precommit and tests pass. Reviews welcome. Plan to merge Thursday next week.

> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choosing between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc.) in the OAuth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.






[jira] [Commented] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924154#comment-16924154
 ] 

Jan Høydahl commented on SOLR-13734:


Sample {{security.json}} with the new syntax:
{code:javascript}
{
  "authentication": {
    "class": "solr.JWTAuthPlugin",
    "scope": "solr:read solr:write solr:admin",
    "issuers": [
      {
        "name": "myMainIssuer",
        "iss": "https://mainIdp/",
        "aud": "solr",
        "jwkUrl": ["https://mainIdp/jwk-endpoint1",
                   "https://mainIdp/jwk-endpoint2"]
      },
      {
        "name": "myExtraIssuer",
        "wellKnownUrl": "https://extraIdp/.well-known/openid-configuration"
      }
    ]
  }
}{code}
This syntax makes the configuration easier to read and can be used to configure 
any number of issuers. Note that the old syntax of configuring 'iss', 
'wellKnownUrl' etc. as top-level JSON keys still works for back-compat 
but will generate a log warning.
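
To illustrate the multi-issuer idea, here is a minimal sketch of how a plugin might select the matching issuer configuration by the token's 'iss' claim; class and method names are hypothetical, not the actual JWTAuthPlugin API:

```java
import java.util.List;
import java.util.Optional;

public class IssuerSelectionSketch {
    // Hypothetical model of one entry in the "issuers" array above.
    record IssuerConfig(String name, String iss) {}

    // Pick the configured issuer whose 'iss' matches the token's claim;
    // token validation would then use only that issuer's keys.
    static Optional<IssuerConfig> forClaim(List<IssuerConfig> issuers, String issClaim) {
        return issuers.stream()
            .filter(c -> c.iss().equals(issClaim))
            .findFirst();
    }

    public static void main(String[] args) {
        List<IssuerConfig> issuers = List.of(
            new IssuerConfig("myMainIssuer", "https://mainIdp/"),
            new IssuerConfig("myExtraIssuer", "https://extraIdp/"));
        System.out.println(forClaim(issuers, "https://extraIdp/")
            .map(IssuerConfig::name).orElse("unknown")); // myExtraIssuer
    }
}
```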

> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choosing between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc.) in the OAuth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.






[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 202 - Still Failing

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/202/

No tests ran.

Build Log:
[...truncated 24871 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2594 links (2120 relative) to 3410 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.3.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Created] (LUCENE-8970) TopFieldCollector(s) Should Prepopulate Sentinel Objects

2019-09-06 Thread Atri Sharma (Jira)
Atri Sharma created LUCENE-8970:
---

 Summary: TopFieldCollector(s) Should Prepopulate Sentinel Objects
 Key: LUCENE-8970
 URL: https://issues.apache.org/jira/browse/LUCENE-8970
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Atri Sharma


We do not prepopulate the hit queue with sentinel values today, which leads to 
extra checks and extra code.
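
The idea can be sketched as follows; this is a hypothetical bounded queue, not Lucene's actual HitQueue or TopFieldCollector code:

```java
import java.util.Arrays;

public class SentinelQueueSketch {
    // Hypothetical bounded "top K scores" queue, prefilled with sentinels so
    // insertion never branches on queue size -- the idea in the issue above.
    static final float SENTINEL = Float.NEGATIVE_INFINITY;
    final float[] scores;

    SentinelQueueSketch(int k) {
        scores = new float[k];
        Arrays.fill(scores, SENTINEL); // queue starts "full" of sentinels
    }

    // Insert competes against the current worst entry; sentinels lose first,
    // so there is no "is the queue full yet?" check on the hot path.
    void insertWithOverflow(float score) {
        int worst = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] < scores[worst]) worst = i;
        }
        if (score > scores[worst]) scores[worst] = score;
    }

    public static void main(String[] args) {
        SentinelQueueSketch q = new SentinelQueueSketch(3);
        for (float s : new float[] {0.5f, 2f, 1f, 3f, 0.1f}) {
            q.insertWithOverflow(s);
        }
        float[] top = q.scores.clone();
        Arrays.sort(top);
        System.out.println(Arrays.toString(top)); // [1.0, 2.0, 3.0]
    }
}
```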






[jira] [Updated] (LUCENE-8969) Fix abusive usage of assert in ArrayUtil

2019-09-06 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated LUCENE-8969:
--
Summary: Fix abusive usage of assert in ArrayUtil  (was: Fix abusive usage 
of asset in ArrayUtil)

> Fix abusive usage of assert in ArrayUtil
> 
>
> Key: LUCENE-8969
> URL: https://issues.apache.org/jira/browse/LUCENE-8969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Tomoko Uchida
>Priority: Minor
>
> Methods in {{o.a.l.util.ArrayUtil}} use {{assert}} statements for argument 
> checks.
>  Would it be more suitable to throw {{IllegalArgumentException}}s instead of 
> assertions here, to improve traceability when violations occur? Sometimes 
> I had difficulty identifying the cause of assertion errors...






[jira] [Updated] (LUCENE-8969) Fix abusive usage of asset in ArrayUtil

2019-09-06 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated LUCENE-8969:
--
Summary: Fix abusive usage of asset in ArrayUtil  (was: Fix abusive usage 
of asset in ArrayUtils)

> Fix abusive usage of asset in ArrayUtil
> ---
>
> Key: LUCENE-8969
> URL: https://issues.apache.org/jira/browse/LUCENE-8969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Tomoko Uchida
>Priority: Minor
>
> Methods in {{o.a.l.util.ArrayUtil}} use {{assert}} statements for argument 
> checks.
>  Would it be more suitable to throw {{IllegalArgumentException}}s instead of 
> assertions here, to improve traceability when violations occur? Sometimes 
> I had difficulty identifying the cause of assertion errors...






[jira] [Created] (LUCENE-8969) Fix abusive usage of asset in ArrayUtils

2019-09-06 Thread Tomoko Uchida (Jira)
Tomoko Uchida created LUCENE-8969:
-

 Summary: Fix abusive usage of asset in ArrayUtils
 Key: LUCENE-8969
 URL: https://issues.apache.org/jira/browse/LUCENE-8969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Tomoko Uchida


Methods in {{o.a.l.util.ArrayUtil}} use {{assert}} statements for argument 
checks.
 Would it be more suitable to throw {{IllegalArgumentException}}s instead of 
assertions here, to improve traceability when violations occur? Sometimes I 
had difficulty identifying the cause of assertion errors...
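
To illustrate the difference in behavior (method names are hypothetical, not ArrayUtil's actual API): an assert-based check is silently skipped unless the JVM runs with -ea, while an explicit IllegalArgumentException fires consistently and can carry the offending value:

```java
public class ArgCheckDemo {
    // Assertion-based check: skipped entirely unless assertions are enabled.
    static int growSizeAssert(int minSize) {
        assert minSize >= 0 : "size must not be negative (got " + minSize + ")";
        return Math.max(minSize, 8);
    }

    // Explicit check: fails consistently and names the bad argument.
    static int growSizeChecked(int minSize) {
        if (minSize < 0) {
            throw new IllegalArgumentException(
                "size must not be negative (got " + minSize + ")");
        }
        return Math.max(minSize, 8);
    }

    public static void main(String[] args) {
        System.out.println(growSizeChecked(3)); // 8
        try {
            growSizeChecked(-1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // size must not be negative (got -1)
        }
    }
}
```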






[jira] [Commented] (SOLR-13734) JWTAuthPlugin to support multiple issuers

2019-09-06 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924087#comment-16924087
 ] 

Jan Høydahl commented on SOLR-13734:


PR [#860|https://github.com/apache/lucene-solr/pull/860] submitted, feedback 
welcome.

I chose a JSON Array instead of Object for the 'issuers' config, and instead 
require the 'name' key in that object. See also commit messages in the PR for 
details of the changes.

> JWTAuthPlugin to support multiple issuers
> -
>
> Key: SOLR-13734
> URL: https://issues.apache.org/jira/browse/SOLR-13734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: JWT, authentication
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some large enterprise environments, there is more than one [Identity 
> Provider|https://en.wikipedia.org/wiki/Identity_provider] to issue tokens for 
> users. The equivalent example from the public internet is logging in to a 
> website and choosing between multiple pre-defined IdPs (such as Google, GitHub, 
> Facebook etc.) in the OAuth2/OIDC flow.
> In the enterprise the IdPs could be public ones but most likely they will be 
> private IdPs in various networks inside the enterprise. Users will interact 
> with a search application, e.g. one providing enterprise wide search, and 
> will authenticate with one out of several IdPs depending on their local 
> affiliation. The search app will then request an access token (JWT) for the 
> user and issue requests to Solr using that token.
> The JWT plugin currently supports exactly one IdP. This JIRA will extend 
> support for multiple IdPs for access token validation only. To limit the 
> scope of this Jira, Admin UI login must still happen to the "primary" IdP. 
> Supporting multiple IdPs for Admin UI login can be done in followup issues.






[GitHub] [lucene-solr] janhoy opened a new pull request #860: SOLR-13734 JWTAuthPlugin to support multiple issuers

2019-09-06 Thread GitBox
janhoy opened a new pull request #860: SOLR-13734 JWTAuthPlugin to support 
multiple issuers
URL: https://github.com/apache/lucene-solr/pull/860
 
 
   # Description
   
   See https://issues.apache.org/jira/browse/SOLR-13734
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `ant precommit` and the appropriate test suite.
   - [x] I have added tests for my changes.
   - [x] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).





[JENKINS] Lucene-Solr-Tests-8.x - Build # 532 - Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/532/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, SolrIndexSearcher, SolrIndexSearcher] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1093)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:914)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1252)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.lucene.store.MockDirectoryWrapper
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
	at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
	at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:99)
	at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:805)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1003)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:914)
	at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1252)
	at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:766)
	at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.search.SolrIndexSearcher
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
	at org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:308)
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2143)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2316)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2052)
	at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:702)
	at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:102)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1079)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1066)
	at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
	at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:281)
	at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
	at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
	at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:200)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2609)
	at org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:125)
	at org.apache.solr.util.TestHarness.update(TestHarness.java:286)
	at org.apache.solr.util.BaseTestHarness.checkUpdateStatus(BaseTestHarness.java:274)
	at org.apache.solr.util.BaseTestHarness.validateUpdate(BaseTestHarness.java:244)
	at org.apache.solr.SolrTestCaseJ4.checkUpdateU(SolrTestCaseJ4.java:943)
	at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:922)
	at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:916)
	at org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testBasics(AtomicUpdateProcessorFactoryTest.java:113)
	at

[GitHub] [lucene-solr] iverase commented on issue #857: LUCENE-8968: Improve performance of WITHIN and DISJOINT queries for Shape queries

2019-09-06 Thread GitBox
iverase commented on issue #857: LUCENE-8968: Improve performance of WITHIN and 
DISJOINT queries for Shape queries
URL: https://github.com/apache/lucene-solr/pull/857#issuecomment-528770584
 
 
   I have run the performance benchmark defined [here](https://github.com/mikemccand/luceneutil/pull/44), which uses around 13M polygons with a distribution similar to the luceneutil geo benchmarks. The results with this approach are better for WITHIN and DISJOINT.
   
   Performance for WITHIN and DISJOINT queries that match only a few documents is still not good, as they need to visit most of the documents.
   
   |Shape|Operation|M hits/sec Dev|M hits/sec Base|M hits/sec Diff|QPS Dev|QPS Base|QPS Diff|Hit count Dev|Hit count Base|Hit count Diff|
   | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
   |point|within|0.00|0.00| 0%|368.28|4.16|8759%|0|0| 0%|
   |box|within|0.57|0.42|36%|3.89|2.86|36%|32911251|32911251| 0%|
   |poly 10|within|0.68|0.49|40%|2.61|1.87|40%|58873224|58873224| 0%|
   |polyMedium|within|0.04|0.03|35%|2.52|1.86|35%|522739|522739| 0%|
   |polyRussia|within|0.32|0.15|110%|1.32|0.63|110%|244661|244661| 0%|
   |point|disjoint|236.15|43.13|448%|17.94|3.28|448%|2962178156|2962178156| 0%|
   |box|disjoint|157.47|31.89|394%|12.10|2.45|394%|2929099536|2929099536| 0%|
   |poly 10|disjoint|75.69|22.01|244%|5.87|1.71|244%|2903116231|2903116231| 0%|
   |polyMedium|disjoint|77.04|22.80|238%|5.86|1.73|238%|433924372|433924372| 0%|
   |polyRussia|disjoint|18.74|8.87|111%|1.45|0.69|111%|12920400|12920400| 0%|
   |point|intersects|0.00|0.00|-3%|362.28|372.58|-3%|2644|2644| 0%|
   |box|intersects|4.63|4.69|-1%|31.47|31.92|-1%|33081264|33081264| 0%|
   |poly 10|intersects|2.05|2.13|-3%|7.83|8.11|-3%|59064569|59064569| 0%|
   |polyMedium|intersects|0.14|0.13| 4%|8.55|8.23| 4%|528812|528812| 0%|
   |polyRussia|intersects|0.37|0.37| 0%|1.52|1.51| 0%|244848|244848| 0%|
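   The Diff columns appear to be the percentage change of Dev over Base, i.e. `(dev / base - 1) * 100`, rounded to a whole percent. A quick sketch reproducing that calculation against two rows of the table (the helper name `diffPercent` is hypothetical, not part of luceneutil):

```java
public class DiffCheck {
    // Hypothetical helper mirroring how the Diff columns above appear to be
    // derived: percentage change of the Dev value relative to the Base value.
    static long diffPercent(double dev, double base) {
        return Math.round((dev / base - 1.0) * 100.0);
    }

    public static void main(String[] args) {
        // box/within row: QPS Dev 3.89 vs QPS Base 2.86 -> 36%
        System.out.println(diffPercent(3.89, 2.86) + "%");
        // polyRussia/within row: QPS Dev 1.32 vs QPS Base 0.63 -> 110%
        System.out.println(diffPercent(1.32, 0.63) + "%");
    }
}
```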


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 207 - Unstable

2019-09-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/207/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [SolrIndexSearcher, 
MockDirectoryWrapper, SolrIndexSearcher, MockDirectoryWrapper, 
MockDirectoryWrapper, SolrCore] 
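The failure mode here is a resource leak caught at test shutdown: Solr's test framework expects every `ObjectReleaseTracker.track(obj)` call to be paired with a release, and reports any leftovers. A minimal stdlib-only sketch of that contract (this is a simplified mimic for illustration, not the real `org.apache.solr.common.util.ObjectReleaseTracker`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified mimic of the track/release contract: objects registered via
// track() must later be release()d; anything left over counts as a leak.
public class ReleaseTrackerSketch {
    private final Map<Object, Exception> tracked = new ConcurrentHashMap<>();

    public void track(Object obj) {
        // Capture a stack trace at track time so a leak report can show
        // where the unreleased object was created.
        tracked.put(obj, new Exception("tracked from"));
    }

    public void release(Object obj) {
        tracked.remove(obj);
    }

    public int unreleasedCount() {
        return tracked.size();
    }

    public static void main(String[] args) {
        ReleaseTrackerSketch tracker = new ReleaseTrackerSketch();
        Object searcher = new Object(); // stands in for a SolrIndexSearcher
        Object dir = new Object();      // stands in for a MockDirectoryWrapper
        tracker.track(searcher);
        tracker.track(dir);
        tracker.release(searcher);      // dir is never released -> reported as a leak
        System.out.println("ObjectTracker found " + tracker.unreleasedCount()
                + " object(s) that were not released!!!");
    }
}
```

In the real failure, the unreleased `SolrIndexSearcher`, `MockDirectoryWrapper`, and `SolrCore` instances suggest the core created in `testBasics` was not closed before the suite ended.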
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.search.SolrIndexSearcher
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
	at org.apache.solr.search.SolrIndexSearcher.<init>(SolrIndexSearcher.java:308)
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2143)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2316)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2052)
	at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:702)
	at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:102)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
	at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1079)
	at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1066)
	at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
	at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:281)
	at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
	at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
	at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:200)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2609)
	at org.apache.solr.servlet.DirectSolrConnection.request(DirectSolrConnection.java:125)
	at org.apache.solr.util.TestHarness.update(TestHarness.java:286)
	at org.apache.solr.util.BaseTestHarness.checkUpdateStatus(BaseTestHarness.java:274)
	at org.apache.solr.util.BaseTestHarness.validateUpdate(BaseTestHarness.java:244)
	at org.apache.solr.SolrTestCaseJ4.checkUpdateU(SolrTestCaseJ4.java:943)
	at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:922)
	at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:916)
	at org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testBasics(AtomicUpdateProcessorFactoryTest.java:113)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at