[jira] [Updated] (LUCENE-8957) Update examples in CustomAnalyzer Javadocs

2019-08-27 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated LUCENE-8957:
--
Attachment: LUCENE-8957.patch

> Update examples in CustomAnalyzer Javadocs
> --
>
> Key: LUCENE-8957
> URL: https://issues.apache.org/jira/browse/LUCENE-8957
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Attachments: LUCENE-8957.patch
>
>
> CustomAnalyzer Javadocs need to be updated:
> - Remove {{StandardFilterFactory}} from examples
> - Use {{Factory.NAME}} instead of {{Factory.class}} in the examples



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 194 - Still Unstable

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/194/

3 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([97BB7CA1C967B0B:825C641B5D90D08F]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:308)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:396)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:558)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:497)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:465)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:499)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

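The failing frame JettySolrRunner.retryOnPortBindFailure points at the usual mitigation for this kind of flaky failure: retry the bind, ideally on a fresh ephemeral port. A hedged, self-contained sketch of that pattern (illustrative only, not the actual Solr code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

final class PortBindRetry {
    // Bind a server socket, retrying a few times on failure. Port 0 asks the
    // OS for a free ephemeral port, which avoids the "Address already in use"
    // race seen in the test log when a fixed port is reused too quickly.
    static ServerSocket bindWithRetry(int attempts) {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return new ServerSocket(0); // ephemeral port chosen by the OS
            } catch (IOException e) {
                last = e; // remember the most recent failure
            }
        }
        throw new UncheckedIOException("could not bind after " + attempts + " attempts", last);
    }
}
```

In the test above the port is fixed (the runner restarts Jetty on the same port), so the retry loop in JettySolrRunner can still lose the race; the ephemeral-port variant sketched here sidesteps that entirely.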
[JENKINS] Lucene-Solr-BadApples-8.x-Linux (32bit/jdk1.8.0_201) - Build # 107 - Unstable!

2019-08-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-8.x-Linux/107/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader

Error Message:
Doc with id=4 not found in 
https://127.0.0.1:33845/solr/outOfSyncReplicasCannotBecomeLeader-false due to: 
Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=4 not found in 
https://127.0.0.1:33845/solr/outOfSyncReplicasCannotBecomeLeader-false due to: 
Path not found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([471DEEE3DFB28547:39F6CEF31CD58A7D]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.TestCloudConsistency.assertDocExists(TestCloudConsistency.java:285)
at 
org.apache.solr.cloud.TestCloudConsistency.assertDocsExistInAllReplicas(TestCloudConsistency.java:269)
at 
org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:140)
at 
org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Created] (LUCENE-8957) Update examples in CustomAnalyzer Javadocs

2019-08-27 Thread Tomoko Uchida (Jira)
Tomoko Uchida created LUCENE-8957:
-

 Summary: Update examples in CustomAnalyzer Javadocs
 Key: LUCENE-8957
 URL: https://issues.apache.org/jira/browse/LUCENE-8957
 Project: Lucene - Core
  Issue Type: Task
  Components: modules/analysis
Reporter: Tomoko Uchida
Assignee: Tomoko Uchida


CustomAnalyzer Javadocs need to be updated:

- Remove {{StandardFilterFactory}} from examples
- Use {{Factory.NAME}} instead of {{Factory.class}} in the examples
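The requested change swaps class literals for each factory's {{NAME}} constant. A self-contained illustration of the pattern (the classes and mini-builder below are hypothetical stand-ins, not the real Lucene CustomAnalyzer API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for Lucene's analysis factories: each real factory
// publishes its stable SPI name as a NAME constant.
class WhitespaceTokenizerFactory { static final String NAME = "whitespace"; }
class LowerCaseFilterFactory { static final String NAME = "lowercase"; }

final class MiniCustomAnalyzerBuilder {
    private final List<String> pipeline = new ArrayList<>();

    // Name-based registration, as the updated Javadocs recommend:
    // withTokenizer(WhitespaceTokenizerFactory.NAME)
    // rather than withTokenizer(WhitespaceTokenizerFactory.class).
    MiniCustomAnalyzerBuilder withTokenizer(String spiName) {
        pipeline.add("tokenizer=" + spiName);
        return this;
    }

    MiniCustomAnalyzerBuilder addTokenFilter(String spiName) {
        pipeline.add("filter=" + spiName);
        return this;
    }

    String build() {
        return String.join(",", pipeline);
    }
}
```

Referencing `Factory.NAME` ties the example to the stable analysis SPI name rather than a concrete class, which is why the Javadoc examples prefer it.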






[jira] [Commented] (SOLR-13452) Update the lucene-solr build from Ivy+Ant+Maven (shadow build) to Gradle.

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917404#comment-16917404
 ] 

ASF subversion and git services commented on SOLR-13452:


Commit e5c992ebaf315a9f3ec856622d0024e30cffd4bf in lucene-solr's branch 
refs/heads/jira/SOLR-13452_gradle_5 from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e5c992e ]

SOLR-13452: Don't copy IDEA workspace.iml (and instead use workspace settings 
generated by Gradle IDEA plugin).


> Update the lucene-solr build from Ivy+Ant+Maven (shadow build) to Gradle.
> -
>
> Key: SOLR-13452
> URL: https://issues.apache.org/jira/browse/SOLR-13452
> Project: Solr
>  Issue Type: Improvement
>  Components: Build
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: gradle-build.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I took some things from the great work that Dat did in 
> [https://github.com/apache/lucene-solr/tree/jira/gradle] and took the ball a 
> little further.
>  
> When working with gradle in sub modules directly, I recommend 
> [https://github.com/dougborg/gdub]
> This gradle branch uses the following plugin for version locking, version 
> configuration and version consistency across modules: 
> [https://github.com/palantir/gradle-consistent-versions]
>  
> https://github.com/apache/lucene-solr/tree/jira/SOLR-13452_gradle_5






[GitHub] [lucene-solr] MarcusSorealheis commented on a change in pull request #805: SOLR-13649 change the default behavior of the basic authentication plugin.

2019-08-27 Thread GitBox
MarcusSorealheis commented on a change in pull request #805: SOLR-13649 change 
the default behavior of the basic authentication plugin.
URL: https://github.com/apache/lucene-solr/pull/805#discussion_r318374760
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/handler/admin/SecurityConfHandlerTest.java
 ##
 @@ -55,19 +56,19 @@ public void testEdit() throws Exception {
   basicAuth.init((Map) 
securityCfg.getData().get("authentication"));
   assertTrue(basicAuth.authenticate("tom", "TomIsUberCool"));
 
-
   command = "{\n" +
   "'set-user': {'harry':'HarryIsCool'},\n" +
-  "'delete-user': ['tom','harry']\n" +
+  "'delete-user': ['tom']\n" +
   "}";
   o = new 
ContentStreamBase.ByteArrayStream(command.getBytes(StandardCharsets.UTF_8), "");
   req.setContentStreams(Collections.singletonList(o));
   handler.handleRequestBody(req, new SolrQueryResponse());
   securityCfg = handler.m.get("/security.json");
+
   assertEquals(3, securityCfg.getVersion());
   Map result = (Map) securityCfg.getData().get("authentication");
   result = (Map) result.get("credentials");
-  assertTrue(result.isEmpty());
+  assertEquals(1,result.size());
 
 Review comment:
   previously all users were deleted but that simply isn't possible anymore if 
authentication is enabled. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] MarcusSorealheis commented on issue #805: SOLR-13649 change the default behavior of the basic authentication plugin.

2019-08-27 Thread GitBox
MarcusSorealheis commented on issue #805: SOLR-13649 change the default 
behavior of the basic authentication plugin.
URL: https://github.com/apache/lucene-solr/pull/805#issuecomment-525563889
 
 
   @shalinmangar if you run the tests now they all work





[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1943 - Still Failing

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1943/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the 
server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
at 
org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
at 
org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
at 
org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
at 
org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
at 
org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
at 
org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at 
hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
at 
hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
at 
hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
at 
hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
... 4 more
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

[JENKINS] Lucene-Solr-Tests-master - Build # 3634 - Failure

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3634/

All tests passed

Build Log:
[...truncated 64895 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj729803311
 [ecj-lint] Compiling 48 source files to /tmp/ecj729803311
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class MockInitialContextFactory implements 
InitialContextFactory {
 [ecj-lint]  ^
 [ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
 [ecj-lint] private final javax.naming.Context context;
 [ecj-lint]   
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint] ^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
 [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
 [ecj-lint]  ^^^
 [ecj-lint] context cannot be resolved
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
 [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
 [ecj-lint] return context;
 [ecj-lint]^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build.xml:651:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/common-build.xml:479:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2015:
 The following error occurred while executing this line:

[GitHub] [lucene-solr] anshumg opened a new pull request #849: SOLR-13257: Cleanup code and make the AffinityReplicaTransformer constructors private (#848)

2019-08-27 Thread GitBox
anshumg opened a new pull request #849: SOLR-13257: Cleanup code and make the 
AffinityReplicaTransformer constructors private (#848)
URL: https://github.com/apache/lucene-solr/pull/849
 
 
   SOLR-13257: Cleanup code and make the constructors private as the 
constructor is supposed to be called via the static getInstance method.
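The pattern the PR enforces — a private constructor reachable only through a static getInstance — can be sketched in isolation (the class below is an illustrative stand-in, not the actual AffinityReplicaTransformer):

```java
final class ReplicaTransformerSketch {
    private final String mode;

    // Private: callers cannot bypass the factory method below, so argument
    // validation (and any future caching or subtype selection) is centralized.
    private ReplicaTransformerSketch(String mode) {
        this.mode = mode;
    }

    static ReplicaTransformerSketch getInstance(String mode) {
        if (mode == null || mode.isEmpty()) {
            throw new IllegalArgumentException("mode is required");
        }
        return new ReplicaTransformerSketch(mode);
    }

    String mode() {
        return mode;
    }
}
```

Making the constructor private is a compile-time guarantee that the validation in getInstance cannot be skipped.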
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13257) Enable replica routing affinity for better cache usage

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917249#comment-16917249
 ] 

ASF subversion and git services commented on SOLR-13257:


Commit 0c9ec35f88fc07aa6ecb0fef2deea705aaee39c2 in lucene-solr's branch 
refs/heads/master from Anshum Gupta
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0c9ec35 ]

SOLR-13257: Cleanup code and make the AffinityReplicaTransformer constructors 
private (#848)

SOLR-13257: Cleanup code and make the constructors private as the constructor 
is supposed to be called via the static getInstance method.

> Enable replica routing affinity for better cache usage
> --
>
> Key: SOLR-13257
> URL: https://issues.apache.org/jira/browse/SOLR-13257
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Michael Gibney
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: master (9.0), 8.3
>
> Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch, 
> SOLR-13257.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> For each shard in a distributed request, Solr currently routes each request 
> randomly via 
> [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java]
>  to a particular replica. In setups with replication factor >1, this normally 
> results in a situation where subsequent requests (which one would hope/expect 
> to leverage cached results from previous related requests) end up getting 
> routed to a replica that hasn't seen any related requests.
> The problem can be replicated by issuing a relatively expensive query (maybe 
> containing common terms?). The first request initializes the 
> {{queryResultCache}} on the consulted replicas. If replication factor >1 and 
> there are a sufficient number of shards, subsequent requests will likely be 
> routed to at least one replica that _hasn't_ seen the query before. The 
> replicas with uninitialized caches become a bottleneck, and from the client's 
> perspective, many subsequent requests appear not to benefit from caching at 
> all.
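A minimal sketch of the affinity idea described above, assuming the routing key is something stable per client or query (hypothetical code, not the actual Solr implementation): hashing the key onto the replica list makes repeated related requests land on the same replica, so its queryResultCache is actually reused instead of being defeated by shuffling.

```java
final class AffinityRouter {
    // Deterministic replica choice: hash a stable routing key onto the
    // replica list, so related requests consistently hit the same replica
    // rather than a random one.
    static int pickReplica(String routingKey, int replicaCount) {
        if (replicaCount <= 0) {
            throw new IllegalArgumentException("need at least one replica");
        }
        // floorMod keeps the index non-negative even for negative hash codes
        return Math.floorMod(routingKey.hashCode(), replicaCount);
    }
}
```

The production feature is configurable per request and per collection; this sketch only demonstrates the stability property the issue is after.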









[GitHub] [lucene-solr] anshumg merged pull request #848: SOLR-13257: Cleanup code and make the AffinityReplicaTransformer constructors private

2019-08-27 Thread GitBox
anshumg merged pull request #848: SOLR-13257: Cleanup code and make the 
AffinityReplicaTransformer constructors private
URL: https://github.com/apache/lucene-solr/pull/848
 
 
   





[GitHub] [lucene-solr] anshumg commented on issue #768: SOLR-13472: Defer authorization to be done on forwarded nodes

2019-08-27 Thread GitBox
anshumg commented on issue #768: SOLR-13472: Defer authorization to be done on 
forwarded nodes
URL: https://github.com/apache/lucene-solr/pull/768#issuecomment-525520683
 
 
   @chatman - is there something stopping you from merging this in?





[GitHub] [lucene-solr] anshumg commented on a change in pull request #768: SOLR-13472: Defer authorization to be done on forwarded nodes

2019-08-27 Thread GitBox
anshumg commented on a change in pull request #768: SOLR-13472: Defer 
authorization to be done on forwarded nodes
URL: https://github.com/apache/lucene-solr/pull/768#discussion_r318335441
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java
 ##
 @@ -464,6 +464,35 @@ protected void extractRemotePath(String collectionName, 
String origCorename) thr
 }
   }
 
+  Action authorize() throws IOException {
 
 Review comment:
   At some point it would make sense to move a ton of this stuff out of this 
single file :) Not now though, we can do that in a different issue.





[jira] [Reopened] (SOLR-13257) Enable replica routing affinity for better cache usage

2019-08-27 Thread Anshum Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-13257:
-
  Assignee: Anshum Gupta  (was: Tomás Fernández Löbbe)

Reopening to commit a minor change and some cleanup.

> Enable replica routing affinity for better cache usage
> --
>
> Key: SOLR-13257
> URL: https://issues.apache.org/jira/browse/SOLR-13257
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Michael Gibney
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: master (9.0), 8.3
>
> Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch, 
> SOLR-13257.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> For each shard in a distributed request, Solr currently routes each request 
> randomly via 
> [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java]
>  to a particular replica. In setups with replication factor >1, this normally 
> results in a situation where subsequent requests (which one would hope/expect 
> to leverage cached results from previous related requests) end up getting 
> routed to a replica that hasn't seen any related requests.
> The problem can be replicated by issuing a relatively expensive query (maybe 
> containing common terms?). The first request initializes the 
> {{queryResultCache}} on the consulted replicas. If replication factor >1 and 
> there are a sufficient number of shards, subsequent requests will likely be 
> routed to at least one replica that _hasn't_ seen the query before. The 
> replicas with uninitialized caches become a bottleneck, and from the client's 
> perspective, many subsequent requests appear not to benefit from caching at 
> all.
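The affinity idea described above can be sketched as deterministic replica selection: derive an index from a stable property of the request (for example the query string) so that repeated, related requests land on the same replica and hit its warmed {{queryResultCache}}. The class and method names below are hypothetical illustrations, not Solr's actual AffinityReplicaListTransformer API.

```java
import java.util.List;

public class AffinityRoutingSketch {
    // Hypothetical sketch: pick a replica deterministically from a stable
    // route key, so equal keys reuse the same replica's warmed caches
    // instead of being shuffled randomly per request.
    static String pickReplica(List<String> replicas, String routeKey) {
        // floorMod keeps the index non-negative even for negative hash codes
        int idx = Math.floorMod(routeKey.hashCode(), replicas.size());
        return replicas.get(idx);
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("replica1", "replica2", "replica3");
        String first = pickReplica(replicas, "q=common+terms");
        String second = pickReplica(replicas, "q=common+terms");
        // The same query always routes to the same replica
        System.out.println(first.equals(second)); // prints "true"
    }
}
```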






[GitHub] [lucene-solr] anshumg opened a new pull request #848: SOLR-13257: Cleanup code and make the AffinityReplicaTransformer constructors private

2019-08-27 Thread GitBox
anshumg opened a new pull request #848: SOLR-13257: Cleanup code and make the 
AffinityReplicaTransformer constructors private
URL: https://github.com/apache/lucene-solr/pull/848
 
 
   # Description
   
   Clean up code and make the AffinityReplicaTransformer constructors private, 
as instances are supposed to be obtained via the static getInstance method.
   





[JENKINS] Lucene-Solr-Tests-master - Build # 3633 - Still unstable

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3633/

1 tests failed.
FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.testBespoke

Error Message:
Error from server at 
http://127.0.0.1:44555/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection:
 Expected mime type application/octet-stream but got text/html.   
 
Error 500 Server Error  HTTP ERROR 500 
Problem accessing 
/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection/select.
 Reason: Server ErrorCaused 
by:java.lang.AssertionError  at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896)  at 
java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061)  at 
java.base/java.util.HashMap.putVal(HashMap.java:633)  at 
java.base/java.util.HashMap.put(HashMap.java:607)  at 
org.apache.solr.search.LRUCache.put(LRUCache.java:197)  at 
org.apache.solr.search.SolrCacheHolder.put(SolrCacheHolder.java:88)  at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1449)
  at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:568)  at 
org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1484)
  at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:398)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:305)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:200)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2598)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:780)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:424)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:753)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:505)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
  at java.base/java.lang.Thread.run(Thread.java:834)  http://eclipse.org/jetty;>Powered by Jetty:// 9.4.19.v20190610  
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:44555/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection:
 Expected mime type application/octet-stream but got text/html. 


Error 500 Server Error

HTTP ERROR 500
Problem accessing 
/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection/select.
 Reason:
Server ErrorCaused by:java.lang.AssertionError
at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896)
at java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061)
at java.base/java.util.HashMap.putVal(HashMap.java:633)
at java.base/java.util.HashMap.put(HashMap.java:607)
at org.apache.solr.search.LRUCache.put(LRUCache.java:197)
at org.apache.solr.search.SolrCacheHolder.put(SolrCacheHolder.java:88)
at 

[jira] [Resolved] (SOLR-13542) Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-13542.
--
Fix Version/s: 8.3
   master (9.0)
   Resolution: Fixed

> Code cleanup - Avoid using stream filter count where possible
> -
>
> Key: SOLR-13542
> URL: https://issues.apache.org/jira/browse/SOLR-13542
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Koen De Groote
>Priority: Trivial
> Fix For: master (9.0), 8.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is another ticket in the logic of 
> https://issues.apache.org/jira/browse/LUCENE-8847
>  
> Not a feature, not serious, don't review this when you could be reviewing 
> actual features or critical bugs, don't want to take your time away with this.
>  
> Intellij's static code analysis bumped into this.
>  
> I also saw most other instances in the code where such a filter needed to 
> happen already used
> {code:java}
> anyMatch{code}
> instead of count(). So I applied the suggested fixes. Code cleanup and 
> potentially a small performance gain. As far as my understanding goes, since 
> it is not a simple count that's happening, there's no known size for the 
> evaluator to return and as such it has to iterate over the entire collection. 
> Whereas anyMatch and noneMatch will use the predicate to stop the instance 
> the condition is met.
> It just so happens that all affected instances exist within the SolrJ 
> namespace.
>  
> All tests have run, all succeed.
>  
> EDIT: Github PR: https://github.com/apache/lucene-solr/pull/717
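The performance point in the description above ({{filter().count()}} must consume the whole stream, while {{anyMatch}} short-circuits) can be verified with a counter inside the predicate. This is a standalone illustrative sketch, not one of the actual SolrJ call sites touched by the PR.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class ShortCircuitDemo {
    // Count how many elements the predicate examines for each idiom.
    static int callsWithFilterCount() {
        AtomicInteger calls = new AtomicInteger();
        // filter().count() must run the predicate on every element,
        // because the filter makes the final count unpredictable
        Stream.of(1, -2, 3, 4)
              .filter(x -> { calls.incrementAndGet(); return x < 0; })
              .count();
        return calls.get();
    }

    static int callsWithAnyMatch() {
        AtomicInteger calls = new AtomicInteger();
        // anyMatch stops as soon as the predicate first returns true
        Stream.of(1, -2, 3, 4)
              .anyMatch(x -> { calls.incrementAndGet(); return x < 0; });
        return calls.get();
    }

    public static void main(String[] args) {
        System.out.println(callsWithFilterCount()); // prints "4"
        System.out.println(callsWithAnyMatch());    // prints "2"
    }
}
```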






[jira] [Commented] (SOLR-13542) Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917183#comment-16917183
 ] 

Tomás Fernández Löbbe commented on SOLR-13542:
--

I moved the CHANGES entry to 8.3 and merged to both branches

> Code cleanup - Avoid using stream filter count where possible
> -
>
> Key: SOLR-13542
> URL: https://issues.apache.org/jira/browse/SOLR-13542
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Koen De Groote
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is another ticket in the logic of 
> https://issues.apache.org/jira/browse/LUCENE-8847
>  
> Not a feature, not serious, don't review this when you could be reviewing 
> actual features or critical bugs, don't want to take your time away with this.
>  
> Intellij's static code analysis bumped into this.
>  
> I also saw most other instances in the code where such a filter needed to 
> happen already used
> {code:java}
> anyMatch{code}
> instead of count(). So I applied the suggested fixes. Code cleanup and 
> potentially a small performance gain. As far as my understanding goes, since 
> it is not a simple count that's happening, there's no known size for the 
> evaluator to return and as such it has to iterate over the entire collection. 
> Whereas anyMatch and noneMatch will use the predicate to stop the instant 
> the condition is met.
> It just so happens that all affected instances exist within the SolrJ 
> namespace.
>  
> All tests have run, all succeed.
>  
> EDIT: Github PR: https://github.com/apache/lucene-solr/pull/717






[jira] [Commented] (SOLR-13542) Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917179#comment-16917179
 ] 

ASF subversion and git services commented on SOLR-13542:


Commit 5aa9a02421820fa3717fde1edf9effa8e5161191 in lucene-solr's branch 
refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5aa9a02 ]

SOLR-13542: Move CHANGES entry to 8.3. Added contributor


> Code cleanup - Avoid using stream filter count where possible
> -
>
> Key: SOLR-13542
> URL: https://issues.apache.org/jira/browse/SOLR-13542
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Koen De Groote
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is another ticket in the logic of 
> https://issues.apache.org/jira/browse/LUCENE-8847
>  
> Not a feature, not serious, don't review this when you could be reviewing 
> actual features or critical bugs, don't want to take your time away with this.
>  
> Intellij's static code analysis bumped into this.
>  
> I also saw most other instances in the code where such a filter needed to 
> happen already used
> {code:java}
> anyMatch{code}
> instead of count(). So I applied the suggested fixes. Code cleanup and 
> potentially a small performance gain. As far as my understanding goes, since 
> it is not a simple count that's happening, there's no known size for the 
> evaluator to return and as such it has to iterate over the entire collection. 
> Whereas anyMatch and noneMatch will use the predicate to stop the instant 
> the condition is met.
> It just so happens that all affected instances exist within the SolrJ 
> namespace.
>  
> All tests have run, all succeed.
>  
> EDIT: Github PR: https://github.com/apache/lucene-solr/pull/717






[jira] [Commented] (SOLR-13542) Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917163#comment-16917163
 ] 

ASF subversion and git services commented on SOLR-13542:


Commit 7b589ad7699a7318df4a4267db736e7ac1f7e7e8 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7b589ad ]

SOLR-13542: Move CHANGES entry to 8.3. Added contributor


> Code cleanup - Avoid using stream filter count where possible
> -
>
> Key: SOLR-13542
> URL: https://issues.apache.org/jira/browse/SOLR-13542
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Koen De Groote
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is another ticket in the logic of 
> https://issues.apache.org/jira/browse/LUCENE-8847
>  
> Not a feature, not serious, don't review this when you could be reviewing 
> actual features or critical bugs, don't want to take your time away with this.
>  
> Intellij's static code analysis bumped into this.
>  
> I also saw most other instances in the code where such a filter needed to 
> happen already used
> {code:java}
> anyMatch{code}
> instead of count(). So I applied the suggested fixes. Code cleanup and 
> potentially a small performance gain. As far as my understanding goes, since 
> it is not a simple count that's happening, there's no known size for the 
> evaluator to return and as such it has to iterate over the entire collection. 
> Whereas anyMatch and noneMatch will use the predicate to stop the instant 
> the condition is met.
> It just so happens that all affected instances exist within the SolrJ 
> namespace.
>  
> All tests have run, all succeed.
>  
> EDIT: Github PR: https://github.com/apache/lucene-solr/pull/717






[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-12.0.1) - Build # 1092 - Unstable!

2019-08-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/1092/
Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.lucene.index.TestTieredMergePolicy.testSimulateUpdates

Error Message:
numSegments=79, allowed=78.0

Stack Trace:
java.lang.AssertionError: numSegments=79, allowed=78.0
at 
__randomizedtesting.SeedInfo.seed([E38D9351539C415B:6D6D0864885C2DA3]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.lucene.index.TestTieredMergePolicy.assertSegmentInfos(TestTieredMergePolicy.java:88)
at 
org.apache.lucene.index.BaseMergePolicyTestCase.doTestSimulateUpdates(BaseMergePolicyTestCase.java:422)
at 
org.apache.lucene.index.TestTieredMergePolicy.testSimulateUpdates(TestTieredMergePolicy.java:718)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.lucene.index.TestTieredMergePolicy.testSimulateUpdates

Error Message:
numSegments=79, allowed=78.0

Stack Trace:
java.lang.AssertionError: numSegments=79, allowed=78.0
at 
__randomizedtesting.SeedInfo.seed([E38D9351539C415B:6D6D0864885C2DA3]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 

[jira] [Commented] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser

2019-08-27 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917127#comment-16917127
 ] 

Lucene/Solr QA commented on SOLR-13720:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 56m 
18s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13720 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978707/SOLR-13720.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 00f4bbe |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/541/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/541/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Impossible to create effective ToParenBlockJoinQuery in custom QParser
> --
>
> Key: SOLR-13720
> URL: https://issues.apache.org/jira/browse/SOLR-13720
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Stanislav Livotov
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-13720.patch, SOLR-13720.patch
>
>
> According to the Solr [documentation|#SolrPlugins-QParserPlugin], QParser is 
> treated as a legitimate plugin.
>  
> However, it is impossible to create an effective ToParentBlockJoin query 
> without copy-pasting (the BitDocIdSetFilterWrapper class and the 
> getCachedFilter method from BlockJoinParentQParser) or dirty hacks (like 
> creating an org.apache.solr.search.join package in plugin code with an 
> accessor method to the package-private methods, and adding it to the 
> WEB-INF/lib directory so it is loaded by the same ClassLoader).
> I don't see a truly clean way to fix this, but we can at least make it a 
> little easier for custom plugin developers by making 
> BlockJoinParentQParser#getCachedFilter and 
> BlockJoinParentQParser#BitDocIdSetFilterWrapper public, and providing a 
> getter for BitDocIdSetFilterWrapper#filter. 
>  
>  
> In order to create 






[jira] [Commented] (LUCENE-8954) Refactor Nori(Korean) Analyzer

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917089#comment-16917089
 ] 

ASF subversion and git services commented on LUCENE-8954:
-

Commit eb44ad324dba76b48d0e2b9011e3c76e93d3cc03 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=eb44ad3 ]

LUCENE-8954: Fix precommit


> Refactor Nori(Korean) Analyzer
> --
>
> Key: LUCENE-8954
> URL: https://issues.apache.org/jira/browse/LUCENE-8954
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Namgyu Kim
>Assignee: Namgyu Kim
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There is a lot of code in the Nori analyzer that can be refactored
> (whitespace, wrong type casts, unnecessary throws, C-style arrays, ...).
> I think it's good to proceed if we can.
> It has nothing to do with the actual behavior of Nori;
> I'll just remove unnecessary code and keep the code simple.
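One of the cleanup categories mentioned (C-style arrays) can be illustrated with a behavior-preserving before/after; this is a hypothetical snippet, not code from the Nori module itself.

```java
import java.util.Arrays;

public class CleanupExample {
    // Discouraged C-style declaration: brackets after the variable name
    static int[] cStyleBefore() {
        int tokens[] = {1, 2, 3};
        return tokens;
    }

    // Idiomatic Java: brackets belong with the type
    static int[] javaStyleAfter() {
        int[] tokens = {1, 2, 3};
        return tokens;
    }

    public static void main(String[] args) {
        // Same behavior, only the declaration style changes
        System.out.println(Arrays.equals(cStyleBefore(), javaStyleAfter())); // prints "true"
    }
}
```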






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 458 - Still Failing

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/458/

All tests passed

Build Log:
[...truncated 55065 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1668471514
 [ecj-lint] Compiling 40 source files to /tmp/ecj1668471514
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/analysis/nori/src/java/org/apache/lucene/analysis/ko/KoreanPartOfSpeechStopFilter.java
 (at line 23)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:643:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:101:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/build.xml:207:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/common-build.xml:2181:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/analysis/build.xml:153:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/common-build.xml:2009:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.

Total time: 172 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Commented] (SOLR-13542) Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916985#comment-16916985
 ] 

ASF subversion and git services commented on SOLR-13542:


Commit 00f4bbe6fc814b9fa86b9f231b13b6c03a3a045e in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=00f4bbe ]

Merge pull request #717 from KoenDG/SOLR-13542

SOLR-13542: Code cleanup - Avoid using stream filter count where possible

> Code cleanup - Avoid using stream filter count where possible
> -
>
> Key: SOLR-13542
> URL: https://issues.apache.org/jira/browse/SOLR-13542
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Koen De Groote
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is another ticket in the logic of 
> https://issues.apache.org/jira/browse/LUCENE-8847
>  
> Not a feature, not serious, don't review this when you could be reviewing 
> actual features or critical bugs, don't want to take your time away with this.
>  
> Intellij's static code analysis bumped into this.
>  
> I also saw most other instances in the code where such a filter needed to 
> happen already used
> {code:java}
> anyMatch{code}
> instead of count(). So I applied the suggested fixes. Code cleanup and 
> potentially a small performance gain. As far as my understanding goes, since 
> it is not a simple count that's happening, there's no known size for the 
> evaluator to return and as such it has to iterate over the entire collection. 
> Whereas anyMatch and noneMatch will use the predicate to stop the instance 
> the condition is met.
> It just so happens that all affected instances exist within the SolrJ 
> namespace.
>  
> All tests have run, all succeed.
>  
> EDIT: Github PR: https://github.com/apache/lucene-solr/pull/717
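The contrast described above can be illustrated with a small, self-contained sketch. The class and method names here are invented for illustration and are not taken from the SolrJ code touched by the PR:

```java
import java.util.List;

public class StreamCountDemo {
    // Counting all matches just to test for existence forces a full pass over
    // the stream; anyMatch short-circuits as soon as the predicate is satisfied.
    static boolean hasNegativeViaCount(List<Integer> values) {
        return values.stream().filter(v -> v < 0).count() > 0; // full traversal
    }

    static boolean hasNegativeViaAnyMatch(List<Integer> values) {
        return values.stream().anyMatch(v -> v < 0); // stops at the first match
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(3, -1, 7, 9);
        System.out.println(hasNegativeViaCount(values));    // true
        System.out.println(hasNegativeViaAnyMatch(values)); // true
    }
}
```

Both methods return the same answer; the second simply avoids traversing elements after the first match, which is the small performance gain the cleanup targets.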



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13542) Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916986#comment-16916986
 ] 

ASF subversion and git services commented on SOLR-13542:


Commit 00f4bbe6fc814b9fa86b9f231b13b6c03a3a045e in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=00f4bbe ]

Merge pull request #717 from KoenDG/SOLR-13542

SOLR-13542: Code cleanup - Avoid using stream filter count where possible

> Code cleanup - Avoid using stream filter count where possible
> -
>
> Key: SOLR-13542
> URL: https://issues.apache.org/jira/browse/SOLR-13542
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Koen De Groote
>Priority: Trivial
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is another ticket in the logic of 
> https://issues.apache.org/jira/browse/LUCENE-8847
>  
> Not a feature, not serious, don't review this when you could be reviewing 
> actual features or critical bugs, don't want to take your time away with this.
>  
> Intellij's static code analysis bumped into this.
>  
> I also saw most other instances in the code where such a filter needed to 
> happen already used
> {code:java}
> anyMatch{code}
> instead of count(). So I applied the suggested fixes. Code cleanup and 
> potentially a small performance gain. As far as my understanding goes, since 
> it is not a simple count that's happening, there's no known size for the 
> evaluator to return and as such it has to iterate over the entire collection. 
> Whereas anyMatch and noneMatch will use the predicate to stop the instant 
> the condition is met.
> It just so happens that all affected instances exist within the SolrJ 
> namespace.
>  
> All tests have run, all succeed.
>  
> EDIT: Github PR: https://github.com/apache/lucene-solr/pull/717



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] tflobbe merged pull request #717: SOLR-13542: Code cleanup - Avoid using stream filter count where possible

2019-08-27 Thread GitBox
tflobbe merged pull request #717: SOLR-13542: Code cleanup - Avoid using stream 
filter count where possible
URL: https://github.com/apache/lucene-solr/pull/717
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11384) add support for distributed graph query

2019-08-27 Thread Komal (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916955#comment-16916955
 ] 

Komal edited comment on SOLR-11384 at 8/27/19 5:59 PM:
---

[~kwatters] Thanks for the reply. I am eagerly looking for this functionality 
in my current work. Given that we don't need caching currently, I would love to 
have an opportunity to chat and see how we may take this forward. Is your work 
open-source already, or can you provide a patch here? That would help me 
understand and do my homework before I discuss any further details. 


was (Author: komal_vmware):
[~kwatters] Thanks for the revert. I am eagerly looking for this functionality 
in my current work. Given, we don't need caching currently, I would love to 
have an opportunity to chat and see how we may take this forward. Is your work 
open-source already or can you provide a patch here. That would help me 
understand, and do my homework before I discuss further any details. 

> add support for distributed graph query
> ---
>
> Key: SOLR-11384
> URL: https://issues.apache.org/jira/browse/SOLR-11384
> Project: Solr
>  Issue Type: Improvement
>Reporter: Kevin Watters
>Priority: Minor
>
> Creating this ticket to track the work that I've done on the distributed 
> graph traversal support in solr.
> Current GraphQuery will only work on a single core, which introduces some 
> limits on where it can be used and also complexities if you want to scale it. 
>  I believe there's a strong desire to support a fully distributed method of 
> doing the Graph Query.  I'm working on a patch, it's not complete yet, but if 
> anyone would like to have a look at the approach and implementation,  I 
> welcome much feedback.  
> The flow for the distributed graph query is almost exactly the same as the 
> normal graph query.  The only difference is how it discovers the "frontier 
> query" at each level of the traversal.  
> When a distributed graph query request comes in, each shard begins by running 
> the root query, to know where to start on its shard.  Each participating 
> shard then discovers its edges for the next hop.  Those edges are then 
> broadcast to all other participating shards.  Each shard then receives all the 
> parts of the frontier query, assembles it, and executes it.
> This process continues on each shard until there are no new edges left, or 
> the maxDepth of the traversal has finished.
> The approach is to introduce a FrontierBroker that resides as a singleton on 
> each one of the solr nodes in the cluster.  When a graph query is created, it 
> can do a getInstance() on it so it can listen on the frontier parts coming in.
> Initially, I was using an external Kafka broker to handle this, and it did 
> work pretty well.  The new approach is migrating the FrontierBroker to be a 
> request handler in Solr, and potentially to use the SolrJ client to publish 
> the edges to each node in the cluster.
> There are a few outstanding design questions, first being, how do we know 
> what the list of shards are that are participating in the current query 
> request?  Is that easy info to get at?
> Second,  currently, we are serializing a query object between the shards, 
> perhaps we should consider a slightly different abstraction, and serialize 
> lists of "edge" objects between the nodes.   The point of this would be to 
> batch the exploration/traversal of current frontier to help avoid large 
> bursts of memory being required.
> Third, what sort of caching strategy should be introduced for the frontier 
> queries, if any?  And if we do some caching there, how/when should the 
> entries be expired and auto-warmed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11384) add support for distributed graph query

2019-08-27 Thread Komal (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916955#comment-16916955
 ] 

Komal commented on SOLR-11384:
--

[~kwatters] Thanks for the reply. I am eagerly looking for this functionality 
in my current work. Given that we don't need caching currently, I would love to 
have an opportunity to chat and see how we may take this forward. Is your work 
open-source already, or can you provide a patch here? That would help me 
understand and do my homework before I discuss any further details. 

> add support for distributed graph query
> ---
>
> Key: SOLR-11384
> URL: https://issues.apache.org/jira/browse/SOLR-11384
> Project: Solr
>  Issue Type: Improvement
>Reporter: Kevin Watters
>Priority: Minor
>
> Creating this ticket to track the work that I've done on the distributed 
> graph traversal support in solr.
> Current GraphQuery will only work on a single core, which introduces some 
> limits on where it can be used and also complexities if you want to scale it. 
>  I believe there's a strong desire to support a fully distributed method of 
> doing the Graph Query.  I'm working on a patch, it's not complete yet, but if 
> anyone would like to have a look at the approach and implementation,  I 
> welcome much feedback.  
> The flow for the distributed graph query is almost exactly the same as the 
> normal graph query.  The only difference is how it discovers the "frontier 
> query" at each level of the traversal.  
> When a distributed graph query request comes in, each shard begins by running 
> the root query, to know where to start on its shard.  Each participating 
> shard then discovers its edges for the next hop.  Those edges are then 
> broadcast to all other participating shards.  Each shard then receives all the 
> parts of the frontier query, assembles it, and executes it.
> This process continues on each shard until there are no new edges left, or 
> the maxDepth of the traversal has finished.
> The approach is to introduce a FrontierBroker that resides as a singleton on 
> each one of the solr nodes in the cluster.  When a graph query is created, it 
> can do a getInstance() on it so it can listen on the frontier parts coming in.
> Initially, I was using an external Kafka broker to handle this, and it did 
> work pretty well.  The new approach is migrating the FrontierBroker to be a 
> request handler in Solr, and potentially to use the SolrJ client to publish 
> the edges to each node in the cluster.
> There are a few outstanding design questions, first being, how do we know 
> what the list of shards are that are participating in the current query 
> request?  Is that easy info to get at?
> Second,  currently, we are serializing a query object between the shards, 
> perhaps we should consider a slightly different abstraction, and serialize 
> lists of "edge" objects between the nodes.   The point of this would be to 
> batch the exploration/traversal of current frontier to help avoid large 
> bursts of memory being required.
> Third, what sort of caching strategy should be introduced for the frontier 
> queries, if any?  And if we do some caching there, how/when should the 
> entries be expired and auto-warmed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3632 - Still Failing

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3632/

All tests passed

Build Log:
[...truncated 55139 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj222017698
 [ecj-lint] Compiling 40 source files to /tmp/ecj222017698
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/analysis/nori/src/java/org/apache/lucene/analysis/ko/KoreanPartOfSpeechStopFilter.java
 (at line 23)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build.xml:207:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2181:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/analysis/build.xml:153:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2009:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.

Total time: 165 minutes 47 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1434 - Still Failing

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1434/

No tests ran.

Build Log:
[...truncated 24456 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2594 links (2120 relative) to 3411 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[jira] [Commented] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser

2019-08-27 Thread Stanislav Livotov (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916854#comment-16916854
 ] 

Stanislav Livotov commented on SOLR-13720:
--

Thank you [~mkhludnev] for the quick response! I have added a simple test that 
just checks that it is possible to create a TPBJQ manually. 

> Impossible to create effective ToParenBlockJoinQuery in custom QParser
> --
>
> Key: SOLR-13720
> URL: https://issues.apache.org/jira/browse/SOLR-13720
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Stanislav Livotov
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-13720.patch, SOLR-13720.patch
>
>
> According to the Solr [documentation|#SolrPlugins-QParserPlugin], QParser is 
> treated as a legal plugin.
>  
> However, it is impossible to create an effective ToParentBlockJoin query 
> without copy-pasting(BitDocIdSetFilterWrapper class and getCachedFilter 
> method from BlockJoinParentQParser) or dirty hacks(like creating 
> org.apache.solr.search.join package with some accessor method to 
> package-private methods in plugin code and adding it in WEB-INF/lib directory 
> in order to be loaded by the same ClassLoader).
> I don't see a truly clean way how to fix it, but at least we can help custom 
> plugin developers to create it a little bit easier by making 
> BlockJoinParentQParser#getCachedFilter public and 
> BlockJoinParentQParser#BitDocIdSetFilterWrapper and providing getter for 
> BitDocIdSetFilterWrapper#filter. 
>  
>  
> In order to create 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13720) Impossible to create effective ToParenBlockJoinQuery in custom QParser

2019-08-27 Thread Stanislav Livotov (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Livotov updated SOLR-13720:
-
Attachment: SOLR-13720.patch

> Impossible to create effective ToParenBlockJoinQuery in custom QParser
> --
>
> Key: SOLR-13720
> URL: https://issues.apache.org/jira/browse/SOLR-13720
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Stanislav Livotov
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-13720.patch, SOLR-13720.patch
>
>
> According to the Solr [documentation|#SolrPlugins-QParserPlugin], QParser is 
> treated as a legal plugin.
>  
> However, it is impossible to create an effective ToParentBlockJoin query 
> without copy-pasting(BitDocIdSetFilterWrapper class and getCachedFilter 
> method from BlockJoinParentQParser) or dirty hacks(like creating 
> org.apache.solr.search.join package with some accessor method to 
> package-private methods in plugin code and adding it in WEB-INF/lib directory 
> in order to be loaded by the same ClassLoader).
> I don't see a truly clean way how to fix it, but at least we can help custom 
> plugin developers to create it a little bit easier by making 
> BlockJoinParentQParser#getCachedFilter public and 
> BlockJoinParentQParser#BitDocIdSetFilterWrapper and providing getter for 
> BitDocIdSetFilterWrapper#filter. 
>  
>  
> In order to create 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8955) Move compare logic to IntersectVisitor in NearestNeighbor

2019-08-27 Thread Ignacio Vera (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916777#comment-16916777
 ] 

Ignacio Vera commented on LUCENE-8955:
--

Nice speed up of 10 nearest points query on geo benchmarks (65%):

https://home.apache.org/~mikemccand/geobench.html#search-nearest_10

> Move compare logic to IntersectVisitor in NearestNeighbor
> -
>
> Key: LUCENE-8955
> URL: https://issues.apache.org/jira/browse/LUCENE-8955
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 8.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Similar to LUCENE-8914, move compare logic to the IntersectVisitor so we can 
> take advantage of the improvement added on LUCENE-7862. I ran the 
> geoBenchmark for nearest 10 locally and the change provides an improvement of 
> around 30%. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1942 - Still Failing

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1942/

All tests passed

Build Log:
[...truncated 55777 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj477389487
 [ecj-lint] Compiling 40 source files to /tmp/ecj477389487
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/analysis/nori/src/java/org/apache/lucene/analysis/ko/KoreanPartOfSpeechStopFilter.java
 (at line 23)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/build.xml:652:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/build.xml:101:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build.xml:207:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/common-build.xml:2181:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/analysis/build.xml:153:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/common-build.xml:2009:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.

Total time: 500 minutes 6 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11384) add support for distributed graph query

2019-08-27 Thread Kevin Watters (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916717#comment-16916717
 ] 

Kevin Watters commented on SOLR-11384:
--

[~erickerickson]  Streaming expressions are fundamentally different in their 
semantics from the graph query.  If there is renewed interest in this 
functionality, we can revisit it.

At the moment, we're in the process of building a new cross-collection join 
operator (XCJF, a cross-collection join filter).  The work there is a stepping 
stone for a fully distributed graph traversal.

[~komal_vmware] if you have a use case, let's chat about it.  I do have a 
version of the distributed graph query working locally, but I don't consider it 
prime time due to a few pesky items related to caching.

> add support for distributed graph query
> ---
>
> Key: SOLR-11384
> URL: https://issues.apache.org/jira/browse/SOLR-11384
> Project: Solr
>  Issue Type: Improvement
>Reporter: Kevin Watters
>Priority: Minor
>
> Creating this ticket to track the work that I've done on the distributed 
> graph traversal support in solr.
> Current GraphQuery will only work on a single core, which introduces some 
> limits on where it can be used and also complexities if you want to scale it. 
>  I believe there's a strong desire to support a fully distributed method of 
> doing the Graph Query.  I'm working on a patch, it's not complete yet, but if 
> anyone would like to have a look at the approach and implementation,  I 
> welcome much feedback.  
> The flow for the distributed graph query is almost exactly the same as the 
> normal graph query.  The only difference is how it discovers the "frontier 
> query" at each level of the traversal.  
> When a distributed graph query request comes in, each shard begins by running 
> the root query, to know where to start on its shard.  Each participating 
> shard then discovers its edges for the next hop.  Those edges are then 
> broadcast to all other participating shards.  Each shard then receives all the 
> parts of the frontier query, assembles it, and executes it.
> This process continues on each shard until there are no new edges left, or 
> the maxDepth of the traversal has finished.
> The approach is to introduce a FrontierBroker that resides as a singleton on 
> each one of the solr nodes in the cluster.  When a graph query is created, it 
> can do a getInstance() on it so it can listen on the frontier parts coming in.
> Initially, I was using an external Kafka broker to handle this, and it did 
> work pretty well.  The new approach is migrating the FrontierBroker to be a 
> request handler in Solr, and potentially to use the SolrJ client to publish 
> the edges to each node in the cluster.
> There are a few outstanding design questions, first being, how do we know 
> what the list of shards are that are participating in the current query 
> request?  Is that easy info to get at?
> Second,  currently, we are serializing a query object between the shards, 
> perhaps we should consider a slightly different abstraction, and serialize 
> lists of "edge" objects between the nodes.   The point of this would be to 
> batch the exploration/traversal of current frontier to help avoid large 
> bursts of memory being required.
> Third, what sort of caching strategy should be introduced for the frontier 
> queries, if any?  And if we do some caching there, how/when should the 
> entries be expired and auto-warmed.
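The traversal loop the description outlines (run the root query, let each shard discover its edges, broadcast and assemble the frontier, repeat until no new edges remain or maxDepth is reached) can be sketched in a single process. This is only an illustration of the control flow under stated assumptions: `Shard`, its adjacency map, and `traverse` are stand-ins invented here, not Solr or patch APIs.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FrontierTraversalSketch {
    // Stand-in for one shard's slice of the graph: node -> outgoing edges.
    static class Shard {
        final Map<String, List<String>> adjacency;
        Shard(Map<String, List<String>> adjacency) { this.adjacency = adjacency; }

        // "Discover the edges for the next hop" for the frontier nodes this shard holds.
        Set<String> expand(Set<String> frontier) {
            Set<String> edges = new HashSet<>();
            for (String node : frontier) {
                edges.addAll(adjacency.getOrDefault(node, List.of()));
            }
            return edges;
        }
    }

    static Set<String> traverse(List<Shard> shards, Set<String> roots, int maxDepth) {
        Set<String> visited = new HashSet<>(roots);
        Set<String> frontier = new HashSet<>(roots);
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            // The union of every shard's discovered edges plays the role of the
            // broadcast, reassembled frontier query from the description.
            Set<String> next = new HashSet<>();
            for (Shard shard : shards) {
                next.addAll(shard.expand(frontier));
            }
            next.removeAll(visited); // terminate when no new edges are left
            visited.addAll(next);
            frontier = next;
        }
        return visited;
    }
}
```

In the real distributed version the inner loop is replaced by network broadcast between shards, which is exactly where the open questions about shard discovery, edge serialization, and frontier caching arise.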



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9418) Statistical Phrase Identifier

2019-08-27 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916697#comment-16916697
 ] 

David Smiley commented on SOLR-9418:


I just want to point out that there's no trace of this in the Solr Reference 
Guide, and as-such is basically a hidden feature.

> Statistical Phrase Identifier
> -
>
> Key: SOLR-9418
> URL: https://issues.apache.org/jira/browse/SOLR-9418
> Project: Solr
>  Issue Type: New Feature
>Reporter: Akash Mehta
>Assignee: Hoss Man
>Priority: Major
> Fix For: 7.5, 8.0
>
> Attachments: SOLR-9418.patch, SOLR-9418.patch, SOLR-9418.patch, 
> SOLR-9418.zip
>
>
> h2. *Summary:*
> The Statistical Phrase Identifier is a Solr contribution that takes in a 
> string of text and then leverages a language model (an Apache Lucene/Solr 
> inverted index) to predict how the inputted text should be divided into 
> phrases. The intended purpose of this tool is to parse short-text queries 
> into phrases prior to executing a keyword search (as opposed to parsing out 
> each keyword as a single term).
> It is being generously donated to the Solr project by CareerBuilder, with the 
> original source code and a quickly demo-able version located here:  
> [https://github.com/careerbuilder/statistical-phrase-identifier]
> h2. *Purpose:*
> Assume you're building a job search engine, and one of your users searches 
> for the following:
>  _machine learning research and development Portland, OR software engineer 
> AND hadoop, java_
> Most search engines will natively parse this query into the following boolean 
> representation:
>  _(machine AND learning AND research AND development AND Portland) OR 
> (software AND engineer AND hadoop AND java)_
> While this query may still yield relevant results, it is clear that the 
> intent of the user wasn't understood very well at all. By leveraging the 
> Statistical Phrase Identifier on this string prior to query parsing, you can 
> instead expect the following parsing:
> _{machine learning} \{and} \{research and development} \{Portland, OR} 
> \{software engineer} \{AND} \{hadoop,} \{java}_
> It is then possible to modify all the multi-word phrases prior to executing 
> the search:
>  _"machine learning" and "research and development" "Portland, OR" "software 
> engineer" AND hadoop, java_
> Of course, you could do your own query parsing to specifically handle the 
> boolean syntax, but the following would eventually be interpreted correctly 
> by Apache Solr and most other search engines:
>  _"machine learning" AND "research and development" AND "Portland, OR" AND 
> "software engineer" AND hadoop AND java_ 
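
The phrase-to-quoted-query rewrite above can be sketched mechanically. The following is a minimal, hypothetical illustration (not part of the contributed code - `quotePhrases` and its class name are invented here) of wrapping identified multi-word phrases in quotes while leaving single tokens alone:

```java
import java.util.List;
import java.util.stream.Collectors;

public class PhraseQuoter {
    /** Quote any phrase containing whitespace; pass single tokens through. */
    static String quotePhrases(List<String> phrases) {
        return phrases.stream()
            .map(p -> p.trim().contains(" ") ? "\"" + p + "\"" : p)
            .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(quotePhrases(List.of(
            "machine learning", "and", "research and development",
            "Portland, OR", "software engineer", "AND", "hadoop,", "java")));
        // "machine learning" and "research and development" "Portland, OR" "software engineer" AND hadoop, java
    }
}
```

Real query rewriting would still need to handle the boolean operators and trailing punctuation separately, as discussed above.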
> h2. *History:*
> This project was originally implemented by the search team at CareerBuilder 
> in the summer of 2015 for use as part of their semantic search system. In the 
> summer of 2016, Akash Mehta, implemented a much simpler version as a proof of 
> concept based upon publicly available information about the CareerBuilder 
> implementation (the first attached patch).  In July of 2018, CareerBuilder 
> open sourced their original version 
> ([https://github.com/careerbuilder/statistical-phrase-identifier]), 
>  and agreed to also donate the code to the Apache Software foundation as a 
> Solr contribution. A Solr patch with the CareerBuilder version was added to 
> this issue on September 5th, 2018, and community feedback and contributions 
> are encouraged.
> This issue was originally titled the "Probabilistic Query Parser", but the 
> name has now been updated to "Statistical Phrase Identifier" to avoid 
> ambiguity with Solr's query parsers (per some of the feedback on this issue), 
> as the implementation is actually just a mechanism for identifying phrases 
> statistically from a string and is NOT a Solr query parser. 
> h2. *Example usage:*
> h3. (See contrib readme or configuration files in the patch for full 
> configuration details)
> h3. *{{Request:}}*
> {code:java}
> http://localhost:8983/solr/spi/parse?q=darth vader obi wan kenobi anakin 
> skywalker toad x men magneto professor xavier{code}
> h3. *{{Response:}}* 
> {code:java}
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":25},
>     "top_parsed_query":"{darth vader} {obi wan kenobi} {anakin skywalker} 
> {toad} {x men} {magneto} {professor xavier}",
>     "top_parsed_phrases":[
>       "darth vader",
>       "obi wan kenobi",
>       "anakin skywalker",
>       "toad",
>       "x-men",
>       "magneto",
>       "professor xavier"],
>       "potential_parsings":[{
>       "parsed_phrases":["darth vader",
>       "obi wan kenobi",
>       "anakin skywalker",
>       "toad",
>       "x-men",
>       "magneto",
>       

[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them

2019-08-27 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916695#comment-16916695
 ] 

Andrzej Bialecki  commented on SOLR-13677:
--

There were several issues that we pointed out in your patch before the last 
refactoring - and yet after the last refactoring you didn't give anyone a 
chance to review your changes before committing.

I object to this on principle - knowing that there are issues to be resolved 
you should've waited a reasonable time for review, and 5 hours is not 
reasonable, especially during summer holidays. On these grounds alone I'd like 
to request that you revert these commits.

However, in addition to that there are still several issues remaining in the 
committed changes that need to be fixed - and it would be just easier and 
cleaner to do this by reverting, fixing and committing them again after review:

* {{SolrMetrics}} is a misleading name - it implies more functionality than it 
really provides, which is just a dumb context to keep several references 
together. The previous patch used {{MetricsInfo}}, which was not ideal but better 
than this. I propose {{SolrMetricsContext}}.
* Also, the class and its methods have 0 javadocs - I hope that one day this 
will actually become a precommit error.
* looking at usages of this class it seems to me that perhaps it should not 
include {{scope}} - take for example 
{{SolrCoreMetricManager.registerMetricProducer}} (which hasn't been converted 
yet to the new API!), or any other use of the old 
{{SolrMetricProducer.initializeMetrics}}: it would be so much easier to create 
just one {{SolrMetrics}} object and pass it to each producer, regardless of its 
scope.
* I don't understand the need to keep a reference {{SolrMetrics.parent}}. This 
is used only in one place in 
{{MetricsHandlerTest.RefreshablePluginHolder.closeHandler()}}, which looks like 
a hack and has no way of working anyway (comparing hex hashCode to plain int 
hashCode). 
* {{GaugeWrapper.unregisterGauges}}: expression 
{{wrapper.getTag().contains(tagSegment)}} already includes 
{{tagSegment.equals(wrapper.getTag())}}.
* I don't understand the conditional in {{SolrMetricProducer.close()}} - 
basically it means that you unregister on {{close}} only non-root components 
and all their children. Why exclude root components?
* {{SolrMetrics.gauge}} really doesn't need to mess with explicitly creating a 
{{GaugeWrapper}}, and the new (undocumented) method in {{SolrMetricManager}} 
that you added is dangerous - because it exposes the ability to easily register 
unwrapped gauges. Instead {{SolrMetric.gauge}} should simply delegate to 
{{SolrMetricManager.registerGauge}}. Also, providing a null metricName is an 
error and should throw an exception - is there any situation where you would 
want that?
* I don't see any need for the newly added methods in {{SolrMetricManager}} - 
existing methods were sufficient for implementing the functionality in 
{{SolrMetrics}}. Please remove them, they only increase the size of the API 
without adding any benefit.
* {{SolrMetricManager.unregisterGauges(String registryName, String 
tagSegment)}} should have some javadoc now, because it's not obvious what is a 
"tagSegment".
* {{SolrMetricProducer.getUniqueMetricTag}} has a misleading javadoc comment - 
it does not produce a metric name but a tag for tying together objects of the 
same life-cycle.
* Additionally, {{parentName.contains(name)}} should be an error - I don't 
see when it would be a valid occurrence.
* {{SolrMetricProducer.initializeMetrics(SolrMetrics)}} has this peculiar 
message in the exception: "This means , the class has not implemented both of 
these methods". We can do so much better here...
* the separator for tag hierarchy is ":" even though the javadoc claims it's 
"A.B.C". FWIW I think ":" is a better choice, so just the javadoc needs to be 
fixed.
* {{SolrMetricProducer.getMetrics()}} should not have a default implementation 
on master, only on branch_8x. Also, the name of the method should reflect the 
name of the object it returns, whatever we end up with.
* similarly, {{SolrMetricProducer.initializeMetrics(SolrMetricManager ...)}} 
should be deprecated on branch_8x and removed from master.
* {{PluginBag.put(String, T)}} should not take care of closing old instances of 
plugins - instead the corresponding {{PluginBag.put(String, PluginHolder)}} 
(which is also called directly from SolrCore) should do this, otherwise in some 
cases the old plugin will be closed and in some others it won't. This method 
should also check for object identity, if the code re-registers the same 
instance of plugin holder.
* {{RequestHandlerBase.getMetricRegistry}} will throw an NPE when called before 
{{initializeMetrics}} is called.
* why does {{SuggestComponent}} still call 
{{metricsInfo.metricManager.registerGauge}} instead of {{metricsInfo.gauge}} ? 
BTW, the name of this field 

[GitHub] [lucene-solr] atris edited a comment on issue #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-08-27 Thread GitBox
atris edited a comment on issue #823: LUCENE-8939: Introduce Shared Count Early 
Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#issuecomment-525287168
 
 
   @jimczi  Thanks for reviewing the PR.
   
   I will follow up with a JIRA and a PR for `TopScoreDocCollector` that does 
the same as this one.
   
   I ran Luceneutil a bunch of times and averaged the results; the numbers 
look like:
   
   Task                      QPS baseline (StdDev)   QPS my_modified_version (StdDev)   Pct diff
   BrowseDateTaxoFacets         3445.32  (5.0%)    3293.96  (5.5%)   -4.4% ( -14% -  6%)
   Prefix3                       218.41 (23.7%)     212.36 (17.4%)   -2.8% ( -35% - 50%)
   Respell                       287.17  (7.8%)     280.86  (8.6%)   -2.2% ( -17% - 15%)
   MedTerm                      3415.85  (4.4%)    3349.44  (3.6%)   -1.9% (  -9% -  6%)
   Wildcard                      236.80 (12.5%)     232.30 (12.4%)   -1.9% ( -23% - 26%)
   AndHighHigh                   498.85 (10.9%)     489.48 (10.1%)   -1.9% ( -20% - 21%)
   HighTermDayOfYearSort         600.10  (3.7%)     588.84  (2.2%)   -1.9% (  -7% -  4%)
   OrHighMed                     590.54  (2.9%)     581.16  (4.3%)   -1.6% (  -8% -  5%)
   LowTerm                      3943.78  (2.8%)    3882.24  (3.4%)   -1.6% (  -7% -  4%)
   LowSloppyPhrase              1689.48  (2.5%)    1669.56  (2.2%)   -1.2% (  -5% -  3%)
   BrowseDayOfYearTaxoFacets    8421.12  (1.7%)    8334.95  (2.5%)   -1.0% (  -5% -  3%)
   PKLookup                      146.25  (4.7%)     144.86  (5.4%)   -0.9% ( -10% -  9%)
   OrHighLow                    1043.32  (2.1%)    1033.71  (2.7%)   -0.9% (  -5% -  3%)
   IntNRQ                       1419.85  (2.2%)    1407.79  (1.7%)   -0.8% (  -4% -  3%)
   AndHighLow                   2100.43  (2.7%)    2083.03  (2.9%)   -0.8% (  -6% -  4%)
   HighSpanNear                  258.87 (18.4%)     256.75 (22.3%)   -0.8% ( -35% - 48%)
   AndHighMed                    936.63  (2.5%)     929.39  (2.0%)   -0.8% (  -5% -  3%)
   HighTermMonthSort            1668.46  (2.2%)    1656.43  (3.0%)   -0.7% (  -5% -  4%)
   MedSpanNear                  1157.98  (2.5%)    1150.21  (2.5%)   -0.7% (  -5% -  4%)
   MedPhrase                     515.46  (6.3%)     512.41  (6.0%)   -0.6% ( -12% - 12%)
   LowPhrase                     761.37  (2.4%)     757.55  (2.1%)   -0.5% (  -4% -  4%)
   BrowseDayOfYearSSDVFacets    1802.01  (2.7%)    1794.09  (1.8%)   -0.4% (  -4% -  4%)
   LowSpanNear                  1460.48  (3.2%)    1454.29  (3.2%)   -0.4% (  -6% -  6%)
   BrowseMonthTaxoFacets        8490.66  (1.8%)    8459.18  (2.2%)   -0.4% (  -4% -  3%)
   MedSloppyPhrase               672.44  (9.3%)     670.70  (4.8%)   -0.3% ( -13% - 15%)
   HighTerm                     1356.56  (2.5%)    1354.39  (2.0%)   -0.2% (  -4% -  4%)
   OrHighHigh                    580.03 (11.5%)     579.81 (11.0%)   -0.0% ( -20% - 25%)
   Fuzzy1                        268.95  (6.4%)     269.28  (5.8%)    0.1% ( -11% - 13%)
   BrowseMonthSSDVFacets        2025.43  (1.7%)    2029.44  (2.5%)    0.2% (  -3% -  4%)
   HighPhrase                    466.42 (14.7%)     469.01 (13.4%)    0.6% ( -23% - 33%)
   HighIntervalsOrdered          389.06 (13.4%)     391.67 (13.8%)    0.7% ( -23% - 32%)
   HighSloppyPhrase              771.58 (11.8%)     777.44  (8.9%)    0.8% ( -17% - 24%)
   Fuzzy2                         88.96 (18.0%)      90.19 (12.7%)    1.4% ( -24% - 39%)
   
   
   
   Looks like the performance is consistent and we see no degradation. WDYT?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on issue #823: LUCENE-8939: Introduce Shared Count Early Termination In Parallel Search

2019-08-27 Thread GitBox
atris commented on issue #823: LUCENE-8939: Introduce Shared Count Early 
Termination In Parallel Search
URL: https://github.com/apache/lucene-solr/pull/823#issuecomment-525287168
 
 
   @jimczi  Thanks for reviewing the PR.
   
   I will follow up with a JIRA and a PR for `TopScoreDocCollector` that does 
the same as this one.
   
   I ran Luceneutil a bunch of times and averaged the results; the numbers 
look like:
   
   Task                      QPS baseline (StdDev)   QPS my_modified_version (StdDev)   Pct diff
   BrowseDateTaxoFacets         3445.32  (5.0%)    3293.96  (5.5%)   -4.4% ( -14% -  6%)
   Prefix3                       218.41 (23.7%)     212.36 (17.4%)   -2.8% ( -35% - 50%)
   Respell                       287.17  (7.8%)     280.86  (8.6%)   -2.2% ( -17% - 15%)
   MedTerm                      3415.85  (4.4%)    3349.44  (3.6%)   -1.9% (  -9% -  6%)
   Wildcard                      236.80 (12.5%)     232.30 (12.4%)   -1.9% ( -23% - 26%)
   AndHighHigh                   498.85 (10.9%)     489.48 (10.1%)   -1.9% ( -20% - 21%)
   HighTermDayOfYearSort         600.10  (3.7%)     588.84  (2.2%)   -1.9% (  -7% -  4%)
   OrHighMed                     590.54  (2.9%)     581.16  (4.3%)   -1.6% (  -8% -  5%)
   LowTerm                      3943.78  (2.8%)    3882.24  (3.4%)   -1.6% (  -7% -  4%)
   LowSloppyPhrase              1689.48  (2.5%)    1669.56  (2.2%)   -1.2% (  -5% -  3%)
   BrowseDayOfYearTaxoFacets    8421.12  (1.7%)    8334.95  (2.5%)   -1.0% (  -5% -  3%)
   PKLookup                      146.25  (4.7%)     144.86  (5.4%)   -0.9% ( -10% -  9%)
   OrHighLow                    1043.32  (2.1%)    1033.71  (2.7%)   -0.9% (  -5% -  3%)
   IntNRQ                       1419.85  (2.2%)    1407.79  (1.7%)   -0.8% (  -4% -  3%)
   AndHighLow                   2100.43  (2.7%)    2083.03  (2.9%)   -0.8% (  -6% -  4%)
   HighSpanNear                  258.87 (18.4%)     256.75 (22.3%)   -0.8% ( -35% - 48%)
   AndHighMed                    936.63  (2.5%)     929.39  (2.0%)   -0.8% (  -5% -  3%)
   HighTermMonthSort            1668.46  (2.2%)    1656.43  (3.0%)   -0.7% (  -5% -  4%)
   MedSpanNear                  1157.98  (2.5%)    1150.21  (2.5%)   -0.7% (  -5% -  4%)
   MedPhrase                     515.46  (6.3%)     512.41  (6.0%)   -0.6% ( -12% - 12%)
   LowPhrase                     761.37  (2.4%)     757.55  (2.1%)   -0.5% (  -4% -  4%)
   BrowseDayOfYearSSDVFacets    1802.01  (2.7%)    1794.09  (1.8%)   -0.4% (  -4% -  4%)
   LowSpanNear                  1460.48  (3.2%)    1454.29  (3.2%)   -0.4% (  -6% -  6%)
   BrowseMonthTaxoFacets        8490.66  (1.8%)    8459.18  (2.2%)   -0.4% (  -4% -  3%)
   MedSloppyPhrase               672.44  (9.3%)     670.70  (4.8%)   -0.3% ( -13% - 15%)
   HighTerm                     1356.56  (2.5%)    1354.39  (2.0%)   -0.2% (  -4% -  4%)
   OrHighHigh                    580.03 (11.5%)     579.81 (11.0%)   -0.0% ( -20% - 25%)
   Fuzzy1                        268.95  (6.4%)     269.28  (5.8%)    0.1% ( -11% - 13%)
   BrowseMonthSSDVFacets        2025.43  (1.7%)    2029.44  (2.5%)    0.2% (  -3% -  4%)
   HighPhrase                    466.42 (14.7%)     469.01 (13.4%)    0.6% ( -23% - 33%)
   HighIntervalsOrdered          389.06 (13.4%)     391.67 (13.8%)    0.7% ( -23% - 32%)
   HighSloppyPhrase              771.58 (11.8%)     777.44  (8.9%)    0.8% ( -17% - 24%)
   Fuzzy2                         88.96 (18.0%)      90.19 (12.7%)    1.4% ( -24% - 39%)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8403) Support 'filtered' term vectors - don't require all terms to be present

2019-08-27 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916687#comment-16916687
 ] 

David Smiley commented on LUCENE-8403:
--

bq. Does it make sense for me to adapt the patch to support pattern based 
filtering?

I think you should discuss the idea here first since there's a blocker.

What I'd most prefer, if [~rcmuir] might approve, is a way for the 
TermVectorsFormat to somehow advertise that its contents do not align with a 
PostingsFormat.  Straw-man: perhaps a {{TermVectorsFormat.isFiltered()}} 
method.  In such a case, CheckIndex could still check that the TVF API works 
(it would call CheckIndex.checkFields(tvFields, ...) but it would not compare 
it to the terms() -- logic gated by {{doSlowChecks}} param in 
{{testTermVectors()}}.  This would be very general and allow all manner of 
variations a term vector might have from the analyzed text.  

A less general approach is one akin to Hoss's suggestion that the TVF 
advertises *which* terms are consistent - though not as a list, which is way too 
inflexible, but rather as a callback method such as 
{{TermVectorsFormat.acceptsTerm(BytesRef)}}.

I don't think IndexWriterConfig should be modified as I think this is too 
expert to warrant that.

Atri, curious, what exactly was the error message string thrown by CheckIndex 
for a filtered term?

Side note: TermVectorsWriter's API is dated; ought to look more like Postings 
writing.  I have some old notes on a plan to tackle that.
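
As a rough illustration of that straw-man - hypothetical names only, `isFiltered`/`acceptsTerm` are not actual Lucene API - the gate CheckIndex could use might look like:

```java
import java.util.Set;

// Hypothetical sketch, not the real TermVectorsFormat: a format advertises
// that it filters, and which terms it keeps, so CheckIndex can skip the
// postings/term-vector cross-check for omitted terms.
interface FilteredTermVectors {
    /** true if this format may omit some indexed terms from term vectors */
    default boolean isFiltered() { return false; }
    /** callback: is this term guaranteed to be stored in the vectors? */
    default boolean acceptsTerm(String term) { return true; }
}

public class FilteredTvSketch {
    public static void main(String[] args) {
        FilteredTermVectors stopwordFiltered = new FilteredTermVectors() {
            private final Set<String> stop = Set.of("the", "and", "of");
            @Override public boolean isFiltered() { return true; }
            @Override public boolean acceptsTerm(String t) { return !stop.contains(t); }
        };
        // CheckIndex-style gate: only cross-check terms the format claims to keep
        System.out.println(stopwordFiltered.isFiltered());         // true
        System.out.println(stopwordFiltered.acceptsTerm("the"));   // false
        System.out.println(stopwordFiltered.acceptsTerm("vader")); // true
    }
}
```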

> Support 'filtered' term vectors - don't require all terms to be present
> ---
>
> Key: LUCENE-8403
> URL: https://issues.apache.org/jira/browse/LUCENE-8403
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Braun
>Priority: Minor
> Attachments: LUCENE-8403.patch
>
>
> The genesis of this was a conversation and idea from [~dsmiley] several years 
> ago.
> In order to optimize term vector storage, we may not actually need all tokens 
> to be present in the term vectors - and if so, ideally our codec could just 
> opt not to store them.
> I attempted to fork the standard codec and override the TermVectorsFormat and 
> TermVectorsWriter to ignore storing certain Terms within a field. This 
> worked, however, CheckIndex checks that the terms present in the standard 
> postings are also present in the TVs, if TVs enabled. So this then doesn't 
> work as 'valid' according to CheckIndex.
> Can the TermVectorsFormat be made in such a way to support configuration of 
> tokens that should not be stored (benefits: less storage, more optimal 
> retrieval per doc)? Is this valuable to the wider community? Is there a way 
> we can design this to not break CheckIndex's contract while at the same time 
> lessening storage for unneeded tokens?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24617 - Failure!

2019-08-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24617/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 55159 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj796965134
 [ecj-lint] Compiling 40 source files to /tmp/ecj796965134
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/analysis/nori/src/java/org/apache/lucene/analysis/ko/KoreanPartOfSpeechStopFilter.java
 (at line 23)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:634: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build.xml:207: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2181: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/analysis/build.xml:153: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/analysis/build.xml:38: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2009: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2048: 
Compile failed; see the compiler error output for details.

Total time: 83 minutes 55 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-13721) TestApiFramework#testFramework failing in master consistently

2019-08-27 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916632#comment-16916632
 ] 

Erick Erickson commented on SOLR-13721:
---

[~noble.paul] It does happen on master (but perhaps a different root cause?)

Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24614/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:
incorrect value for path /spec[0]/methods[0] in :{   "spec":[ {   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#list;,
   "description":"List all available collections and their properties.",
   "methods":["GET"],   "url":{"paths":[   "/collections",  
 "/c"]}}, {   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#create;,
   "description":"Create collections and collection aliases, backup or 
restore collections, and delete collections and aliases.",   
"methods":["POST"],   "url":{"paths":[   "/collections",   
"/c"]},   "commands":{ "create":{   "type":"object",
   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#create;,
   "description":"Create a collection.",   "properties":{   
  "name":{   "type":"string",   "description":"The 
name of the collection to be created."}, "config":{   
"type":"string",   "description":"The name of the configuration set 
(which must already be stored in ZooKeeper) to use for this collection. If not 
provided, Solr will default to the collection name as the configuration set 
name."}, "router":{   "type":"object",   
"documentation":"https://lucene.apache.org/solr/guide/shards-and-indexi

> TestApiFramework#testFramework failing in master consistently
> -
>
> Key: SOLR-13721
> URL: https://issues.apache.org/jira/browse/SOLR-13721
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24612/testReport/junit/org.apache.solr.handler.admin/TestApiFramework/testFramework/]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-13721) TestApiFramework#testFramework failing in master consistently

2019-08-27 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13721:
--
Comment: was deleted

(was: [~noble.paul] Some evidence that it's not Java 11 .vs. Java 8

Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24614/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:
incorrect value for path /spec[0]/methods[0] in :{   "spec":[ {   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#list;,
   "description":"List all available collections and their properties.",
   "methods":["GET"],   "url":{"paths":[   "/collections",  
 "/c"]}}, {   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#create;,
   "description":"Create collections and collection aliases, backup or 
restore collections, and delete collections and aliases.",   
"methods":["POST"],   "url":{"paths":[   "/collections",   
"/c"]},   "commands":{ "create":{   "type":"object",
   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#create;,
   "description":"Create a collection.",   "properties":{   
  "name":{   "type":"string",   "description":"The 
name of the collection to be created."}, "config":{   
"type":"string",   "description":"The name of the configuration set 
(which must already be stored in ZooKeeper) to use for this collection. If not 
provided, Solr will default to the collection name as the configuration set 
name."}, "router":{   "type":"object",   
"documentation":"https://lucene.apache.org/solr/guide/shards-and-indexi
)

> TestApiFramework#testFramework failing in master consistently
> -
>
> Key: SOLR-13721
> URL: https://issues.apache.org/jira/browse/SOLR-13721
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24612/testReport/junit/org.apache.solr.handler.admin/TestApiFramework/testFramework/]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13721) TestApiFramework#testFramework failing in master consistently

2019-08-27 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916629#comment-16916629
 ] 

Erick Erickson commented on SOLR-13721:
---

[~noble.paul] Some evidence that it's not Java 11 vs. Java 8

Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24614/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:
incorrect value for path /spec[0]/methods[0] in :{   "spec":[ {   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#list;,
   "description":"List all available collections and their properties.",
   "methods":["GET"],   "url":{"paths":[   "/collections",  
 "/c"]}}, {   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#create;,
   "description":"Create collections and collection aliases, backup or 
restore collections, and delete collections and aliases.",   
"methods":["POST"],   "url":{"paths":[   "/collections",   
"/c"]},   "commands":{ "create":{   "type":"object",
   
"documentation":"https://lucene.apache.org/solr/guide/collection-management.html#create;,
   "description":"Create a collection.",   "properties":{   
  "name":{   "type":"string",   "description":"The 
name of the collection to be created."}, "config":{   
"type":"string",   "description":"The name of the configuration set 
(which must already be stored in ZooKeeper) to use for this collection. If not 
provided, Solr will default to the collection name as the configuration set 
name."}, "router":{   "type":"object",   
"documentation":"https://lucene.apache.org/solr/guide/shards-and-indexi


> TestApiFramework#testFramework failing in master consistently
> -
>
> Key: SOLR-13721
> URL: https://issues.apache.org/jira/browse/SOLR-13721
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24612/testReport/junit/org.apache.solr.handler.admin/TestApiFramework/testFramework/]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11384) add support for distributed graph query

2019-08-27 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916626#comment-16916626
 ] 

Erick Erickson commented on SOLR-11384:
---

[~kwatters] Echoing Jeroen's question. Should we close this and let streaming 
handle it?

> add support for distributed graph query
> ---
>
> Key: SOLR-11384
> URL: https://issues.apache.org/jira/browse/SOLR-11384
> Project: Solr
>  Issue Type: Improvement
>Reporter: Kevin Watters
>Priority: Minor
>
> Creating this ticket to track the work that I've done on the distributed 
> graph traversal support in solr.
> Current GraphQuery will only work on a single core, which introduces some 
> limits on where it can be used and also complexities if you want to scale it. 
>  I believe there's a strong desire to support a fully distributed method of 
> doing the Graph Query.  I'm working on a patch, it's not complete yet, but if 
> anyone would like to have a look at the approach and implementation,  I 
> welcome much feedback.  
> The flow for the distributed graph query is almost exactly the same as the 
> normal graph query.  The only difference is how it discovers the "frontier 
> query" at each level of the traversal.  
> When a distributed graph query request comes in, each shard begins by running 
> the root query, to know where to start on its shard.  Each participating 
> shard then discovers its edges for the next hop.  Those edges are then 
> broadcast to all other participating shards.  Each shard then receives all the 
> parts of the frontier query, assembles it, and executes it.
> This process continues on each shard until there are no new edges left, or 
> the maxDepth of the traversal has finished.
> The approach is to introduce a FrontierBroker that resides as a singleton on 
> each one of the solr nodes in the cluster.  When a graph query is created, it 
> can do a getInstance() on it so it can listen on the frontier parts coming in.
> Initially, I was using an external Kafka broker to handle this, and it did 
> work pretty well.  The new approach is migrating the FrontierBroker to be a 
> request handler in Solr, and potentially to use the SolrJ client to publish 
> the edges to each node in the cluster.
> There are a few outstanding design questions, first being, how do we know 
> what the list of shards are that are participating in the current query 
> request?  Is that easy info to get at?
> Second,  currently, we are serializing a query object between the shards, 
> perhaps we should consider a slightly different abstraction, and serialize 
> lists of "edge" objects between the nodes.   The point of this would be to 
> batch the exploration/traversal of current frontier to help avoid large 
> bursts of memory being required.
> Third, what sort of caching strategy should be introduced for the frontier 
> queries, if any?  And if we do some caching there, how/when should the 
> entries be expired and auto-warmed.
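
The hop-by-hop frontier exchange described in the issue can be sketched as a toy, single-process loop - shards simulated as plain adjacency maps; none of the names below come from the actual patch:

```java
import java.util.*;

public class FrontierSketch {
    /** Toy distributed BFS: each "shard" expands the shared frontier locally,
     *  and the union of discovered edges becomes the next frontier query. */
    public static Set<String> traverse(List<Map<String, List<String>>> shards,
                                       Set<String> roots, int maxDepth) {
        Set<String> visited = new HashSet<>(roots);
        Set<String> frontier = new HashSet<>(roots);
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            Set<String> nextFrontier = new HashSet<>();
            // each shard discovers edges from its local data...
            for (Map<String, List<String>> shard : shards) {
                for (String node : frontier) {
                    for (String edge : shard.getOrDefault(node, List.of())) {
                        if (visited.add(edge)) nextFrontier.add(edge);
                    }
                }
            }
            // ...and the union is "broadcast" to all shards as the next frontier
            frontier = nextFrontier;
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> shard1 = Map.of("a", List.of("b"), "c", List.of("d"));
        Map<String, List<String>> shard2 = Map.of("b", List.of("c"));
        System.out.println(traverse(List.of(shard1, shard2), Set.of("a"), 10));
        // traversal crosses shard boundaries: reaches a, b, c, d
    }
}
```

In the real distributed case the inner loop runs concurrently on each node and the "broadcast" is the serialized frontier query; the loop structure is the same.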



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11384) add support for distributed graph query

2019-08-27 Thread Komal (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916581#comment-16916581
 ] 

Komal commented on SOLR-11384:
--

Any updates on this ticket? Do we finally have distributed graph traversal 
support in solr? 

> add support for distributed graph query
> ---
>
> Key: SOLR-11384
> URL: https://issues.apache.org/jira/browse/SOLR-11384
> Project: Solr
>  Issue Type: Improvement
>Reporter: Kevin Watters
>Priority: Minor
>
> Creating this ticket to track the work that I've done on the distributed 
> graph traversal support in solr.
> Current GraphQuery will only work on a single core, which introduces some 
> limits on where it can be used and also complexities if you want to scale it. 
>  I believe there's a strong desire to support a fully distributed method of 
> doing the Graph Query.  I'm working on a patch, it's not complete yet, but if 
> anyone would like to have a look at the approach and implementation,  I 
> welcome much feedback.  
> The flow for the distributed graph query is almost exactly the same as the 
> normal graph query.  The only difference is how it discovers the "frontier 
> query" at each level of the traversal.  
> When a distributed graph query request comes in, each shard begins by running 
> the root query, to know where to start on its shard.  Each participating 
> shard then discovers its edges for the next hop.  Those edges are then 
> broadcast to all other participating shards.  The shard then receives all the 
> parts of the frontier query, assembles it, and executes it.
> This process continues on each shard until there are no new edges left, or 
> the maxDepth of the traversal has been reached.
> The approach is to introduce a FrontierBroker that resides as a singleton on 
> each one of the solr nodes in the cluster.  When a graph query is created, it 
> can do a getInstance() on it so it can listen for the frontier parts coming in.
> Initially, I was using an external Kafka broker to handle this, and it did 
> work pretty well.  The new approach is migrating the FrontierBroker to be a 
> request handler in Solr, and potentially to use the SolrJ client to publish 
> the edges to each node in the cluster.
> There are a few outstanding design questions, first being, how do we know 
> what the list of shards are that are participating in the current query 
> request?  Is that easy info to get at?
> Second, currently, we are serializing a query object between the shards; 
> perhaps we should consider a slightly different abstraction and serialize 
> lists of "edge" objects between the nodes.  The point of this would be to 
> batch the exploration/traversal of the current frontier to help avoid large 
> bursts of memory being required.
> Third, what sort of caching strategy should be introduced for the frontier 
> queries, if any?  And if we do some caching there, how/when should the 
> entries be expired and auto-warmed?
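A minimal sketch of the frontier traversal described in the issue, with shards simulated as in-process adjacency maps rather than real Solr nodes (all names and the shard representation are illustrative, not Solr's actual implementation):

```python
# Hypothetical sketch of the distributed frontier traversal described above.
# Each "shard" is a dict {node: [neighbor, ...]} holding part of the graph;
# in Solr each shard would run the frontier query locally and broadcast the
# edges it discovers to the other participating shards.

def distributed_graph_traversal(shards, root_nodes, max_depth):
    visited = set(root_nodes)
    frontier = set(root_nodes)
    depth = 0
    while frontier and depth < max_depth:
        # Each shard discovers the edges for the next hop from its local data.
        per_shard_edges = [
            {nbr for node in frontier for nbr in shard.get(node, [])}
            for shard in shards
        ]
        # The edge sets are broadcast; every shard assembles the full frontier.
        assembled = set().union(*per_shard_edges)
        # Traversal stops when no new edges are left or maxDepth is reached.
        frontier = assembled - visited
        visited |= frontier
        depth += 1
    return visited

# Two "shards", each holding part of a small cycle a -> b -> c -> d -> a.
shard_a = {"a": ["b"], "c": ["d"]}
shard_b = {"b": ["c"], "d": ["a"]}
print(sorted(distributed_graph_traversal([shard_a, shard_b], ["a"], max_depth=10)))
# -> ['a', 'b', 'c', 'd']
```

This also makes the batching question from the issue concrete: `per_shard_edges` is the per-hop payload that would be serialized between nodes, so its size per hop is what a list-of-edges abstraction would bound.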






[jira] [Updated] (SOLR-13661) A package management system for Solr

2019-08-27 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13661:
--
Labels: package  (was: plugin)

> A package management system for Solr
> 
>
> Key: SOLR-13661
> URL: https://issues.apache.org/jira/browse/SOLR-13661
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: package
>
> Solr needs a unified cohesive package management system so that users can 
> deploy/redeploy plugins in a safe manner. This is an umbrella issue to 
> eventually build that solution






[jira] [Updated] (SOLR-13661) A package management system for Solr

2019-08-27 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13661:
--
Labels: package  (was: )

> A package management system for Solr
> 
>
> Key: SOLR-13661
> URL: https://issues.apache.org/jira/browse/SOLR-13661
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: package
>
> Solr needs a unified cohesive package management system so that users can 
> deploy/redeploy plugins in a safe manner. This is an umbrella issue to 
> eventually build that solution






[jira] [Updated] (SOLR-13661) A package management system for Solr

2019-08-27 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13661:
--
Labels: plugin  (was: package)

> A package management system for Solr
> 
>
> Key: SOLR-13661
> URL: https://issues.apache.org/jira/browse/SOLR-13661
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: plugin
>
> Solr needs a unified cohesive package management system so that users can 
> deploy/redeploy plugins in a safe manner. This is an umbrella issue to 
> eventually build that solution






[jira] [Updated] (SOLR-13650) Support for named global classloaders

2019-08-27 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13650:
--
Labels: package  (was: )

> Support for named global classloaders
> -
>
> Key: SOLR-13650
> URL: https://issues.apache.org/jira/browse/SOLR-13650
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>  Labels: package
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "add-package": {
>"name": "my-package" ,
>   "url" : "http://host:port/url/of/jar",
>   "sha512":""
>   }
> }' http://localhost:8983/api/cluster
> {code}
> This means that Solr creates a globally accessible classloader with a name 
> {{my-package}} which contains all the jars of that package. 
> A component should be able to use the package by specifying {{"package" : 
> "my-package"}} in its definition.
> eg:
> {code:json}
> curl -X POST -H 'Content-type:application/json' --data-binary '
> {
>   "create-searchcomponent": {
>   "name": "my-searchcomponent" ,
>   "class" : "my.path.to.ClassName",
>  "package" : "my-package"
>   }
> }' http://localhost:8983/api/c/mycollection/config 
> {code}






[jira] [Updated] (SOLR-13723) JettySolrRunner should support /api/* (the v2 end point)

2019-08-27 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13723:
--
Labels: test-framework  (was: )

> JettySolrRunner should support /api/* (the v2 end point)
> 
>
> Key: SOLR-13723
> URL: https://issues.apache.org/jira/browse/SOLR-13723
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Minor
>  Labels: test-framework
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Today we are not able to write proper v2 API testcases because of this






[GitHub] [lucene-solr] noblepaul opened a new pull request #847: SOLR-13723 JettySolrRunner should support /api/* (the v2 end point)

2019-08-27 Thread GitBox
noblepaul opened a new pull request #847: SOLR-13723 JettySolrRunner should 
support /api/* (the v2 end point)
URL: https://github.com/apache/lucene-solr/pull/847
 
 
   This is just a PoC, NOT TO BE MERGED


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Created] (SOLR-13723) JettySolrRunner should support /api/* (the v2 end point)

2019-08-27 Thread Noble Paul (Jira)
Noble Paul created SOLR-13723:
-

 Summary: JettySolrRunner should support /api/* (the v2 end point)
 Key: SOLR-13723
 URL: https://issues.apache.org/jira/browse/SOLR-13723
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


Today we are not able to write proper v2 API testcases because of this






[JENKINS] Lucene-Solr-Tests-master - Build # 3631 - Still Failing

2019-08-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3631/

All tests passed

Build Log:
[...truncated 55082 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj954580059
 [ecj-lint] Compiling 40 source files to /tmp/ecj954580059
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/analysis/nori/src/java/org/apache/lucene/analysis/ko/KoreanPartOfSpeechStopFilter.java
 (at line 23)
 [ecj-lint] import java.util.stream.Collectors;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.stream.Collectors is never used
 [ecj-lint] --
 [ecj-lint] 1 problem (1 error)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: 
The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build.xml:207:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2181:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/analysis/build.xml:153:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2009:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2048:
 Compile failed; see the compiler error output for details.

Total time: 98 minutes 48 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[GitHub] [lucene-solr] MarcusSorealheis commented on issue #805: SOLR-13649 change the default behavior of the basic authentication plugin.

2019-08-27 Thread GitBox
MarcusSorealheis commented on issue #805: SOLR-13649 change the default 
behavior of the basic authentication plugin.
URL: https://github.com/apache/lucene-solr/pull/805#issuecomment-525172840
 
 
   > org.apache.solr.cloud.TestConfigSetsAPI.testUploadWithScriptUpdateProcessor
   
   Will check now





[jira] [Commented] (LUCENE-8403) Support 'filtered' term vectors - don't require all terms to be present

2019-08-27 Thread Atri Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916404#comment-16916404
 ] 

Atri Sharma commented on LUCENE-8403:
-

Thanks for reviewing, David.

 

I did notice a CheckHits breakage on this patch – I was hoping to get some 
early feedback on the patch and then seek advice to solve the open problems.

 

Does it make sense for me to adapt the patch to support pattern based filtering?

 

RE: CheckHits fix, how about Hoss's idea to allow the TermVector codec to 
publish which terms are available?

> Support 'filtered' term vectors - don't require all terms to be present
> ---
>
> Key: LUCENE-8403
> URL: https://issues.apache.org/jira/browse/LUCENE-8403
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Braun
>Priority: Minor
> Attachments: LUCENE-8403.patch
>
>
> The genesis of this was a conversation and idea from [~dsmiley] several years 
> ago.
> In order to optimize term vector storage, we may not actually need all tokens 
> to be present in the term vectors - and if so, ideally our codec could just 
> opt not to store them.
> I attempted to fork the standard codec and override the TermVectorsFormat and 
> TermVectorsWriter to ignore storing certain Terms within a field. This 
> worked; however, CheckIndex checks that the terms present in the standard 
> postings are also present in the TVs, if TVs are enabled. So this then doesn't 
> pass as 'valid' according to CheckIndex.
> Can the TermVectorsFormat be made in such a way to support configuration of 
> tokens that should not be stored (benefits: less storage, more optimal 
> retrieval per doc)? Is this valuable to the wider community? Is there a way 
> we can design this to not break CheckIndex's contract while at the same time 
> lessening storage for unneeded tokens?


